source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
391,117 | I tried to define a variable in a 'sh -c' command string: sh -c "TMP=??; echo $TMP;" Nothing was printed. Why can't I define a variable in a 'sh -c' string? | sh -c 'TMP=??; echo $TMP;' When using double quotes the parameter expansion occurs when the command line is built i.e. the shell does not see TMP=??; echo $TMP; as its parameter but TMP=??; echo ; if $TMP is empty in the calling shell environment. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250225/"
]
} |
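
A minimal sketch contrasting the two quoting styles discussed above, assuming TMP is unset in the calling shell:

```sh
# Double quotes: the calling shell expands $TMP before sh ever runs,
# so the child only receives "TMP=??; echo ;" and prints an empty line.
sh -c "TMP=??; echo $TMP;"

# Single quotes: the child shell receives $TMP literally and expands it
# itself, so it prints ?? (or matching two-character filenames, if any).
sh -c 'TMP=??; echo $TMP;'
```
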
391,210 | I would like to have a script that would prepend each line of a stdin with information how long it took to generate it.Basically for input: foobarbaz I would like to have 0 foo10 bar5 baz Where 10 is 10 seconds passed between printing foo and printing bar, similar for 5, it took 5 seconds after printing bar to print baz. I know there is a utility ts that shows timestamps and I know about https://github.com/paypal/gnomon , but I would prefer not to use javascript to do that. Is there a standard tool for that or should I use awk and do processing? | Let's suppose that the script generating the output is called generate . Then, to display the number of seconds it takes for it to generate each line: $ generate | ( t0=$(date +%s); while read -r line; do t1=$(date +%s); echo " $((t1-t0)) $line"; t0=$t1; done ) 2 foo 0 foo 5 foo 3 foo The same commands spread out over multiple lines looks like: generate | ( t0=$(date +%s) while read -r line do t1=$(date +%s) echo " $((t1-t0)) $line" t0=$t1 done ) Alternatively, for convenience, we can define a shell function that contains this code: timer() { t0=$(date +%s); while read -r line; do t1=$(date +%s); echo " $((t1-t0)) $line"; t0=$t1; done; } We can use this function as follows: $ generate | timer 0 foo 2 foo 4 foo 3 foo How it works t0=$(date +%s) This captures the current time at the start of the script in seconds-since-epoch. while read -r line; do This starts a loop which reads from standard input t1=$(date +%s) This captures the time in seconds-since-epoch at which the current line was captured. echo " $((t1-t0)) $line" This prints out the time in seconds that it took for the current line. t0=$t1 This updates t0 for the next line. done This signals the end of the while loop. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18728/"
]
} |
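
The timer function from the answer, reflowed here for readability (note that `date +%s` only gives whole-second resolution):

```sh
timer() {
  t0=$(date +%s)                      # time the previous line appeared
  while read -r line; do
    t1=$(date +%s)                    # time the current line appeared
    echo " $((t1 - t0)) $line"
    t0=$t1
  done
}
# usage: generate | timer
```
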
391,216 | I have a very strange case of DVD reading failure. It's a video DVD recorded a couple of years ago on a DVD-R disc. Two weeks ago our video DVD player would start having trouble reading parts of it. The problem was reproducible on two PCs, where only part of the videos could be played. Plus, current playing time and total duration of VOB files would me messed up, indicating that the files were corrupted somehow, likely due to disc aging. Three days later, I wanted to play the same videos again (same setup, same laptop DVD reader) but I couldn't even view the DVD's file structure. To avoid further losses, I launched $ ddrescue -n -b 2048 /dev/sr0 ~/dvd_dump After six hours and since I needed to shutdown my laptop, I interrupted the process and decided I would restart it later. However, two days later, the DVD reader would not even recognize the presence of a disc, throwing a no medium found error whenever I tried. Also the disc would not start spinning upon closing the tray. This situation was reproducible on three different DVD readers. Some details: $ dmesg | grep sr[ 3.078673] sr 3:0:0:0: [sr0] scsi3-mmc drive: 52x/52x writer dvd-ram cd/rw xa/form2 cdda tray[ 3.078891] sr 3:0:0:0: Attached scsi CD-ROM sr0[ 3.078960] sr 3:0:0:0: Attached scsi generic sg4 type 5 $ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 931,5G 0 disk ├─sda1 8:1 0 100M 0 part ├─sda2 8:2 0 597,5G 0 part /mnt/win├─sda3 8:3 0 1K 0 part ├─sda4 8:4 0 1G 0 part ├─sda5 8:5 0 323,2G 0 part /└─sda6 8:6 0 9,8G 0 part [SWAP]sde 8:64 0 1,8T 0 disk ├─sde1 8:65 0 1K 0 part ├─sde5 8:69 0 398,7G 0 part ├─sde6 8:70 0 951,8G 0 part └─sde7 8:71 0 512,5G 0 part sr0 11:0 1 1024M 0 rom $ cd-info --dvdcd-info version 0.83 x86_64-pc-linux-gnuCopyright (c) 2003, 2004, 2005, 2007, 2008, 2011 R. BernsteinThis is free software; see the source for copying conditions.There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR APARTICULAR PURPOSE.CD location : /dev/cdromCD driver name: GNU/Linux access mode: IOCTLVendor : TSSTcorpModel : CDDVDW SH-S223C Revision : ME00Hardware : CD-ROM or DVDCan eject : YesCan close tray : YesCan disable manual eject : YesCan select juke-box disc : NoCan set drive speed : NoCan read multiple sessions (e.g. PhotoCD) : YesCan hard reset device : YesReading.... Can read Mode 2 Form 1 : Yes Can read Mode 2 Form 2 : Yes Can read (S)VCD (i.e. Mode 2 Form 1/2) : Yes Can read C2 Errors : Yes Can read IRSC : Yes Can read Media Channel Number (or UPC) : Yes Can play audio : Yes Can read CD-DA : Yes Can read CD-R : Yes Can read CD-RW : Yes Can read DVD-ROM : YesWriting.... Can write CD-RW : Yes Can write DVD-R : Yes Can write DVD-RAM : Yes Can write DVD-RW : No Can write DVD+RW : No__________________________________Disc mode is listed as: Error in getting information++ WARN: error in ioctl CDROMREADTOCHDR: No medium foundcd-info: Can't get first track number. I give up. $ sudo mount -t iso9660 /dev/sr0 /mnt/dvdmount: block device /dev/sr0 is write-protected, mounting read-onlymount: no medium found on /dev/sr0 I am very surprised to see a DVD going from large parts still readable to totally undetectable within a week. I treated the DVD with maximal care, did not carry it around anywhere, and no physical damage (scratches or the like) were visible before, nor are there any now. My questions: Is there a way to force-read the disc with a low-level command that would ignore the no medium found error, or cd-info 's Can't get first track number error (see above)? 
Is it plausible that a faulty DVD reader would overwrite the DVD-R with zeros when it was only supposed to read-access it via the ddrescue command quoted above? What options do I have left? Is there any chance that professional data rescuing services will be capable of salvaging my disc? Are there high-end DVD readers on the market with superior error-correcting capabilities that might extract something from that disc? (Before someone asks: Yes, I am sure it was one and the same disc!) Edit: The disc is a TDK DVD-R Data/Video 4.7 GB 1-8x. DVD readers (from cd-info output): TSSTcorp CDDVDW SH-S223C Revision ME00 (3.5-inch drive on desktop PC at workplace, age unknown) MATSHITA DVD-RAM UJ-844 Revision RC06 (on Lenovo Thinkpad X301, ~8 years old) unknown (I will edit this once I get the info) | Initial Disk Quality Since we're not talking about a hard drive here, which can be recovered, you're sadly experiencing the reality that most consumer grade DVDs are NOT reliable. A physical hard disk has magnetic charged particles in a solid surface, and recovery in worst case scenarios happens by taking the disk apart physically, and then using a special read head to go sector by sector and reading the magnetic data. A dye based writable dvd has no such option, the dye is the data, and if it's degraded, there is nothing to recover. The only likely prospect given your scenario is that the dye used to create DVD-R disk data simply is failing. Unlike commercial music CDs, which usually use a sort of laser enscribed aluminum sheet for the actual data, burnable dvds use a layer of dye, which, and this is particularly relevant with lower end stuff, tends to fade away and fail over years. Note that 'cheaper' does not necessarily refer to price or brand, it refers to the factory that actually made the disks. Some major brands that people would have believed were high quality were not in fact high quality. This is also, by the way, why you always have to buy name DVDs, like Taiyo Yuden (back when I did a lot of burning, that's the only brand I would use), or quality 'archival' dvds [which cost a LOT more than regular ones]. Cheaper dvds/cds use cheaper dyes, which can and do simply fail over time. The only brand I ever trusted was taiyo yuden, because it never outsourced its disk production and was known to be be high quality, and was made in Japan [this could have changed since I did a lot of work with CDs/DVDs]. If the disk was made by a no name or whitebox brand, then it's junk, for sure. I've seen dyes on CDs fade after just a few years when they were cheap no name brands. You may have heard the term 'archival DVDs', this is what it refers to, the expected life of a properly stored optical storage medium. Since you did not mention the brand of the disk, that suggests to me that you were not aware of these realities, since the key to all optical disk storage is the quality of the disk and its dyes, and that's a function of what brand and model version it is. It's also worth mentioning that rewritable CD/DVDs are far far worse, and should in no case be relied on for really anything at all, at least that's been my experience, over years, I think the data loss become so high that I no longer even consider optical rewritable anything to be a storage medium at all. How to Kill your DVD/CD Things that can damage these dyes: heat, probably number one, like, putting disk on or near radiators, electronic components that generate heat, direct sunlight, etc. 
Once you have damaged the dye there is no longer any data there to restore, and if you damaged the actual data table on the dvd itself that says where the data is, then there is nothing to restore either. Other good ways to lose your data is to buy cheap or no name disks, that means you will wake up one day and find your data gone, without having to do a thing!! Just from the dye failing on its own, though I'm sure environmental causes can contribute, like the place it's stored getting a bit warmer for a few days in a row, or whatever. Last Chances Before you give up, you might want to try the following: Take a clean soft cloth, dampen it slightly, and carefully wash the surface of the disk. In particular pay close attention to the inner, not outer part. The disks burn from the inside out, so if there is dirt or scratches there, it can make the reads fail. Try it in a very high quality DVD reader, like a plextor, something with a very good laser. Lasers can and do wear out, and, coupled with dye fade and failure, that can make disks fail to read. Do not assume a DVD reader laser is in good condition, they die over the life the reader, so the newer it is, the better. As with dyes, there is a significant difference between the quality of the lasers used by CD/DVD readers/writers. The laser is what heated the dye when it was burned, and it's what tries to read it when it reads. The better the laser, the more likely it can pick up faint traces of fading dyes. take a magnifier and closely examine the disk under strong light close to the inside rings, to see if you can see anything unusual there, like cloudy surface, or something like that. This is what it has to read to discover that there is a disk with data present. laptop dvd drives are junk, cheap, low end, flimsy, lightweight, I wouldn't even consider them to be a valid test, make sure to use a real PC dvd reader, that is not too used or old, or cheap. The device you listed: http://www.driverscape.com/download/tsstcorp-cddvdw-sh-s223c-ata-device appears to be over 10 years old, is that correct? If so, that's clearly not going to be a good tool to test this with. Back in the old days, you could actually rely on the fact that certain brands and models had superior lasers, but in my opinion, those days are sadly gone. But if you research it, you may find that there are still certain specific models that are known to have a superior laser, obviously, I would expect those to be higher end and expensive. Note that the age of the reader is also important, because these lasers basically begin to die as soon as they are used, so the newer the high end reader is, the higher your chances at recovery. However, with that said, the marked decline in reading over a short time suggests to me that something started breaking down the DVD dyes, until it failed, possibly an inadvertent leaving it in direct sun or on a hot surface, without you having realized it, or simply the dye itself breaking down because it was either cheap or defective in the first place. Likewise the drives not finding any data suggests the file system data table in the start of the dvd is either gone or corrupted beyond repair or read. [Update: user data added to post] As I suspected, you used no name dvd blanks, which are basically guaranteed to be non trustworthy, and your dvd readers are old. Old burners by the way can also have weaker lasers, which makes the dye imprint weaker, so it looks like you're suffering from all the worst case scenarios. Where to get good disks? 
I haven't bought these in a while, but to make this complete, I searched, and was very happy to find that supermediastore.com still exists, and still sells Taiyo Yuden. https://www.supermediastore.com/products/jvc-taiyo-yuden-dvd-r-8x-silver-thermal-dvd-recordable-single-layer-media-jdmr-zz-sb8-100pk This was the best place to buy 10-15 years ago, and it appears to still be around, which is great, I always trusted that store and their products, which is an unusual thing to be able to say nowadays. Note that other good archival options are things like Verbatim DataLifePlus, but in general, I only stick with brands where I know that the brand is actually made by the company whose brand the disks carry. But the key thing to remember is: if the data on the disk is important, PAY FOR QUALITY DISKS!! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250321/"
]
} |
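
One practical aside on the ddrescue command quoted in the question: giving GNU ddrescue a third, map-file argument lets an interrupted run be resumed later instead of restarted from scratch (a sketch; the file names are placeholders):

```sh
ddrescue -n -b 2048 /dev/sr0 ~/dvd_dump ~/dvd_dump.map
# Re-running the same command later continues from where it stopped,
# retrying only the sectors the map file records as unread or bad.
```
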
391,223 | I use Cygwin on my laptop (DOS). I have a collection of scripts from my colleagues, and my own. I am not an IT person, not knowledgeable in Unix. I am following my colleagues' syntax and able to manage a few simple things. The scripts worked well on my old laptop. I just changed laptop and installed Cygwin. When I run my scripts, they do not work. Here is one example of the error message I get: line 1: $':\r': command not foundline 5: syntax error near unexpected token `$'\r''line 5: `fi Here are the top 5 lines of my script :iter=1if [ -f iter.txt ] then rm ./iter.txt fi Can someone please explain how I can get around this problem? | You have Windows-style line endings. The no-op command : is instead read as :<carriage return> , displayed as :\r or more fully as $':\r' . Run dos2unix scriptname and you should be fine. If you don't have dos2unix , the following should work almost anywhere (and I tested on MobaXterm on Windows): vi -b filename Then in vi , type: :%s/\r$//:x You're good to go. In vim , which is what you are using on Cygwin for vi , there are multiple ways of doing this. Another one involves the fileformat setting, which can take the values dos or unix . Either explicitly change it after loading the file with set fileformat=unix or explicitly force the file format when writing out the file with :w +fileformat=unix For more on this, see the many questions and answers here covering this subject, including: Remove ^M character from log files Why is vim creating files with DOS line endings? How to add a carriage return before every newline? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/391223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250338/"
]
} |
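
If dos2unix is unavailable, the carriage returns can also be stripped with sed (a sketch, assuming GNU sed's `-i` option):

```sh
sed -i 's/\r$//' scriptname    # delete the trailing CR from every line
```
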
391,293 | Appending a variable which contains the tee command and log file name, not getting the expected result, since echo is printing the variable content. Below is the file content, actual output and expected result. Content of shell script: #!/bin/bashlog="2>&1 | tee -a report.txt"echo ""echo '***************-:START OF THE REPORT:-***********' $log After running the script. console op:***************-:START OF THE REPORT:-*********** 2>&1 | tee -a report.txt expected- console op:***************-:START OF THE REPORT:-***********report.txt file content:***************-:START OF THE REPORT:-*********** Also note that ,variable $log should contain both the tee command and file name, since I don't want to hard code the tee command at end of each echo command. | I'm assuming you would like to use this unusual way of tagging the actual pipe and tee onto the end of the log message because you don't want to have to pipe every echo ? Well, you can do it this way too: logfile='report.txt'log () { if [ -z "$1" ]; then cat else printf '%s\n' "$@" fi | tee -a "$logfile"}log "some message"log "some other message"log "multi" "line" "output"{ cat <<LETTERDear Sir,It has come to my attention, that a whole slew of things may becollected and sent to the same destination, just by using a singlepipe. Just use { ... } | destinationSincerely, $LOGNAMELETTER cat <<THE_PSPS.Here's the output of "ls -l":THE_PS ls -l echo "LOL"} | log That is, wrap the awkward tee command in a simple shell function whose name is easy to type and just pipe the output to it. In this example, the log function uses printf to output the data given on its command line, or switches to reading from standard input if these was no command line arguments. You could even use ./your_original_script_without_special_logging 2>&1 | log | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250395/"
]
} |
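
The wrapper from the answer, reflowed for readability (`./some_script` is a placeholder):

```sh
logfile='report.txt'
log() {
  if [ -z "$1" ]; then
    cat                         # no arguments: copy stdin
  else
    printf '%s\n' "$@"          # arguments: one line per argument
  fi | tee -a "$logfile"
}

log "some message"
./some_script 2>&1 | log        # pipe a whole command's output through it
```
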
391,344 | When I run gpg --with-fingerprints --with-colons keyfile.key , I get a machine parsable output on stdout containing the key fingerprint for the key inside the keyfile (which is exactly what I want), plus the following error on stderr: gpg: WARNING: no command supplied. Trying to guess what you mean ... So GnuPG is guessing the command correctly, but for my life I can't figure out what command it is guessing. I have tried almost all of the commands listed on the man page. I'm using GnuPG 2.2. Does anybody know the correct command to read a key file and show information about the key? Edit : Ideally the mechanism would be able to read the keyfile from stdin, such as cat keyfile.key | gpg --some-command I should have mentioned this earlier but so many commands for gpg work with stdin I didn't even consider it a relevant constraint. | The good folks at the [email protected] mailing list had the answer: For versions >= 2.1.23: cat keyfile.key | gpg --with-colons --import-options show-only --import For versions >= 2.1.13 but < 2.1.23: cat keyfile.key | gpg --with-colons --import-options import-show --dry-run --import | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/391344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128786/"
]
} |
391,419 | Is there a way to display a miniature version of the secondary display's contents on the primary display? This is for doing a live demo during a presentation, while keeping a dual-display setup to also show notes on the primary display. I have a vague feeling that this should be possible somehow , but I don't have enough experience with xrandr. A coarse internet search hasn't revealed anything useful. I'm aware of solutions like LibreOffice's presenter mode, or pdf-presenter-console . This question is about displaying interactive contents on the secondary console. A solution doesn't need to involve xrandr, a utility that captures a portion of the screen and clones it in another window would also work. I'm on Ubuntu 17.04. | actually you can use vlc for that purpose (if I understand your needs correctly). First you click on Media -> Open Capture Device Then set Capture mode to Desktop .Then you check show more options and at the end you can add a few options as sen there . But I guess you can figure it out yourself by trying. It depends on your screen resolution and which screen you want to record and display on which. Comes from there : How to record the desktop in VLC media player - second screen | Super User | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19205/"
]
} |
391,450 | I want to execute a find search with multiple maxdepth s, depending on the directory. Is this possible? It seems that -maxdepth is "global", but I was curious if there is some workaround. (I'm aware it's possible to execute two separate commands, but using one would be faster, and it would keep the calling code simpler) | You can sort of emulate it using -prune on different matching pathnames. For example, to match /etc to depth 1, and /lib to depth 2: find /etc /lib/ \ \( -regex '/etc/[^/]*/.*' -prune \) \ -o \( -regex '/lib/[^/]*/[^/]*/.*' -prune \) \ -o -print You need to be careful to add the last line to print or otherwise operate on the remaining files and directories. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12814/"
]
} |
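
For comparison, the two-command approach mentioned in the question is simply (a sketch, assuming GNU find's `-maxdepth`):

```sh
{ find /etc -maxdepth 1; find /lib -maxdepth 2; } > results.txt
```
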
391,456 | When you want to redirect both stdout and stderr to the same file, you can do it by using command 1>file.txt 2>&1 , or command &>file.txt .But why is the behavior of command 1>file.txt 2>file.txt different from the above two commands? The following is a verification command. $ cat redirect.sh#!/bin/bash{ echo -e "output\noutput" && echo -e "error" 1>&2; } 1>file.txt 2>&1{ echo -e "output\noutput" && echo -e "error" 1>&2; } 1>file1.txt 2>file1.txt{ echo -e "error" 1>&2 && echo -e "output\noutput"; } 1>file2.txt 2>file2.txt{ echo -e "output" && echo -e "error\nerror" 1>&2; } 1>file3.txt 2>file3.txt{ echo -e "error\nerror" 1>&2 && echo -e "output"; } 1>file4.txt 2>file4.txt$ ./redirect.sh$ echo "---file.txt---"; cat file.txt;\echo "---file1.txt---"; cat file1.txt; \echo "---file2.txt---"; cat file2.txt; \echo "---file3.txt---"; cat file3.txt; \echo "---file4.txt----"; cat file4.txt; ---file.txt---outputoutputerror---file1.txt---erroroutput---file2.txt---outputoutput---file3.txt---errorerror---file4.txt----outputrror As far as the results are seen, it looks like that the second echo string overwrites the first echo string when you run command 1>file.txt 2>file.txt , but I do not know why it will. (Is there a reference somewhere?) | You need to know two things: An open file descriptor known to the application-mode side of a process references an internal kernel object known as a file description , which is an instance of an open file. There can be multiple file descriptions per file, and multiple file descriptors sharing a file description. The current file position is an attribute of a file description . So if multiple file descriptors map to a single file description, they all share the same current file position, and a change to the file position enacted using one such file descriptor affects all of the other such file descriptors. Such changes are enacted by processes calling the read() / readv() , write() / writev() , lseek() , and suchlike system calls. The echo command calls write() / writev() of course. So what happens is this: command 1>file.txt 2>&1 only creates one file description, because the shell only opens a file once. The shell makes both the standard output and standard error file descriptors map to that single file description. It duplicates standard output onto standard error. So a write via either file descriptor will move the shared current file position: each write goes after the previous write the common file description. And as you can see the results of the echo commands do not overwrite one another. command 1>file.txt 2>file.txt creates two file descriptions, because the shell opens the same file twice, in response to the two explicit redirections. The standard output and standard error file descriptors map to two different file descriptions, which then in turn map to the same single file. The two file descriptions have entirely independent current file positions, and each write goes immediately the previous write on the same file description. And as you can see the result is that what is written via one can overwrite what is written via the other, in various different ways according to what order you execute the writes in. Further reading What is an open file description? What exactly is a file offset in lsof output? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/391456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219904/"
]
} |
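
A minimal demonstration of the two cases described in the answer:

```sh
# One open(), fd 2 duplicated from fd 1: a single shared file offset,
# so the second write lands after the first.
{ echo out; echo err >&2; } > shared.txt 2>&1
cat shared.txt      # out, then err

# Two independent open()s of the same file: two offsets, both starting at 0,
# so the second write overwrites the first.
{ echo out; echo err >&2; } > twice.txt 2> twice.txt
cat twice.txt       # err
```
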
391,458 | the following syntax print the output1 down echo "$status" output1: component_name : TEZ_CLIENT recovery_enabled : true component_name : WEBHCAT_SERVER recovery_enabled : true component_name : YARN_CLIENT recovery_enabled : true component_name : ZKFC recovery_enabled : true component_name : ZOOKEEPER_CLIENT recovery_enabled : true component_name : ZOOKEEPER_SERVER recovery_enabled : true how to add the printf syntax in order to get the following lines: expected output component_name : TEZ_CLIENT recovery_enabled : true component_name : WEBHCAT_SERVER recovery_enabled : true component_name : YARN_CLIENT recovery_enabled : true component_name : ZKFC recovery_enabled : true component_name : ZOOKEEPER_CLIENT recovery_enabled : true component_name : ZOOKEEPER_SERVER recovery_enabled : true | echo "$status" | awk '{printf("%s %s %-20s %20s %s %s\n", $1, $2, $3, $4, $5, $6)}' Will produce component_name : TEZ_CLIENT recovery_enabled : truecomponent_name : WEBHCAT_SERVER recovery_enabled : truecomponent_name : YARN_CLIENT recovery_enabled : truecomponent_name : ZKFC recovery_enabled : truecomponent_name : ZOOKEEPER_CLIENT recovery_enabled : truecomponent_name : ZOOKEEPER_SERVER recovery_enabled : true The %-20s format will reserve 20 characters for a left-aligned string, while %20s reserves 20 characters for a right-aligned string. Adjust the 20 s to fit your desired format. In a previous incarnation of this question, you had various transformations using sed and filtering with grep . It is likely these could also be done within the same awk script, directly from a source file. Or, if the file is a JSON file (as you say in comments), directly by jq from that same file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
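
If exact field widths do not matter, `column -t` from util-linux produces a similar aligned layout with less typing (a sketch):

```sh
echo "$status" | column -t
```
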
391,464 | I'm trying to tidy-up my photos which are, for various historic reasons, scattered all over my system. To enable me to make a start on this task, I've been trying to use the command line to construct a list of all directories that contain one or more jpg files. I'm certain that I don't have to be concerned about looking for other image file formats, but I do have to allow for jpg appearing in upper and lower case. I'd like each directory name to appear only once in the final list. To provide an example, if I have the following directories each of which contain one or more jpg or JPG files.... ~Mike/Pictures~Mike/Pictures/London/Olympics~Mike/Pictures/London~Mike/Pictures/London/Holiday~Mike/Photos~Mike/Family History/Swaine I'd like the results to appear with each directory listed only once - irrespective of the number of image files it might contain - preferably sorted and then written to a file ~Mike/Family History/Swaine~Mike/Photos~Mike/Pictures~Mike/Pictures/London~Mike/Pictures/London/Holiday~Mike/Pictures/London/Olympics My command line skills are just not up to this! I can use a lot of the simpler forms of single commands, but once they get complex and/or have to be piped things tend to go wrong. | Assuming JPEG image files have the suffix .jpg : find "$HOME" -type f -name '*.jpg' \ -exec sh -c 'for d; do dirname "$d"; done' sh {} + | sort -u -o jpeg_dirs.txt This relies on you not having funky directory names with newlines in their names. With GNU find : find "$HOME" -type f -name '*.jpg' -printf '%h\n' | sort -u -o jpeg_dirs.txt These find commands will find all JPEG images under your home directory and print the names of the directories where they were found. The sort -u will take this list of directory names, sort it, and remove duplicates. The result will be written to the file jpeg_dirs.txt in the current directory. Looking back at this in early 2021 (3.3 years later) I cringe a bit because my solution above, albeit not wrong per se, is a bit backwards. It also makes the obvious assumption about "nice filenames" (no newlines). When you're using find to search for directories, don't search for regular files as I did above; actually search for directories. Once we have the directories, we can look in each of them and see if the is a file matching *.jpg or *.JPG (further filename suffixes are easy to add): find "$HOME" -type d -exec bash -O nullglob -O dotglob -O extglob -c ' for dirpath do set -- "$dirpath"/*.@(jpg|JPG) [[ "$#" -gt 0 ]] && printf "%s\n" "$dirpath" done' bash {} + This peeks into each directory from your home directory down and tries to expand the globbing pattern *.@(jpg|JPG) in each. This pattern, which also could have been written as two separate patterns, *.jpg and *.JPG , matches all the files that we're looking for. If one name matches, we assume that this is a directory that we want to output the name of. This will give false positives for directories that contain only sub directories with these suffixes. The shell options that we run our internal bash script with allows us to match hidden names ( dotglob ), allows the globbing pattern to disappear completely if it doesn't match anything rather than remain unexpanded ( nullglob ), and allows us the use of the ksh -inspired extended globbing pattern @(...|...) . Using the zsh shell: typeset -U list=(~/**/*.(jpg|JPG)(.DN:h))print -rC1 $list This creates an array variable, list , that has the property that it only stores unique elements. 
It is initialized to the result of expanding a filename globbing pattern. The pattern matches all JPEG image files in or below the home directory, and the :h at the end removes the actual filename from the generated pathnames. The . makes the pattern only match regular files, and D and N acts like dotglob and nullglob in bash . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250523/"
]
} |
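
Since the question also needs to match JPG in upper case, GNU find's `-iname` makes the first variant case-insensitive (a sketch):

```sh
find "$HOME" -type f -iname '*.jpg' -printf '%h\n' | sort -u -o jpeg_dirs.txt
```
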
391,467 | I'm using Debian Stretch and would like to install flashplugin-nonfree .My apt/sources.list file contains deb http://ftp.ca.debian.org/debian/ stretch main contrib non-freedeb-src http://ftp.ca.debian.org/debian/ stretch main contrib non-freedeb http://security.debian.org/debian-security stretch/updates main contrib non-freedeb-src http://security.debian.org/debian-security stretch/updates main contrib non-free# stretch-updates, previously known as 'volatile'deb http://ftp.ca.debian.org/debian/ stretch-updates main contrib non-freedeb-src http://ftp.ca.debian.org/debian/ stretch-updates main contrib non-freedeb http://download.virtualbox.org/virtualbox/debian stretch contribdeb http://www.deb-multimedia.org stretch main non-freedeb [arch=i386,amd64] http://mariadb.mirror.globo.tech/repo/10.2/debian stretch main But when I run sudo apt-get install flashplugin-nonfree , I get this error message: sudo apt-get install flashplugin-nonfreeReading package lists... DoneBuilding dependency tree Reading state information... DonePackage flashplugin-nonfree is not available, but is referred to by another package.This may mean that the package is missing, has been obsoleted, or is only available from another sourceE: Package 'flashplugin-nonfree' has no installation candidate I tried to do an update or commenting everything except the first line in my apt/sources.list but still have the same error message. Any idea on how to install it anyway? | The flashplugin-nonfree package is no longer maintained , if you need the Flash plug-in you should install it manually : Download the latest release of the plugin in tar.gz format from Adobe . As root, extract the downloaded archive and copy libflashplayer.so to /usr/lib/flashplugin-nonfree . Fix the file’s ownership and permissions: chmod 644 /usr/lib/flashplugin-nonfree/libflashplayer.sochown root:root /usr/lib/flashplugin-nonfree/libflashplayer.so If necessary, install the alternative so Firefox will find the plug-in. If update-alternatives --list flash-mozilla.so returns /usr/lib/flashplugin-nonfree/libflashplayer.so , it’s set up correctly (this would be the case if you had the plug-in working in the past), but if it doesn’t, you need to run update-alternatives --quiet --install /usr/lib/mozilla/plugins/flash-mozilla.so flash-mozilla.so /usr/lib/flashplugin-nonfree/libflashplayer.so 50 For future upgrades, you only need to repeat the first three steps. Alternatively, pepperflashplugin-nonfree still works and will install the Flash plug-in for Chromium. You’ll need to download the package manually and install it using dpkg -i , but it will download the plug-in and set everything up for you. You can keep the plug-in up-to-date by running update-pepperflashplugin-nonfree --install (and check its status using --status ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250521/"
]
} |
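
The manual steps above, collected into one hedged sketch (run as root; the tarball name is an assumption — use whatever Adobe's download is actually called):

```sh
tar xzf flash_player_npapi_linux.x86_64.tar.gz libflashplayer.so
install -D -m 644 -o root -g root libflashplayer.so \
    /usr/lib/flashplugin-nonfree/libflashplayer.so
update-alternatives --quiet --install /usr/lib/mozilla/plugins/flash-mozilla.so \
    flash-mozilla.so /usr/lib/flashplugin-nonfree/libflashplayer.so 50
```
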
391,480 | How can I remove all punctuation from a file using sed, with the exception of certain characters? Specifically, I want to keep these characters: @-_$% I am currently using this to remove all punctuation, but I am not sure how to modify it to keep those characters: cat input.txt | sed -e "s/[[:punct:]]\+//g" > output.txt Alternatively, how can I remove only certain punctuation? Like: .!?,'/\"()[]^* | The flashplugin-nonfree package is no longer maintained , if you need the Flash plug-in you should install it manually : Download the latest release of the plugin in tar.gz format from Adobe . As root, extract the downloaded archive and copy libflashplayer.so to /usr/lib/flashplugin-nonfree . Fix the file’s ownership and permissions: chmod 644 /usr/lib/flashplugin-nonfree/libflashplayer.sochown root:root /usr/lib/flashplugin-nonfree/libflashplayer.so If necessary, install the alternative so Firefox will find the plug-in. If update-alternatives --list flash-mozilla.so returns /usr/lib/flashplugin-nonfree/libflashplayer.so , it’s set up correctly (this would be the case if you had the plug-in working in the past), but if it doesn’t, you need to run update-alternatives --quiet --install /usr/lib/mozilla/plugins/flash-mozilla.so flash-mozilla.so /usr/lib/flashplugin-nonfree/libflashplayer.so 50 For future upgrades, you only need to repeat the first three steps. Alternatively, pepperflashplugin-nonfree still works and will install the Flash plug-in for Chromium. You’ll need to download the package manually and install it using dpkg -i , but it will download the plug-in and set everything up for you. You can keep the plug-in up-to-date by running update-pepperflashplugin-nonfree --install (and check its status using --status ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250541/"
]
} |
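
For the question above — strip punctuation while keeping @ - _ $ % — one common approach is a negated bracket expression (a sketch, assuming plain ASCII text):

```sh
# Delete every character that is not alphanumeric, whitespace,
# or one of the characters to keep ("-" must come first or last).
sed 's/[^[:alnum:][:space:]@_$%-]//g' input.txt > output.txt
```
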
391,535 | On Ubuntu 16.04 server I'd like to have another name for eth0, for example. | From man ip-link : alias NAME give the device a symbolic name for easy reference. Example giving an alias to the lo interface: $ sudo ip link set lo alias mycustomaliasforlo$ ip link show lo1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 alias mycustomaliasforlo However, note that this only creates a symbolic reference, meaning you cannot use this alias as a real device name. For example, the following will fail: $ ip link show mycustomaliasforloDevice "mycustomaliasforlo" does not exist. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146335/"
]
} |
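
If a name that works as a real device name is wanted (the alias above is only symbolic), iproute2 can rename the interface itself — a sketch, to be run while the link is down and nothing references the old name:

```sh
ip link set dev eth0 down
ip link set dev eth0 name lan0     # "lan0" is an arbitrary example name
ip link set dev lan0 up
```
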
391,566 | I've around 40k lines of file with paths which I need to take size from remote site (using rsh & du -scL command). I tried with while read line but due to remote connection, it exits after 100+ lines. So I tried to copy all the lines in to a file with du -scL and input the file in to one rsh but again it's crashed saying 'command too long'. I need to do a script which calc the size of all these paths from remote site using rsh and du . #!bin/bashfor line in `cat $destbang1`do rsh vnc.<remotesite> du -sL $line | awk '{print $1}' >> /tmp/size1.txtdonetotal=`gawk '{ sum += $1 }; END { print sum}' /tmp/size1.txt`echo $total | From man ip-link : alias NAME give the device a symbolic name for easy reference. Example giving an alias to the lo interface: $ sudo ip link set lo alias mycustomaliasforlo$ ip link show lo1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 alias mycustomaliasforlo However, note that this only creates a symbolic reference, meaning you cannot use this alias as a real device name. For example, the following will fail: $ ip link show mycustomaliasforloDevice "mycustomaliasforlo" does not exist. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249219/"
]
} |
391,568 | I have an external SSD with Linux Mint Installed onto it (so not as a live USB, but as if it were an internal SSD). I used this around 1 week ago, and it worked fine. Tried to boot into it today, and suddenly it's not fine anymore. When I boot I get a blue screen with a grey alert box saying: Could not start the Xserver (your graphical environment)due to some internal error. And it goes on to tell me to contact my system administrator and restart MDM when the error was corrected. The only option I can select there is 'ok' which will reboot the system. This error persistently repeats itself on every boot attempt. When I boot into the recovery mode, either of two things happen, seemingly randomly. Either it will boot into recovery mode fine, but that obviously doesn't give me a fully functional system (e.g. I can't use a dual monitor setup because it doesn't load the display manager and I can't seem to manually start it either). The other thing that can happen when booting into recovery mode is that I end up in a terminal-like environment which is definitely not the root shell, but appears to simply be Linux Mint without a gui. I can log in and seemingly access the terminal, but haven't done much other than sudo reboot now in order to try and boot back into something that works a little better. After logging into the terminal-like environment, I do get an error which reads: sktemp: failed to create file via template `/var/lib/update-notifier/tmp.XXXXXXXXXX/': read-only file systemrun-parts: /etc/update-motd.d/95/hwe-e01 exited with return code 1/usr/lib/update-notifier/update-motd-fsck-at-reboot: 33: /usr/lib/update-notifier/update-metd-fsck-at-reboot: cannot create /var/lib/update-notifier/fsck-at-reboot: Read-only file system So it appears the system believes the file system to be read-only, where it shouldn't be. Now I could (maybe? possibly?) simply CHOWN the entire system, but that doesn't seem like a wise idea. I have also looked through the syslog , but that didn't really tell me anything. The word 'error' appears 16 times, but I have no idea as to how to interpret this information. I have, of course, done my research prior to posting here. Following some of the things I found, I ran fsck -Af -M Both as sudo and su , but both times all I got back was fsck from util-linux 2.20.1 which doesn't really tell me anything. Also I found that the OS may put a filesystem in Read-Only to prevent corruption, but I'm uncertain what would have caused said corruption, much less how to fix it. Now I'm not looking for someone to 'fix this for me'. Instead, I'd love if any of you would be able to point me in the right direction as to what could be going on, if there are any other tests I can run to narrow down the issue etc. Some specs: Release: LinuxMint 17.2 (rafaela) GNOME: 3.8.4 (Ubuntu 2015-12-02) Xorg: 1.15.1 (20 July 2017 07:11:13PM) CPU: Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz Graphics: Intel onboard | From man ip-link : alias NAME give the device a symbolic name for easy reference. Example giving an alias to the lo interface: $ sudo ip link set lo alias mycustomaliasforlo$ ip link show lo1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 alias mycustomaliasforlo However, note that this only creates a symbolic reference, meaning you cannot use this alias as a real device name. 
For example, the following will fail: $ ip link show mycustomaliasforloDevice "mycustomaliasforlo" does not exist. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209933/"
]
} |
391,577 | I am using egrep ( grep -E ) with a PATTERN file. ( -f path/to/file ). This is done in an infinite loop on a stream of text. This implies that I cannot accumulate and pass ALL the input to grep at once (like *.log ). Is there a way to make grep "save" the NFA it is building from the PATTERN file to use for it's next run? I have searched Google and read the documentation with no luck. I'll try to explain it a little bit more. I need to locate a fixed number of strings with regexes (This is not a part of a question but feel free to suggest otherwise) such as IP addresses, domains etc. The search is done on a feed from the internet. You can think about it as a stream of text.I can't use grep on all of the input since it's a stream.I can accumulate a chunk of stream and use grep on it (thus not using grep on each line) but this is also limited (let's say for 30 seconds). I know grep is building an NFA from all of its patterns (in my case from a file).So my question here is: can I tell grep to save that NFA for the next run, since it is not going to change? That would save me the time of building that NFA every time. | No, there's no such thing. Generally the cost of starting grep (fork a new process, load the executable, shared library, dynamic linkage...) would be a lot greater than compiling the regexps, so this kind of optimisation would make little sense. Though see Why is matching 1250 strings against 90k patterns so slow? about a bug in some versions of GNU grep that would make it particularly slow for a great number of regexps. Possibly here, you could avoid running grep several times by feeding your chunks to the same grep instance, for instance by using it as a co-process and use a marker to detect the end. With zsh and GNU grep and awk implementations other than mawk : coproc grep -E -f patterns -e '^@@MARKER@@$' --line-bufferedprocess_chunk() { { cat; echo @@MARKER@@; } >&p & awk '$0 == "@@MARKER@@"{exit};1' <&p}process_chunk < chunk1 > chunk1.greppedprocess_chunk < chunk2 > chunk2.grepped Though it may be simpler to do the whole thing with awk or perl instead. But if you don't need the grep output to go into different files for different chunks, you can always do: { cat chunk1 while wget -qO- ...; done # or whatever you use to fetch those chunks ...} | grep -Ef patterns > output | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250621/"
]
} |
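
The co-process idea from the answer, reflowed for readability (zsh syntax, as in the answer):

```sh
coproc grep -E -f patterns -e '^@@MARKER@@$' --line-buffered

process_chunk() {
  { cat; echo @@MARKER@@; } >&p &        # feed the chunk plus an end marker
  awk '$0 == "@@MARKER@@"{exit};1' <&p   # read grep's output up to the marker
}

process_chunk < chunk1 > chunk1.grepped
process_chunk < chunk2 > chunk2.grepped
```
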
391,578 | During bootup, the storage partition is getting loaded. However I see this error message. What is this error 16 represents? UBI error: ubi_open_volume: cannot open device 0, volume 0, error -16 I could also see errors like initvars_srom_pci, SROM CRC ErrorUBI error: ubi_wl_init: wl_init done 58 avail pebs, 688 reserved, free_count 146 Can someone point-out what are these errors are about? If the UBIFS file system is mounted with these errors, what is the effect? | No, there's no such thing. Generally the cost of starting grep (fork a new process, load the executable, shared library, dynamic linkage...) would be a lot greater than compiling the regexps, so this kind of optimisation would make little sense. Though see Why is matching 1250 strings against 90k patterns so slow? about a bug in some versions of GNU grep that would make it particularly slow for a great number of regexps. Possibly here, you could avoid running grep several times by feeding your chunks to the same grep instance, for instance by using it as a co-process and use a marker to detect the end. With zsh and GNU grep and awk implementations other than mawk : coproc grep -E -f patterns -e '^@@MARKER@@$' --line-bufferedprocess_chunk() { { cat; echo @@MARKER@@; } >&p & awk '$0 == "@@MARKER@@"{exit};1' <&p}process_chunk < chunk1 > chunk1.greppedprocess_chunk < chunk2 > chunk2.grepped Though it may be simpler to do the whole thing with awk or perl instead. But if you don't need the grep output to go into different files for different chunks, you can always do: { cat chunk1 while wget -qO- ...; done # or whatever you use to fetch those chunks ...} | grep -Ef patterns > output | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250201/"
]
} |
391,590 | Hi will get list of lines from the command prompt like the following. After doing logread and applying some filters Sat Sep 9 07:28:18 2017 notifications.google.com 192.168.150.201Sat Sep 9 07:29:18 2017 notifications.google.com 192.168.150.201Sat Sep 9 07:31:19 2017 plus.l.google.com 192.168.150.201Sat Sep 9 07:34:19 2017 plus.l.google.com 192.168.150.201Sat Sep 9 07:34:53 2017 mail.google.com fe80::dc5f:57fd:640c:6661Sat Sep 9 07:34:53 2017 mail.google.com 192.168.150.128Sat Sep 9 07:35:53 2017 www.google.com fe80::dc5f:57fd:640c:6661Sat Sep 9 07:37:53 2017 www.google.com 192.168.150.128Sat Sep 9 07:37:40 2017 24-courier.push.apple.com 192.168.150.182Sat Sep 9 07:38:40 2017 www-cdn.icloud.com.akadns.net 192.168.150.182Sat Sep 9 07:38:40 2017 e6858.dsce9.akamaiedge.net 192.168.150.182Sat Sep 9 07:38:40 2017 origin.guzzoni-apple.com.akadns.net 192.168.150.182Sat Sep 9 07:39:46 2017 beacons.gcp.gvt2.com fe80::dc5f:57fd:640c:6661Sat Sep 9 07:40:46 2017 beacons.gcp.gvt2.com 192.168.150.128 Now I want to get the only records for the last 5 second changed records After running the filter command I need in the following format I could have done with logread|awk '$4 > "07:35:00"' , but the problem is I need to pass the time always. I there anything like > '5 seconds' , so that I can get the following: Sat Sep 9 07:37:53 2017 www.google.com 192.168.150.128 Sat Sep 9 07:37:40 2017 24-courier.push.apple.com 192.168.150.182 Sat Sep 9 07:38:40 2017 www-cdn.icloud.com.akadns.net 192.168.150.182 Sat Sep 9 07:38:40 2017 e6858.dsce9.akamaiedge.net 192.168.150.182 Sat Sep 9 07:38:40 2017 origin.guzzoni-apple.com.akadns.net 192.168.150.182 Sat Sep 9 07:39:46 2017 beacons.gcp.gvt2.com fe80::dc5f:57fd:640c:6661 Sat Sep 9 07:40:46 2017 beacons.gcp.gvt2.com 192.168.150.128 | No, there's no such thing. Generally the cost of starting grep (fork a new process, load the executable, shared library, dynamic linkage...) would be a lot greater than compiling the regexps, so this kind of optimisation would make little sense. Though see Why is matching 1250 strings against 90k patterns so slow? about a bug in some versions of GNU grep that would make it particularly slow for a great number of regexps. Possibly here, you could avoid running grep several times by feeding your chunks to the same grep instance, for instance by using it as a co-process and use a marker to detect the end. With zsh and GNU grep and awk implementations other than mawk : coproc grep -E -f patterns -e '^@@MARKER@@$' --line-bufferedprocess_chunk() { { cat; echo @@MARKER@@; } >&p & awk '$0 == "@@MARKER@@"{exit};1' <&p}process_chunk < chunk1 > chunk1.greppedprocess_chunk < chunk2 > chunk2.grepped Though it may be simpler to do the whole thing with awk or perl instead. But if you don't need the grep output to go into different files for different chunks, you can always do: { cat chunk1 while wget -qO- ...; done # or whatever you use to fetch those chunks ...} | grep -Ef patterns > output | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/242939/"
]
} |
391,591 | I've got two sorted tab-delimited files. input.txt10 282035 282125 RNA1 -10 4134522 4134564 RNA1 -10 5299783 5299910 RNA2 -10 5900317 5900359 RNA1 -ref.txt 1 9 137792944 1 9 137792945 1 10 282074 4 10 282095 4 10 5900329 I want to print a sum on values IF certain criteria is met. Namely: IF ref$2==input$1 AND ref$3 falls within a range of min==input$2 && max==input$3 Print input$0 and sum of ref$1 (as input$6) else print zero (as input$6)So the result should look like that: 10 282035 282125 RNA1 - 510 4134522 4134564 RNA1 - 010 5299783 5299910 RNA2 - 010 5900317 5900359 RNA1 - 4 This is what I came up with: awk 'NR == FNR {min[NR]=$2; max[NR]=$3; chr[NR]=$1; next} { for (id in min) if (($2==chr[NR])&&(min[id] < $3 && $3 < max[id])) { print $0, sum+=$1 break }} ' input.txt ref.txt > output.txt There's clearly something wrong here, since I don't get any output. Also, I'm still missing "else print zero". Can somebody help me please? | No, there's no such thing. Generally the cost of starting grep (fork a new process, load the executable, shared library, dynamic linkage...) would be a lot greater than compiling the regexps, so this kind of optimisation would make little sense. Though see Why is matching 1250 strings against 90k patterns so slow? about a bug in some versions of GNU grep that would make it particularly slow for a great number of regexps. Possibly here, you could avoid running grep several times by feeding your chunks to the same grep instance, for instance by using it as a co-process and use a marker to detect the end. With zsh and GNU grep and awk implementations other than mawk : coproc grep -E -f patterns -e '^@@MARKER@@$' --line-bufferedprocess_chunk() { { cat; echo @@MARKER@@; } >&p & awk '$0 == "@@MARKER@@"{exit};1' <&p}process_chunk < chunk1 > chunk1.greppedprocess_chunk < chunk2 > chunk2.grepped Though it may be simpler to do the whole thing with awk or perl instead. But if you don't need the grep output to go into different files for different chunks, you can always do: { cat chunk1 while wget -qO- ...; done # or whatever you use to fetch those chunks ...} | grep -Ef patterns > output | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250609/"
]
} |
391,603 | I have file test1 with lines like this: A B CD E F... and I want to have file test2 with lines: DDD EEE FFF ADDD EEE FFF D... where A and D are copied from first column of test1 file after phrase DDD EEE FFF to test2 file I started like below, cat test1 | echo "DDD EEE FFF " `awk '{print $1}'` > test2 but of course it only adds phrase DDD EEE FFF once and then appends A, D to it which is not what I want DDD EEE FFF A D | First remove everything from the first whitespace on, then add your phrase to the beginning: sed 's/ .*//;s/^/DDD EEE FFF /' test1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137139/"
]
} |
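
The same transformation in awk, for comparison (a sketch):

```sh
awk '{print "DDD EEE FFF", $1}' test1 > test2
```
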
391,607 | I'd like to use more of the iproute2 ( ip command) utility instead of the deprecated net-tools ( ifconfig , route , ...). The main reason I keep going back to net-tools is the output of ip route which in my humble opinion is lacking in clarity compared to the old route that notably provides column titles : ip route: default via 192.168.134.254 dev enp1s0 proto static metric 100 10.42.0.0/24 dev wlp2s0 proto kernel scope link src 10.42.0.1 metric 600 10.56.30.0/24 dev enx00133b0402c2 proto kernel scope link src 10.56.30.143 169.254.0.0/16 dev wlp2s0 scope link metric 1000 192.168.57.0/24 dev vboxnet1 proto kernel scope link src 192.168.57.1 linkdown 192.168.134.0/24 dev enp1s0 proto kernel scope link src 192.168.134.142 metric 100 route: Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifacedefault 192.168.134.254 0.0.0.0 UG 100 0 0 enp1s010.42.0.0 * 255.255.255.0 U 600 0 0 wlp2s010.56.30.0 * 255.255.255.0 U 0 0 0 enx00133b0402c2link-local * 255.255.0.0 U 1000 0 0 wlp2s0192.168.57.0 * 255.255.255.0 U 0 0 0 vboxnet1192.168.134.0 * 255.255.255.0 U 100 0 0 enp1s0 Question: Is there a way to have a clear/pretty display of the routes using the ip command? | This awk script assumes, perhaps wrongly, that the output values are in pairs of keyword value , e.g. scope link , with some exceptions like the first column, and the linkdown keyword. It accumulates the columns and data and prints the result: awk '{ i = 1; h = " ip" hdr[h] = 1 col[h,NR] = $i for(i=2;i<=NF;){ if($i=="linkdown"){extra[NR] = $i; i++; continue} hdr[$i] = 1 col[$i,NR] = $(i+1) i += 2 }}END{ #PROCINFO[sorted_in] = "@ind_str_asc" n = asorti(hdr,x) for(i=1;i<=n;i++){ h = x[i]; max[h] = length(h) } for(j = 1;j<=NR;j++){ for(i=1;i<=n;i++){ h = x[i] l = length(col[h,j]) if(l>max[h])max[h] = l } } for(i=1;i<=n;i++){ h = x[i]; printf "%-*s ",max[h],h } printf "\n" for(j = 1;j<=NR;j++){ for(i=1;i<=n;i++){ h = x[i]; printf "%-*s ",max[h],col[h,j] } printf "%s\n",extra[j] }}' The result is wider than 80 columns: ip dev metric proto scope src via default enp1s0 100 static 192.168.134.254 10.42.0.0/24 wlp2s0 600 kernel link 10.42.0.1 10.56.30.0/24 enx00133b0402c2 kernel link 10.56.30.143 169.254.0.0/16 wlp2s0 1000 link 192.168.57.0/24 vboxnet1 kernel link 192.168.57.1 linkdown192.168.134.0/24 enp1s0 100 kernel link 192.168.134.142 The script uses associative array hdr to hold the keywords as they are found, and the two-dimensional col array is indexed by this keyword and the line number to hold the value. The first column is treated specially with an invented ip keyword which has a leading space to ensure it gets sorted to the first column. The extra array notes the linkdown lone keyword. At the end of data, the headers are sorted into an indexing array x , and we go through all the values finding the maximum column width. The column headers are then printed, and then the saved data. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207557/"
]
} |
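
If per-keyword columns are not required, a much rougher alignment is available with `column -t`; it only aligns whitespace-separated fields and does not build one column per keyword the way the awk script above does (a sketch):

```sh
ip route | column -t
```
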
391,629 | I couldn't find the answer to this anywhere. How can I know who renamed a directory? ls -al shows only the name of user who created that dirctory. | That is not information that is normally recorded, unless you took special disposition to that effect (like via some audit system). The service through which the user has renamed the directory (like over FTP, SFTP, WebDAV, samba...) may have logs that can help. You can try and check those logs, the last , lastcomm , audit , authentication logs around the time the folder was renamed. If you're administrator, you can look at the history file of the shells of the users that had the permissions to rename it (if the directory was renamed from /A/dir to /B/newdir , it's whoever had write access to both /A and /B (assuming /A didn't have the t bit in its permissions and /A/dir and /B are on the same filesystem)). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/391629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250647/"
]
} |
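
For the future, the audit system mentioned in the answer can record exactly this kind of event — a sketch, assuming auditd/auditctl are installed and the parent directories are /A and /B as in the answer's example:

```sh
auditctl -w /A -p wa -k dir-rename     # watch writes/attribute changes
auditctl -w /B -p wa -k dir-rename
# later, list matching events with user and process information:
ausearch -k dir-rename -i
```
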
391,658 | I have a file composed by lines like these ones: 199240050;0180209199240050;0199240241;0180209199240241;0199240207;0180209199240207;0199240400;0180209199240400;0 I should replace last number with "Active" if it is "0", Or "Inactive" if it is "1". I've tried with sed but the occurrence of ";0" in the middle of the line gets changed too. It should be used in a linux bash script. I tried the solution given by DopeGhoti but it fails: $ sed 's/0$/active/;s/1$/inactive/' myLines.txt800600346 078136521 active While the format of the input file should not change. The second solution appends ";active" at the end, without replacing: $ awk -F\; 'BEGIN {OFS=";"} { if( $3 == 1 ) { print $1,$2,"inactive" } else { print $1,$2,"active" } }' myLines.txt800010654 0295445503 0;;active And George Vasiliou's also fails: $ awk '$NF?$NF="active":$NF="inactive"' FS=';' OFS=';' myLines.txtactiveactiveactiveactive | For your given data in a file called input : $ awk -F\; 'BEGIN {OFS=";"} { if( $3 == 1 ) { print $1,$2,"inactive" } else { print $1,$2,"active" } }' input199240050;0180209199240050;active199240241;0180209199240241;active199240207;0180209199240207;active199240400;0180209199240400;active Alternatively, with sed : $ sed 's/\;0$/active/;s/\;1$/inactive/' input199240050;0180209199240050;active199240241;0180209199240241;active199240207;0180209199240207;active199240400;0180209199240400;active | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229870/"
]
} |
391,668 | I saved some dump from tcpdump using tcpdump -n -i eth0 -tttt -Q in > "dump01.dump" so i got output like this: 20:39:12.808672 IP 94.xx.xxx.202.49183 > 151.xx.xx.xx.61479: UDP, length 10420:39:12.835025 IP 213.xx.xx.25.51197 > 151.xx.xx.xx.61479: Flags [P.], seq 4125053309:4125053343, ack 1004545214, win 194, length 3420:39:12.936971 IP 222.xxx.xxx.182.59953 > 151.xx.xx.xx.61479: UDP, length 28720:39:12.948822 IP 195.xx.xxx.30.62384 > 151.xx.xx.xx.61479: UDP, length 10120:39:12.987527 IP 79.xxx.xxx.216.56394 > 151.xx.xx.xx.443: Flags [P.], seq 700421627:700422382, ack 377141587, win 257, length 75520:39:12.988554 IP 79.xxx.xxx.216.55621 > 151.xx.xx.xx.443: Flags [P.], seq 3192357072:3192357827, ack 3940752659, win 260, length 75520:39:12.989291 IP 79.xxx.xxx.216.56517 > 151.xx.xx.xx.443: Flags [P.], seq 3172129891:3172130644, ack 3568957121, win 257, length 75320:39:12.990879 IP 79.xxx.xxx.216.56394 > 151.xx.xx.xx.443: Flags [.], seq 755:2207, ack 1, win 257, length 145220:39:12.991845 IP 79.xxx.xxx.216.56394 > 151.xx.xx.xx.443: Flags [P.], seq 2207:3465, ack 1, win 257, length 125820:39:12.992794 IP 79.xxx.xxx.216.56254 > 151.xx.xx.xx.443: Flags [P.], seq 1723903877:1723904632, ack 3204952387, win 260, length 755 of course I replaced part of IP's with xxx . Now the more interesting part - I was DDoSed by someone and I captured whole attack on dump, but I want to see graph of this incident. Unfortunately, as I didn't used -w with tcpdump my output isn't binary and Wireshark refuses to import file - it tries to read hex data with isn't there. Is there a way to force Wireshark to load this dump without packets details, convert my file or use another program to print graph for me? | For your given data in a file called input : $ awk -F\; 'BEGIN {OFS=";"} { if( $3 == 1 ) { print $1,$2,"inactive" } else { print $1,$2,"active" } }' input199240050;0180209199240050;active199240241;0180209199240241;active199240207;0180209199240207;active199240400;0180209199240400;active Alternatively, with sed : $ sed 's/\;0$/active/;s/\;1$/inactive/' input199240050;0180209199240050;active199240241;0180209199240241;active199240207;0180209199240207;active199240400;0180209199240400;active | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250675/"
]
} |
391,679 | I have a bash terminal window where I execute exclusively one command foo . Because of that I'd like every line (after the prompt of course) to begin with „foo ” so that I just have to type the function's options and arguments, but not the recurrent function name. Of course it would be nice to be able to alter the automatically inserted string too, but that's not essential for me.

Example

When I open the terminal, without typing anything, what I want to see is:

user@host:~/ $ foo 

Then I type --option argument and when I press Enter the function foo is called with the given --option and argument .

What I tried

I tried to fiddle around with $PS1 and $PROMPT_COMMAND using xdotool type "foo " and astonishingly that actually works, but unfortunately it also prints “foo ” before the prompt, which is quite ugly:

user@host:~/ $ PROMPT_COMMAND='xdotool type "foo "'
foo user@host:~/ $ foo 

I also found and tried the preexec function from Ryan Caloras' bash-preexec script , but it has exactly the same problem. How to echo (a (executable-)string) to the prompt, so that the cursor flashes at the end of the line? is related, but the answers there don't make it possible to add something ( --option argument ) to the command to be executed. I didn't test zsh though – there ought to be a bash solution for such a simple thing, don't you think? | With zsh , it should just be a matter of:

precmd() print -z 'foo '

Or to avoid it overriding commands queued by the user with Alt+q :

zle-line-init() { [ -n "$BUFFER" ] || LBUFFER='foo '; }
zle -N zle-line-init

With bash , instead of xdotool , you could use the TIOCSTI ioctl :

insert() {
  perl -le 'require "sys/ioctl.ph";
            ioctl(STDIN, &TIOCSTI, $_) for split "", join " ", @ARGV' -- "$@"
}
PROMPT_COMMAND='insert "foo "'

It's preferable to xdotool because it's inserting those characters directly in the input buffer of the device bash is reading from. xdotool would only work if there's an X server running (wouldn't work on the console or real terminals or over ssh (without -X ) for instance), that it is the one identified by $DISPLAY and that's the one you're interacting with, and that the terminal emulator bash is running in has the focus when $PROMPT_COMMAND is evaluated. Now, like in your xdotool case, because the ioctl() is done before the prompt is displayed and the tty terminal line discipline is put out of icanon+echo mode by readline, you're likely to see the echo of that foo by the tty line discipline messing up the display.
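Concretely, the symptom is usually the same one the question demonstrates with xdotool: the pushed-back characters are echoed once by the terminal line discipline before the prompt is drawn, and then again by readline after it, so a session would typically look roughly like this (illustrative transcript, not verbatim output):

user@host:~/ $ PROMPT_COMMAND='insert "foo "'
foo user@host:~/ $ foo 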
You could work around that by instead inserting a string whose echo is invisible (like U+200B if using exclusively Unicode locales) and bind that to an action that inserts "foo " :

insert() {
  perl -le 'require "sys/ioctl.ph";
            ioctl(STDIN, &TIOCSTI, $_) for split "", join " ", @ARGV' -- "$@"
}
bind $'"\u200b":"foo "'
PROMPT_COMMAND="insert $'\u200b'"

Or you could delay the TIOCSTI ioctl enough for readline to have time to initialise:

insert_with_delay() {
  perl -le 'require "sys/ioctl.ph";
            $delay = shift @ARGV;
            unless(fork) {
              select undef, undef, undef, $delay;
              ioctl(STDIN, &TIOCSTI, $_) for split "", join " ", @ARGV;
            }' -- "$@";
}
PROMPT_COMMAND='insert_with_delay 0.05 "foo "'

If, like in the zsh approaches, you want to handle the case where the user enters text before the prompt is displayed, you could either drain that by doing a tcflush(0, TCIFLUSH) in perl before the TIOCSTI ioctl (also needs a -MPOSIX option), or like in the zsh approach, ensure the foo is inserted at the start of the buffer by inserting a ^A (assuming you use the emacs (default) editing mode where that moves the cursor to the beginning of the line) before foo and ^E after (to move to the end):

insert_with_delay 0.05 $'\1foo \5'

or

bind $'"\u200b":"\1foo \5"'

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246819/"
]
} |
391,796 | Both of these commands work (note the -S in sudo tells sudo to read the password from stdin):

echo 'mypassword' | sudo -S tee -a /etc/test.txt &> /dev/null
echo -e '\nsome\nmore\ntext' | sudo tee -a /etc/test.txt &> /dev/null

Now I would like to combine the two, i.e. achieve everything in just one line. But, of course, something like this doesn't work:

echo -e '\nsome\nmore\ntext' | echo 'mypassword' | sudo -S tee -a /etc/test.txt &> /dev/null

What would work? Thanks:) - Loady PS: Minor unrelated question: is 1> identical to > ? I believe they are. | This will do:

{ echo 'mypassword'; echo 'some text'; } | sudo -k -S tee -a /etc/test.txt &>/dev/null

The point is that sudo and tee use the same stdin, so both will read from the same source. We should put "mypassword" + "\n" just before anything we want to pass to tee . Explaining the command: The curly braces group the commands, so we can look at {...} as one command; whatever is in {...} writes to the pipe. echo 'mypassword' writes "mypassword\n" to the pipe; this is read by sudo later. echo 'some text' writes "some text\n" to the pipe; this is what will reach tee at the end. sudo -k -S reads the password from its stdin, which is the pipe, until it reaches "\n", so "mypassword\n" will be consumed here. The -k switch is to make sure sudo prompts for a password and ignores the user's cached credentials if sudo was used recently. tee reads from stdin and gets whatever is left in it, "some text\n". PS: About I/O redirection: Yes, you are right, 1>filename is identical to >filename . They both redirect stdout to filename . Also 0<filename and <filename are identical, both redirect stdin. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238486/"
]
} |
391,813 | Let's say I send an email, containing a link to my website, to someone who I really hope will visit it (fingers-crossed style): http://www.example.com/?utm_source=email392 or http://www.example.com/somefile.pdf?utm_source=email392 How can I make Linux trigger an action (such as sending an automated email to myself) when this URL is visited, by regularly examining /var/log/apache2/other_vhosts_access.log ? I can't do it at the PHP level because I need to do it for various sources/websites (some of them use PHP, some don't and are just links to files to be downloaded, etc.; even for the websites using PHP, I don't want to modify every index.php to do it from there, that's why I prefer an Apache log parsing method). | Live log monitoring using bash process substitution:

#!/bin/bash
while IFS=$'\n' read -r line; do
    # action here, log line in $line
    :
done < <(tail -n 0 -f /var/log/apache2/other_vhosts_access.log | \
         grep '/somefile.pdf?utm_source=email392')

Process substitution feeds the read loop with the output from the pipeline inside <(...) . The log line itself is assigned to the variable $line . Logs are watched using tail -f , which outputs lines as they are written to the logs. If your log files are moved periodically by logrotate , add the --follow=name and --retry options to watch the file path instead of just the file descriptor. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59989/"
]
} |
391,822 | I am converting a document from Pandoc markdown to .pdf. I run the conversion like this, and everything works fine:

pandoc test.MD -f markdown -o test.pdf

However, I would like pandoc to output my PDF pages in landscape, rather than portrait, format. Is there a way to do this? In the documentation , I could not find the right command (checking under Variables for LaTeX ). Adding the command \setuppapersize[letter,landscape] , which is mentioned there, seems to work only if you use the ConTeXt engine, which I have no experience with and don't have installed. I also wanted to note that I am using the \newpage command to break the file up into pages, just in case that makes a difference. I'd be grateful for any pointers! | Not sure how exactly it works if you convert from a markdown file, but for converting HTML to PDF using LaTeX, I could make the PDF landscape by adding this flag to the command: -V geometry:landscape So the complete command in your case could then be:

pandoc test.MD -V geometry:landscape -f markdown -o test.pdf

Note: as I said, I used LaTeX to convert, so I can only confirm that this one here will work:

pandoc test.html -V geometry:landscape -t latex -o test.pdf

Hope this is useful. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171456/"
]
} |
391,870 | Press Ctrl and hold it. Then, press Alt and hold it. Finally, press Delete . If you have an Ubuntu system (maybe any Debian-based system), it is very likely that your session will be locked, as you have executed the Ctrl + Alt + Delete shortcut. Now, press Delete and hold it. Then press Ctrl and hold it. Then, press Alt . The lock-session shortcut is not going to be triggered. Why is this the case? Where is this setting hard-coded? My guess: I get the impression that shortcuts work by virtually "extending the keyboard keys". Thus, selecting Ctrl or Alt opens a "new keyboard", of which you selected the key Delete . That is however not the same when you select Delete first, which belongs to the physical keyboard and not to this virtual, extended keyboard. Is this the case? | Because non-modifier actions are enacted on the key down event. This actually has almost nothing to do with keyboard hardware. Both USB and PS/2 keyboards operate the same in this respect. There is nothing in the hardware that makes so-called "modifier keys" special. Any key, with one exception, can be a modifier key or not. What determines what is a modifier key is the keyboard map employed in software in an operating system. The hardware just sends what are in effect (glossing over the details of the USB HID input report protocol actually being a bitmap of currently pressed keys that is partly encoded into an inside-out form to keep it short) key down and key up events. In a FreeBSD keyboard map, for example, one finds lines such as this:

#                                                         alt
# scan                       cntrl          alt    alt    cntrl  lock
# code  base   shift  cntrl  shift   alt    shift  cntrl  shift  state
# ------------------------------------------------------------------
…
  029   lctrl  lctrl  lctrl  lctrl   lctrl  lctrl  lctrl  lctrl   O
…
  042   lshift lshift lshift lshift  lshift lshift lshift lshift  O
…
  054   rshift rshift rshift rshift  rshift rshift rshift rshift  O
…
  056   lalt   lalt   lalt   lalt    lalt   lalt   lalt   lalt    O
…
  083   del    '.'    '.'    '.'     '.'    '.'    boot   boot    N
…

029, 042, 054, and 056 are the keyboard codes (normalized into a common system from the USB HID usage numbers and the PS/2 scancode numbers) but it is the lctrl , lshift , rshift , and lalt actions in the map that define these keys to be modifier keys. Define them with different actions and move these actions elsewhere, as indeed several out-of-the-box FreeBSD maps do, and entirely different keys are modifiers. (The exception to the rule is the Fn key, which is the one modifier that is implemented in hardware. It is implemented entirely in hardware and not seen by software at all . It does not even generate any events over the wire. There's actually another hardware modifier, too. It isn't a key. It's the state of the NumLock LED.) The action , when it is a modifier action such as this, changes the current modifier state , which (simply put) is a set of flags recorded in the operating system that record what modifiers are currently "on". As you can see from the column headings in the keyboard map, the current modifier state — in terms of "on" flags for "shift", "altgr", "control" and "alt" states — influences what action further keypresses map to. On the line for key code 083, which is the one engraved . del on the numeric keypad, you can see that only if the current modifier state is at least "alt cntrl" will the mapped action be boot . Keyboard drivers enact modifier actions upon receiving key press or key release events. Other actions, however, only take effect upon key press or autorepeat events. This is the case for the boot action, for example.
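(As a hands-on aside — this is not from the answer itself, just a sketch assuming a Linux virtual console with the stock kbd tools installed — you can poke at the same idea there, since the console's map is likewise just data loaded into the kernel:

# dump the map the kernel console is currently using
sudo dumpkeys > mymap.kmap
# the "modifier-ness" lives in the map: see which keycodes carry the Control action
grep -w Control mymap.kmap
# edit mymap.kmap, e.g. give some other keycode the Control action, then load it
sudo loadkeys mymap.kmap

The keycode numbering and file format differ from FreeBSD's kbdmap shown above, but the principle — the map, not the hardware, decides what is a modifier — is the same.)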
Only if a key press or autorepeat event for key 083 occurs and the current modifier state is already "alt cntrl"/"alt cntrl shift", does the boot action happen. It should be obvious from this that in order to get the operating system's current modifier state into that state in the first place, the lalt and lctrl / rctrl actions must have already happened , by first pressing the keys that happen to be mapped to them. (FreeBSD's system also allows for modifier locks in addition to the usual system of modifier shifts , although only two keyboard maps out-of-the-box make any use of them at all. The ISO keyboard standard also allows for modifier latches , but FreeBSD does not provide this mechanism.) FreeBSD is, as I said, an example here. But most operating systems with PS/2 or USB HID devices, from MS/PC-DOS (where the current modifier state is a well-known byte in memory) to Windows NT (where keyboard maps are kernel-mode DLLs containing code and data), work in roughly this way.

Further reading

Ubuntu 16.04 doesn't recognize Fn key
Unable to simulate Ctrl+Shift+Fn+F10 Key press
Jonathan de Boyne Pollard. "Keyboard mapping". console-fb-realizer . nosh toolset manual pages.
Kazutaka Yokota (2008-01-29). atkbd . FreeBSD manual pages.
kbdmap . §5. FreeBSD manual pages. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192321/"
]
} |
391,927 | When I press escape 4 times in a bash terminal, it displays something like this:

-bash-4.1$
Display all 2837 possibilities? (y or n):
!
./
[
[[
]]
{
}
411toppm
a2p
ac
accept
accton
aclocal
aclocal-1.11
acpi_listen

What is this feature, and how are these entries found? (On the second esc press, the terminal gives an audible alert.) | $ bind -p | grep 'complete$'
"\C-i": complete
"\M-\e": complete

This shows that the default key binding of Meta+Esc (and Ctrl+i ) in Emacs command line editing mode is the Readline function complete . The Meta key is usually Esc on keyboards without an explicit Meta key. The Readline documentation for this function says Attempt to perform completion on the text before point. The actual completion performed is application-specific. Bash, for instance, attempts completion treating the text as a variable (if the text begins with $ ), username (if the text begins with ~ ), hostname (if the text begins with @ ), or command (including aliases and functions) in turn. If none of these produces a match, filename completion is attempted. Gdb, on the other hand, allows completion of program functions and variables, and only attempts filename completion under certain circumstances. Regarding your comment to Anthon's answer: No, pressing Esc twice is not the same as pressing Tab generally (unless it's in a program that maps them both to the same action, as Readline does by default). However Ctrl+i is the same as Tab , just like Ctrl+[ is the same as Esc . This means that you can do completion with Ctrl+[ Ctrl+[ in bash if you wish, as long as double Esc is bound to the Readline complete function. This is handy if you're working at a VT220 terminal, for example, which lacks the Escape key. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391927",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227169/"
]
} |
391,942 | I've written some multi-thread test, and now I want to be sure that the highest CPU usage of this test is equal to 100 * CPU_NUMBER of current machine. Is it possible to do? UPD 0: I'm talking about Linux system. | $ bind -p | grep 'complete$'"\C-i": complete"\M-\e": complete This shows that the default key binding of Meta+Esc (and Ctrl+i ) in Emacs command line editing mode is the Readline function complete . The Meta key is usually Esc on keyboards without an explicit Meta key. The Readline documentation for this function says Attempt to perform completion on the text before point. The actual completion performed is application-specific. Bash, for instance, attempts completion treating the text as a variable (if the text begins with $ ), username (if the text begins with ~ ), hostname (if the text begins with @ ), or command (including aliases and functions) in turn. If none of these produces a match, filename completion is attempted. Gdb, on the other hand, allows completion of program functions and variables, and only attempts filename completion under certain circumstances. Regarding your comment to Anthon's answer : No, pressing Esc twice is not the same as pressing Tab generally (unless it's in a program that maps them both to the same action, as Readline does by default). However Ctrl+i is the same as Tab , just like Ctrl+[ is the same as Esc . This means that you can do completion with Ctrl+[ Ctrl+[ in bash if you wish, as long as double Esc is bound to the Readline complete function. This is handy if you're working at a VT220 terminal, for example, which lacks the Escape key: | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126073/"
]
} |
391,987 | I'm trying to list missing files in a sequence in the terminal. While many answers exist here, they are not generic enough for me to adapt them to my situation. So if you can make a generic enough answer that will work for more people, please do it. I'm doing ls {369..422}.avi >/dev/null to list missing files but I can't use * to match anything that ends with .avi. How can I do this? The numbers are not at the end of the filename but in the middle, so I need something like *numbers*.avi | ls *{369..422}*.avi >/dev/null

This will first generate patterns like

*369*.avi
*370*.avi
*371*.avi
*372*.avi
*373*.avi
*374*.avi

through the brace expansion, and then ls will be executed with these patterns, which will give you an error message for each pattern that can't be expanded to a name in the current directory. Alternatively, if you have no files that contain * in their name:

for name in *{369..422}*.avi; do
    case "$name" in
        '*'*) printf '"%s" not matched\n' "$name" ;;
    esac
done

This relies on the fact that the pattern remains unexpanded if it did not match a name in the current directory. This gives you a way of possibly doing something useful for the missing files, without resorting to parsing the error messages of ls . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119404/"
]
} |
392,035 | I would like to replace a set of characters with corresponding characters from another set, something like this:

original set: ots
"target" set: u.x

foobartest → fuubar.ex.

Translations/transliterations like this are the specialty of the tr command:

$ echo 'foobartest' | tr 'ots' 'u.x'
fuubar.ex.

Unfortunately tr doesn't support changing files in-place like sed does. I would like to use sed so I don't have to reinvent the wheel of juggling temp files. | sed has the y command that works just like tr , at least in most implementations:

$ echo 'foobartest' | sed 'y/ots/u.x/'
fuubar.ex.

The y command is part of the POSIX sed specification , so it should work on just about any platform. And since it's sed , you can have it replace a file with its edited version, sparing you the bothersome temp file business (provided your implementation of sed supports the -i option, which is not specified by POSIX):

$ sed -i 'y/ots/u.x/' some-file.txt

Currently the BSD implementation of sed does not actually mirror the behavior of tr in some corner cases. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/392035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28235/"
]
} |
392,050 | I know that I can run a command with an environment variable like this: FOO=bar mycommand I know that I can run commands in a subshell like this: (firstcommand && secondcommand) But can I somehow combine those two? FOO=bar (firstcommand && secondcommand) gives: sh: syntax error: unexpected "(" at least in busybox shell (ash). Edit: Kusalananda suggested FOO=bar sh -c 'first && second' which is indeed a solution. However, I am also interested in alternative answers because I like the subshell syntax because it doesn't require fiddling around with escaping of quotes. | One way: FOO=bar sh -c 'first && second' This sets the FOO environment variable for the single sh command. To set multiple environment variables: FOO=bar BAZ=quux sh -c 'first && second' Another way to do this is to create the variable and export it inside a subshell. Doing the export inside the subshell ensures that the outer shell does not get the variable in its environment: ( export FOO=bar; first && second ) Summarizing the (now deleted) comments: The export is needed to create an environment variable (as opposed to a shell variable). The thing with environment variables is that they get inherited by child processes. If first and second are external utilities (or scripts) that look at their environment, they would not see the FOO variable without the export . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/392050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30206/"
]
} |
392,052 | I'm working on a Bash script that will bring up to speed my shell after a fresh installation. main(){ # # By default we assume the terminal doesn't support colors # RED="" GREEN="" YELLOW="" BLUE="" BOLD="" NORMAL="" # # Check if we are connected to a terminal, and that terminal # supports colors. # if which tput >/dev/null 2>&1; then ncolors=$(tput colors) fi # # Set the colors if we can # if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then RED="$(tput setaf 1)" GREEN="$(tput setaf 2)" YELLOW="$(tput setaf 3)" BLUE="$(tput setaf 4)" BOLD="$(tput bold)" NORMAL="$(tput sgr0)" fi # # Only enable exit-on-error after the non-critical colorization stuff, # which may fail on systems lacking tput or terminfo # set -e################################################################################ printf "${YELLOW}" echo '' echo ' ____ __ ___ _____ __ ' echo ' / __ )____ ______/ /_ |__ \/__ / _____/ /_ ' echo ' / __ / __ `/ ___/ __ \__/ / / / / ___/ __ \' echo ' / /_/ / /_/ (__ ) / / / __/ / /__(__ ) / / /' echo '/_____/\__,_/____/_/ /_/____/ /____/____/_/ /_/ ' echo '' echo '' printf "${NORMAL}"################################################################################ # # Find out if Zsh is already installed # CHECK_ZSH_INSTALLED=$(grep /zsh$ /etc/shells | wc -l) # # Check to see if Zsh is already installed # if [ ! $CHECK_ZSH_INSTALLED -ge 1 ]; then sudo apt-get -y install zsh fi # # Clean the memory # unset CHECK_ZSH_INSTALLED################################################################################ # # Remove the previous config file, so we know we start from scratch # rm ~/.zshrc && # # Removing unnecessary Bash files # rm ~/.bash_history 2> /dev/null && rm ~/.bash_logout 2> /dev/null && rm ~/.bashrc 2> /dev/null && rm ~/.bash_sessions 2> /dev/null && rm ~/.sh_history 2> /dev/null &&################################################################################ # # Download the configuration file # curl -fsSL "https://raw.githubusercontent.com/davidgatti/my-development-setup/master/08_Zsh_instead_of_Bash/zshrc" >> ~/.zshrc # # Get the name of the logged in user # USER_NAME=$(whoami) # # Get the home path for the logged in user # HOME_PATH=$(getent passwd $USER_NAME | cut -d: -f6) # # Add a dynamic entry # echo 'zstyle :compinstall filename '$HOME_PATH/.zshrc'' >> ~/.zshrc}main I did run Bash -x and all works. But curl is nowhere to be seen in the resulting trace. What I tried: adding curl in a variable using eval setting " in 3 different ways etc. Question I want to download a file unsigned curl inside the bash script that I linked to, where the bash script will be executed in the following way: sh -c "$(curl -fsSL https://raw.githubusercontent.com/davidgatti/my-development-setup/master/08_Zsh_instead_of_Bash/install.sh)" sh -x output If I run sh -cx "$(curl -fsSL https://raw.githubusercontent.com/davidgatti/my-development-setup/master/08_Zsh_instead_of_Bash/install.sh)" This is the output. 
+ main+ RED=+ GREEN=+ YELLOW=+ BLUE=+ BOLD=+ NORMAL=+ which tput+ tput colors+ ncolors=256+ [ -t 1 ]+ [ -n 256 ]+ [ 256 -ge 8 ]+ tput setaf 1+ RED=+ tput setaf 2+ GREEN=+ tput setaf 3+ YELLOW=+ tput setaf 4+ BLUE=+ tput bold+ BOLD=+ tput sgr0+ NORMAL=+ set -e+ printf+ echo+ echo ____ __ ___ _____ __ ____ __ ___ _____ __+ echo / __ )____ ______/ /_ |__ \/__ / _____/ /_ / __ )____ ______/ /_ |__ \/__ / _____/ /_+ echo / __ / __ `/ ___/ __ \__/ / / / / ___/ __ \ / __ / __ `/ ___/ __ \__/ / / / / ___/ __ \+ echo / /_/ / /_/ (__ ) / / / __/ / /__(__ ) / / / / /_/ / /_/ (__ ) / / / __/ / /__(__ ) / / /+ echo /_____/\__,_/____/_/ /_/____/ /____/____/_/ /_//_____/\__,_/____/_/ /_/____/ /____/____/_/ /_/+ echo+ echo+ printf+ wc -l+ grep /zsh$ /etc/shells+ CHECK_ZSH_INSTALLED=2+ [ ! 2 -ge 1 ]+ unset CHECK_ZSH_INSTALLED+ rm /home/admin/.zshrc I personally don't see curl being executed | You have no ~/.zshrc file, therefore rm ~/.zshrc exits with a non-zero value. Since rm ~/.zshrc is the first command in a long list of commands chained with && , none of the following commands are executed. curl is the last command of this list. Solution #1: use rm -f instead of rm or don't terminate your lines with && . Moreover, you have put set -e just before your shinny banner. This makes your script exit at the first command that fails unexpectedly. Thus, removing && won't be enough. Solution #2: use rm -f or terminate your rm lines with || true or || : Conclusion: change all your rm foo 2> /dev/null && to rm -f foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197448/"
]
} |
392,059 | I installed Debian 9.1 with KDE and Chromium. In Chromium there is a built-in extension called "GNOME Shell Integration" that I cannot remove or disable (it is "installed by system administrator", which is, in theory, me). However, I do not use GNOME, and the package chrome-gnome-shell is not installed according to Aptitude. If I go to the extension's options, it says: "Although GNOME Shell integration extension is running, native host connector is not detected" , which is correct. How can I get rid of it? | sudo rm /etc/opt/chrome/policies/managed/chrome-gnome-shell.json
sudo rm /etc/chromium/policies/managed/chrome-gnome-shell.json

You may also need to delete the extension from the extensions folder in your profile path. You can find this path @ chrome://version/ in the browser. From: https://wiki.gnome.org/Projects/GnomeShellIntegrationForChrome/Installation#Troubleshooting | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
392,095 | When using rsync to copy files over the network, I give a path so that rsync will know where to put the file on the remote server:

rsync -av /home/ME/myfile user@remoteserver:/home/ME/

If I leave off the remote path, where will rsync put the file? E.g.:

rsync -av /home/ME/myfile user@remoteserver

| rsync -av /home/ME/myfile user@remoteserver

This command will not send the file to your remote server; it will just make a duplicate of /home/ME/myfile in your current working directory, and that copy will be named user@remoteserver . Just like when you want to create a backup of a file before editing it with cp , you do

cp -a /etc/fstab /etc/fstab.org

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61235/"
]
} |
392,129 | I'm very new to Linux / the command line and need to encrypt the names of 10K+ files (unique names) so they match the MD5 encrypted name in the mySQL database. I've seen how you can rename a directory of files and how to get the hash of a file ( md5sum? ) but I'm stuck on how to get the hash of the file name and then rename that file to the generated hash, retaining the extension, i.e. mynicepicture.jpg > fba8255e8e9ce687522455f3e1561e53.jpg It seems like it should be a simple rename or mv line but I can't get my head around it. Many thanks for your insights PS I've seen the use of Perl functions in a few examples close to what I'm looking for but have no idea where / how to use those. | You didn't say which shell you want to use, so I'm just assuming Bash – the answer needs adjustments to work with other shells.

for i in *; do sum=$(echo -n "$i"|md5sum); echo -- "$i" "${sum%% *}.${i##*.}"; done

Script version:

for i in *; do
    sum=$(echo -n "$i" | md5sum)
    echo -- "$i" "${sum%% *}.${i##*.}"
done

This simple for loop takes every file in the current directory, computes the md5 sum of its name and outputs it. Use this to check the functionality; if you want to start renaming, replace the second echo by mv .

Explanations

echo -n "$i" | md5sum – calculate md5 sum of the full file name including the file extension ( Piping ), to strip the extension change echo -n "$i" to one of the following:

${i%%.*}
sed 's/\..*//' <<< "$i"
echo "$i" | sed 's/\..*//'

sum=$(…) – execute … and save the output in $sum ( Command Substitution )

${sum%% *} – output everything until the first space ( Parameter Substitution ), the same as one of the following:

$(sed 's/ .*//' <<< "$sum")
$(echo "$sum" | sed 's/ .*//')

${i##*.} – output everything after the last dot (Parameter Substitution), the same as one of the following:

$(sed 's/.*\.//' <<< "$i")
$(echo "$i" | sed 's/.*\.//')

If you need to rename files recursively in different folders, use find with the -exec option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/392129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251094/"
]
} |
392,149 | I have a table in Linux:

A 0
A 0
A 0
B 0
B 1
B 0
B 1
B 0

I want to extract lines that appear consecutively 3 times or more. My expected output is:

A 0

Actually, 3 times or more is just a simplified example. The actual situation is that I want to extract lines that appear consecutively 30 times or more. Any idea? Thank you! | uniq -c file | awk '$1 >= 3 { print $2,$3 }'

The uniq -c will output each line together with a count of how many times that line occurs consecutively. For the given data, it will produce

      3 A 0
      1 B 0
      1 B 1
      1 B 0
      1 B 1
      1 B 0

The awk script will take this and output the last two fields if the first field is greater than or equal to 3. The result will be

A 0

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240505/"
]
} |
392,152 | I'd like to decrypt multiple files at once and check gpg status in order to verify their consistency. The problem is that --output option doesn't work with --multiline argument and normal STDOUT redirection is ignored. find -name '*.gpg' | gpg --multifile --decrypt >/dev/null Redirection ignored. Normal files are created. >find -name '*.gpg' | gpg --multifile --decrypt --output=/dev/nullgpg: --output doesn't work for this command How to achieve this goal with single gpg call? | uniq -c file | awk '$1 >= 3 { print $2,$3 }' The uniq -c will output each line together with a count of how many times that line occurs consecutively. For the given data, it will produce 3 A 0 1 B 0 1 B 1 1 B 0 1 B 1 1 B 0 The awk script will take this and output the last two fields if the first field is greater than or equal to 3. The result will be A 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142725/"
]
} |
392,169 | I need to create a purge script to remove any foreign directories from a specific list of directories. My idee was to do something like this : #!/bin/ksh find /data/${USER}/SAS/ -type d ! -name 'SE' | find /data/${USER}/SAS/ -type d ! -name 'Rejet' | find /data/${USER}/SAS/ -type d ! -name 'Acq' | find /data/${USER}/SAS/ -type d ! -name 'Archiv' | find /data/${USER}/SAS/ -type d ! -name 'Cloture' | find /data/${USER}/SAS/ -type d ! -name 'Emis' | find /data/${USER}/SAS/ -type d ! -name 'Ident' | find /data/${USER}/SAS/ -type d ! -name 'Irr*' | find /data/${USER}/SAS/ -type d ! -name 'Recep*' and then -type f -exec rm {} \; but don't really know how to do this. | uniq -c file | awk '$1 >= 3 { print $2,$3 }' The uniq -c will output each line together with a count of how many times that line occurs consecutively. For the given data, it will produce 3 A 0 1 B 0 1 B 1 1 B 0 1 B 1 1 B 0 The awk script will take this and output the last two fields if the first field is greater than or equal to 3. The result will be A 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209161/"
]
} |
392,186 | I see this folder structure on Ubuntu. The volumes are identified by the size of each volume: three volumes are listed with sizes. One volume is for files shared between Ubuntu and Kali, and the other two belong to Kali Linux. It is not nice to see them that way and have to remember the purpose of each volume. Can I see a name there instead? After making this change , the volumes Kali, Kali_Home and Shared are visible, which makes more sense than volume sizes. | You can use tune2fs to check and create volume names for extN filesystems.

Read current label:

tune2fs -l /dev/sda1 | grep 'volume name'
Filesystem volume name:   root

Set a new label:

tune2fs -L vmguest_root /dev/sda1
tune2fs -l /dev/sda1 | grep 'volume name'
Filesystem volume name:   vmguest_root

This assumes that whatever GUI you're using to display these volumes actually looks at the filesystem's volume name, of course. (But without knowing what you're using I can't give you a definitive answer.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31506/"
]
} |
392,191 | Normally diff and git diff show both the original and the modified line with - and + respectively. Is there any way I can filter the output so I only see the modified lines? This would reduce the number of lines to read by a factor of 2 instantly. I was assuming git diff test.yml | grep '^+' | less -R and git diff test.yml | egrep '^+' | less -R would have the same result, i.e. they would show any new additions in a file. However egrep shows me the entire file. Why is that so? With the above method, anyway, I lose the color. Is there any way to retain the color? | You can use --word-diff to condense the + and - lines together with the changes highlighted using red/green text, and stop using grep altogether. You can combine this with -U0 to remove all context around the diffs if you really want to condense it down further. This approach is better than using grep as you don't lose output, you can tell when a line was added or simply changed, and you don't completely lose removals, while still condensing the output down into something that is easy to read. The question regarding egrep is already answered by @Stephen Kitt here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119007/"
]
} |
392,236 | I have a software project that uses a specific directory structure for configuration files. Symlinks are used to point to the config files currently in use. I'm in the process of making a custom installer script for CentOS 7. I use another script to bundle the directory structure and the installer script. The bundle script uses rsync to copy the directory structure with all default symlinks intact. It also excludes the hidden svn folders. rsync -a --exclude=".*" [sourceFolder] [bundleFolder] The install script uses cp to install the directory structure (default symlinks intact) to the user specified location. cp -rP [bundleFolder] [installLocation] This all works great. However, I also need the installer script to be able to update an existing installation. The problem with this is that I need to be able to update the config files without altering the symlinks that the user has in place. Is there a way to copy the entire directory structure (all folders and sub-folders) but ignore any symlinks? I'm trying to avoid having to use find to parse the entire structure in a bash script just to ignore the symlinks. I assumed that this would be a common task that cp or rsync would have an option for. I haven't been able to find one though. | Moved from question into answer: As h3rrmiller pointed out, I was able to achieve this with rsync by using the --no-links option. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251169/"
]
} |
392,284 | My machine has an SSD, where I installed the system, and an HDD, which I use as storage for large and/or infrequently used files. Both are encrypted, but I chose to use the same passphrase for them. The SSD is mounted at / and the HDD at /usr/hdd (individual users each have a directory on it and can symlink as they like from their home directory). When the system is booted, it immediately asks for the passphrase for the SSD, and just a couple of seconds later for the one for the HDD (it is auto-mounted). Given that both passphrases are the same, is there a way to configure the system to ask just once? | Debian based distributions: Debian and Ubuntu ship a password caching script decrypt_keyctl with the cryptsetup package. The decrypt_keyctl script provides the same password to multiple encrypted LUKS targets, saving you from typing it multiple times. It can be enabled in crypttab with the keyscript=decrypt_keyctl option. The same password is used for targets which have the same identifier in the keyfile field. On boot the password for each identifier is asked once. An example crypttab :

<target>     <source>        <keyfile>    <options>
part1_crypt  /dev/disk/...   crypt_disks  luks,keyscript=decrypt_keyctl
part2_crypt  /dev/disk/...   crypt_disks  luks,keyscript=decrypt_keyctl

The decrypt_keyctl script depends on the keyutils package (which is only suggested, and therefore not necessarily installed). After you've updated your crypttab , you will also have to update the initramfs to apply the changes. Use update-initramfs -u . The full readme for decrypt_keyctl is located in /usr/share/doc/cryptsetup/README.keyctl Unfortunately, this currently doesn't work on Debian systems using systemd init due to a bug (other init systems should be unaffected). With this bug you're asked a second time for the password by systemd, making it impossible to unlock remotely via ssh. The Debian crypttab man page suggests as a workaround to use the initramfs option to force processing in the initramfs stage of boot. So to circumvent this bug, an example /etc/crypttab in Debian is:

<target>     <source>        <keyfile>    <options>
part1_crypt  /dev/disk/...   crypt_disks  luks,initramfs,keyscript=decrypt_keyctl
part2_crypt  /dev/disk/...   crypt_disks  luks,initramfs,keyscript=decrypt_keyctl

Distributions which do not provide the decrypt_keyctl script: If decrypt_keyctl isn't provided by your distribution, the device can be unlocked using a keyfile stored on the encrypted root file system. This works when the root file system can be unlocked and mounted before any of the other encrypted devices. LUKS supports multiple key slots. This allows you to alternatively unlock the device using a password if the key file is unavailable/lost. Generate the key with random data and set its permissions to owner-readable only to avoid leaking it. Note that the key file needs to be on the root partition, which is unlocked first.

dd if=/dev/urandom of=<path to key file> bs=1024 count=1
chmod u=rw,g=,o= <path to key file>

Add the key to your LUKS device:

cryptsetup luksAddKey <path to encrypted device> <path to key file>

Configure crypttab to use the key file. The first line should be the root device, since devices are unlocked in the same order as listed in crypttab . Use absolute paths for key files.

<target>     <source>        <keyfile>             <options>
root_crypt   /dev/disk/...   none                  luks
part1_crypt  /dev/disk/...   <path to key file>    luks

| {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/392284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
392,393 | When I move a single file with spaces in the filename it works like this:

$ mv "file with spaces.txt" "new_place/file with spaces.txt"

Now I have a list of files which may contain spaces and I want to move them. For example:

$ echo "file with spaces.txt" > file_list.txt
$ for file in $(cat file_list.txt); do mv "$file" "new_place/$file"; done;
mv: cannot stat 'file': No such file or directory
mv: cannot stat 'with': No such file or directory
mv: cannot stat 'spaces.txt': No such file or directory

Why does the first example work, but the second one does not? How can I make it work? | Never, ever use for foo in $(cat bar) . This is a classic mistake, commonly known as bash pitfall number 1 . You should instead use:

while IFS= read -r file; do mv -- "$file" "new_place/$file"; done < file_list.txt

When you run the for loop, bash will apply wordsplitting to what it reads, meaning that a strange blue cloud will be read as a , strange , blue and cloud :

$ cat files
a strange blue cloud.txt
$ for file in $(cat files); do echo "$file"; done
a
strange
blue
cloud.txt

Compare to:

$ while IFS= read -r file; do echo "$file"; done < files
a strange blue cloud.txt

Or even, if you insist on the UUoC :

$ cat files | while IFS= read -r file; do echo "$file"; done
a strange blue cloud.txt

So, the while loop will read over its input and use the read to assign each line to a variable. The IFS= sets the input field separator to NULL * , and the -r option of read stops it from interpreting backslash escapes (so that \t is treated as backslash + t and not as a tab). The -- after the mv means "treat everything after the -- as an argument and not an option", which lets you deal with file names starting with - correctly. * This isn't necessary here, strictly speaking, the only benefit in this scenario is that it keeps read from removing any leading or trailing whitespace, but it is a good habit to get into for when you need to deal with filenames containing newline characters, or in general, when you need to be able to deal with arbitrary file names. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/392393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251281/"
]
} |
392,436 | I am trying to use a while loop at the csh shell command prompt in RHEL 7.2, but am getting the below error:

$ while true
while: Expression Syntax.

The same works in the bash shell. | The syntax of while loops in csh is different from that of Bourne-like shells. It's:

while (arithmetic-expression)
   body
end

When csh is interactive, for some reason, that end has to appear on its own on a line. For the arithmetic-expression to test on the success of a command, you need { cmd } (spaces are required). { cmd } in arithmetic expressions resolves to 1 if the command succeeded (exited with a 0 exit status) or 0 otherwise (if the command exited with a non-zero exit status). So:

while ({ true })
   body
end

But that would be a bit silly especially considering that true is not a built-in command in csh . For an infinite loop, you'd rather use:

while (1)
   body
end

By contrast, in POSIX shells, the syntax is:

while cmd; do
   body
done

And if you want the condition to evaluate an arithmetic expression, you need to run a command that evaluates them like expr , or ksh 's let / ((...)) or the test / [ command combined with $((...)) arithmetic expansions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99923/"
]
} |
392,475 | I'm trying to understand Linux, its command-line and this quote: You can run into problems with globsbecause .* matches . and .. (the current and parent directories). You may wish to use a pattern such as .[^.]* or .??* to get all dot files except the current and parent directories. Source: How Linux Works: What Every Superuser Should Know by Brian Ward, 2nd Edition , Section 2.7 “Dot Files”, page 23 (Requires registration/login) or How Linux Works: What Every Superuser Should Know by Brian Ward, 3rd Edition , Section 2.7 “Dot Files”, page 22 When exactly (in what command) would you use .[^.]* or .??* ? | That's to work around a bug/misfeature in shells other than zsh , fish and the descendants of the Forsyth shell (including pdksh and derivatives)¹, whereby the expansion of the .* glob includes . and .. (on systems (most, unfortunately) where readdir() returns them) With those shells, chmod -R og-rwx .* for instance would recursively remove rwx permissions to the current ( . ) and parent ( .. ) directories instead of just the hidden files and directories in the current directory. It's particularly bad for commands that do things recursively or act on directories like ls .* , chown -R .* , find .* , grep -r blah .* but it's still annoying for most other commands and I can't think of any commands for which you'd want to have those . and .. included in the list of files passed to them. A safeguard had to be added to the rm utility to work around that misfeature as too many people were tripping on rm -rf .* . With * added, it's also used to pass all files (hidden or not) as arguments to a command ( cmd -- .[!.]* ..?* * ), for which you'll find other workarounds depending on the shell . The .[^.]* glob ( .[!.]* in Bourne/POSIX shells) excludes . (as it matches on filenames with at least two characters) and .. (as the second character is . which doesn't match [^.] ), but also excludes files like ..foo , for which you need the second glob ..?* . Those . and .. are tools for directory traversal, it's a mistake that they should be listed like ordinary files. POSIX requires them to be understood in path components (like in open(".") , stat("foo/../bar") ) but not necessarily be implemented as directory entries nor included in readdir() . Still, most systems still do implement those like in the early Unices as hard links, and most of those that don't will still fake entries for them in the output of getdents() / readdir() . With bash , an alternative is to turn the dotglob option on and use: chmod -R og-rwx [.]* (though beware that if there's no non-hidden file, it could change the permissions of the [.]* file unless you had the failglob option on to mimic the behaviour of zsh / fish ). As a history note, filenames starting with . being hidden files were born from a coding mistake from someone trying to skip . and .. in the first place . It's ironical that when trying to do things with hidden files we would run into the same problem. ¹ see also the globskipdots option in bash 5.2+ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251330/"
]
} |
392,512 | I'm an OpenBSD user. In the OpenBSD FAQ it says: OpenBSD is a complete system, intended to be kept in sync. It is not a kernel plus utilities that can be upgraded separately from each other. When you upgrade a system, you do so in one go; the kernel and the base system is replaced. Then you go and update your 3rd party packages . If compiling from source , you recompile the kernel and boot it. Then you rebuild the base system, and then the packages that you've got installed. If more than a couple of weeks/months have past since you last rebuilt everything, you first install a snapshot and rebuild from there (if you're following the most current CVS branch). Having an out of sync kernel, base system and/or 3rd party packages is a potential source of issues and more or less disqualifies you from getting any serious help from the official mailing lists. I'm quite okay with this. In fact, this is one of the reasons I use OpenBSD. It makes the system a consistent unit and it makes it easy for me to form a mental overview of it. What's it like on Linux? Most Linuxes that I'm aware of don't have a "base system" in the same sense as the BSDs, but rather a collection of packages assembled by the distribution provider. Further software is then added to this by a local administrator in such a way that the boundary between what was there from the start and what was added later is, at best, blurry. Does Linux (in general) not have a strong kernel to userspace coupling? The kernel is updated, as far as I know, like any other software package, and it confuses me slightly that this is at all possible. Add to this the fact that some even compile custom kernels (which is discouraged on OpenBSD), and have a multitude of various kernel versions listed in their boot menus. Who or what guarantees that the various subsystems of a Linux system are able to cooperate with each other even though they are updated independently from each other? The reason I'm asking is because another user on this site asked me whether replacing the kernel in his Linux system with a newer version "would be doable". Coming from the OpenBSD side of things, I couldn't say that yes, this would be guaranteed to not break the system. I use "Linux" above as a shorthand for "Linux distribution", kernel + utilities. | Linus Torvalds has a very strong opinion against kernel changes resulting in userspace regressions (see the question " The Linux kernel: breaking user space " for details). Interface between userspace and kernel is provided by system calls. Newer kernels can have more system calls, and changes in exiting ones when those changes do not break existing applications. When a system call interface has a flag parameter, new kernels often expose the new functionality with a new bit flag. This way kernel maintains backwards compatibility to old applications. When it has not been possible to alter existing interface without breaking userspace, additional system calls have been added that provide the extended functionality. This is why there are three versions of dup and two versions of umount system call. The policy of having a stable userspace is the reason why kernel updates rarely cause issues in userspace applications and you do not generally expect issues after upgrading the kernel. However same API stability is not guaranteed for kernel interfaces and other implementation details . Sysfs (on /sys ) and procsfs (on /proc/ ) expose kernel implementation details on runtime configuration, hardware, network, processes etc. 
which are used by low-level applications. It is possible for those interfaces to change in an incompatible way between kernel versions if there is a good reason to. Changes still try to minimize incompatibilities if possible, and there are rules for how applications can use the interfaces in a way that is least likely to cause issues. The impact is also limited because non-low-level applications shouldn't be using these interfaces. @PeterCordes pointed out that if a change in procfs or sysfs breaks an application used by your distribution's init scripts, you could have a problem. This depends somewhat on how your distribution updates the kernel (long term support or mainline), and even then the issues are relatively rare, as distributions usually ship the updated tools at the same time. @StephenKitt added that upgraded userspace might require a newer version of the kernel, in which case the system might not be able to boot with the old kernel, and that distribution release notes mention this when appropriate. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/392512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
392,528 | Lets say I've got the following directory structure: base/|+-- app| || +-- main| || +-- sub| || +-- first| | || | +-- tib1.ear| | \-- tib1.xml| || \-- second| || +-- tib2.ear| \-- tib2.xml One of the relative paths to an ear file would be base/app/main/sub/first/tib1.ear , how could I extract the substrings for: The file, tib1.ear or tib2.ear The sub-directory after base/app/ but not including the file,that being main/sub/first or main/sub/second All of the directory names are dynamically generated, so I don't know them beyond base/app/ , and therefore cannot simply use the lengths of the known sub-strings and use cut to truncate them; but I see how it could be possible once the filenames are known. I just feel like there's an easier way than cutting and joining a bunch of strings based on the length of other results. I remember seeing some regular expression magic for something similar to this. It dealt with splitting and joining the substrings with backslashes, but sadly, I don't remember how they did it or where I saw it on here to look it up again. | Let's start with your filename: $ f=base/app/main/sub/first/tib1.ear To extract the base name: $ echo "${f##*/}"tib1.ear To extract the desired part of the directory name: $ g=${f%/*}; echo "${g#base/app/}"main/sub/first ${g#base/app/} and ${f##*/} are examples of prefix removal . ${f%/*} is an example of suffix removal . Documentation From man bash : ${parameter#word} ${parameter##word} Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``#'' case) or the longest matching pattern (the ``##'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. ${parameter%word} ${parameter%%word} Remove matching suffix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``%'' case) or the longest matching pattern (the ``%%'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable sub‐ scripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. Alternatives You may also want to consider the utilities basename and dirname : $ basename "$f"tib1.ear$ dirname "$f"base/app/main/sub/first | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228827/"
]
} |
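A minimal sketch tying the two expansions from the answer above together in a single pass over the tree; the find invocation, the loop, and the printf labels are illustrative additions, and it assumes none of the dynamically generated names contain newlines.

find base/app -type f -name '*.ear' | while IFS= read -r f; do
    file=${f##*/}            # strip the leading directories  -> tib1.ear
    dir=${f%/*}              # drop the file name             -> base/app/main/sub/first
    dir=${dir#base/app/}     # drop the known prefix          -> main/sub/first
    printf 'file=%s  dir=%s\n' "$file" "$dir"
done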
392,532 | First crack at using CUPS to setup a print server, I have a HP Photosmart C4280 but it's not listed as a printer in my CUPS install (and looking at the listed Photosmart prints none are even similar to mine). Any ideas which printer I could choose instead, or where I can find a PPD file for this printer? | Let's start with your filename: $ f=base/app/main/sub/first/tib1.ear To extract the base name: $ echo "${f##*/}"tib1.ear To extract the desired part of the directory name: $ g=${f%/*}; echo "${g#base/app/}"main/sub/first ${g#base/app/} and ${f##*/} are examples of prefix removal . ${f%/*} is an example of suffix removal . Documentation From man bash : ${parameter#word} ${parameter##word} Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``#'' case) or the longest matching pattern (the ``##'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. ${parameter%word} ${parameter%%word} Remove matching suffix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``%'' case) or the longest matching pattern (the ``%%'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable sub‐ scripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. Alternatives You may also want to consider the utilities basename and dirname : $ basename "$f"tib1.ear$ dirname "$f"base/app/main/sub/first | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251371/"
]
} |
392,655 | I want to output two text files in two columns — one on the left side and the other one on the right. paste doesn't solve the problem, because it only inserts a character as delimiter, so if the first file has lines of different lengths the output will be twisted: $ cat file1looooooooong lineline$ cat file2helloworld$ paste file1 file2looooooooong line helloline world If there was a command to add trailing spaces like fmt --add-spaces --width 50 the problem would be solved (1) : $ paste <(fmt --add-spaces --width 50 file1) file2looooooooong line helloline world But I don't know a simple way to do this. So how do I merge files horizontally and print them to standard output without twisting? In fact I just want to read them side-by-side. (1) UPD: a command to add trailing spaces does exist, e. g. xargs -d '\n' printf '%-50s\n' . But running $ paste <(add-trailing-spaces file1) file2 won't produce the expected visual output when file1 has fewer lines than file2 . | What about paste file{1,2}| column -s $'\t' -tn ? looooooooong line line helloline world This tells column to use Tab as the column separator; we take it from the paste command, where it is the default separator when none is specified. Generally: paste -d'X' file{1,2}| column -s $'X' -tn where X means any single character. You need to choose one that is guaranteed not to occur in your files. The -t option is used to determine the number of columns the input contains. This will not add a long run of tabs between the two files as the other answers do. It will also work even if there are empty lines in file1, and it will not print the second file in the print area of file1; see the input/output below. Input file1: looooooooong lineline Input file2: helloworld Output: looooooooong line hello worldline | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230365/"
]
} |
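For completeness, a hedged awk sketch that sidesteps the separator-character question entirely by padding file1 to its longest line; file1 and file2 are the names from the question, and the two-space gap is an arbitrary choice.

# first pass stores file1 and its maximum width, second pass stores file2,
# END prints them side by side with a computed format string
awk 'NR==FNR { a[FNR]=$0; if (length($0)>w) w=length($0); n=FNR; next }
     { b[FNR]=$0; if (FNR>n) n=FNR }
     END { fmt="%-" w "s  %s\n"; for (i=1;i<=n;i++) printf fmt, a[i], b[i] }' file1 file2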
392,697 | I made a file descriptor using mkfifo fifo As soon as something is written to this pipe, I want to reuse it immediately . Should I use tail -f fifo or while true; do cat fifo; done ? They seem to do the same thing and I could not measure a difference in performance. However, when a system does not support inotify (Busybox, for example), the former needs to be tail -f -s 0 fifo But this eats up the CPU with 100% usage (test it out: mkfifo fifo && busybox tail -f -s 0 fifo & echo hi>fifo / cancel with fg 1 and Ctrl C ). So is the while-true-cat the more reliable solution? | When you do: cat fifo Assuming no other process has opened the fifo for writing yet, cat will block on the open() system call. When another process opens the file for writing, a pipe will be instantiated and open() will return. cat will call read() in a loop and read() will block until some other process writes data to the pipe. cat will see end-of-file (eof) when all the other writing processes have closed their file descriptor to the fifo . At which points cat terminates and the pipe is destroyed¹. You'd need to run cat again to read what will be written after that to the fifo (but via a different pipe instance). In: tail -f file Like cat , tail will wait for a process to open a file for writing. But here, since you didn't specify a -n +1 to copy from the beginning, tail will need to wait until eof to find out what the last 10 lines were, so you won't see anything until the writing end is closed. After that, tail will not close its fd to the pipe which means the pipe instance won't be destroyed, and will still attempt to read from the pipe every second (on Linux, that polling can be avoided via the use of inotify and some versions of GNU tail do that there). That read() will return with eof (straight away, which is why you see 100% CPU with -s 0 (which with GNU tail means to not wait between read() s instead of waiting for one second)) until some other process opens the file again for writing. Here instead, you may want to use cat , but make sure the pipe instance always stays around after it has been instantiated. For that, on most systems, you could do: cat 0<> fifo # the 0 is needed for recent versions of ksh93 where the # default fd changed from 0 to 1 for the <> operator cat 's stdin will be open for both reading and writing which means cat will never see eof on it (it also instantiates the pipe straight away even if there's no other process opening the fifo for writing). On systems where that doesn't work, you can do instead: cat < fifo 3> fifo That way, as soon as some other process opens the fifo for writing, the first read-only open() will return, at which point the shell will do the write-only open() before starting cat , which will prevent the pipe from ever being destroyed again. So, to sum up: compared to cat file , it would not stop after the first round. compared to tail -n +1 -f file : it would not do a useless read() every second after the first round, there would never be eof on the one instance of the pipe, there would not be that up to one second delay when a second process opens the pipe for writing after the first one has closed it. compared to tail -f file . In addition to the above, it would not have to wait for the first round to finish before outputting something (only the last 10 lines). compared to cat file in a loop, there would be only one pipe instance. The race windows mentioned in ¹ would be avoided. 
¹ at this point, in between the last read() that indicates eof and cat terminating and closing the reading end of the pipe, there is actually a small window during which a process could open the fifo for writing again (and not be blocked, as there's still a reading end). Then, if it writes something after cat has exited and before another process opens the fifo for reading, it would get killed with a SIGPIPE. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/392697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103120/"
]
} |
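A small sketch of the recommended form used as a long-running collector; /tmp/log.fifo and /tmp/log.out are made-up paths. Opening a fifo read-write is not specified by POSIX, but it is exactly what the 0<> trick above relies on, and it works on Linux.

mkfifo /tmp/log.fifo
# Reader: fd 0 is opened read-write, so cat never sees eof and the
# single pipe instance survives writers coming and going.
cat 0<> /tmp/log.fifo >> /tmp/log.out &
# Writers can open and close the fifo as often as they like:
echo hello > /tmp/log.fifo
echo world > /tmp/log.fifo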
392,701 | fdisk(8) says: The device is usually /dev/sda, /dev/sdb or so. A device name refers to the entire disk. Old systems without libata (a library used inside the Linux kernel to support ATA host controllers and devices) make a difference between IDE and SCSI disks. In such cases the device name will be /dev/hd* (IDE) or /dev/sd* (SCSI). The partition is a device name followed by a partition number. For example, /dev/sda1 is the first partition on the first hard disk in the system. See also Linux kernel documentation (the Documentation/devices.txt file). Based on this, I understand that in the context of Linux, a string like /dev/hda or /dev/sda is a "device name". Otherwise, the second sentence I have emphasised above does not make sense: it would instead say, " For example, sda1 is the first partition on the first hard disk in the system. " This view is corroborated by the Linux Partition HOWTO : By convention, IDE drives will be given device names /dev/hda to /dev/hdd . Is there a technically correct (and, preferably, unambiguous and concise) English term for the substring hda or sda of such a device name? For example, would it be correct in this case to call sda the drive's: "short name"; or "unqualified device name"; or something else? (I am not asking for colloquialisms that are technically incorrect, even if they are in common use.) | sda is the device name . /dev/sda is the device path . Think of /sbin/fdisk , fdisk is the file name , while /sbin/fdisk is the file path . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/392701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
392,826 | I've got a BeagleBone Black that runs Debian 9.1. We wrote a C++ program for its GPIOs and we want this program to always run when the system turns on. How can we do that? | An extremely simple solution would be to add a @reboot cron job that just runs the binary. Do crontab -e for the user that needs to run the code (e.g. sudo crontab -e for root's crontab), and add the line @reboot /path/to/some/executable This will schedule the job to run each time the system has booted up. See the crontab(5) manual for more info ( man 5 crontab ). Depending on what the program does, this may be enough, or it may be too simplistic. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251597/"
]
} |
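A sketch of what that crontab entry might look like with the program's output captured to a file; the paths are placeholders for wherever the compiled GPIO binary actually lives.

# crontab -e, as the user that should own the process
@reboot /home/debian/gpio-daemon >> /home/debian/gpio-daemon.log 2>&1
# note: cron only starts it once per boot; it will not restart the program if it crashes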
392,828 | To flatten a directory structure, I can do this: find . -type f -exec sh -c 'mv "{}" "./`basename "{}"`"' \; I want to store the following in my profile as $FLATTEN -exec sh -c 'mv "{}" "./`basename "{}"`"' \; so that later I can just execute find . $FLATTEN I'm having trouble storing the variable because it gets interpreted too early. I want it to be stored as a string literal and interpreted only in usage on the shell, not when sourced. | If using GNU mv , you should rather do: find . -type f -exec mv -t . {} + With other mv s: find . -type f -exec sh -c 'exec mv "$@" .' sh {} + You should never embed {} in the sh code. That's a command injection vulnerability as the names of the files are interpreted as shell code (try with a file called `reboot` for instance). Good point for quoting the command substitution, but because you used the archaic form ( `...` as opposed to $(...) ), you'd need to escape the inner double quotes or it won't work in sh implementations based on the Bourne shell or AT&T ksh (where "`basename "foo bar"`" would actually be treated as "`basename " (with an unmatched ` which is accepted in those shells) concatenated with foo and then bar"`" ). Also, when you do: mv foo/bar bar If bar actually existed and was a directory, that would actually be a mv foo/bar bar/bar . mv -t . foo/bar or mv foo/bar . don't have that issue. Now, to store those several arguments ( -exec , sh , -c , exec mv "$@" . , sh , {} , + ) into a variable, you'd need an array variable. Shells supporting arrays are (t)csh , ksh , bash , zsh , rc , es , yash , fish . And to be able to use that variable as just $FLATTEN (as opposed to "${FLATTEN[@]}" in ksh/bash/yash or $FLATTEN:q in (t)csh ), you'd need a shell with a sane array implementation: rc , es or fish . Also zsh here as it happens none of those arguments is empty. In rc / es / zsh : FLATTEN=(-exec sh -c 'exec mv "$@" .' sh '{}' +) In fish : set FLATTEN -exec sh -c 'exec mv "$@" .' sh '{}' + Then you can use: find . -type f $FLATTEN | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118708/"
]
} |
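For readers who stay on bash rather than switching to rc/es/fish/zsh, a sketch of the array form the answer alludes to; the only difference is that the expansion has to be written out as the quoted [@] form.

# ~/.bashrc
FLATTEN=(-exec sh -c 'exec mv "$@" .' sh '{}' +)
# usage (bash cannot get away with a bare $FLATTEN here):
find . -type f "${FLATTEN[@]}"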
392,833 | I have a daemon ( apache/samba/vsftpd/... ) running on SELinux enabled system and I need to allow it to use files in a non-default location. The standard file permissions are configured to allow access. If the daemon is running in permissive mode, everything works. When set back to enforcing it doesn't work anymore and I get a SELinux AVC denial messages . How can I configure the system to allow the access in enforcing mode? | Background SELinux adds another layer of permission checks on Linux systems. On SELinux enabled systems regular DAC permissions are checked first, and if they permit access, SELinux policy is consulted. If SELinux policy denies access, a log entry is generated in audit log in /var/log/audit/audit.log or in dmesg if auditd isn't running on the system. SELinux assigns a label, called security context , to every object (file, process, etc) in the system: Files have security context stored in extended attributes. These can be viewed with ls -Z . SELinux maintains a database mapping paths patterns to default file contexts. This database is used when you need to restore default file contexts manually or when the system is relabeled. This database can be queried with semanage tool. Processes are assigned a security context when an executable is run ( execve syscall). Process security contexts can be viewed with most system monitoring tools, for example with ps Z $PID . Other labeled objects also exist, but are not relevant to this answer. SELinux policy contains the rules that specify which operations between contexts are allowed. SELinux operates on whitelist rules, anything not explicitly allowed by the policy is denied. The reference policy contains policy modules for many applications and it is usually the policy used by SELinux enabled distributions. This answer is primarily describing how to work with a policy based on the reference policy, which you are most likely using if you use the distribution provided policy. When you run your application as your normal user, you probably do not notice SELinux, because default configuration places the users in unconfined context. Processes running in unconfined context have very few restrictions in place. You might be able to run your program without issues in user shell in unconfined context, but when launched using init system it might not work anymore in a restricted context. Typical issues When files are in a non-default location (not described in default policy) the issues are often related to the following reasons: Files have incorrect/incompatible file context : Files moved with mv keep their metadata including file security contexts from old location. Files created in new location inherited the context from parent directory or creating process. Having multiple daemons using the same files : The default policy does not include rules to allow the interaction between the security contexts in question. Files with incorrect security context If the files are not used by another daemon (or other confined process) and the only change is the location where files are stored, the required changes to SELinux configuration are: Add a new rule to file context database Apply correct file context to existing files The file context on the default location can be used as template to for the new location. Most policy modules include man page documentation (generated using sepolicy manpages ) explaining possible alternative file contexts with their access semantics. 
The file context database uses regular expression syntax, which allows writing overlapping specifications. It is worthwhile to note that the applied context is the last specification found [src] . To add a new entry to the file context database: semanage fcontext -a -t <type> "/path/here/(/.*)?" After the new context entry is added to the database, the context from the database can be applied to your files using restorecon <files> . Running restorecon with -vn flags will show what file contexts would be changed without applying any changes. Testing a new file context without adding a new entry in the database Context can be changed manually with the chcon tool. This is useful when you want to test the new file context without adding an entry to the file context database. The new file context is specified in the arguments to chcon . When used with the --reference= option, the security context from a reference file is copied to the target files. Using a specific context ( default_t ): chcon -t default_t <target files> or using a reference: chcon --reference=<path to default location> <target files> Note about different file systems & mount points If the new location is its own mount point, the context can be set with a mount option . A context set with the mount option isn't stored on disk, so it can also be used with file systems that do not support extended attributes. mount <device> <mount point> -o context="<context>" Allowing processes running in different security contexts to use the same files Option 1: Booleans The reference policy includes tunable options, called booleans , which enable/disable certain additional rules. Many of them allow inter-operation of different system daemons which usually do not use the same files. All possible tunable options and their descriptions can be listed using semanage boolean -l . audit2allow might also be able to directly tell which boolean needs to be enabled. To enable/disable a boolean using semanage : semanage boolean --on <boolean name>semanage boolean --off <boolean name> Booleans are the simplest way to modify the policy. However, not all possible situations can be addressed by toggling a boolean. Some booleans also allow very broad access, being overly permissive. Option 2: Extend policy with a new module If no boolean exists to allow the access, the policy needs to be modified by adding a custom module. A simple module adding the required rules to allow access can be generated from log files using audit2allow with the following steps: Set the daemon's domain (security context) to permissive mode . In permissive mode the policy isn't enforced , but logs are generated for accesses the policy would normally deny. semanage permissive -a <domain> Test your daemon in normal operation to generate log entries. Create a new policy module and insert it. audit2allow -a -M <name>semodule -i <name>.pp Re-enable enforcing mode semanage permissive -d <domain> This method works best when there are only a few security contexts involved. In a complex configuration you quite likely have to write your own policy module. Some resources for getting started are the Gentoo wiki and the reference policy API documentation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/392833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239817/"
]
} |
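A worked example of the "incorrect file context" recipe above, using a hypothetical web root /srv/website and the Apache content type from the reference policy; substitute the type that matches your daemon.

# record the rule in the file context database
semanage fcontext -a -t httpd_sys_content_t "/srv/website(/.*)?"
# preview what would change, then apply it recursively
restorecon -Rnv /srv/website
restorecon -Rv  /srv/website
# confirm the labels
ls -Zd /srv/website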
392,863 | I have a script with two blocks: The first block is written in perl, the second block is written in bash How do I switch shells (perl --> bash) in the middle of the script? Script attached below: #! /usr/bin/perl -w#my @dirs = glob("*.frames");foreach $dir (@dirs) { print "working on $dir\n"; chdir $dir; my @digitfiles = glob ("RawImage_?.tif"); #need to make all files have 2-digit numbering for sort order to be correct foreach $file (@digitfiles) { my $newfile = $file; $newfile =~ s/RawImage_/RawImage_0/; rename $file,$newfile; } my $stackname = "../" . $dir . ".mrc"; `tif2mrc -s *.tif $stackname`; #IMOD program to stack: -s option means treat input as signed INT chdir "../"; #go back up}#!/usr/bin/env bashfor f in *.mrc; do mv -- "$f" "${f%%.*}".mrc ; done | Just rewrite the loop in Perl: for my $file (glob '*.mrc') { ( my $newname = $file ) =~ s/\..*/.mrc/; rename $file, $newname or warn "$file: $!";} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/392863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165209/"
]
} |
392,892 | How can we specify the hostname of the current system, where this command is executed ls -l | awk '{print $9}' Current output fileabc.txtfileABC.txtfiledatafisample Expected output hostname fileabc.txt fileABC.txt filedata fisample If we am execute this on multipal server and store the date in one file then for understanding the start & the end line of the particular server we are trying to do this. if there is any better solution, please advice. | To get the exact output of the updated question: ls -1 | awk -vhostname="$(hostname)" \ 'NR == 1 { printf("%-20s\t%s\n", hostname, $0) } NR > 1 { printf("%-20s\t%s\n", "", $0) }' This allocates a 20 character wide column for the hostname, and adds the output of ls -1 in a second column. The columns are tab-separated and the hostname only occurs in the first line. It seems as if you're asking about how to get the hostname of the current system and how to use that in either a filename or inside a file, to be able to tell what system the data came from. The hostname of a machine is given by the hostname command. This usually gives the full hostname, including domain name (if defined), while hostname -s gives the name up to the first dot: $ hostnameclient.local$ hostname -sclient If you have some command, and you'd like to create a filename for where to store its output, then you may do this: somecommand >"$(hostname)_output.txt" This will run somecommand (for example, your ls command) and store its output in a file with a name that contains the hostname of the system it ran on. If you'd like to insert the hostname in a header or footer in the file: ( echo "HEADER: The following comes from $(hostname)" somecommand echo "FOOTER: The above came from $(hostname)" ) >outputfile or, with awk : somecommand | awk -v hostname="$(hostname)" 'BEGIN { print "## From:", hostname } { print } END { print "## From:", hostname }' where somecommand is the command you want to store the output of. If you want to tag every line with the hostname: somecommand | awk -v hostname="$(hostname)" '{ print hostname, $0 }' or, variations thereof. The -v option to awk lets you set an awk variable on the command line. It may be that you have a HOSTNAME environment variable defined as well. In this case, you may use that without using the hostname command: awk -v hostname="$HOSTNAME" ... or, you may access the environment variable directly inside the awk code with ENVIRON["HOSTNAME"] . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246084/"
]
} |
392,912 | Desktops such as GNOME have moved processes from the per-session scope, into the per-user systemd manager ( systemd --user ). This includes GUI apps such as GNOME Terminal. What does GNOME use the systemd user manager to achieve? Is there a rationale somewhere I can read? GNOME appears to copy the environment variables of the session into the user manager. Note that GNOME does not support the user logging in more than once at the same time. These environment variables include, intentionally or not, XDG_SESSION_ID. loginctl , as in loginctl lock-session , ended up being modified to support this second, less well-defined concept of a session . I'm curious what prompted people to create this strangeness. | To get the exact output of the updated question: ls -1 | awk -vhostname="$(hostname)" \ 'NR == 1 { printf("%-20s\t%s\n", hostname, $0) } NR > 1 { printf("%-20s\t%s\n", "", $0) }' This allocates a 20 character wide column for the hostname, and adds the output of ls -1 in a second column. The columns are tab-separated and the hostname only occurs in the first line. It seems as if you're asking about how to get the hostname of the current system and how to use that in either a filename or inside a file, to be able to tell what system the data came from. The hostname of a machine is given by the hostname command. This usually gives the full hostname, including domain name (if defined), while hostname -s gives the name up to the first dot: $ hostnameclient.local$ hostname -sclient If you have some command, and you'd like to create a filename for where to store its output, then you may do this: somecommand >"$(hostname)_output.txt" This will run somecommand (for example, your ls command) and store its output in a file with a name that contains the hostname of the system it ran on. If you'd like to insert the hostname in a header or footer in the file: ( echo "HEADER: The following comes from $(hostname)" somecommand echo "FOOTER: The above came from $(hostname)" ) >outputfile or, with awk : somecommand | awk -v hostname="$(hostname)" 'BEGIN { print "## From:", hostname } { print } END { print "## From:", hostname }' where somecommand is the command you want to store the output of. If you want to tag every line with the hostname: somecommand | awk -v hostname="$(hostname)" '{ print hostname, $0 }' or, variations thereof. The -v option to awk lets you set an awk variable on the command line. It may be that you have a HOSTNAME environment variable defined as well. In this case, you may use that without using the hostname command: awk -v hostname="$HOSTNAME" ... or, you may access the environment variable directly inside the awk code with ENVIRON["HOSTNAME"] . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
392,938 | I typed a bunch of commands and when I ran history I saw they were not present in there. Beside the case when the command start with a (space) what are the cases when commands will not be logged in history? | You've already mentioned the leading space, and another answer has mentioned settings that will intentionally change what is saved in the history such as the HISTIGNORE setting and simply turning history off . Another scenario is multiple interactive shells run by the same user. The Z shell has a share_history option that makes it possible to have more than one Z shell instance updating a shared history file. It re-reads the file looking for new entries, and it applies timestamps to each entry. The Bourne Again shell does not have this built-in (albeit that one can sort of do so using shenanighans to execute commands when the prompt is printed). The default history file behaviour that this is altering is that neither shell expects anything other than itself to be writing to the history file, and does not update the history after every command line executed. The default behaviour in the Bourne Again shell, specifically, only updates the history file when the shell exits or one explicitly tells it to with the history command. This means not only that one shell will simply overwrite the history written by another when a user has multiple interactive shell sessions; but also that it is possible for a shell that is not terminated cleanly to not write out the history file and thus to lose whatever command history occurred between the shell being uncleanly terminated and its last (explicit) update of the history file. Further reading https://askubuntu.com/questions/23630/ Preserve bash history in multiple terminal windows Is there a way to make the history when pressing up in bash shared between shells? Bash history: "ignoredups" and "erasedups" setting conflict with common history across sessions | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
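Not mentioned above: bash users often approximate the Z shell behaviour with a prompt hook. A commonly used sketch for ~/.bashrc follows; it merges histories at each prompt rather than truly sharing them.

# append to the history file instead of overwriting it on exit
shopt -s histappend
# at every prompt: write the latest command out, then pull in
# lines other shells have written since we last looked
PROMPT_COMMAND='history -a; history -n'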
392,944 | Is there a way to black-list a client-side certificate issued to an specific user? I currently have a case where a few malicious actors have my website's client side certificates and I do not know how stop them from using it. I currently run Apache2 on a CentOS machine but can migrate to NGINX if needed. | You've already mentioned the leading space, and another answer has mentioned settings that will intentionally change what is saved in the history such as the HISTIGNORE setting and simply turning history off . Another scenario is multiple interactive shells run by the same user. The Z shell has a share_history option that makes it possible to have more than one Z shell instance updating a shared history file. It re-reads the file looking for new entries, and it applies timestamps to each entry. The Bourne Again shell does not have this built-in (albeit that one can sort of do so using shenanighans to execute commands when the prompt is printed). The default history file behaviour that this is altering is that neither shell expects anything other than itself to be writing to the history file, and does not update the history after every command line executed. The default behaviour in the Bourne Again shell, specifically, only updates the history file when the shell exits or one explicitly tells it to with the history command. This means not only that one shell will simply overwrite the history written by another when a user has multiple interactive shell sessions; but also that it is possible for a shell that is not terminated cleanly to not write out the history file and thus to lose whatever command history occurred between the shell being uncleanly terminated and its last (explicit) update of the history file. Further reading https://askubuntu.com/questions/23630/ Preserve bash history in multiple terminal windows Is there a way to make the history when pressing up in bash shared between shells? Bash history: "ignoredups" and "erasedups" setting conflict with common history across sessions | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171274/"
]
} |
392,946 | I installed Ubuntu Server 16.04.3 in two Virtualbox virtual machines, and then I created a NAT network and a DHCP server with the following commands from the host machine: $ vboxmanage natnetwork add --netname testlab --network "10.10.10.0/24" --enable$ vboxmanage dhcpserver add --netname testlab --ip 10.10.10.1 --netmask 255.255.255.0 --lowerip 10.10.10.2 --upperip 10.10.10.12 --enable I configured the Network setting of each virtual machine to use the Adapter 1 attached to 'Nat Network' testlab. The two virtual machines can ping each other with these settings, but they cannot access the Internet. If I ping 8.8.8.8 , I have a 100% packet loss and I am unable to install any package: $ apt-get update && apt-get upgradeTemporary failure resolving ‘gb.archive.ubuntu.com’ Both have an empty /etc/resolv.conf and the same /etc/hosts files. I need to have them connected to each other and the Internet for testing purposes. One should act as a server, the second one as a client, and the machine acting as a server should be connected to the Internet. I have no idea why the two servers cannot connect to the internet as Virtualbox NAT Network. Any ideas? | You've already mentioned the leading space, and another answer has mentioned settings that will intentionally change what is saved in the history such as the HISTIGNORE setting and simply turning history off . Another scenario is multiple interactive shells run by the same user. The Z shell has a share_history option that makes it possible to have more than one Z shell instance updating a shared history file. It re-reads the file looking for new entries, and it applies timestamps to each entry. The Bourne Again shell does not have this built-in (albeit that one can sort of do so using shenanighans to execute commands when the prompt is printed). The default history file behaviour that this is altering is that neither shell expects anything other than itself to be writing to the history file, and does not update the history after every command line executed. The default behaviour in the Bourne Again shell, specifically, only updates the history file when the shell exits or one explicitly tells it to with the history command. This means not only that one shell will simply overwrite the history written by another when a user has multiple interactive shell sessions; but also that it is possible for a shell that is not terminated cleanly to not write out the history file and thus to lose whatever command history occurred between the shell being uncleanly terminated and its last (explicit) update of the history file. Further reading https://askubuntu.com/questions/23630/ Preserve bash history in multiple terminal windows Is there a way to make the history when pressing up in bash shared between shells? Bash history: "ignoredups" and "erasedups" setting conflict with common history across sessions | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184919/"
]
} |
392,951 | I wonder why there is an error using an asynchronous command within a loop? $ for i in {1..8}; do sleep 100 & ; donebash: syntax error near unexpected token `;' If I write it as $ for i in {1..8}; do > sleep 100 & > done this works fine. How can I write it in one line without error? | Drop the ; : for i in {1..8}; do sleep 100 & done & separates commands , so the ; is extraneous (and the shell expects something between & and ; ). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/392951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
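If the one-liner should also block until all eight background jobs have finished, a wait can be appended, for example:

for i in {1..8}; do sleep 100 & done; wait   # wait returns once every background job has exited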
392,970 | I am using "find" in a Travis-CI to check a particular file type with a program. (To be exact, it is a shellcheck check.) However, when using find the exit codes of the command(s)/subshells executed by it are naturally discarded, as they are not passed to the "main script". As an example this is a find command: find . -type f -iname "*.sh" -exec sh ./testScripts.sh "{}" \; ./testScripts.sh may exit with 0 or >= 1, depending on the test result. The testScripts.sh exits properly with the correct exit code, but due to find the exit code of the command is always "0". All I want is, that if one file/execution errors, this error is "propagated" up to Travis-CI. How can I accomplish this? | Using Stephen Kitt's suggestion in comments: find . -type f -iname "*.sh" -exec sh -c 'for n; do ./testScripts.sh "$n" || exit 1; done' sh {} + This will cause the sh -c script to exit with a non-zero exit status as soon as testScript.sh does. This means that find will also exit with a non-zero exit status: If terminated by a plus sign, the pathnames for which the primary is evaluated are aggregated into sets, and utility will be invoked once per set, similar to xargs(1) . If any invocation exits with a non-zero exit status, then find will eventually do so as well , but this does not cause find to exit early. Regarding the questions in comment: for n; do ... ; done looks weird but makes sense when you realize that without anything to iterate over, the for loop will iterate over "$@" implicitly. The trailing sh at the end will be placed in $0 of the sh -c shell. The {} will be substituted by a number of pathnames. Without sh there, the first pathname would end up in $0 and would not be picked up by the loop, since it's not in $@ . $0 usually contains the name of the current interpreter (it will be used in error message produced by the sh -c shell). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/392970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
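A sketch of how the accepted command might be wired into the CI step so that the non-zero status actually fails the build; the error message is made up.

if ! find . -type f -iname "*.sh" \
       -exec sh -c 'for n; do ./testScripts.sh "$n" || exit 1; done' sh {} +
then
    echo "at least one script failed the check" >&2
    exit 1
fi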
393,069 | 1 #!/bin/bash2 # query2.sh34 numbers=(53 8 12 9 784 69 8 7 1)5 i=467 echo ${numbers[@]} # <--- this echoes "53 8 12 9 784 69 8 7 1" to stdout.8 echo ${numbers[i]} # <--- this echoes "784" to stdout.910 unset numbers[i]1112 echo ${numbers[@]} # <--- this echoes "53 8 12 9 69 8 7 1" to stdout.13 echo ${numbers[i]} # <--- stdout is blank. Why, in line 13, is the stdout blank, considering that the array seems to have been updated judging by line 12's stdout? And therefore, what should I do to get the intended answer, "69"? | unset removes an element. It doesn't renumber the remaining elements. We can use declare -p to see exactly what happens to numbers : $ unset "numbers[i]"$ declare -p numbersdeclare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [5]="69" [6]="8" [7]="7" [8]="1") Observe the numbers no longer has an element 4 . Another example Observe: $ a=()$ a[1]="element 1"$ a[22]="element 22"$ declare -p adeclare -a a=([1]="element 1" [22]="element 22") Array a has no elements 2 through 21. Bash does not require that array indices be consecutive. Suggested method to force a renumbering of the indices Let's start with the numbers array with the missing element 4 : $ declare -p numbersdeclare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [5]="69" [6]="8" [7]="7" [8]="1") If we would like the indices to change, then: $ numbers=("${numbers[@]}")$ declare -p numbersdeclare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [4]="69" [5]="8" [6]="7" [7]="1") There is now an element number 4 and it has value 69 . Alternate method to remove an element & renumber array in one step Again, let's define numbers : $ numbers=(53 8 12 9 784 69 8 7 1) As suggested by Toby Speight in the comments, a method to remove the fifth element (at index 4) and renumber the remaining elements all in one step: $ numbers=("${numbers[@]:0:4}" "${numbers[@]:5}")$ declare -p numbersdeclare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [4]="69" [5]="8" [6]="7" [7]="1") As you can see, the fifth element was removed and all remaining elements were renumbered. ${numbers[@]:0:4} slices array numbers : it takes the first four elements starting with element 0. Similarly, ${numbers[@]:5} slice array numbers : it takes all elements starting with element 5 and continuing to the end of the array. Obtaining the indices of an array The values of an array can be obtained with ${a[@]} . To find the indices (or keys ) that correspond to those values, use ${!a[@]} . For example, consider again our array numbers with the missing element 4 : $ declare -p numbersdeclare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [5]="69" [6]="8" [7]="7" [8]="1") To see which indices are assigned: $ echo "${!numbers[@]}"0 1 2 3 5 6 7 8 Again, 4 is missing from the list of indices. Documentation From man bash : The unset builtin is used to destroy arrays. unset name[subscript] destroys the array element at index subscript .Negative subscripts to indexed arrays are interpreted as described above. Care must be taken to avoid unwanted side effects caused by pathname expansion. unset name , where name is an array, or unset name[subscript] , where subscript is * or @ , removes the entirearray. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/393069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238486/"
]
} |
393,091 | I don't know why I can't use env array variable inside a script ? In my ~/.bashrc or ~/.profile export HELLO="ee"export HELLOO=(aaa bbbb ccc) in a shell : > echo $HELLOee> echo $HELLOOaaa> echo ${HELLOO[@]}aaa bbbb ccc in a script : #!/usr/bin/env bashecho $HELLOecho $HELLOOecho ${HELLOO[@]}---# Return ee Why ? | A bash array can not be an environment variable as environment variables may only be key-value string pairs. You may do as the shell does with its $PATH variable, which essentially is an array of paths; turn the array into a string, delimited with some particular character not otherwise present in the values of the array: $ arr=( aa bb cc "some string" )$ arr=$( printf '%s:' "${arr[@]}" )$ printf '%s\n' "$arr"aa:bb:cc:some string: Or neater, arr=( aa bb cc "some string" )arr=$( IFS=:; printf '%s' "${arr[*]}" )export arr The expansion of ${arr[*]} will be the elements of the arr array separated by the first character of IFS , here set to : . Note that if doing it this way, the elements of the string will be separated (not delimited ) by : , which means that you would not be able to distinguish an empty element at the end, if there was one. An alternative to passing values to a script using environment variables is (obviously?) to use the command line arguments: arr=( aa bb cc )./some_script "${arr[@]}" The script would then access the passed arguments either one by one by using the positional parameters $1 , $2 , $3 etc, or by the use of $@ : printf 'First I got "%s"\n' "$1"printf 'Then I got "%s"\n' "$2"printf 'Lastly there was "%s"\n' "$3"for opt in "$@"; do printf 'I have "%s"\n' "$opt"done | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/393091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/202577/"
]
} |
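To go the other way inside the receiving script — rebuilding a bash array from the colon-joined string — a sketch assuming the neater IFS=: join above (no trailing colon) and values that never contain a colon themselves:

IFS=: read -r -a arr <<< "$arr"        # split on ':' back into array elements
printf 'element: "%s"\n' "${arr[@]}"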
393,142 | Running Ubuntu 17.04, I was installing a software from non-repository distribution, I was supposed to move the software bin -folder contents to /usr/bin (which was already iffy advice) It's one of those days, so what I did instead: mv /bin/* /usr/bin So I screwed up and I accidentally moved all the files in bin to /usr/bin and /bin was empty. Since I take that /bin is system critical, for quick remedy, I copied /usr/bin contents to /bin. Now my /bin and /usr/bin contents identical and both contain the files originally in /bin and /usr/bin separated. Is my Ubuntu in a broken state now? (Did not try to reboot the computer yet, right now everything seems to still work) Is there a way to know which files have been moved/copied to /usr/bin most recently, so I could just manually take care of the situation?2.1 Are there usually overlapping files in /bin and /usr/bin Is there other ways to undo what I did? I don't have Timeshift installed so restoring backups is not an option, but there's nothing critical on the computer currently, so I could just admit to screwup reinstall the whole linux partition. | On Linux (and on most other systems, though POSIX doesn't give you that guarantee unless the move was across file systems), that would have updated their ctime, so assuming none of the other ones in /usr/bin have been touched in the last 24 hours, you should be able to move them back with: find /usr/bin/. ! -name . -prune -ctime -1 -exec sh -c ' echo mv -i "$@" /bin' sh {} + Remove the echo if that looks right. Note that you won't be able to recover the files that existed by the same name in /bin and /usr/bin (the original ones in /usr/bin would have been lost) A potential caveat: if some files were hard linked in both /bin and /usr/bin , all the hard links in /usr/bin would be moved to /bin . Now, you may think that since /bin and /usr/bin are in the default $PATH , and /bin is available on /boot at least before /usr is mounted, it should not matter whether the executables are in /bin instead of /usr/bin . But that would be overlooking that many commands hard code the paths of executables and expect them to be in some specific case. A common case is she-bangs. All scripts that have: #! /usr/bin/env bash will fail to work after you do mv /usr/bin/env /bin/env . In that regard, having the commands in both locations is safer in that it won't break those scripts. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251847/"
]
} |
393,264 | When I type just $PATH as below, the output starts with -bash: followed by the value of $PATH then at the end it prints : No such directory whereas the output of echo $PATH does not produce that output. Is the bash's readline involved? [user1@Server1 ~]$ $PATH-bash: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/user1/.local/bin:/home/user1/bin: No such file or directory When I just do echo $PATH the output is: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/user1/.local/bin:/home/user1/bin | The first word on a simple command line is a command - an action. (There are more complex variants but for now consider this as a sufficient truth.) In your first example, the "command" is the value of the $PATH variable, which isn't actually a command, so bash complains that it can't find it to run. (The shell searches the colon-separated list of directories specified in the $PATH variable for the command that you've entered.) In your second example, the "command" is the echo verb, with the value of $PATH as its argument. The echo command prints its arguments to stdout , so you get to see the value of $PATH on the screen. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194688/"
]
} |
393,295 | Here is my question: Why is iptables unable to prevent SSH from connecting to localhost? A more detailed description follows. During a process of experimentation with iptables I came across the following curiosity that I'd like to understand. Even when I set every policy to DROP, I'm still able to access the machine locally via SSH. Here is what I'm doing. First I use iptables to set all POLICY values to DROP: cat <<HEREDOC | sudo iptables-restore*filter:INPUT DROP:FORWARD DROP:OUTPUT DROPCOMMIT*mangle:PREROUTING DROP:INPUT DROP:FORWARD DROP:OUTPUT DROP:POSTROUTING DROPCOMMIT*nat:PREROUTING DROP:POSTROUTING DROP:OUTPUT DROP:INPUT DROPCOMMIT*raw:PREROUTING DROP:OUTPUT DROPCOMMITHEREDOC Then I try to connect via SSH: ssh localhost And, much to my surprise, this works! I'm presented with a new shell session as if there were no firewall. As a sanity check I then try to ping localhost, which results in the following error message: ping: sendmsg: Operation not permitted This seems to suggest that the firewall is in fact operational. Finally I try to SSH using an IP address ssh 127.0.0.1 This hangs, as I would have expected. So my best guest is that SSH is doing something differently when it's passed the string "localhost" as an argument - something that doesn't actually involve the loopback interface. If this is in fact the case then my question becomes, "What exactly is ssh doing?" | Most probably, localhost resolves to an IPv6 address ( ::1 ) which is not filtered by iptables (use ip6tables ). The output of: strace -e connect ssh localhost will tell you what IP address and what protocol are used. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99163/"
]
} |
393,305 | We have seen the OS doing a copy-on-write optimisation when forking a process, the reason being that most of the time fork is followed by exec, so we don't want to incur the cost of page allocations and of copying the data from the caller's address space unnecessarily. So does this also happen when doing cp on Linux with ext4 or xfs (journaling) file systems? If it does not happen, then why not? | The keyword to search for is reflink . It was recently implemented in XFS. EDIT: the XFS implementation was initially marked EXPERIMENTAL. This warning was removed in kernel release 4.16, a number of months after I wrote the above :-). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251979/"
]
} |
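For what it's worth, GNU cp exposes reflinks directly; the file names below are placeholders, and the XFS remark assumes a filesystem created with mkfs.xfs -m reflink=1.

cp --reflink=always big.img clone.img   # fail rather than fall back to a byte-for-byte copy
cp --reflink=auto   big.img clone.img   # reflink where possible, plain copy otherwise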
393,310 | I have set -e turned on for my script. The only thing is there is one command here that I don't want causing the script to exit if it fails, but I want everything else to do that. How can I keep set -e on, and not have my script exit when an error code is thrown? script in question: native=$(pacman -Qenq -) If stdin has a non-native package name an error code gets written to stdin. | set -e aka set -o errexit doesn't apply to commands that are parts of conditions like in: if cmd; dountil cmd; dowhile cmd; docmd || whatevercmd && whatever That also applies to the ERR trap for shells supporting it. So, an idiomatic way to ignore the failure of a command is with: cmd || : errors ignored Or just: cmd || truecmd || : That cancels set -e for that cmd invocation and also sets $? to 0 (to that of : / true when cmd fails) cmd && trueret=$? Also cancels set -e but preserves the exit status of cmd . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174776/"
]
} |
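Applied to the pacman line from the question, a sketch of the two idioms; pkglist.txt stands in for whatever normally feeds the package names on stdin.

set -e
# ignore the failure entirely
native=$(pacman -Qenq - < pkglist.txt) || true
# or keep the exit status around for later inspection
native=$(pacman -Qenq - < pkglist.txt) && true
status=$?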
393,348 | I have a huge number of files that are numbered in a way such as file_01_01.out where the first number is the group that the file belongs to, and the second is the number of the file in the group - so file_10_07.out is the 7th file in the 10th group. I want to copy some text from these files and group them in some output files. I have tried using this and it doesn't really work, and I can't understand why: for i in {0..21}; do grep "text" file_$i_*out > out_$i.txt;done; Not sure why this doesn't work, but there is definitely logic to the output. It's just not the output I was going for, and some files are just completely skipped. | (In addition to @Philippos): Bash is trying to expand the variable $i_ instead of $i . Try ...${i}_... : for i in {00..21} do grep "text" file_${i}_*out > out_$i.txt done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252014/"
]
} |
393,351 | In a bash script I want to do the following: script.sh < some_file The some_file is a file that has 1 single line that I want to pass it as an argument to my bash script.sh . How can I do this? | Three variations: Pass the contents of the file on the command line, then use that in the script. Pass the filename of the file on the command line, then read from the file in the script. Pass the contents of the file on standard input, then read standard input in the script. Passing the contents as a command line argument: $ ./script.sh "$(<some_file)" Inside the script: some_data=$1 $1 will be the value of the first command line argument. This would fail if you have too much data (the command that the shell would have to execute would grow too big). Passing the filename: $ ./script.sh some_file Inside the script: some_data=$(<"$1") or IFS= read -r some_data <"$1" Connecting standard input to the file: $ ./script.sh <some_file Inside the script: IFS= read -r some_data The downside with this way of doing it is that the standard input of the script now is connected to some_file . It does however provide a lot of flexibility for the user of the script to pass the data on standard input from a file or from a pipeline. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
393,387 | I have a file similar to the following: random,test123,MyCompany, Inc.hello,12345,TestCompany, LLC I want to remove the commas from the third column so I'd have something like this: random,test123,MyCompany Inc.hello,12345,TestCompany LLC How would I do this? | This is easy with sed: sed 's/,//3' file Try it online! If you want to directly apply the modifications in your input file, then run: sed -i 's/,//3' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117923/"
]
} |
393,465 | I'm using Debian 8. How do I get my external IP address from a command line? I thought the below command would do the job ... myuser@myserver:~ $ /sbin/ifconfig $1 | grep "inet\|inet6" | awk -F' ' '{print $2}' | awk '{print $1}'addr:192.168.0.114addr:addr:127.0.0.1addr: but as you can see, it is only revealing the IP address of the machine in the LAN. I'm interested in knowing its IP for the whole world. | You mean whatever routable IP your dsl/cable modem/etc. router has? You need to either query that device OR ask an outside server what IP it sees when you connect to it. The easiest way of doing that is to search google for "what is my ip" and like the calculation searches, it will tell you in the first search result. If you want to do it from the command line, you'll need to check the output of some script out there that will echo out the information. The dynamic dns service dyndns.org has one that you can use - try this command wget http://checkip.dyndns.org -O - You should get something like HTTP request sent, awaiting response... 200 OKLength: 105 [text/html]Saving to: ‘STDOUT’- 0%[ ] 0 --.-KB/s <html><head><title>Current IP Check</title></head><body>Current IP Address: 192.168.1.199</body></html>- 100%[===================>] 105 --.-KB/s in 0s 2017-09-20 14:16:00 (15.4 MB/s) - written to stdout [105/105] I've changed the IP in mine to a generic non-routable and bolded it for you. If you want just the IP, you'll need to parse it out of there - quick and dirty, but it works for me. And I'm 100% sure there is a better safer way of doing it... wget http://checkip.dyndns.org -O - | grep IP | cut -f 2- -d : | cut -f 1 -d \< Which will give you just 192.168.1.199 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
393,485 | I have 2 webservers under AWS ELB. Each webserver has one virtual host file and bundle.crt, .key files. When I tried to load the ELB with http then its directing to the webservers fine but when I use https://ELB url then I am getting below error. I am tried various options to troubleshoot this issue. I changed the certificates in webserver, I changed the listener ports on the ELB servers, I checked the security group of instances and ELB, I verified the httpd.conf file, verified ssl_conf file but I didnt find any server level error or misconfigurations. All seems to be good at server level but still I am facing above issue. When I tested my web url in ssltest site then I got "The secure protocol is not support" error. I am not sure how to proceed further. | You mean whatever routable IP your dsl/cable modem/etc. router has? You need to either query that device OR ask an outside server what IP it sees when you connect to it. The easiest way of doing that is to search google for "what is my ip" and like the calculation searches, it will tell you in the first search result. If you want to do it from the command line, you'll need to check the output of some script out there that will echo out the information. The dynamic dns service dyndns.org has one that you can use - try this command wget http://checkip.dyndns.org -O - You should get something like HTTP request sent, awaiting response... 200 OKLength: 105 [text/html]Saving to: ‘STDOUT’- 0%[ ] 0 --.-KB/s <html><head><title>Current IP Check</title></head><body>Current IP Address: 192.168.1.199</body></html>- 100%[===================>] 105 --.-KB/s in 0s 2017-09-20 14:16:00 (15.4 MB/s) - written to stdout [105/105] I've changed the IP in mine to a generic non-routable and bolded it for you. If you want just the IP, you'll need to parse it out of there - quick and dirty, but it works for me. And I'm 100% sure there is a better safer way of doing it... wget http://checkip.dyndns.org -O - | grep IP | cut -f 2- -d : | cut -f 1 -d \< Which will give you just 192.168.1.199 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236029/"
]
} |
393,531 | I've just noticed some interesting behavior with chmod when unsetting the setgid bit: $ mkdir test$ chmod 2755 test$ stat -c '%a %n' test2755 test # as expected$ chmod 0755 test$ stat -c '%a %n' test2755 test # what? see below$ chmod 00755 test$ stat -c '%a %n' test755 test # double what?! Attempting to unset the setgid bit with chmod 0755 doesn't work, which is surprising. However, the man page indicates that this is the intended behavior: SETUID AND SETGID BITS chmod clears the set-group-ID bit of a regular file if the file's group ID does not match the user's effective group ID or one of the user's supplementary group IDs, unless the user has appropriate privileges. Additional restrictions may cause the set-user-ID and set-group-ID bits of MODE or RFILE to be ignored. This behavior depends on the policy and functionality of the underlying chmod system call. When in doubt, check the underlying system behavior. chmod preserves a directory's set-user-ID and set-group-ID bits unless you explicitly specify otherwise. You can set or clear the bits with symbolic modes like u+s and g-s, and you can set (but not clear) the bits with a numeric mode. (Emphasis added) So it seems chmod 0755 isn't meant to unset the setgid bit. Why, then, however, does chmod 00755 unset it? chmod doesn't seem to have any use for five digits of a numeric mode. Again from the man page: A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Omitted digits are assumed to be leading zeroes. (Emphasis added) What's going on here? Why would chmod decide to ignore a single leading 0? Why does it not ignore two leading 0's? (Debian Stretch 9.1, with chmod (GNU Coreutils) 8.6) | I've found it! This info is missing from the man page but is in the Coreutils manual online. To wit: On most systems, if a directory’s set-group-ID bit is set, newly created subfiles inherit the same group as the directory, and newly created subdirectories inherit the set-group-ID bit of the parent directory. On a few systems, a directory’s set-user-ID bit has a similar effect on the ownership of new subfiles and the set-user-ID bits of new subdirectories. These mechanisms let users share files more easily, by lessening the need to use chmod or chown to share new files. These convenience mechanisms rely on the set-user-ID and set-group-ID bits of directories. If commands like chmod and mkdir routinely cleared these bits on directories, the mechanisms would be less convenient and it would be harder to share files. Therefore, a command like chmod does not affect the set-user-ID or set-group-ID bits of a directory unless the user specifically mentions them in a symbolic mode, or uses an operator numeric mode such as ‘=755’, or sets them in a numeric mode, or clears them in a numeric mode that has five or more octal digits. Reference: https://www.gnu.org/software/coreutils/manual/html_node/Directory-Setuid-and-Setgid.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231592/"
]
} |
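A quick way to see the documented behaviour from the answer above in action (a minimal sketch, assuming GNU coreutils chmod as in the question):

$ mkdir demo && chmod 2755 demo
$ chmod 755 demo && stat -c '%a' demo    # four or fewer digits: setgid kept
2755
$ chmod g-s demo && stat -c '%a' demo    # symbolic mode clears it explicitly
755

The symbolic form g-s (or a five-digit numeric mode such as 00755) states the intent to clear the bit explicitly, which is why GNU chmod honours it.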
393,534 | I have changed some stuff within the sshd_config file and want to reset the file to its default settings. How would I go about doing this? | On macOS the default ssh config file is at /private/etc/ssh/sshd_config ; you can copy it into your .ssh directory with the following command: sudo cp /private/etc/ssh/sshd_config ~/.ssh/config Then restart sshd:
sudo launchctl stop com.openssh.sshd
sudo launchctl start com.openssh.sshd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241691/"
]
} |
393,543 | I have an input table, partially: TCTTTTAAAGCCTCCTCAACTGTTTTAGGG 1 0CACAACTGAAAAGTACAATGTGTTTGCTTC 1 0CACCATATTTATTTAAAGGAGCATCTAAAT 1 3ACGAGAAAAAAAAAAGGGGTGACCCCCTGG 3 0CAAAATATTAATTCTTTACTATGAAACTTA 1 0TTCTATTTTGTCGTGGTTAGCAACCATCAC 6 5TAATAATAAAATAATGAAAAAGAAAAATCA 1 0AAAGCATTTGAAGGTGACAAAAGGGAAAGT 20 7TGCTAAGGAAGAATCATGGAAGAGTGTTTT 0 1CTCCCTTCCTCGCAAACATGCTTGCCCAGG 0 1AATAAAAATCAAATTTAGTGACGGGTTGAG 130 4AGAACGAAGCTGATATAAAGACATCAAAGA 1 0TGCCCCTAATGCAGCATCTCTCTCTCCCTC 1 0CCACAAAATAATTACATGGCAAACACGAGT 1 0 I want to print all the lines with column 3 >= 120 and column 2 >= 420 I have got two different results by using and not using "" around the number. (A) awk '$3>=120 && $2>=420 {print $0}'(B) awk '$3>="120" && $2>="420" {print $0}' Result of (A) partially, which seems to be what I want : GTGTCATTTCATGCCTCATTCATCCTCATT 1375 439TGAATTCTATTACTTGATTGACATTGACAG 541 301TCTTTGGCGGTTGTTAAAGAATTTTCTGAT 823 203TCTACACCTCAATATGCAAAACATTACATC 535 165TTCAACAAATTAATTAAAATTGAATTAAAC 3010 627GATATGTAAAAAAAATTATATTATATGAAT 609 173 Result of (B) partially, is not what I want : TAATAATAATAATAAAAGAAGAAGAAAAGA 5 2TATCTGAGCTATCAACTCAATTCATCGTCG 5 4TTAATGATAAATTTATCTTAAAAGTTTAAC 62 23TTCAACCCCCTCTCCTGGTGTGTGCCCTAG 45 7TCCAAAGCCTTTAATGTGTACCGCGTGAAA 6 5GGCAATGGGATACTCCTGTATGTTATTCTA 6 3 The question comes to my mind :How does the quotation mark (") in number selection made the difference? Thank you very much. | Quotation marks force a comparison on the string representation of your numbers. Alphabetically, "42" comes after "120" (you then have "42" > "120"); numerically it doesn't (you then have 42 < 120). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240505/"
]
} |
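The difference described above is easy to demonstrate directly in awk (a minimal sketch; echo just supplies a dummy input line so the action runs once):

$ echo | awk '{ print (42 < 120), ("42" < "120") }'
1 0
# 42 < 120 is numerically true (1); "42" < "120" compares strings,
# and "4" sorts after "1", so it is false (0).

When a field might otherwise be treated as a string, forcing numeric context with +0 is a common idiom: awk '$3+0 >= 120 && $2+0 >= 420' file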
393,553 | The output of {lv,vg,pv}display gives: Name UUID How do I find an LVM name given a UUID? | You can filter LVM commands’ output directly using the -S option: # pvs --noheadings -o name -S uuid=MtLb3p-MUle-8fyk-fy6m-z99n-V9mi-xxxxxx /dev/sdb3 This also works with vgs and lvs to find VGs and LVs. To avoid having to deal with the spaces at the start of the output, add --config 'log{prefix=""}' : # pvs --noheadings -o name -S uuid=MtLb3p-MUle-8fyk-fy6m-z99n-V9mi-xxxxxx --config 'log{prefix=""}'
/dev/sdb3 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
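For the reverse direction (you have the name and want the UUID), the same reporting commands can print the uuid field directly; a sketch, assuming the PV path used in the answer above:

# pvs --noheadings -o pv_uuid /dev/sdb3
  MtLb3p-MUle-8fyk-fy6m-z99n-V9mi-xxxxxx

vgs -o vg_uuid <vgname> and lvs -o lv_uuid <vg/lv> work the same way for VGs and LVs.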
393,598 | I've seen cases like that with faulty storage devices, with faults in remote storage (SAN, NAS), I think I've even seen something similar caused by mount permissions. But it's the first time I see this happening on the same filesystem as my home directory. What kind of permissions are kicking in here? Definitely not mounts (I'm on the same ext4 filesystem), not SELinux, not ACLs. Then what? I do not recall how this directory was created. It's likely it got created by some kind of software. For me the weirdest part is that the directory is not even allowed to see its or its parent's info (last command). I'm using Linux Mint Sarah. user01@MyPC ~/somedirectory $ ls -l ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:ls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace': Permission deniedviso 0d????????? ? ? ? ? ? workspace user01@MyPC ~/somedirectory $ ls -ld ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:drw-r--r-- 3 user01 user01 4096 Rgs 27 2016 ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D: user01@MyPC ~/somedirectory $ sudo file ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:: directory user01@MyPC ~/somedirectory $ sudo ls -l ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:viso 4drwxr-xr-x 3 user01 user01 4096 Rgs 27 2016 workspace user01@MyPC ~/somedirectory $ sudo stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\: File: './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:' Size: 4096 Blocks: 8 IO Block: 4096 aplankasDevice: 807h/2055d Inode: 3937216 Links: 3Access: (0644/drw-r--r--) Uid: ( 1000/ user01) Gid: ( 1000/ user01)Access: 2017-09-21 12:57:33.990819052 +0300Modify: 2016-09-27 11:18:38.309775066 +0300Change: 2017-03-13 14:56:40.960468954 +0200 Birth: - user01@MyPC ~/somedirectory $ sudo getfacl ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:# file: deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:# owner: user01# group: user01user::rw-group::r--other::r-- user01@MyPC ~/somedirectory $ stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\: File: './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:' Size: 4096 Blocks: 8 IO Block: 4096 aplankasDevice: 807h/2055d Inode: 3937216 Links: 3Access: (0644/drw-r--r--) Uid: ( 1000/ user01) Gid: ( 1000/ user01)Access: 2017-09-21 12:57:33.990819052 +0300Modify: 2016-09-27 11:18:38.309775066 +0300Change: 2017-03-13 14:56:40.960468954 +0200 Birth: - user01@MyPC ~/somedirectory $ stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/workspacestat: nepavyksta patikrinti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace': Permission denied user01@MyPC ~/somedirectory $ sudo stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/workspace File: './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace' Size: 4096 Blocks: 8 IO Block: 4096 aplankasDevice: 807h/2055d Inode: 3937217 Links: 3Access: (0755/drwxr-xr-x) Uid: ( 1000/ user01) Gid: ( 1000/ user01)Access: 2017-09-21 12:58:46.845727190 +0300Modify: 2016-09-27 11:18:38.309775066 +0300Change: 2016-12-02 13:56:08.298109826 +0200 Birth: - user01@MyPC ~/somedirectory $ stat . File: '.' 
Size: 4096 Blocks: 8 IO Block: 4096 aplankasDevice: 807h/2055d Inode: 3278479 Links: 23Access: (0755/drwxr-xr-x) Uid: ( 1000/ user01) Gid: ( 1000/ user01)Access: 2017-09-21 09:46:22.102269130 +0300Modify: 2017-09-20 17:33:04.564009275 +0300Change: 2017-09-20 17:33:04.564009275 +0300 Birth: - user01@MyPC ~/somedirectory $ ll ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/ls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace': Permission deniedls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/.': Permission deniedls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/..': Permission deniedviso 0d????????? ? ? ? ? ? ./d????????? ? ? ? ? ? ../d????????? ? ? ? ? ? workspace/ Attributes: user01@MyPC ~/somedirectory $ sudo lsattr ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/-------------e-- ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspaceuser01@MyPC ~/somedirectory $ sudo lsattr ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/workspace-------------e-- ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace/directory2 | On files read suffices to check the permissions. You need read AND execute on folders to ls them. chmod -R a+X ./deploy_dir Capital X to only set execute on folders (and files that already have execute bit set). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/393598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95079/"
]
} |
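The effect described in the answer above is easy to reproduce with a scratch directory (a minimal sketch; the exact error text may vary by ls implementation):

$ mkdir demo && touch demo/file
$ chmod 644 demo        # read but no execute on the directory
$ ls -l demo            # names are readable, but stat() on each entry fails
ls: cannot access 'demo/file': Permission denied
$ chmod u+x demo        # or chmod -R a+X as in the answer
$ ls -l demo            # now works normally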
393,790 | I'm trying to run this script: #!/bin/bash -e{ echo "Doing something"; will_fail # like `false` echo "Worked"; } || echo "Failed" To my surprise, will_fail failed, but I did not see "Failed" on my command line, but "Worked". Why did the compound command not exit with error after will_fail failed? | Failed will not be printed because the exit status of the compound command is that of the last command executing in { ...; } , which is echo . The echo succeeds, so the compound command exits with an exit status of zero. The following would output three strings: { echo "Do something"; echo "Worked"; false; } || echo "Failed" From the POSIX standard : Unless otherwise stated, the exit status of a command shall be that of the last simple command executed by the command. There are several things happening here (summary): You run with set -e active. This will cause the shell to exit if any command returns a non-zero exit status (broadly speaking). However, this does not apply here since the will_fail command is part of (compound command, which is part of) a || list (and not last in it). Again, from the POSIX standard (my emphasis): The -e setting shall be ignored when executing the compound list following the while , until , if , or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last. The last simple command in the || list is echo "Failed" . This is what determines the overall exit status of the compound command. Since it executes successfully (and since will_fail will not cause the shell to exit), the status will be zero, which means that the other side of || won't be executed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
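If the goal is for "Failed" to run whenever any of the inner commands fails, one way that does not depend on set -e is to chain the inner commands with && so the compound command's exit status reflects the first failure (a sketch, keeping the hypothetical will_fail from the question):

#!/bin/bash
{ echo "Doing something" && will_fail && echo "Worked"; } || echo "Failed"
# will_fail stops the && chain, the { ...; } returns its non-zero status,
# and the || branch prints "Failed".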
393,919 | My goal is to allow all users who are members of the "team" group to edit (r/w) the same set of remote files -- normal work collaboration -- using a local mount point. I have tried NFS and SSHFS using ACLs without success yet. Here I am trying to get SSHFS working by making the umask correct (which, in theory, should solve the problems I'm experiencing). Updated description of problem: user1, user2, and user3 all log into the same client computer. All are members of group "team". The client computer mounts a share via SSHFS. Client and server run Arch Linux (updated a couple days ago). The Client runs KDE desktop. The SSHFS mount is done via user3@sshfsrv with option allow_other. On the server, the shared directory has permissions user3 (owner) rwx and group (team) rwx, while other have r-x permissions. The gid sticky bit is set with chmod g+s . We removed all ACLs for the umask-focused configuration. First problem: user2 scans a document with XSane (a Gnome app) and attempts to save it in Shared1 directory, which is part of the SSHFS mount point. The save operation fails due to permissions. A 0 byte file is written. The permissions on that file are owner (user3) rw and group (team) read only (and other none). user2 can save the scanned document to their home directory. The terminal works as expected: In a terminal, user2 can touch a document in the Shared1 directory and the permissions are: -rw-rw---- 1 user3 team 6 Sep 23 19:41 deleteme6.txt We get the correct g+rw permissions. Note that ownership is user3 while this is user2 creating the file. In /etc/fstab, the mount is specified as: user3@sshfsrv:/home/common /home/common fuse.sshfs x-systemd.automount,_netdev,user,follow_symlinks,identityfile=/home/user3/.ssh/id_rsa,allow_other,default_permissions 0 0 In the terminal, and with a text editor (Kate in KDE), the users can collaborate on files that were created in Shared1 as expected. Any user in group "team" can create and save a file in Shared1 via nano text editor, and any other user in the group can edit / update it. Second problem: As a temporary workaround I tested saving the scanned images to user2's home directory, then moving them to the Shared1 directory using Dolphin File manager. Permissions errors prevent this, and sometimes it crashes Dolphin. I can show the same result by moving text files in the terminal: [user2@client2 Shared1]$ echo user2 > /home/user2/MoveMe/deleteme7.txt[user2@client2 Shared1]$ mv /home/user2/MoveMe/deleteme7.txt .mv: preserving times for './deleteme7.txt': Operation not permittedmv: preserving permissions for ‘./deleteme7.txt’: Operation not permitted The two errors above appear to be key to understanding the problem. If I change the mount specification to use user2@sshfsrv those errors go away for user2 but then user1 and user3 experience them. The only user that doesn't have the problem is the one used in the mount specification. (I had expected the allow_other mount option would prevent this, but it doesn't. Also using root in the mount specification doesn't seem to help.) Removing the mount option default_permissions eliminates these errors, but it also eliminates all permissions checking. Any user in any group can read and write files in Shared1, which does not meet our requirements. sftp-server umask setting: As sebasth says below, when sftp-server is used, the umask in /etc/profile or ~/.bashrc isn't used. 
I found that the following specification in /etc/ssh/sshd_config is a good solution for setting the umask: Subsystem sftp internal-sftp -u 0006 I do not want to use the umask mount option for sshfs (in /etc/fstab) as that does not give the desired behavior. Unfortunately, the above "-u" flag, while required, doesn't (yet) fully resolve my problem as described above. New Update: I have enabled pam_umask, but that alone doesn't resolve the issue. The above "-u" option is still required and I do not see that pam_umask adds anything additional that helps resolve this issue. Here are the configs currently used: /etc/pam.d/system-loginsession optional pam_umask.so/etc/login.defsUMASK 006 The Shared1 directory has these permissions, as shown from the server side. The gid sticky bit is set with chmod g+s . We removed all ACLs. All files within this directory have g+rw permissions. drwxrwsr-x 1 user3 team 7996 Sep 23 18:54 .# cat /etc/groupteam:x:50:user1,user2,user3 Both client and server are running OpenSSH_7.5p1, OpenSSL 1.1.0f dated 25 May 2017. This looks like the latest version. On the server, systemctl status sshd shows Main PID: 4853 (sshd). The main proc status shows a umask of 022. However, I will provide the process info for the sftp subsystem further below, which shows the correct umask of 006. # cat /proc/4853/statusName: sshdUmask: 0022State: S (sleeping)Tgid: 4853Ngid: 0Pid: 4853PPid: 1TracerPid: 0Uid: 0 0 0 0Gid: 0 0 0 0FDSize: 64Groups: NStgid: 4853NSpid: 4853NSpgid: 4853NSsid: 4853VmPeak: 47028 kBVmSize: 47028 kBVmLck: 0 kBVmPin: 0 kBVmHWM: 5644 kBVmRSS: 5644 kBRssAnon: 692 kBRssFile: 4952 kBRssShmem: 0 kBVmData: 752 kBVmStk: 132 kBVmExe: 744 kBVmLib: 6260 kBVmPTE: 120 kBVmPMD: 16 kBVmSwap: 0 kBHugetlbPages: 0 kBThreads: 1SigQ: 0/62965SigPnd: 0000000000000000ShdPnd: 0000000000000000SigBlk: 0000000000000000SigIgn: 0000000000001000SigCgt: 0000000180014005CapInh: 0000000000000000CapPrm: 0000003fffffffffCapEff: 0000003fffffffffCapBnd: 0000003fffffffffCapAmb: 0000000000000000Seccomp: 0Cpus_allowed: 3fCpus_allowed_list: 0-5Mems_allowed: 00000000,00000001Mems_allowed_list: 0voluntary_ctxt_switches: 25nonvoluntary_ctxt_switches: 2 We need to look at the sftp-server process for this client. It shows the expected umask of 006. I'm not sure if the GID is correct. 1002 is the GID for the user3 group. The directory specifies team group (GID 50) rwx. # ps ax | grep sftp*5112 ? Ss 0:00 sshd: user3@internal-sftp# cat /proc/5112/statusName: sshdUmask: 0006State: S (sleeping)Tgid: 5112Ngid: 0Pid: 5112PPid: 5111TracerPid: 0Uid: 1002 1002 1002 1002Gid: 1002 1002 1002 1002FDSize: 64Groups: 47 48 49 50 51 52 1002 NStgid: 5112NSpid: 5112NSpgid: 5112NSsid: 5112VmPeak: 85280 kBVmSize: 85276 kBVmLck: 0 kBVmPin: 0 kBVmHWM: 3640 kBVmRSS: 3640 kBRssAnon: 980 kBRssFile: 2660 kBRssShmem: 0 kBVmData: 1008 kBVmStk: 132 kBVmExe: 744 kBVmLib: 7352 kBVmPTE: 184 kBVmPMD: 12 kBVmSwap: 0 kBHugetlbPages: 0 kBThreads: 1SigQ: 0/62965SigPnd: 0000000000000000ShdPnd: 0000000000000000SigBlk: 0000000000000000SigIgn: 0000000000000000SigCgt: 0000000180010000CapInh: 0000000000000000CapPrm: 0000000000000000CapEff: 0000000000000000CapBnd: 0000003fffffffffCapAmb: 0000000000000000Seccomp: 0Cpus_allowed: 3fCpus_allowed_list: 0-5Mems_allowed: 00000000,00000001Mems_allowed_list: 0voluntary_ctxt_switches: 8nonvoluntary_ctxt_switches: 0 Original Question - can probably skip this after the above updates I am sharing the Shared1 directory from the SSHFS file server to various client machines. All machines use Arch Linux and BTRFS. 
pwck and grpck report no errors on both client and server. My goal is to allow all users in the team group to have rw permissions in the Shared1 directory. For unknown reasons, I am not able to achieve this goal. Some group members are experiencing permission denied errors (on write), as I will show below. What am I overlooking? (I have checked all the related questions on unix.stackexchange.com and I still did not resolve this issue.) Server: [user2@sshfsrv Shared1]$ cat /etc/profileumask 006[user2@sshfsrv Syncd]$ whoamiuser2[user2@sshfsrv Syncd]$ groupsteam user2[user2@sshfsrv Syncd]$ cat /etc/fuse.conf user_allow_other[root2@sshfsrv Syncd]# cat /proc/18940/statusName: sshdUmask: 0022 Note below that the setgid bit ( chmod g+s ) is initially set: [user1@sshfsrv Syncd]$ ls -latotal 0drwxrws--x 1 user1 limited 170 Aug 29 09:47 .drwxrwxr-x 1 user1 limited 10 Jul 9 14:10 ..drwxrwsr-x 1 user2 team 7892 Sep 22 17:21 Shared1[root@sshfsrv Syncd]# getfacl Shared1/# file: Shared1/# owner: user2# group: team# flags: -s-user::rwxgroup::rwxother::r-x[user2@sshfsrv Shared1]$ umask -Su=rwx,g=rx,o=x[user2@sshfsrv Shared1]$ sudo chmod g+w .[user2@sshfsrv Shared1]$ umask -Su=rwx,g=rx,o=x NOTE: Even after the above step, there are still no group write permissions. [user2@sshfsrv Shared1]$ touch deleteme2.txt[user2@sshfsrv Shared1]$ echo deleteme > deleteme2.txt [user2@sshfsrv Shared1]$ cat deleteme2.txt deleteme[user2@sshfsrv Shared1]$ ls -la deleteme2.txt -rw-r----- 1 user2 team 9 Sep 22 17:55 deleteme2.txt[user2@sshfsrv Shared1]$ getfacl .# file: .# owner: user2# group: team# flags: -s-user::rwxgroup::rwxother::r-x[root@sshfsrv Syncd]# chmod g-s Shared1/[root@sshfsrv Syncd]# ls -ladrwxrwxr-x 1 user2 team 7944 Sep 22 17:54 Shared1 Client [user2@client2 Shared1]$ cat /etc/fstabuser3@sshfsrv:/home/common /home/common fuse.sshfs x-systemd.automount,_netdev,user,follow_symlinks,identityfile=/home/user3/.ssh/id_rsa,allow_other,default_permissions 0 0[user2@client2 Shared1]$ cat /etc/profileumask 006[user2@client2 Shared1]$ cat /etc/fuse.conf user_allow_other[user2@client2 Shared1]$ groupsteam user2[user2@client2 Shared1]$ echo deleteme > deleteme2.txtbash: deleteme2.txt: Permission denied[user2@client2 Shared1]$ touch deleteme3.txttouch: setting times of 'deleteme3.txt': Permission denied[user2@client2 Shared1]$ ls -latotal 19520drwxrwsr-x 1 user2 team 7918 Sep 22 17:51 .drwxrws--x 1 user1 limited 170 Aug 29 09:47 ..-rw-r----- 1 user3 team 0 Sep 22 17:51 deleteme3.txt | The general solution is to add the following line to /etc/ssh/sshd_config on Arch Linux: Subsystem sftp internal-sftp -u 0002 However, the gotcha for me was that users of group "team" had a ForceCommand defined in that same config file. For these users, the ForceCommand was overriding the specification listed above. The solution was to add the same "-u" flag on the ForceCommand Match Group team ForceCommand internal-sftp -u 0002 Then run: systemctl restart sshd.service It is important to note that using the sshfs mount option umask is not recommended. It did not produce the desired behavior for me. References: The umask option for sshfs goes down to the underlying fuse layerwhere it's handled wrongly. afaict the advice is to avoid it. 
– RalphRönnquist Jun 17 '16 at 7:56 Understanding sshfs and umask https://jeff.robbins.ws/articles/setting-the-umask-for-sftp-transactions https://unix.stackexchange.com/a/289278/15010 EDIT: while this solution works on the command line and with some desktop apps (e.g., KDE's Kate text editor), it does not work correctly with many desktop applications (including KDE's Dolphin file manager, XSane, etc.). So this turned out not to be a good overall solution. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
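For readability, the two server-side pieces from the answer above normally sit on separate lines of /etc/ssh/sshd_config (the group name "team" and the umask value are the ones from this particular setup; adjust to taste):

# /etc/ssh/sshd_config on the SSHFS/SFTP server
Subsystem sftp internal-sftp -u 0002

Match Group team
    ForceCommand internal-sftp -u 0002

Then reload the daemon: sudo systemctl restart sshd.service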
393,928 | I am learning shell scripting through some online tutorials and I came to the following script that declares differences of global and local variables. #!/bin/bash# Experimenting with variable scopevar_change () { local var1='local 1' echo Inside function: var1 is $var1 : var2 is $var2 var1='changed again' var2='changed again'}var1='global 1'var2='global 2'echo Before function call: var1 is $var1 : var2 is $var2var_changeecho After function call: var1 is $var1 : var2 is $var2 And the output is: Before function call: var1 is global 1 : var2 is global 2 Inside function: var1 is local 1 : var2 is global 2 After function call: var1 is global 1 : var2 is changed again My Question... Why is var1 after the function call "global 1" instead of "changed again"?Can someone explain? | var1 is declared local in the function. So during the execution of the function there are two variables named var1 : the global one and the local one; but the function can only "see" the local one (the local one "shadows" the global one). So inside the function anything that you do to var1 is done to the local variable; the global var1 is not touched at all. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/393928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252501/"
]
} |
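A stripped-down sketch of the shadowing effect described above; removing the local keyword is all it takes to make the function modify the global:

#!/bin/bash
var1='global'
f() { local var1='local'; var1='changed'; }   # touches only the local copy
g() { var1='changed'; }                       # no local: touches the global
f; echo "$var1"    # prints: global
g; echo "$var1"    # prints: changed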
393,948 | OS: Kernel 2.6.x Utilities: From busybox 1.2x A command outputs multiple lines of text. string1 text1: "asdfs asdf adfas"string2 text2: "iojksdfa kdfj adsfj;"string3 text3: "skidslk sadfj"string4 text4: "lkpird sdfd"string5 text5: "alskjfdsd safsd" Goal: I need to search for the line that contains "text4: " (no quotes) and then extract all characters after that string to the end of the line. Desired Output: "lkpird sdfd" (with quotes) Currently I have ... command | grep 'text4:' | awk -F': ' '{print $3}' Is there a simpler way to write this ? | Using sed $ command | sed -n 's/.*text4://p' "lkpird sdfd" -n tells sed not to print unless we explicitly ask it to. s/.*text4:// tells sed to remove any text from the beginning of the line to the final occurrence of text4: . If such a line is found, then the p tells sed to print it. Using grep -P $ command | grep -oP '(?<=text4:).*' "lkpird sdfd" -o tells grep to print only the matching part. (?<=text4:).* matches any text that follows text4: but does not include the text4: . The -P option requires GNU grep. Thus, it will not work with busybox's builtin grep , nor with the default grep on BSD/Mac OSX systems. Using awk The original grep-awk solution can be simplified: $ command | awk -F': ' '/text4: /{print $2}'"lkpird sdfd" Using awk (alternate) $ command | awk '/text4:/{sub(/.*text4:/, ""); print}' "lkpird sdfd" /text4:/ selects lines that contain text4: . sub(/.*text4:/, "") tells awk to remove all text from the beginning of the line to the last occurrence of text4: on the line. print tells awk to print those lines. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/393948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227235/"
]
} |
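When the line is already in a shell variable, parameter expansion can do the same job without spawning sed/awk/grep, and it works in busybox sh as well; a small sketch (the sample line is taken from the question):

line='string4 text4: "lkpird sdfd"'
echo "${line#*text4: }"     # strips the shortest prefix up to and including "text4: "
# output: "lkpird sdfd"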
393,987 | Suppose that I have list of pathnames of files stored in an array filearray=("dir1/0010.pdf" "dir2/0003.pdf" "dir3/0040.pdf" ) I want to sort the elements in the array according to the basenames of the filenames, in numeric order sortedfilearray=("dir2/0003.pdf" "dir1/0010.pdf" "dir3/0040.pdf") How can I do that? I can only sort their basename parts: basenames=()for file in "${filearray[@]}"do filename=${file##*/} basenames+=(${filename%.*})donesortedbasenamearr=($(printf '%s\n' "${basenames[@]}" | sort -n)) I am thinking about creating an associative array whose keys are the basenames and values are the pathnames, so access to the pathnames is always done via basenames. creating another array for basenames only, and apply sort to the basename array. Thanks. | sort in GNU coreutils allows custom field separator and key. You set / as field separator and sort based on second field to sort on the basename, instead of entire path. printf "%s\n" "${filearray[@]}" | sort -t/ -k2 will produce dir2/0003.pdfdir1/0010.pdfdir3/0040.pdf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/393987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
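To get the sorted result back into an array (the sortedfilearray the question asks for) without word-splitting problems, bash's mapfile can read the pipeline's output line by line; a sketch building on the sort command from the answer, assuming the pathnames contain no newlines:

filearray=("dir1/0010.pdf" "dir2/0003.pdf" "dir3/0040.pdf")
mapfile -t sortedfilearray < <(printf '%s\n' "${filearray[@]}" | sort -t/ -k2)
printf '%s\n' "${sortedfilearray[@]}"
# dir2/0003.pdf
# dir1/0010.pdf
# dir3/0040.pdf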
394,065 | When I scan documents that are landscape-oriented, the output PDF files are portrait and so all the PDF viewers display the scanned documents in portrait. From the command line, how do you rotate a PDF file 90 degrees? I tried searching and found a bunch of solutions but I had trouble finding what looked like an authoritative solution[1] that uses a stable and robust Linux/Unix tool. footnote [1] For example, here is a sampling of some of the haphazard solutions I found: "just use Adobe Acrobat Pro to rotate the file and then save the file" "use pdfjam" "use PDFtk" "use ${PROGRAM_NAME} from Poppler" "use ImageMagick's convert"-- but then all the comments were very negative and stating "the image quality is ruined" "open the file in a PDF viewer, then rotate, then print using a PDF printer like cutePDF or PDF printer or etc" "use ${PROGRAM_NAME}", then I searched for "${PROGRAM_NAME}" and there is something about "Fedora removed ${PROGRAM_NAME} because of licensing issues" | Use PDFtk. For rotating clockwise: pdftk input.pdf cat 1-endeast output output.pdf For rotating anti-clockwise: pdftk input.pdf cat 1-endwest output output.pdf Regarding the installation of PDFtk on Fedora, I found these links: Pdftk substitute for Fedora 21 and 22 Pdftk not available? Install pdftk on Fedora using the Snap Store | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/394065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5510/"
]
} |
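If only some pages are landscape, the same cat syntax can rotate individual pages while leaving the rest untouched; a sketch, assuming PDFtk's page-range rotation suffixes:

pdftk input.pdf cat 1east 2-end output output.pdf   # rotate only page 1 clockwise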
394,072 | Lets assume that I am in a directory with a lot of files. How would you search the contents of all the files in a directory and display the longest line that contains the string “ER” but not “Cheese”? So far, to my best knowledge, I'm trying to do this in one line command. I am thinking I need to use grep -r for recursive, in order to search through all the files in the directorybut my end goal is to just display the longest line, so I assume so far it should be like: grep -r -e "ER" and when I do -v "Cheese" attached to it out of small hope, it doesn't work of course. Is this not possible with one line of command? If so, what would I need to do in multiple lines? | Here's an awk solution: awk '/ER/ && !/Cheese/ {if (length($0) > maxlen) { maxline=$0; maxlen=length($0);}} END {print maxlen, maxline;}' * (it also prints the length of the longest line, but if you don't want that, just say ... END {print maxline;} . The advantage over the grep solution of Jeremy Dover is that it does one pass over the input. The disadvantage is that if there are multiple lines with the same max length, it only prints the first one (or the last one if you use >= to compare the lengths); the grep solution prints all of them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252602/"
]
} |
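If several lines tie for the maximum length and you want all of them (the limitation mentioned in the answer above), a single pass can still collect them by resetting the buffer whenever a new maximum appears; a sketch:

awk '/ER/ && !/Cheese/ { if (length($0) > maxlen) { maxlen = length($0); lines = $0 } else if (length($0) == maxlen) { lines = lines ORS $0 } } END { if (maxlen) print lines }' *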
394,143 | There's a shortcut on Discord that enables you to switch between guilds easily. It's Ctrl + Alt + Up and Ctrl + Alt + Down . The problem is that Gnome uses this shortcut for changing workspaces. I have two monitors so I don't use additional workspaces very often so I opened settings and looked for the shortcut so that I can disable it. I found that apparently the shortcut to switch workspaces up and down is Super + Page Up and Super + Page Down and I couldn't find the Ctrl + Alt + Up or down shortcut anywhere else. It seems almost as if this shortcut isn't possible to change but I'm sure that's not the case, though I have no idea how to do that. | In general this can happen because the OS (window system) has priority and intercepts this shortcut and stops propagation to your desired application.Solution: Removing the shortcuts using dconf-editor : Open a terminal sudo apt-get install dconf-tools (or dconf-editor ) Now run dconf-editor in dconf-editor go to: /org/gnome/desktop/wm/keybindings/ Find switch-to-workspace-down , put ['disabled'] instead of default same for switch-to-workspace-up quit dconf-editor and you are done I always have this problem when I want to use some Eclipse IDE shortcuts: https://bugs.eclipse.org/bugs/show_bug.cgi?id=321094 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/394143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252642/"
]
} |
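The same keys can be changed from the command line without installing dconf-editor, via gsettings (a sketch; the schema and key names are the GNOME defaults and may differ between versions):

gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-up "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-down "[]"
# restore the defaults later with, e.g.:
# gsettings reset org.gnome.desktop.wm.keybindings switch-to-workspace-up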
394,168 | Shell: Bash. Goal: obtain time t in milliseconds since some fixed point in time, suitable for timestamping stuff with printf. Condition: the solution must pass the all-in-one-line-of-text test. Additional: the solution should be atomic (yeah, right!..), lightweight, keep quantization & rounding issues to a minimum, blah.. t=$[$(date +%s%N)/1000000] <--- my solution, the fixed point being Jan 1, 1970 in this case. BUT fundamentally bad due to the two date calls. printf "t=%d\n" $[$(date +%s%N)/1000000] <--- here it is, using printf. t=$(date +%s)$[10#$(date +%N)/1000000] <--- terrible example. Even seems to need de-pad of, then re-pad with, leading zeros. printf "t=%d%03d\n" $(date +%s) $[10#$(date +%N)/1000000] <--- here it is, using printf. Any better (sensible) suggestions? EDIT (appending): t=$(date +%s%N) and then printf "%s\n" ${t::13} <--- I guess, but not one line. | As noted by @Isaac, with date implementations that support %N like GNU's or ast-open's, you can use %s%3N to limit the precision, but except in ksh93 where date can be made to be the builtin version of ast-open's date , the date command is not builtin. It will take a few hundred if not thousand microseconds to start and a few more to print the date and return. bash did copy a subset of ksh93 printf '%(...)T' format, but not the %N part. Here it looks like you'd need to use more advanced shells like ksh93 or zsh . Those shells can make their $SECONDS variable which records the time since the shell started (and that you can also reset to any value) floating point: $ typeset -F SECONDS=0; date +%s%3N; echo $SECONDS15063187806470.0017870000 It took up to 1787 microseconds to run GNU date here. You can use $((SECONDS*1000)) to get a number of milliseconds as both shells support floating point arithmetic (beware ksh93 honours the locale's decimal mark). For the epoch time as a float, zsh has $EPOCHREALTIME : $ zmodload zsh/datetime$ echo $EPOCHREALTIME1506318947.2758708000 And ksh93 can use "$(printf '%(%s.%N)T' now)" (note that ksh93 's command substitution doesn't fork processes nor use pipes for builtins so is not as expensive as in other Bourne-like shells). You could also define the $EPOCHREALTIME variable there with: $ EPOCHREALTIME.get() { .sh.value=$(printf "%(%s.%6N)T");$ echo "$EPOCHREALTIME"1506333341.962697 For automatic timestamping, you can also use set -o xtrace and a $PS4 that prints the current time. In zsh : $ zsh -c 'PS4="+%D{%s.%.}> "; set -x; sleep 1; date +%s.%N'+1506332128.753> sleep 1+1506332129.754> date +%s.%N1506332129.755322928 In ksh93: $ ksh -c 'PS4="+\$(printf "%(%s.%3N)T")> "; set -x; sleep 1; date +%s.%N'+1506332247.844> sleep 1+1506332248.851> date +%s.%N1506332248.853111699 Depending on your use case, you may be able to rely on moreutils 's ts for your time-stamping: $ (date +%s.%6N; date +%s.%6N) | ts %.s1506319395.000080 1506319394.9706191506319395.000141 1506319394.971972 ( ts gives the time it read the line from date 's output through the pipe). Or for time between lines of output: $ (date +%s.%6N; date +%s.%6N) | ts -i %.s0.000011 1506319496.8065540.000071 1506319496.807907 If you want to get the time it took to run a given command (pipeline), you can also use the time keyword, adjusting the format with $TIMEFORMAT in bash : $ TIMEFORMAT=%E; time dateMon 25 Sep 09:51:41 BST 20170.002 Those time format directives initially come from csh (though bash , contrary to zsh or GNU time only supports a tiny subset). 
In (t)csh, you can time every command by setting the $time special variable:
$ csh -xc 'set time = (0 %E); sleep 1; sleep 2'
set time = ( 0 %E )
sleep 1
0:01.00
sleep 2
0:02.00
(the first number (0 here) tells that commands that take more than that many seconds should be timed, the second specifies the format). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238486/"
]
} |
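Since the question asks about bash specifically: bash 5.0 and later provide $EPOCHREALTIME (seconds.microseconds since the epoch), which avoids the external date call entirely; a sketch, assuming a recent enough bash and a locale that uses "." as the decimal separator:

t=$(( ${EPOCHREALTIME/./} / 1000 ))   # drop the separator -> microseconds, then to ms
printf 't=%d\n' "$t"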
394,169 | Is it risky to rename folder with 180GB with the mv command? We have a folder /data that contain 180GB. We want to rename the /data folder to /BD_FILES with the mv command. Is it safe to do that? | Changing the name on a folder is safe, if it stays within the same file system. If it is a mount point ( /data kinda looks like it could be a mount point to me, check this with mount ), then you need to do something other than just a simple mv since mv /data /BD_FILES would move the data to the root partition (which may not be what you want to happen). You should unmount the filesystem, rename the now empty directory, update /etc/fstab with the new location for this filesystem, and then remount the filesystem at the renamed location. In other words, umount /data mv /data /BD_FILES (assuming /BD_FILES doesn't already exist, in that case, move it out of the way first) update /etc/fstab , changing the mount point from /data to /BD_FILES mount /BD_FILES This does not involve copying any files around, it just changes the name of the directory that acts as the mount point for the filesystem. If the renaming of the directory involves moving it to a new file system (which would be the case if /data is on one disk while /BD_FILES is on another disk, a common thing to do if you're moving things to a bigger partition, for example), I'd recommend copying the data while leaving the original intact until you can check that the copy is ok. You may do this with rsync -a /data/ /BD_FILES/ for example, but see the rsync manual for what this does and does not do (it does not preserve hard links, for example). Once the folder is renamed, you also need to make sure that existing procedures (programs and users using the folder, backups etc.) are aware of the name change. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/394169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
394,334 | Is there any simple way to summarize a day value like (1327 days) to the format: xx years; xx month; xx days without having to use a separate variable for each value. Preferably with one command. | For a duration that includes a number of months or years, that has to make reference to a particular date, as different months or different years have different lengths. To know how many years/months/days from now to 1327 days from now, with dateutils : $ ddiff -f '%Y years, %m months, %d days' today "$(dadd now 1327)"3 years, 7 months, 19 days (you may sometimes find ddiff available as datediff or dateutils.ddiff ; same for dadd ). That's what I get now on 2017-09-25 (because that's from 2017-09-25 to 2021-05-14). If I were to run that on 2018-03-01, I'd get: 3 years, 7 months, 17 days because that's from 2018-03-01 to 2021-10-18. And on that 2018-03-01 day, 1327 days ago would give 3 years, 7 months, 16 days . More info at How can I calculate and format a date duration using GNU tools, not a result date? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
394,362 | I do Ctrl+W in the mac terminal to delete a word (deletes from where the cursor is at to the beginning of the word) How do I do the opposite - deletes from where the cursor is to the end of the word? | This depends on your shell and its active command line editing mode. For a shell with Emacs command line editing mode ( set -o emacs in some shells), use Alt+D (this doesn't work on macOS for whatever reason, but prints the character ∂ , use Esc d instead). For a shell with Vi command line editing mode ( set -o vi in some shells), use Esc dw (this does work on macOS as well). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1679/"
]
} |
394,381 | The following lines are defined in my /etc/fstab file. My current fstab: /dev/sdb /lpo/sda ext4 defaults,noatime 0 0/dev/sdc /lpo/sdb ext4 defaults,noatime 0 0 From blkid we get: /dev/sdb: UUID="14314872-abd5-24e7-a850-db36fab2c6a1" TYPE="ext4"/dev/sdc: UUID="6d439357-3d20-48de-9973-3afb2a325eee" TYPE="ext4" How to update my current fstab (the two lines) to use the UUID? For example, if I create the following line (according to the man page) for /dev/sdb , is it correct? UUID="14314872-abd5-24e7-a850-db36fab2c6a1" /dev/sdb ext4 defaults,noatime 0 0 | UUID="14314872-abd5-24e7-a850-db36fab2c6a1" /lpo/sda ext4 defaults,noatime 0 0UUID="6d439357-3d20-48de-9973-3afb2a325eee" /lpo/sdb ext4 defaults,noatime 0 0 The format of entries in fstab are as follows: <file system> <dir> <type> <options> <dump> <pass> Where <file system> is the device you want to mount (such as /dev/sdb and <dir> is the path to where the device should be mounted ( /lpo/sda in your case). There are multiple ways you can specify <file system> , the simplest being the path to the file system device in question /dev/sdb in your case (although typically they point to a partition on a drive rather than the drive, such as /dev/sdb1 but it appears that your drives lack a partition table and simply have the filesystem on the main device). But you can also use the device UUID or PARTUUID by specifying it as a key/value pair UUID="14314872-abd5-24e7-a850-db36fab2c6a1" inplace of /dev/sdb . The main reason to use UUID or PARTUUID instead of device paths is that they are more consistent when changing the physical disks. The devices are numbered according to how they are presented to the OS by the bios (which is normally ordered by the socket they are plugged into). This means that if you add in a new device or physically rearrange existing devices they will be renumbered and what was /dev/sdb before might not be now. As you can imagine this will result in the wrong disk being mounted to the wrong location. UUID and PARTUUID are ids that are written as part of formatting the filesystem for UUID or at the time of creating the partition in the case of PARTUUID . These numbers are written to the disk and will always remain the same so can be used to mount the correct disk even when the underlying device file gets renumbered. Side note: Your devices are a bit confusing - you have /dev/sdb mounted to /lpo/sda - while that works it can be confusing and lead to errors when you maintain/configuring your system, you may want to make these more consistent. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
394,400 | Not sure what I'm doing wrong here. User2 sources a file in it's .bash_profile to set environment specific aliases. # .bash_profilesource $HOME/set_environment_shortcuts Inside $HOME/set_environment_shortcuts (there are many aliases in here).Example: alias startservices="verylongcommand" Now I would like to 'startservices from another user. [User1@server1 ~]$ sudo su -l User2 -c '. ~/.bash_profile; startservices'-bash: startservices: command not found The runuser command produces the same result. [User1@Server1 ~]$ sudo runuser -l User2 -c '. ~/.bash_profile; startservices'-bash: startservices: command not found Do aliases not work in this way? Note, when bypassing the alias entirely, the command works. | Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set with shopt -s expand_aliases . Aliases are a shortcut tool for use interactively. For any kind of scripting, use a shell function instead: startservices () { # commands go here} Shell functions are a lot more flexible than aliases in many ways. They are able to take arguments like a shell script does, for starters: startservices () { user="$1" service="$2" # code to start service "$service" as user "$user"} You should not have to source the other user's .bash_profile explicitly. Use sudo -i instead. This will start a login shell, which will read .bash_profile when starting: $ sudo -i -u User2 startservices This requires startservices to either be a script or other external utility in the $PATH of User2 , a shell function defined in the shell startup files of User2 , or an alias (with the shell running with expand_aliases set) defined in the shell startup files of User2 . See also Is there ever a good reason to run sudo su? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111233/"
]
} |
394,421 | I would like to install kubectl version 1.2.4 on a machine. The Kubernetes documentation recommends using snap for installation on Ubuntu. snap install --help is not very useful, the one promising parameter --revision= doesn't work: $ sudo snap install --revision=1.2.4 kubectlerror: cannot decode request body into snap instruction: invalid snap revision: "\"1.2.4\"" I suspect that --revision expects a SHA rather than a semver. The apt-get convention of using package=1.2.3 also doesn't work: $ sudo snap install kubectl=1.2.4error: snap "kubectl=1.2.4" not found The usage documentation seems silent on the question. Anybody know? | you can run snap info kubectl which gives you a list of kubectl versions. Then you can install your preferred version with --channel like this sudo snap install kubectl --channel=1.6/stable --classic or if you want to upgrade / downgrade to specific version: sudo snap refresh kubectl --channel=1.6/stable --classic It seems that version 1.2.4 Is not available in snap, in that case you can download the executable https://storage.googleapis.com/kubernetes-release/release/v1.2.4/bin/linux/amd64/kubectl | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/394421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
394,431 | I have a C++ based application which I'm running(executable) as a daemon with systemd. Unit File: [Unit]Description=Console ServiceAfter=network.target[Service]Environment="USER=ubuntu" "Path=/home/ubuntu/console/bin" WorkingDirectory=/home/ubuntu/console/binExecStart=/bin/sh -ec "exec /sbin/start-stop-daemon -S -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --oknodo --exec consoleExecutable " #2>/dev/nullExecStop=/bin/sh -ec "exec /sbin/start-stop-daemon -K --quiet -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --retry=TERM/30/KILL/5 --oknodo --exec consoleExecutable" #2>/dev/nullRestart=on-failureRemainAfterExit=noTimeoutStopSec=10SuccessExitStatus=0 1TimeoutStartSec=360[Install]WantedBy=multi-user.target When I issue start command the service is starting up, but then it immediately receives a shutdown signal and then exits.Any clue, what is happening? sudo systemctl status console.service● console.service - Console Service Loaded: loaded (/etc/systemd/system/console.service; enabled; vendor preset: enabled) Active: deactivating (stop-sigterm) since Mon 2017-09-25 19:58:58 UTC; 1s ago Process: 8706 ExecStop=/bin/sh -ec exec /sbin/start-stop-daemon -K --quiet -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --retry=TERM/30/KILL/5 --oknodo --exec consoleExecutable #2>/dev/null (code=exited, status=0/SUCCESS) Process: 8701 ExecStart=/bin/sh -ec exec /sbin/start-stop-daemon -S -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --oknodo --exec consoleExecutable #2>/dev/null (code=exited, status=0/SUCCESS) Main PID: 8701 (code=exited, status=0/SUCCESS) Tasks: 1 Memory: 1.8M CPU: 53ms CGroup: /system.slice/console.service └─8705 consoleExecutableSep 25 19:58:58 mgmt1 systemd[1]: Started Console Service.sudo systemctl status console.service● console.service - Console Service Loaded: loaded (/etc/systemd/system/console.service; enabled; vendor preset: enabled) Active: inactive (dead) since Mon 2017-09-25 19:59:01 UTC; 947ms ago Process: 8706 ExecStop=/bin/sh -ec exec /sbin/start-stop-daemon -K --quiet -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --retry=TERM/30/KILL/5 --oknodo --exec consoleExecutable #2>/dev/null (code=exited, status=0/SUCCESS) Process: 8701 ExecStart=/bin/sh -ec exec /sbin/start-stop-daemon -S -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --oknodo --exec consoleExecutable #2>/dev/null (code=exited, status=0/SUCCESS) Main PID: 8701 (code=exited, status=0/SUCCESS)Sep 25 19:58:58 mgmt1 systemd[1]: Started Console Service. | Environment="USER=ubuntu" "Path=/home/ubuntu/console/bin" WorkingDirectory=/home/ubuntu/console/binExecStart=/bin/sh -ec "exec /sbin/start-stop-daemon -S -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --oknodo --exec consoleExecutable " #2>/dev/nullExecStop=/bin/sh -ec "exec /sbin/start-stop-daemon -K --quiet -c ${USER} -d ${Path} --pidfile=/var/run/console.pid --retry=TERM/30/KILL/5 --oknodo --exec consoleExecutable" #2>/dev/null This is almost worthy of the systemd House of Horror. Were it not the case that there's a horror story already in there that does this. Do not use start-stop-daemon in a service unit to do all of the things that a service unit already does . With unnecessary PID files and the wrongheaded assumption that ExecStart accepts shell syntax comments, no less. And do not do what the other answer says and try to bodge it with Type=forking . That makes things worse, not better. The nonsense with start-stop-daemon is why things are going wrong. 
Because the process running start-stop-daemon does not become the service, but in fact exits pretty much immediately, systemd thinks that your service has terminated. In your first systemctl status output, you can see that systemd is in the middle of sending SIGTERM to clean up all left-over running processes after running the ExecStop action, which is what it does when it thinks that a service has terminated. Just do things simply:
Type=simple
WorkingDirectory=/home/ubuntu/console/bin
User=ubuntu
ExecStart=/home/ubuntu/console/bin/consoleExecutable
No ExecStop nor Environment is actually required. Further reading: Jonathan de Boyne Pollard (2015). You really don't need to daemonize. Really. The systemd House of Horror. Jonathan de Boyne Pollard (2016). If you have two services, define two services. The systemd House of Horror. Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons. Frequently Given Answers. Systemd kills service immediately after start | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252845/"
]
} |
394,438 | is there a way to list or install security upgrades only using apt? if I list upgrades with: apt list --upgradable can I also see without knowing packages and libraries which upgrades are relevant security upgrades . and furthermore is there an option to only apply those by skipping any others, so the non-security-relevant upgrades would be prompted again next time I run apt upgrade ? | apt can’t (yet) provide the information you’re after. aptitude can though, albeit somewhat confusingly: aptitude search '~U ~ODebian' -F "%p %O"|awk '/Debian-Security/ {print $1}' This searches all upgradable ( ~U ) packages from official Debian repositories ( ~ODebian ), and displays their package name ( %p ) and “origin” ( %O ). The latter actually displays the repository label , which is “Debian-Security:9/stable” for the Debian 9 security repositories. You end up with a list of upgradable package names from the security repositories. There are a variety of ways to install only security upgrades, none of them ideal though. aptitude ’s text interface allows only security upgrades to be applied, simply by scrolling to the “Security Updates” header (which should be the first one) and hitting + . You can feed the list of packages extracted above to apt to install the upgrades: aptitude search '~U ~ODebian' -F "%p %O" |awk '/Debian-Security/ {print $1}' |xargs apt-get install --only-upgrade This has the unfortunate side-effect of clearing the “automatically installed” marker on upgraded packages. You can use unattended-upgrades , whose default action is to only apply security upgrades: unattended-upgrades -v If you don’t want upgrades to be installed automatically, you’ll need to disable unattended-upgrades ’s daily cron job. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
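To preview what unattended-upgrades would install without actually applying anything, it supports a dry-run mode (a sketch; -d adds verbose debugging output):

sudo unattended-upgrades --dry-run -d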