source_id | question | response | metadata
---|---|---|---|
346,826 | I have a pen drive and one partition: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk └─sda1 8:1 0 931.5G 0 part / sdb 8:16 1 7.5G 0 disk └─sdb1 8:17 1 7.5G 0 part and I have formatted it with the command: # mkfs.fat -n A /dev/sdb and it works fine. But after that, I skimmed through the man page for mkfs: mkfs is used to build a Linux filesystem on a device, usually a hard disk partition. The device argument is either the device name (e.g. /dev/hda1, /dev/sdb2), or a regular file that shall contain the filesystem. The size argument is the number of blocks to be used for the filesystem. It says mkfs should be used with a partition. So my question is: why does my operation work without any error? | Creating a filesystem on a whole disk rather than a partition is possible, but unusual. The documentation only explicitly mentions the partition because that's the most usual case (it does say usually ). You can create a filesystem on anything that acts sufficiently like a fixed-size file, i.e. something where if you write data at a certain location and read back from the same location then you get back the same data. This includes whole disks, disk partitions, and other kinds of block devices , as well as regular files (disk images). After doing mkfs.fat -n A /dev/sdb , you no longer have a partition on that disk. Beware that the kernel still thinks that the disk has a partition, because it keeps the partition table cached in memory. But you shouldn't try to use /dev/sdb1 anymore, since it no longer exists; writing to it would corrupt the filesystem you created on /dev/sdb since /dev/sdb1 is a part of /dev/sdb (everything except a few hundred bytes at the beginning). Run the command partprobe as root to tell the kernel to re-read the partition table. While creating a filesystem on a whole disk is possible, I don't recommend it. Some operating systems may have problems with it (I think Windows would cope but some devices such as cameras might not), and you lose the possibility of creating other partitions. See also The merits of a partitionless filesystem | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199064/"
]
} |
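The point that any fixed-size, seekable target works can be demonstrated without risking a real drive. A minimal sketch using a disk image file (the path and size below are arbitrary placeholders, not from the original answer):

```sh
# Create a 64 MiB sparse image file to stand in for a disk.
truncate -s 64M /tmp/disk.img

# mkfs.fat treats it exactly like the whole-disk case in the question.
mkfs.fat -n A /tmp/disk.img

# Mount via a loop device to confirm the filesystem is usable.
sudo mount -o loop /tmp/disk.img /mnt
```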
346,841 | XDG_RUNTIME_DIR is necessary for systemctl --user to work. I have set up Ubuntu Server 16.04 to run systemd user sessions. Now, when trying to administer them, I find that when changing a user via sudo -u $user -i or even su - $user , the environment does not have XDG_RUNTIME_DIR set, preventing systemctl --user from working. However, when I ssh straight into that user, it is set correctly. If I understand the documentation correctly, this should be set by libpam-systemd when creating the user session. The user slice is started correctly, as the directory to which XDG_RUNTIME_DIR should point ( /run/user/$uid ) exists. I'm hesitant to just hardcode it in, say, .bash_profile , because that seems hacky (albeit working), when pam should be taking care of it. I can, of course, add XDG_RUNTIME_DIR to env_keep in sudoers , but that would just preserve the sudoing user's environment, which is not what I want. I want the target user's environment. What I'm really wondering, though, is how come the session is set up correctly with ssh , but not with su or sudo -i ? | I have replicated this issue on my Fedora 25 system. I found a very suspicious condition in the source code. https://github.com/systemd/systemd/blob/f97b34a/src/login/pam_systemd.c#L439 It looks as if it was written with normal sudo in mind but not sudo -u non-root-user . machinectl shell --uid=non-root-user worked as you requested. systemd-run did not appear to work as desired, despite the reference to it in the machinectl documentation. Some machinectl commands don't work if you have enabled SELinux at the moment, and these specific commands didn't work for me until I did setenforce 0 . However I'm in the middle of trying workarounds to get machinectl working as I want it to w.r.t. SELinux, so it's possible my fiddling is what causes e.g. machinectl shell to timeout. EDIT: I think this code was introduced after this discussion . And apparently su - / sudo -i could be made to work, but no-one has implemented it (still). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26423/"
]
} |
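Until pam_systemd handles su - / sudo -i sessions, the fallback the asker calls hacky remains common. A sketch of what such a ~/.bash_profile workaround might look like, deriving the path from the target user's uid rather than hardcoding it:

```sh
# Fall back to the per-user runtime dir if pam_systemd did not export it.
if [ -z "$XDG_RUNTIME_DIR" ]; then
    export XDG_RUNTIME_DIR="/run/user/$(id -u)"
fi
```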
346,902 | I have the given code below for converting bytes to the corresponding values: for OUTPUT in $(find $IP_DIR -maxdepth 1 | awk 'NR>1'); do case $RETURNSIZE in "gb") FS=`du -b $OUTPUT | awk {'print $1'}` FS=`echo $FS | awk '{ byte =$1 /1024/1024**2 ; print byte " GB" }'` echo $OUTPUT "|" $FS;; "mb") FS=`du -b $OUTPUT | awk {'print $1'}` FS=`echo $FS | awk '{ byte =$1 /1024/1024 ; print byte " MB" }'` echo $OUTPUT "|" $FS;; "kb") FS=`du -b $OUTPUT | awk {'print $1'}` FS=`echo $FS | awk '{ byte =$1 /1024 ; print byte " KB" }'` echo $OUTPUT "|" $FS;; "b") FS=`du -b $OUTPUT | awk {'print $1'}` echo $OUTPUT "|" $FS "B";; esac done OUTPUT: /home/work/exten.sh | 3.53903e-07 GB /home/work/e.txt | 0 GB /home/work/t.txt | 0 GB /home/work/test | 9.53674e-07 GB /home/work/s.txt | 3.23169e-07 GB The logic seems to work perfectly, but when the case comes to gb ( RETURNSIZE ), the result is in exponential format. I am looking for the result in normal decimal format instead. | Since this is Linux I'm surprised no-one mentioned the numfmt command: numfmt - Convert numbers from/to human-readable strings >> numfmt --to iec --format "%8.4f" 599511627776 558.3388G | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48806/"
]
} |
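Applied to the asker's loop, numfmt lets the whole case/awk construction collapse into a single pipeline. A sketch assuming GNU coreutils and the same IP_DIR variable from the question:

```sh
# Print "<name> | <human-readable size>" for each entry below $IP_DIR.
for OUTPUT in "$IP_DIR"/*; do
    printf '%s | %s\n' "$OUTPUT" \
        "$(du -b "$OUTPUT" | cut -f1 | numfmt --to=iec --format='%.4f')"
done
```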
346,917 | I've tried a bunch of examples of the rename command but I can't figure out the syntax to do what I want. I have a bunch of files labeled something like File_Ex_1.jpg File_Ex_2.jpg File_Ex_3.jpg File_Ex_4.jpg ... File_Ex_10.jpg File_Ex_11.jpg etc. I want to change only some of them by inserting a 0 so that the file names have the same number of characters. So I want File_Ex_1.jpg to go to File_Ex_01.jpg , File_Ex_2.jpg to File_Ex_02.jpg How do you do this with the rename command? This rename command was installed via Homebrew on a Mac. Output of rename -v : rename -v Usage: rename [switches|transforms] [files] Switches: -0/--null (when reading from STDIN) -f/--force or -i/--interactive (proceed or prompt when overwriting) Wide character in print at /System/Library/Perl/5.18/Pod/Text.pm line 286. -g/--glob (expand "*" etc. in filenames, useful in Windows™ CMD.EXE) -k/--backwards/--reverse-order -l/--symlink or -L/--hardlink -M/--use=*Module* -n/--just-print/--dry-run -N/--counter-format -p/--mkpath/--make-dirs --stdin/--no-stdin -t/--sort-time -T/--transcode=*encoding* -v/--verbose Transforms, applied sequentially: -a/--append=*str* -A/--prepend=*str* -c/--lower-case -C/--upper-case -d/--delete=*str* -D/--delete-all=*str* -e/--expr=*code* -P/--pipe=*cmd* -s/--subst *from* *to* -S/--subst-all *from* *to* -x/--remove-extension -X/--keep-extension -z/--sanitize --camelcase --urlesc --nows --rews --noctrl --nometa --trim (see manual) | Try with this: rename -e 's/\d+/sprintf("%02d",$&)/e' -- *.jpg Example: $ ls Device_Ex_10.jpg Device_Ex_1.jpg Device_Ex_4.jpg Device_Ex_7.jpg Device_Ex_11.jpg Device_Ex_2.jpg Device_Ex_5.jpg Device_Ex_8.jpg Device_Ex_12.jpg Device_Ex_3.jpg Device_Ex_6.jpg Device_Ex_9.jpg $ rename -e 's/\d+/sprintf("%02d",$&)/e' -- *.jpg $ ls Device_Ex_01.jpg Device_Ex_04.jpg Device_Ex_07.jpg Device_Ex_10.jpg Device_Ex_02.jpg Device_Ex_05.jpg Device_Ex_08.jpg Device_Ex_11.jpg Device_Ex_03.jpg Device_Ex_06.jpg Device_Ex_09.jpg Device_Ex_12.jpg I took the reference from here: https://stackoverflow.com/questions/5417979/batch-rename-sequential-files-by-padding-with-zeroes Here adapted to your particular rename implementation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217500/"
]
} |
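If the Perl rename is not available, the same zero-padding can be done with a plain shell loop. A hedged sketch using only mv and parameter expansion, assuming the File_Ex_N.jpg pattern from the question:

```sh
for f in File_Ex_*.jpg; do
    n=${f#File_Ex_}   # strip the prefix, leaving e.g. "1.jpg"
    n=${n%.jpg}       # strip the suffix, leaving e.g. "1"
    # Pad only single-digit numbers; 10, 11, ... already line up.
    [ "${#n}" -eq 1 ] && mv -- "$f" "$(printf 'File_Ex_%02d.jpg' "$n")"
done
```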
346,924 | Why when I enter this command the prompt changes to my directory? PS1='$(pwd)' I am using single quotes, which means no interpolation, a.k.a. echo '$(pwd)' ——→ $(pwd) Furthermore, say that we clarified why this works... why is it functioning differently from PS1=$(pwd) ? (no quotes at all) By different I mean that if I use the quotes then the prompt will keep changing to my current directory as I navigate through the terminal. But if I don't use quotes, then the prompt will always remain the directory that I was in when I first entered the command PS1=$(pwd) why? | When you simply assign a value to a variable, the $(...) expression is evaluated unless it is enclosed in single quotes (or backslash-escaped). To understand, try and compare these two: A=$(pwd); echo "$A"; B='$(pwd)'; echo "$B" The value of A immediately becomes the string /home/yourusername and obviously it's not remembered where this string comes from, it stays the same even if you change directory. The value of B , however, becomes the literal string $(pwd) without getting interpreted. Now, in the value of PS1 something special happens: whenever the prompt is printed, certain special constructs are interpreted, e.g. the command substitution $(...) is performed exactly the way it happened above at the assignment to the A variable. Obviously if your PS1 contains the literal string of your home directory (as above with A ) then there's no way it could change. But if it contains the string $(pwd) (as above with B ) then it is evaluated whenever the prompt is printed and hence your actual directory is displayed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217505/"
]
} |
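The difference is easy to verify interactively. A harmless sketch to paste into a bash session (it only changes the prompt):

```sh
PS1='$(pwd) $ '   # single quotes: pwd is re-run every time the prompt prints
cd /tmp           # the prompt now shows /tmp

PS1="$(pwd) $ "   # double quotes: pwd expanded once, at assignment time
cd /              # the prompt still shows the old directory
```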
346,935 | I have seen that with IPv6 you can assign multiple addresses to one interface. I would like to take advantage of that to be able to have specific https certificates associated with a specific address, rather than trying to use one catch-all certificate. Does anyone know how to specify a collection of IP addresses to bind to an interface, on Linux (Ubuntu)? I will be using Nginx, so it can bind to an IP address when specifying a virtual host. BTW The clients visiting the services I am setting up are IPv6 enabled, so I am not worried about not getting any IPv4 based connections. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27460/"
]
} |
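For the question above, a hedged sketch of one common approach on Ubuntu: add extra IPv6 addresses to the interface with iproute2, then point each Nginx server block at one of them. The addresses and interface name below are documentation placeholders, not values from any original answer:

```sh
# Add two additional IPv6 addresses to eth0 (2001:db8::/32 is a documentation prefix).
sudo ip -6 addr add 2001:db8::10/64 dev eth0
sudo ip -6 addr add 2001:db8::11/64 dev eth0
```

Each Nginx virtual host can then carry its own certificate by binding explicitly, e.g. listen [2001:db8::10]:443 ssl; in its server block.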
346,938 | I'm working on an application, where I need to create drives with always the exact same partition layout. My initial thought was to once dump the partition table of the original drive with sfdisk. sfdisk -d /dev/sdX > parttable And then apply it to all other drives with: sfdisk /dev/sdX < parttable But this method doesn't seem to work. I dumped the right partition table from a USB drive, then created some random partitions with gparted and then tried to write the initial partition table back to the drive. But the problem is, the partition isn't recognized. Gparted for example lists the partition as unknown. I figured, I probably have to format the created partition, as the partition table stores no information about filesystems. My question is now: Can I somehow save the partition table and information about the partitions (filesystem etc.) and create a new drive this way (at best in one command). btw.: msdos partition table Edit: An alternative would be, to gather all the data about the drives (e.g. parttable, filesystems) myself and create the command manually. Is it possible (maybe with parted) to create the partition table and format multiple partitions in one command? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/346938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210618/"
]
} |
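Since an sfdisk dump only describes the table, filesystems have to be recreated separately, but the two steps chain naturally. A sketch for cloning the layout to several drives, assuming one ext4 partition per drive (the device names are placeholders; adjust the mkfs calls to the real layout):

```sh
# Capture the layout once from the reference drive.
sudo sfdisk -d /dev/sdX > parttable

# Replay it onto each target, then create the filesystem(s).
for dev in /dev/sdY /dev/sdZ; do
    sudo sfdisk "$dev" < parttable
    sudo mkfs.ext4 "${dev}1"
done
```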
346,954 | I tried the following : [$] wget http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm > Conan1.webm [1:21:05] --2017-02-23 01:51:50-- http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm Resolving cdn.edgecast.steamstatic.com (cdn.edgecast.steamstatic.com)... 117.18.232.131 Connecting to cdn.edgecast.steamstatic.com (cdn.edgecast.steamstatic.com)|117.18.232.131|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 11255413 (11M) [video/webm] Saving to: ‘movie480.webm’ movie480.webm 100%[===================================================================>] 10.73M 33.7KB/s in 6m 25s 2017-02-23 01:58:16 (28.6 KB/s) - ‘movie480.webm’ saved [11255413/11255413] As can be seen, the first part of the command worked: wget downloaded the file. But the second part, which should rename the file, did not: a file with the very generic name movie480.webm was downloaded. Why wasn't conan1.webm , the name I had suggested it took? I do know that if I had done - $ mv movie480.webm conan1.webm then it would work, but this means an additional command. Why did that fail? Could there have been another way to do the same thing in a single command, similar to the one shown above? | You didn't "suggest it took" the name Conan1.webm , you redirected its standard output stream to a file called Conan1.webm . Since wget doesn't write to standard output by default, that has no effect on where the content is saved. See man wget - in particular the -O option: -O file --output-document=file The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If - is used as file, documents will be printed to standard output, disabling link conversion. (Use ./- to print to a file literally named -.) So you could have used wget -O Conan1.webm http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm or wget -O- http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm > Conan1.webm | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/346954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
346,967 | I have a Debian system that gets its IP from a DHCP server, with my own bind9 server running on localhost . Every time I boot, I have to write nameserver 127.0.0.1 to /etc/resolv.conf , as the system updates the file to match the DNS server assigned by DHCP. To try to prevent resolv.conf from getting updated with DNS server information from my network's DHCP server, I tried writing the following to /etc/network/interfaces : iface eth0 inet dhcp dns-nameservers 127.0.0.1 but that only works when the system has a static IP. So how can I prevent /etc/resolv.conf from getting overwritten with the DHCP server's assigned DNS server, without giving my system a static IP? I use dhclient . | If you are sure you are using dhclient , you can: 1. Change the dhclient setting (recommended) Edit the file /etc/dhcp/dhclient.conf , search for domain-name-servers, and delete it from the line: request subnet-mask, broadcast-address, time-offset, routers, domain-name, domain-name-servers, domain-search, host-name, dhcp6.name-servers, dhcp6.domain-search, dhcp6.fqdn, dhcp6.sntp-servers, netbios-name-servers, netbios-scope, interface-mtu, rfc3442-classless-static-routes, ntp-servers; 2. Prevent /etc/resolv.conf from being overwritten Run the following command as root or using sudo so that the file won't get overwritten again: chattr +i /etc/resolv.conf It's more or less a duplicate of Can not set static DNS on debian . But I can't comment so I add this answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/346967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153468/"
]
} |
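A related dhclient.conf mechanism worth knowing: instead of deleting the request entry, a supersede statement overrides whatever the DHCP server sends. A sketch (supersede is a standard dhclient.conf directive):

```sh
# Force dhclient to always write the local bind9 into resolv.conf.
echo 'supersede domain-name-servers 127.0.0.1;' | sudo tee -a /etc/dhcp/dhclient.conf
```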
346,973 | I already got this installed: 1 core/archlinux-keyring 20170104-1 [installed] 10 blackarch/blackarch-keyring 20140118-3 [installed] But I get an error when upgrading libc++abi from AUR: ==> Verifying source file signatures with gpg... llvm-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) libcxx-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) libcxxabi-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) ==> ERROR: One or more PGP signatures could not be verified! ==> ERROR: Makepkg was unable to build libc++. ==> Restart building libc++abi ? [y/N] How to resolve this? Is there a way to know which keyring I should install to solve this issue? | gpg --recv-keys 8F0871F202119294 For AUR packages, the missing key needs to be added to your USER keyring. I did not need to trust the key for makepkg to finish the build. In my particular case, ~/.gnupg/gpg.conf also needed: keyserver-options no-honor-keyserver-url Missing keys for official Arch repos normally mean an outdated archlinux-keyring . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/346973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27996/"
]
} |
347,013 | From my understanding of /etc/fstab options, noauto means that the device will not be mounted at boot time (or with mount -a ). Is there any situation where adding nofail changes the behaviour if noauto is already given, or is it totally redundant? man systemd.mount(5) says: With noauto , this mount will not be added as a dependency for local-fs.target or remote-fs.target. This means that it will not be mounted automatically during boot, unless it is pulled in by some other unit. With nofail , this mount will be only wanted, not required, by local-fs.target or remote-fs.target. This means that the boot will continue even if this mount point is not mounted successfully. What about automount situations? | Just for the record: For an external USB disk which is usually not connected at startup, I have an fstab entry /dev/disk/by-label/data /data xfs noauto,user,noatime 0 0 When booting there is no error as noauto keeps the system from trying to mount. When I try to mount manually without the drive connected, I immediately get the error ~$ mount /data mount: special device /dev/disk/by-label/data does not exist ~$ If I change the line in fstab to /dev/disk/by-label/data /data xfs noauto,nofail,user,noatime 0 0 there is no error reported, even when the drive is not available: ~$ mount /data ~$ System: Ubuntu 16.04 with systemd. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
347,032 | In bash, I can do this: for name in sqls/*.{schema,migration}.sql; do echo $name; # some operation on the file name done It would output something like this: sqls/0001.schema.sql sqls/0002.schema.sql sqls/0003.migration.sql sqls/0004.schema.sql sqls/0006.schema.sql sqls/0007.schema.sql However, if I put the same loop into a Makefile: names: for name in sqls/*.{schema,migration}.sql; do echo $name; done The output becomes: sqls/*.{schema,migration}.sql How can I do the same thing in a Makefile? | What you want is : names: for name in sqls/*.{schema,migration}.sql; do\ echo $${name};\ # some operation on the file name done As explained in the docs , that's the difference between accessing GNU Make and bash variables. Here's a simple example using both : LIST = one two three all: for i in $(LIST); do \ echo $$i; \ done Here i is a bash variable, whilst LIST is a GNU Make variable. In short , if you need bash to receive a dollar sign $ , put a double dollar sign $$ in your makefile. (Note that the use of {} or () is equivalent in makefiles or bash .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140360/"
]
} |
347,130 | I know single quotes will not evaluate what's inside, and double quotes will. I often see people quote-quote the dollar sign evaluation. Here are some examples: for i in "${indices[@]}"; do if [ "${a}" == 0 ]; then ffmpeg -framerate 2 -pattern_type glob -i "*.png" -pix_fmt yuv420p output.mp4 What if we don't double-quote the dollar sign expression? AFAIK, the for loop still works. | Your three examples are not all exactly alike. In the last two: if [ "${a}" == 0 ]; then ffmpeg -framerate 2 -pattern_type glob -i "*.png" -pix_fmt yuv420p output.mp4 If $a was not quoted, and its value contained characters of $IFS (by default space, tab and newline) or wildcard characters, this would cause [ to receive more than three arguments (beside the [ and ] ones), resulting in an error; similarly, if the value of $a was the empty string, this would cause [ to receive too few arguments: $ (a=0; [ $a == 0 ] && echo OK) OK (but only if $IFS currently happens not to contain 0 ) $ (a='foo bar'; [ $a == 0 ] && echo OK) bash: [: too many arguments (with the default value of $IFS ) $ (a=; [ $a == 0 ] && echo OK) bash: [: ==: unary operator expected (even with an empty $IFS or with zsh (that otherwise doesn't implement that implicit split+glob operator upon unquoted expansions)) $ (a='*'; [ $a == 0 ] && echo OK) bash: [: too many arguments (when run in a directory that contains at least 2 non-hidden files). With quoting, no error: $ (a='foo bar'; [ "$a" == 0 ] && echo OK) $ (a=; [ "$a" == 0 ] && echo OK) Your first example is different. The rules about expansion within double quotes are special when arrays are involved; if a denotes an array, then: $a is the first element of the array (strictly speaking it's ${a[0]} even if the element at index 0 is not defined); ${a[*]} or ${a[@]} are the elements of the array, additionally split at $IFS (space, tab, newline by default); "${a[@]}" is the elements of the array, not split at $IFS . So your loop for i in "${indices[@]}"; do ... does not actually work the same, depending on the contents of the array. For example: $ (declare -a a=(a b c); printf '%s\n' $a) a $ (declare -a a=(a b c); printf '%s\n' ${a[*]}) a b c $ (declare -a a=(a 'b c'); printf '%s\n' ${a[*]}) a b c $ (declare -a a=(a 'b c'); printf '%s\n' ${a[@]}) a b c $ (declare -a a=(a 'b c'); printf '%s\n' "${a[*]}") a b c $ (declare -a a=(a 'b c'); printf '%s\n' "${a[@]}") a b c |
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/71888/"
]
} |
347,167 | I have a post-mortem atop log on an Ubuntu machine, which I'm able to view with atop -r . The problem is that log files are shown in 10-minute intervals with t and Shift+t , while I would prefer 1-minute intervals. When the current time is changed with b , it is quantized to the nearest 10-minute point. I have a line in /etc/init.d/atop , but I'm not sure if it affects anything here: INTERVAL=60 How can I make atop logs browsable with 1-minute accuracy with t and Shift+t ? Is the information between these 10-minute intervals already lost? What are the possible workarounds if this is not possible? | Add the following line to /etc/atoprc , if the file doesn't exist create it: interval 60 atop no longer uses the /etc/default/atop file. Unless you are using an older version of atop . Then you might want to change INTERVAL=600 to INTERVAL=60 in /etc/default/atop . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217690/"
]
} |
347,178 | I had written procedure for myself on how to deploy a web app, and it contains a step that says to do this: vi ~/.bash_aliases i alias python=python3 Esc :wq This step worked a few months ago on a different instance of Debian Jessie. Today, it wasn't working. After some searching, I found that simply running this works: alias python=python3.6 My question is, what's the difference between these two methods, and any other possible methods of creating aliases? Do they all have the same end-result or are there any subtle differences in functionality/performance? Which method should I be using? | Your first method will add it to .bash_aliases , which means the alias will be loaded every time you log in. Your second method adds the alias temporarily, but it will not persist beyond your session. For more information see What is the .bashrc file? ( .bashrc should include .bash_aliases ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207108/"
]
} |
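Combining the two methods from the answer: persist the alias for future logins and activate it in the running shell in one go. A sketch:

```sh
# Persist for future login shells...
echo 'alias python=python3.6' >> ~/.bash_aliases
# ...and load it into the current session immediately.
source ~/.bash_aliases
```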
347,188 | The purpose of this question is to answer a curiosity, not to solve a particular computing problem. The question is: Why are POSIX mandatory utilities not commonly built into shell implementations? For example, I have a script that basically reads a few small text files and checks that they are properly formatted, but it takes 27 seconds to run, on my machine, due to a significant amount of string manipulation. This string manipulation makes thousands of new processes by calling various utilities, hence the slowness. I am pretty confident that if some of the utilities were built in, namely grep , sed , cut , tr , and expr , then the script would run in a second or less (based on my experience in C). It seems there would be a lot of situations where building these utilities in would make the difference between whether or not a solution in shell script has acceptable performance. Obviously, there is a reason it was chosen not to make these utilities built in. Maybe having one version of a utility at a system level avoids having multiple unequal versions of that utility being used by various shells. I really can't think of many other reasons to keep the overhead of creating so many new processes, and POSIX defines enough about the utilities that it does not seem like much of a problem to have different implementations, so long as they are each POSIX compliant. At least not as big a problem as the inefficiency of having so many processes. | Why are POSIX mandatory utilities not built into shell? Because to be POSIX compliant, a system is required 1 to provide most utilities as standalone commands. Having them builtin would imply they have to exist in two different locations, inside the shell and outside it. Of course, it would be possible to implement the external version by using a shell script wrapper to the builtin, but that would disadvantage non shell applications calling the utilities. Note that BusyBox took the path you suggested by implementing many commands internally, and providing the standalone variant using links to itself. One issue is while the command set can be quite large, the implementations are often a subset of the standard so aren't compliant. Note also that at least ksh93 , bash and zsh go further by providing custom methods for the running shell to dynamically load builtins from shared libraries. Technically, nothing then prevents all POSIX utilities to be implemented and made available as builtins. Finally, spawning new processes has become quite a fast operation with modern OSes. If you are really hit by a performance issue, there might be some improvements to make your scripts run faster. 1 POSIX.1-2008 However, all of the standard utilities , including the regular built-ins in the table, but not the special built-ins described in Special Built-In Utilities, shall be implemented in a manner so that they can be accessed via the exec family of functions as defined in the System Interfaces volume of POSIX.1-2008 and can be invoked directly by those standard utilities that require it (env, find, nice, nohup, time, xargs). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/347188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/183171/"
]
} |
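The fork+exec cost the asker describes is easy to measure. A rough benchmark sketch contrasting an external expr call with the equivalent bash builtin expansion (absolute timings will vary by machine):

```sh
# 1000 external processes: each iteration forks and execs /usr/bin/expr.
time for i in $(seq 1000); do expr length "some test string" >/dev/null; done

# 1000 builtin expansions: no new process is ever created.
time for i in $(seq 1000); do s="some test string"; : "${#s}"; done
```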
347,275 | I would like to execute this shell script at reboot and shutdown: #!/bin/sh touch /test Its permissions are -rwxr-xr-x 1 root root 22 Feb 24 09:34 /etc/init.d/te1 And it has these links /etc/rc0.d/K01te1 -> ../init.d/te1 /etc/rc6.d/K01te1 -> ../init.d/te1 It is working at start up if I have this link /etc/rc5.d/S01te1 -> ../init.d/te1 But I need it running at shutdown. How can I do this on Debian 8 and 9 testing? The suggestion touch /var/lock/subsys/te1 didn't work. | I got the impression that others seem to have problems in getting this running, too. Seems like starting with Debian 8.0 (Jessie) systemd breaks compatibility with System V init. So here it was suggested to create a systemd service instead. The solution is used here and looks like this: [Unit] Description=The te1 script [Service] Type=oneshot RemainAfterExit=true ExecStart=/bin/true ExecStop=/usr/local/bin/te1 [Install] WantedBy=multi-user.target The systemd service needs to be saved in /lib/systemd/system/te1.service and installed with sudo systemctl enable te1 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159562/"
]
} |
347,280 | I have 2TB ext4 partition with half million files on it. I want to check whether this partition contains any errors or not. I don't want to search for bad blocks, only logical structure should be checked. I have unmounted the partition and run fsck /dev/sda2 , but fsck returns immediately with exit code 0 without actually checking whole file system. I'm expecting full partition check would take hours to complete. I have read man fsck but did not find an option for "thorough testing". I'm afraid my partition may have some sectors accidentally overwritten by garbage data. My HDD was previously connected to another OS, and ext4 partition may get harmed by wrong behavior of that OS. That's why I want to be sure the whole tree structure is completely correct. In other words, I want to perform a check similar to what utility chkdsk.exe does on Windows. What should I use on Debian to completely check ext4 file system? | As mentioned by Satō Katsura , run e2fsck in "force" mode: e2fsck -f /dev/sda2 This will force a check even if the system thinks the file system is clean. The "verbose" option is helpful too: e2fsck -vf /dev/sda2 As a side-note, and not applicable in your case, but if you use LVM for your storage you can use the neat little lvcheck tool to run an "offline" file system check on a mounted file system (it uses an LVM snapshot and updates the file system metadata if the check doesn't find any errors). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/347280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124598/"
]
} |
347,295 | I have sha1sum or sha512sum on an average Linux distro. But where is the sha3sum command that can generate SHA-3 sums? | You can use OpenSSL to do this. Run openssl list -digest-algorithms to get a list of algorithms: ... SHA3-224 SHA3-256 SHA3-384 SHA3-512 ... As you can see, sha3-{224,256,384,512} is supported by OpenSSL 1.1.1 (11 Sep 2018) from Ubuntu 18.10. You can also run openssl interactively: OpenSSL> help ... Message Digest commands (see the `dgst' command for more details) blake2b512 blake2s256 gost md4 md5 rmd160 sha1 sha224 sha256 sha3-224 sha3-256 sha3-384 sha3-512 sha384 sha512 sha512-224 sha512-256 shake128 shake256 sm3 To checksum a file: openssl dgst -sha3-512 /bin/echo SHA3-512(/bin/echo)= c9a3baaa2aa3d667a4ff475d893b3e84eb588fb46adecd0af5f3cdd735be88c62e179f98dc8275955da4ee5ef1dc7968620686c6f7f63f5b80f10e43bc1f00fc To checksum a string: printf "foobar" | openssl dgst -sha3-512 You can change the output format with these options: -c Print out the digest in two digit groups separated by colons -r Output the digest in the "coreutils" format, including newlines | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196913/"
]
} |
347,306 | According to this answer and this bug report one can simply drop a script in /usr/lib/systemd/system-shutdown/ or for Debian in /lib/systemd/system-shutdown/ to have it executed at reboot or shutdown. From https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html : Immediately before executing the actual system halt/poweroff/reboot/kexec systemd-shutdown will run all executables in /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All executables in this directory are executed in parallel, and execution of the action is not continued before all executables finished. My script is described at How to run a script at shutdown on Debian 9 or Raspbian 8 (Jessie) as: #!/bin/sh touch /test However, it didn't seem to run on my Debian systems and I even reported it as a bug . | In fact, the script is running. As pointed out by Bigon and in the bug report the touch just cannot take effect because the file system is already mounted read-only when the scripts in /lib/systemd/system-shutdown/ are executed. One can prove this by mounting the fs read-write before the touch : #!/bin/sh mount -oremount,rw /; touch /test; mount -oremount,ro / Now the /test really appears after a reboot. However, this also means that running my script through this folder will not be useful since it will happen too late. In order to write to log files etc. one needs to run the script earlier through a service as suggested by Bigon . I explain this at How to run a script at shutdown on Debian 9 or Raspbian 8 (Jessie) . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159562/"
]
} |
347,390 | I'm trying to do some tricks with dd. I thought it would be possible to store some hex values in a variable called "header" to pipe it into dd. My first step without a variable was this: $ echo -ne "\x36\xc9\xda\x00\xb4" | dd of=hex $ hd hex 00000000 36 c9 da 00 b4 |6....| 00000005 After that I tried this: $ header=$(echo -ne "\x36\xc9\xda\x00\xb4") $ echo -n $header | hd 00000000 36 c9 da b4 |6...| 00000004 As you can see I lost my \x00 value in the $header variable. Does anyone have an explanation for this behavior? This is driving me crazy. | You can't store a null byte in a string because Bash uses C-style strings, which reserve the null byte for terminators. So you need to rewrite your script to simply pipe the sequence that contains the null byte without Bash needing to store it in the middle. For example, you can do this: printf "\x36\xc9\xda\x00\xb4" | hd Notice, by the way, that you don't need echo ; you can use Bash's printf for this and many other simple tasks. Or instead of chaining, you can use a temporary file: printf "\x36\xc9\xda\x00\xb4" > /tmp/mysequence hd /tmp/mysequence Of course, this has the problem that the file /tmp/mysequence may already exist. And now you need to keep creating temporary files and saving their paths in strings. Or you can avoid that by using process substitution: hd <(printf "\x36\xc9\xda\x00\xb4") The <(command) operator creates a named pipe in the file system, which will receive the output of command . hd will receive, as its first argument, the path to that pipe—which it will open and read almost like any file. You can read more about it here: https://unix.stackexchange.com/a/17117/136742 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217836/"
]
} |
347,463 | How can I get the first line of an input text file, while deleting that line from the text file? If I had a text file /myPathToTheFile.txt like this ► put returns between paragraphs ► for linebreak add 2 spaces at end ► _italic_ or **bold** I'd like to get this line as an output ► put returns between paragraphs and my text file should now be like this ► for linebreak add 2 spaces at end ► _italic_ or **bold** | ex -s /myPathToTheFile.txt <<\EX 1p 1d wq EX or ex -s /myPathToTheFile.txt <<< 1p$'\n'1d$'\n'wq or, less typing: ed -s /myPathToTheFile.txt <<< $'1\nd\nwq' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152267/"
]
} |
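If ex/ed feel too terse, the same "print the first line, then drop it" effect is possible with head and tail, at the cost of a temporary file since they cannot edit in place. A sketch:

```sh
f=/myPathToTheFile.txt
head -n 1 "$f"                                 # emit the first line
tail -n +2 "$f" > "$f.tmp" && mv "$f.tmp" "$f" # remove it from the file
```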
347,487 | My btrfs metadata is getting full. (I'm creating hourly snapshots using btrbk .) How do I increase / extend the space allocated to the metadata of a btrfs filesystem? Or is it automatically expanded? | TL;DR The metadata (if the btrfs is not suffering a general low-space condition) will automatically increase. In cases where no unallocated free space exists, the automatic increase is hampered. If, however, the data part of btrfs has been allocated more space than it needs, then it is possible to redistribute this. This is called balance -ing in btrfs. Assuming that there is enough unallocated memory on the backing block device(s) of the btrfs , the Metadata part of the filesystem automatically allocates memory to increase/expand the metadata, just as assumed by the OP. Therefore, the answer is: Yes (provided there is no low memory/free-space condition in the btrfs ), the metadata will get automatically increased, as such: (1) We have a look at some initial allocation setup of the btrfs (on a 40GB device) $> btrfs filesystem df / Data, single: total=25.00GiB, used=24.49GiB System, single: total=32.00MiB, used=16.00KiB Metadata, single: total=1.55GiB, used=1.33GiB GlobalReserve, single: total=85.41MiB, used=0.00B (2) As can be seen, the allocated space in the filesystem to store Metadata is 1.55GiB, of which 1.33GiB, hence almost all, is used (this might be the situation occurring in the OP's case) (3) We now provoke an increase of metadata. To do so, we copy the /home folder using the --reflink=always option of the cp command. $> cp -r --reflink=always /home /home.copy (4) Since (as we assume there were lots of files in /home) a lot of new data has been added to the filesystem, which, because we used --reflink , uses little to no additional space for the actual data (it uses the Copy-on-Write mechanism), in short, mostly Metadata was added to the filesystem. We can hence have another look $> btrfs filesystem df / Data, single: total=25.00GiB, used=24.65GiB System, single: total=32.00MiB, used=16.00KiB Metadata, single: total=2.78GiB, used=2.45GiB GlobalReserve, single: total=85.41MiB, used=0.00B As can be seen, the space allocated for Metadata in this btrfs has automatically increased/expanded. Since this happens automatically, it normally goes undetected by the user. However, there are some cases, mostly those where the whole filesystem is already pretty much filled up. In those cases, btrfs may begin to "stutter" and fail to automatically increase the allocated space for the Metadata. The reason would be, for example, that all the space has already been allocated to the parts (Data, System, Metadata, GlobalReserve). Confusingly, it could yet be the case that there is apparent space. An example would be this output: $> btrfs filesystem df / Data, single: total=38.12GiB, used=25.01GiB System, single: total=32.00MiB, used=16.00KiB Metadata, single: total=1.55GiB, used=1.45GiB GlobalReserve, single: total=85.41MiB, used=0.00B As can be seen, the system has allocated all the 40GiB, yet the allocation is somewhat off balance , since while there is still space for new files' data, the Metadata (as in the OP's case) is low. Automatic allocation of memory from the devices backing the btrfs filesystem is no longer possible (simply add up the totals of the allocation, 38.12G+1.55G+..~= 40GiB). Since there is however excess free space that was allocated to the data part of the filesystem, it can now be useful, even necessary, to balance the btrfs.
Balance would mean to redistribute the already allocated space. In the case of the OP, it may be assumed that, for some reason, an imbalance between the different parts of the btrfs allocation has occurred. Unfortunately, the simple command sudo btrfs balance -dusage=0 , which in principle should search for empty blocks (allocated for data) and put them to better use (that would be the almost depleted space for Metadata), may fail, because no completely empty data blocks can be found. The btrfs developers hence recommend successively increasing the usage limit of "when data blocks should be rearranged to reclaim space". Hence, if the result of $> sudo btrfs balance -dusage=0 Done, had to relocate 0 out of 170 chunks is showing no relocation, one should do some $> sudo btrfs balance -dusage=5 Done, had to relocate 0 out of 170 chunks <--(again fail) $> sudo btrfs balance -dusage=10 Done, had to relocate 0 out of 170 chunks <--(again fail) $> sudo btrfs balance -dusage=15 Done, had to relocate 2 out of 170 chunks <--(success) The other answer has hinted at the influence of the btrfs nodesize, which influences somewhat how quickly metadata will increase. The nodesize is (as mentioned in the other answer) only set once, at mkfs.btrfs filesystem creation time. In theory, one could reduce the size of Metadata if it were possible to change the nodesize to a lower value (it is not!). The nodesize, however, will not be able to help expand or increase the metadata space allocated in any way. Instead, it might have only helped to conserve space in the first place. A smaller nodesize is, however, not guaranteed to reduce the metadata size. Indeed, some cases might show that larger nodesizes reduce the tree-traversal length of btrfs, as nodes can contain more "links". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
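The "successively increase the usage limit" advice lends itself to a small loop. A sketch, assuming the filesystem is mounted at / and a btrfs-progs recent enough to want the start subcommand plus a mount point:

```sh
# Walk through increasing usage thresholds; each pass relocates any
# data chunks that are less full than the given percentage.
for u in 0 5 10 15 20 25; do
    echo "balancing with -dusage=$u"
    sudo btrfs balance start -dusage="$u" /
done
```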
347,676 | Sometimes I would like to export an email, in readable plain-text format, with headers stripped and the text readable, just like it's shown to me when I read it in mutt. It must surely be possible? Of course there is the s-key for saving the email as it is, but unfortunately in 2017 very few emails are easily readable without any processing: there are often several pages of irrelevant headers, and there are different forms of content-transfer-encoding making the content itself unreadable, extra noise due to the multipart/alternative encoding, etc. | I often use Escape-S to decode-save the message to a file. After that, I can edit the file to remove headers, signatures or anything else. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186667/"
]
} |
347,732 | In a bash or zsh script, how might I extract thehost from a url, e.g. unix.stackexchange.com from http://unix.stackexchange.com/questions/ask , if the latter is in an environment variable? | You can use parameter expansion, which is available in any POSIX compliant shell. $ export FOO=http://unix.stackexchange.com/questions/ask$ tmp="${FOO#*//}" # remove http://$ echo "${tmp%%/*}" # remove everything after the first /unix.stackexchange.com A more reliable, but uglier method would be to use an actual URL parser. Here is an example for python : $ python3 -c 'import sys; from urllib.parse import urlparse; print(urlparse(sys.argv[1]).netloc)' "$FOO"unix.stackexchange.com | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134690/"
]
} |
347,791 | below is the code I executed in terminal [root@idm ~]# x="$(date +%d%m%y)" [root@idm ~]# echo $x270217[root@idm ~]# echo ${#x}6 Can someone help me understand why the output is 6 ? What is # essentially doing to the variable? | It's a parameter expansion that returns the length of the parameter, or the number of elements in an array, or the number of positional parameters. Please read your shell's manual. The following is from the bash manual: ${#parameter} The length in characters of the value of parameter is substituted . If parameter is * or @ , the value substituted is the number of positional parameters. If parameter is an array name subscripted by * or @ , the value substituted is the number of elements in the array. If parameter is an indexed array name subscripted by a negative number, that number is interpreted as relative to one greater than the maximum index of parameter, so negative indices count back from the end of the array, and an index of -1 references the last element. And also, please don't make a habit of working in an interactive root shell. It's dangerous and reckless at best. Use sudo sparingly and only in situations that requires elevated privileges. Playing around with bash is something you definitely can do as an ordinary non-root user. In the last few years, I've only used an interactive root shell for manually adding a single user for myself. It's a 2-minute job and then I never need to see a # prompt ever again on that machine. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176232/"
]
} |
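The array and positional-parameter cases from the manual excerpt, shown concretely. A sketch that is safe to run as a normal user, in the spirit of the answer's advice:

```sh
x=270217
echo "${#x}"      # 6 -- length of the string value

a=(one two three)
echo "${#a[@]}"   # 3 -- number of elements in the array
echo "${#a[0]}"   # 3 -- length of the first element, "one"

set -- foo bar
echo "${#@}"      # 2 -- number of positional parameters
```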
347,808 | I am using the latest Ubuntu 16.04 LTS. For some days after installing, the system worked fine, but then it started showing this error: Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to try again to boot into default mode. Press Enter for maintenance (or press Control-D to continue): But there is one way I can start the system: on boot-up I go to the recovery menu, then clean packages, and then resume the system. It works, but it is time-consuming to do after every boot-up. Please suggest something simpler and cleaner to resolve this problem. Before going into emergency mode it gave this message: [12.320307] intel_soc_dts_thermal:request_threaded_irq ret -22 In case anyone is interested in seeing the log files: http://www.filehosting.org/file/details/644902/error_log.txt | The Emergency Mode sometimes means that your file system may be corrupted. In such cases, you will be left with a prompt that goes nowhere. All you have to do is perform a file system check using: fsck.ext4 /dev/sda3 where sda3 can be your partition, and if you are using an ext3 file system, change the command as follows: fsck.ext3 /dev/sda3 About the partition number: Linux shows you the partition before arriving at the prompt. This should solve the problem. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218142/"
]
} |
347,814 | How can I see all cron records in CentOs7?Is there a folder with a file that contains all the cron records? | You can find cron jobs from the following locations: /etc/crontab /etc/cron.d/ /etc/cron.daily/ /etc/cron.hourly/ /etc/cron.monthly/ /etc/cron.weekly/ /var/spool/cron/ The last entry contains a crontab file for each user who is using crontab. There is also a default log file for cron daemon, which will contain information about cron runs, /var/log/cron . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218148/"
]
} |
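To dump every record from those locations at once, the list can be fed to a short loop. A sketch for CentOS 7 (run as root so the per-user spool is readable):

```sh
# Print each cron file with a header, skipping globs that match nothing.
for f in /etc/crontab /etc/cron.d/* /etc/cron.{hourly,daily,weekly,monthly}/* /var/spool/cron/*; do
    [ -f "$f" ] && printf '### %s\n' "$f" && cat "$f"
done
```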
347,836 | A file in a ls -l listing has permissions such as: -rw-r-----+ How do I find the extended Access Control List (ACL) permissions denoted by the + ? | The names getfacl and setfacl as in Tom Hale's answer are semi-conventional and are derived from the original TRUSIX names getacl and setacl for these utilities.However, on several operating systems one simply uses just the usual ls and chmod tools, which have been extended to handle ACLs; and one operating system has its own different set of commands. The original TRUSIX scheme of POSIX-style ACLs has three permission flags in an access control list entry.Later NFS4-style schemes divide up permissions in a more fine grained manner into between 11 and 17 permission flags. https://superuser.com/a/384500/38062 Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System . NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548. Portable Applications Standards Committee of the IEEE Computer Society (October 1997). Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)— Amendment #: Protection, Audit and Control Interfaces [C Language] IEEE 1003.1e. Draft 17. S. Shepler, M. Eisler, D. Noveck (January 2010). "ACE Access Mask" . Network File System (NFS) Version 4 Minor Version 1 Protocol . RFC 5661. IETF. On OpenBSD and NetBSD This situation does not arise.OpenBSD and NetBSD both lack any ACL mechanisms. NetBSD implements the system calls in a FreeBSD compatibility layer, but they only return an error.OpenBSD simply doesn't have ACLs at all. On Linux-based operating systems Use getfacl as in Tom Hale's answer, or getrichacl .Setting ACLs is done with setfacl or setrichacl . Linux (a kernel, remember) has two forms of ACL.It supports the both original TRUSIX scheme of POSIX-style ACLs, and (since 2015, but stuck in "experimental" status for a long time because there aren't enough maintainers available to review the VFS layer in Linux ) a NFS4-style scheme. There are several implementations of standard commands on Linux-based operating systems, from toybox through BusyBox to GNU coreutils.But in all cases chmod does not handle ACLs, and ls at most only indicates their overall presence or absence.This is unlike Solaris, Illumos, or MacOS. Nor is there one tool for getting, or setting, ACLs. setfacl and getfacl handle TRUSIX ACLs, whilst one has to use setrichacl and getrichacl for NFS4-style ACLs.This is unlike FreeBSD. Rob Landley. " chmod ". toybox Manual . On FreeBSD Use getfacl as in Tom Hale's answer. Setting ACLs is done with setfacl . FreeBSD has two forms of ACL.One has POSIX-style entries like the original TRUSIX model; the other has NFS4-style entries, with 14 permissions flags. Unlike on Solaris, Illumos, and MacOS, on FreeBSD chmod does not handle ACLs, and ls only indicates their overall presence or absence.But there is a single tool each for getting and setting ACLs, unlike Linux-based operating systems.The getfacl and setfacl commands on FreeBSD handle both forms of ACL.They have several extensions beyond TRUSIX for the NFS4-style, such as the -v option to getfacl that prints NFS4-style access controls in a long form with words, rather than as a list of single-letter codes. Robert N. M. Watson (2009-09-14). getfacl . FreeBSD General Commands Manual . FreeBSD. On MacOS There are no getfacl and setfacl commands on MacOS.MacOS is like Solaris and Illumos. 
MacOS only supports NFS4-style access controls, with ACL entries divided up into 17 individual permission flags. Apple rolled ACL functionality into existing commands.Use the -e option to ls to view ACLs. Use the -a / +a / =a and related options to chmod to set them. ls . BSD General Commands Manual . 2002-05-19. Apple corporation. On AIX There are no getfacl and setfacl commands on AIX.IBM uses its own command names. AIX supports both POSIX-style (which IBM names "AIXC") and NFS4-style ACLs. Use the aclget command to get ACLs.Use the aclset command to set them.Use the acledit command to edit them with a text editor.Use the aclconvert command to convert POSIX-style to NFS4-style. " Access Control List Management ". IBM AIX V7.1 documentation . IBM. On Illumos and Solaris There are no getfacl and setfacl commands on Illumos and Solaris.Solaris and Illumos are like MacOS. Illumos and Solaris support both POSIX-style and NFS4-style ACLs. Sun rolled ACL functionality into existing commands.Use the -v or -V option to ls to view ACLs.Use the A prefix for symbolic modes in the chmod command to set them. ls . User Commands . 2014-11-24. Illumos Project. chmod . User Commands . 2014-11-24. Illumos Project. ls . Oracle Solaris 11 Information Library . 2011. Oracle. On Cygwin Use getfacl as in Tom Hale's answer.Setting ACLs is done with setfacl . Windows NT itself has an ACL scheme that is roughly NFS4-style with a set of drctpoxfew standard-and-specific permissions flags, albeit with a larger set of security principals and a generic-rights mechanism that maps a POSIX-style set of three flags onto its standard-and-specific-rights permissions system. Cygwin presents this as a wacky admixture of a Solaris-like ACL API, the ID mapping mechanism from Microsoft second POSIX subsystem for Windows NT (née Interix), and a Linux-like set of command-line tools that only recognize POSIX-style permissions. getfacl . Cygwin Utilities . Cygnus. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
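For the OP's Linux case ( -rw-r-----+ , POSIX-style ACLs), the lookup is a one-liner. A sketch with illustrative output; the user:bob line is the extra entry that produced the + , and all names are placeholders:

```sh
$ getfacl file
# file: file
# owner: alice
# group: staff
user::rw-
user:bob:r--
group::r--
mask::r--
other::---
```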
347,843 | I want to delete the lines containing empty fields in the last field, $7 File: 1 1479870 5022248660 1 40001 189445122 740020 1 1911574 3015889020 1 33001 162049034 633004 1 1569783 5029193930 1 22001 133687297 522216 1 2069616 1025856960 2 25001 185608704 1 1741598 5021128160 1 44001 164870942 644027 1 1052941 5020319300 1 10001 156161802 610007 1 1686734 5020347480 1 13001 131405824 513304 1 1872263 5029089700 1 23001 185092353 723017 Desired output : 1 1479870 5022248660 1 40001 189445122 740020 1 1911574 3015889020 1 33001 162049034 633004 1 1569783 5029193930 1 22001 133687297 522216 1 1741598 5021128160 1 44001 164870942 644027 1 1052941 5020319300 1 10001 156161802 610007 1 1686734 5020347480 1 13001 131405824 513304 1 1872263 5029089700 1 23001 185092353 723017 | Try with this one : awk '$7!=""' file > final_output | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218031/"
]
} |
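A variant worth noting: in this input the "empty" $7 is really a missing seventh whitespace-separated field, so testing the field count works just as well and reads more explicitly. A sketch:

```sh
# Keep only lines that actually have 7 fields.
awk 'NF == 7' file > final_output
```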
347,873 | How does one set the PATH for non-login shells in CentOS 7? Specifically, I have a systemd unit that needs binaries in /usr/local/texlive/2016/bin/x86_64-linux . I attempted to set it in /etc/environment with PATH=/usr/local/texlive/2016/bin/x86_64-linux:$PATH but then my PATH was /usr/local/texlive/2016/bin/x86_64-linux:$PATH:/usr/local/sbin:/usr/sbin . I created /etc/profile.d/texlive.sh with export PATH="/usr/local/texlive/2016/bin/x86_64-linux:${PATH}" but that only worked for login shells. I looked at Set Path for all Users (Login and Non-login Shells) but the solution was already attempted above. I looked at How to add a path to system $PATH for all users's non-login shell and login shell on debian but there's no accepted solution and I'm not sure I want to modify /etc/login.defs because it might get changed in an update. | The simplest answer is to set the PATH as part of your ExecStart command in the systemd Unit file. For example, if you currently have ExecStart=/bin/mycmd arg1 arg2 then change it to ExecStart=/bin/bash -c 'PATH=/new/path:$PATH exec /bin/mycmd arg1 arg2' The expansion of $PATH will be done by bash, not systemd. Alternatives such as using Environment=PATH=/new/path:$PATH will not work as systemd will not expand the $PATH . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/347873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218185/"
]
} |
347,875 | I have a file inside many directories under a root directory. I need to apply chmod 640 and chown to all of the files. I have two commands: one to find the file paths, and the other to apply the chmod and chown. How can I apply the chmod and chown to the output of the find command? Example:

find . -type f -name 'myawesomeapp.jar'
chmod 640 /path/to/file/myawesomeapp.jar
chown root:webapps /path/to/file/myawesomeapp.jar
chmod 640 /path/to/another/file/myawesomeapp.jar
chown root:webapps /path/to/another/file/myawesomeapp.jar | Use the -exec flag of find to run commands on the results:

find . -type f -name 'myawesomeapp.jar' -exec chmod 640 {} \+ -exec chown root:webapps {} \+

In your case you want to use the second variant of exec:

-exec command ;
    Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a `\') or quoted to protect them from expansion by the shell. See the EXAMPLES section for examples of the use of the -exec option. The specified command is run once for each matched file. The command is executed in the starting directory. There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead.

-exec command {} +
    This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command. The command is executed in the starting directory. If find encounters an error, this can sometimes cause an immediate exit, so some pending commands may not be run at all. This variant of -exec always returns true.

{} is the substitution token for the filenames that find passes to the command.
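An equivalent sketch using xargs (the null-delimited options assume GNU find and xargs):

find . -type f -name 'myawesomeapp.jar' -print0 | xargs -0 chmod 640
find . -type f -name 'myawesomeapp.jar' -print0 | xargs -0 chown root:webapps
| {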
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218186/"
]
} |
347,914 | I know the answer in the thread How do I fix my locale issue?, but with it I cannot change the fields LANGUAGE and LC_ALL to the corresponding values. I am setting up retropie on my Raspberry Pi 3B with the newest Raspbian OS. My locale:

LANG=en_GB.UTF-8
LANGUAGE=     # TODO empty! but should be en_GB:en
LC_CTYPE="en_GB.UTF-8"
LC_NUMERIC="en_GB.UTF-8"
LC_TIME="en_GB.UTF-8"
LC_COLLATE="en_GB.UTF-8"
LC_MONETARY="en_GB.UTF-8"
LC_MESSAGES="en_GB:UTF-8"
LC_PAPER="en_GB.UTF-8"
LC_NAME="en_GB.UTF-8"
LC_ADDRESS="en_GB.UTF-8"
LC_TELEPHONE="en_GB.UTF-8"
LC_MEASUREMENT="en_GB.UTF-8"
LC_IDENTIFICATION="en_GB.UTF-8"
LC_ALL=       # TODO empty but should be en_GB.UTF-8

OS: Raspbian. Hardware: Raspberry Pi 3B. | Edit your /etc/locale.gen, then uncomment the following line:

en_GB.UTF-8 UTF-8

Run:

locale-gen en_GB.UTF-8
update-locale LANG=en_GB.UTF-8
export LANGUAGE=en_GB.UTF-8
export LC_ALL=en_GB.UTF-8

Verify it with locale:

LANG=en_GB.UTF-8
LANGUAGE=en_GB.UTF-8
LC_CTYPE="en_GB.UTF-8"
LC_NUMERIC="en_GB.UTF-8"
LC_TIME="en_GB.UTF-8"
LC_COLLATE="en_GB.UTF-8"
LC_MONETARY="en_GB.UTF-8"
LC_MESSAGES="en_GB.UTF-8"
LC_PAPER="en_GB.UTF-8"
LC_NAME="en_GB.UTF-8"
LC_ADDRESS="en_GB.UTF-8"
LC_TELEPHONE="en_GB.UTF-8"
LC_MEASUREMENT="en_GB.UTF-8"
LC_IDENTIFICATION="en_GB.UTF-8"
LC_ALL=en_GB.UTF-8 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
347,939 | I want to set my resin/rpi-raspbian:jessie container's /etc/resolv.conf to:

nameserver 208.67.222.222
nameserver 208.67.220.220

My Dockerfile has the following line:

ADD resolv.conf /etc/resolv.conf

This added file contains the correct nameservers. My Docker host's /etc/resolv.conf contains the correct information. I'm running the container like this:

docker run -itd --cap-add=NET_ADMIN --device /dev/net/tun \
  -v /home/pi/share/ovpn:/ovpn \
  --privileged --network=internet_disabled --name vpn-client \
  --dns=208.67.222.222 \
  openvpn-client_nat-gateway /bin/bash

Despite all of this, the container gives this output:

root@642b0f4716ba:/# cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0

It's only after I change the resolv.conf manually from within the container (or with docker exec) that it looks right. I'd rather avoid having to fix it with an exec command. Anybody have an idea what's going on here? | AFAIK, docker overrides some files in an image when it's started, even if they were ADDed in Dockerfile. This for sure includes /etc/hosts, and most probably the same happens for /etc/resolv.conf too. This is apparently used to properly build the default "internal" network of Docker (so that images see each other, but not host, etc.). If you are really sure you want to override/modify some of those files, I believe you must do that as part of the runtime actions, that is as part of the CMD line. For example:

...
ADD resolv.conf /etc/resolv.conf.override
CMD cp /etc/resolv.conf.override /etc/resolv.conf && \
    your_old_commands
... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159946/"
]
} |
347,956 | My logic is to check whether a variable contains a floating point number or an integer, and then, if the variable is floating, round the number up to the nearest higher number; if it is an integer, print it as is:

if echo "$FS" | grep "^[0-9]*$" > /dev/null
then
echo "Integer"
elif echo "FS" | grep "^[0-9]*[.][0-9]*$" > /dev/null
then
echo "Floating"
fi

This works perfectly, but the problem comes when I integrate this within the case statement:

#!/bin/bash
IP_DIR=$1
ACTUAL=$2
typeset -l ACTUAL
RETURNSIZE=$3
typeset -l RETURNSIZE
if [ -d "$IP_DIR" ]; then
for OUTPUT in $(find $IP_DIR -maxdepth 1 | awk 'NR>1')
do
 if [ "$ACTUAL" == "true" ]; then
   case $RETURNSIZE in
    "gb") FS=`du -b $OUTPUT | awk {'print $1'}`
          FS=$(echo "scale=12; $FS / 1073741824" | bc)
          echo $OUTPUT "|" $FS "GB";;
    "mb") FS=`du -b $OUTPUT | awk {'print $1'}`
          FS=`echo $FS | awk '{ byte =$1 /1024/1024 ; print byte " MB" }'`
          echo $OUTPUT "|" $FS;;
    "kb") FS=`du -b $OUTPUT | awk {'print $1'}`
          FS=`echo $FS | awk '{ byte =$1 /1024 ; print byte " KB" }'`
          echo $OUTPUT "|" $FS;;
    "b") FS=`du -b $OUTPUT | awk {'print $1'}`
         echo $OUTPUT "|" $FS "B";;
    "all")FS=`du -h $OUTPUT | awk {'print $1'}`
          echo $OUTPUT "|" $FS;;
   esac
 elif [ "$ACTUAL" == "false" ]; then
   case $RETURNSIZE in
    "gb") FS=`du -b $OUTPUT | awk {'print $1'}`
          FS=$(echo "scale=12; $FS / 1073741824" | bc)
          if [[ $FS == ^[0-9]*$ ]]; then echo $OUTPUT "|" $FS "GB" ;
          elif [[ $FS == ^[0-9]*[.][0-9]*$ ]]; then echo "$OUTPUT "|" $FS "GB round"; fi ;;
          #echo $OUTPUT "|" $FS "GB Needed to be rounded";;
    "mb") FS=`du -m $OUTPUT | awk {'print $1'}`
          echo $OUTPUT "|" $FS "MB";;
    "kb") FS=`du -k $OUTPUT | awk {'print $1'}`
          echo $OUTPUT "|" $FS "KB";;
    "b") FS=`du -b $OUTPUT | awk {'print $1'}`
         echo $OUTPUT "|" $FS "B";;
    "all")FS=`du -h $OUTPUT | awk {'print $1'}`
          echo $OUTPUT "|" $FS;;
   esac
 fi
done
else
echo "Directory Not Found"
fi

The error messages:

.sh: line 48: unexpected EOF while looking for matching `"'
.sh: line 50: syntax error: unexpected end of file | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48806/"
]
} |
347,975 | I have a script in /etc/init.d that I'm trying to add echo statements to... but all of my changes seem to be ignored. I'm guessing the script itself is not actually run, and that systemd is doing something behind the scenes to continue running the old version of the script. Is there a way to "load" these changes? | No, there isn't a way for systemd to load the changes you made to files in /etc/init.d, because systemd is largely ignoring those files anyway. See the gory details at How does systemd use /etc/init.d scripts?. Consider asking a different question about the specific problem you are attempting to solve by editing /etc/init.d files. I could guess that perhaps you are looking for ExecStartPre= or ExecStartPost=. Read man systemd.service for other directives that you can use in your systemd service unit files.
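For instance, if the goal is just to run extra commands around an existing service, a hedged override sketch (the script paths are hypothetical):

# sudo systemctl edit myservice, then add:
[Service]
ExecStartPre=/usr/local/bin/before.sh
ExecStartPost=/usr/local/bin/after.sh

If you edit unit files by hand instead of using systemctl edit, run systemctl daemon-reload afterwards so systemd picks up the change. | {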
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/347975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/215003/"
]
} |
347,991 | I'm trying to get a list of directories that are contained within a specific folder. Given these example folders:

foo/bar/test
foo/bar/test/css
foo/bar/wp-content/plugins/XYZ
foo/bar/wp-content/plugins/XYZ/js
foo/bar/wp-content/plugins/XYZ/css
baz/wp-content/plugins/ABC
baz/wp-content/plugins/ABC/inc
baz/wp-content/plugins/ABC/inc/lib
baz/wp-content/plugins/DEF
bat/bar/foo/blog/wp-content/plugins/GHI

I'd like a command that will return:

XYZ
ABC
DEF
GHI

Essentially, I'm looking for the folders that are inside of wp-content/plugins/. Using find has gotten me the closest, but I can't use -maxdepth, because the folder is a variable distance away from where I'm searching. Running the following returns all of the child directories, recursively:

find -type d -path *wp-content/plugins/*

foo/bar/wp-content/plugins/XYZ
foo/bar/wp-content/plugins/XYZ/js
foo/bar/wp-content/plugins/XYZ/css
baz/wp-content/plugins/ABC
baz/wp-content/plugins/ABC/inc
baz/wp-content/plugins/ABC/inc/lib
baz/wp-content/plugins/DEF
bat/bar/foo/blog/wp-content/plugins/GHI | Just add a -prune so that the found directories are not descended into:

find . -type d -path '*/wp-content/plugins/*' -prune -print

You need to quote that *wp-content/plugins/* as it's also a shell glob. If you want only the directory names as opposed to their full path, with GNU find, you can replace the -print with -printf '%f\n', or, assuming the file paths don't contain newline characters, pipe the output of the above command to awk -F / '{print $NF}' or sed 's|.*/||' (also assuming the file paths contain only valid characters). With zsh:

printf '%s\n' **/wp-content/plugins/*(D/:t)

**/ is any level of subdirectories (a feature originating in zsh in the early nineties, and now found in most other shells like ksh93, tcsh, fish, bash, yash, though generally under some option), (/) selects only files of type directory, D includes hidden (dot) ones, and :t gets the tail (file name). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/347991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11387/"
]
} |
348,006 | Today, I received a new .doc document and wanted to open it using LibreOffice (Version 5.1.6.2). The following warning message was displayed: "This document contains macros. Macros may contain viruses. Execution of macros is disabled due to the current macro security setting in Tools -> Options -> LibreOffice -> Security (3 - High). Therefore, some functionality may not be available". Then, I had no option of preventing myself from opening the document. Although the execution of macros is disabled, I am still worried about the possible presence of a macro virus on my Ubuntu (16.04.2 LTS) computer. How is it possible to know whether my computer has been infected by a macro virus? What should I do if my computer has indeed been infected? Help much appreciated. Thank you. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201398/"
]
} |
348,043 | I am using Ansible 2.0 on a Linux machine and I wish to change the DNS on a remote machine using Ansible as below:

---
# tasks file for test
- name: Change dns
  become: yes
  become_user: admin
  replace:
    dest: /etc/resolv.conf
    regexp: '192.168.1.24'
    replace: '8.8.8.8'

Output:

$ ansible-playbook -i "mn," test.yml

TASK [test : Change dns] *******************************************************
fatal: [mn]: FAILED! => {"changed": false, "failed": true, "msg": "The destination directory (/etc) is not writable by the current user."}

PLAY RECAP *********************************************************************
mn : ok=1 changed=0 unreachable=0 failed=1

On the remote machine, admin is a sudo user, and I don't want to change the sudo settings on the remote machine. Is there any method to pass the password using the task (script), not the command line? | You should configure admin as a remote_user, not become_user. The become_user option sets which user you su to in order to execute a certain task. In other words, your playbook should look like:

---
- hosts: somehosts
  remote_user: admin
  roles:
    ...

# tasks file
---
# tasks file for test
- name: Change dns
  become: yes
  replace:
    dest: /etc/resolv.conf
    regexp: '192.168.1.24'
    replace: '8.8.8.8'

That way the admin user will be used to establish your SSH session, but sudo will be used for the Change dns task. Also, if admin requires a password for sudo, you will have to run your playbook like this:

ansible-playbook -i "mn," test.yml -K

The -K switch will prompt you for the sudo password for admin. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138125/"
]
} |
348,061 | I have a regex for session ID extraction:

[root@docker tmp]# grep -oE "\[[0-9].+\]" logfile
[113a6d9e-7b06-42c6-a52b-7a4e4d2e216c]
[113a6d9e-7b06-42c6-a52b-7a4e4d2e216c]
[root@docker tmp]#

How can I hide the square brackets from the output? | Instead of using extended-regex grep (-E), use perl-regex grep instead (-P), with a lookbehind and lookahead:

$ grep -oP "(?<=\[)[0-9].+(?=\])" logfile
113a6d9e-7b06-42c6-a52b-7a4e4d2e216c
113a6d9e-7b06-42c6-a52b-7a4e4d2e216c

Here, (?<=\[) indicates that there should be a preceding \[, and (?=\]) indicates that there should be a following \], but not to include them in the match output.
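If your grep lacks -P, a hedged alternative is to keep the original ERE match and strip the brackets afterwards:

grep -oE "\[[0-9].+\]" logfile | tr -d '[]'
| {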
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19072/"
]
} |
348,086 | How do I write "Hello World" with dd? Are there any tricks to create a txt file with dd that has "Hello World" in it? | dd does not generate content (except for the space or zero padding that it can do with some options); its job is to read data from somewhere and write it somewhere else in the specified manner. By default, dd writes on its stdout what it reads from its stdin. So the simplest you can do would be (as shell code):

dd << EOF > file
Hello World
EOF

dd reads its stdin (here, a here document provided by the shell) 512 bytes at a time, and writes it on its stdout (here opened by the shell on file). It can open the output file by itself:

dd of=file << EOF
Hello World
EOF

It can do some transformations, seek, read by smaller/larger chunks, etc.:

dd bs=1 skip=15 << EOF > file
Please output: Hello World
EOF

Or:

dd bs=12 count=1 conv=swab << EOF > file
eHll ooWlrdaft
EOF
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196913/"
]
} |
348,146 | I found only the following with apt-cache search, but nothing seems to be relevant, mostly pure FTP clients. I think SFTP is the same as SSH FTP, so scp should do the job, but I would like to have a GUI. I just want to move files to my Raspberry Pis on the local network.

apt-cache search sftp | grep sftp
gesftpserver - sftp server submodule for OpenSSH
libnet-sftp-foreign-perl - client for the Secure File Transfer Protocol
libnet-sftp-sftpserver-perl - Secure File Transfer Protocol Server
openssh-sftp-server - secure shell (SSH) sftp server module, for SFTP access from remote machines
python-fs-plugin-sftp - Python filesystem abstraction - SFTP access
rssh - Restricted shell allowing scp, sftp, cvs, svn, rsync or rdist
ruby-net-sftp - Ruby implementation of the SFTP protocol
sftpcloudfs - SFTP interface to Rackspace/OpenStack storage services
vsftpd - lightweight, efficient FTP server written for security
vsftpd-dbg - lightweight, efficient FTP server written for security (debug)

OS: Debian 8.7 | In most modern DEs, the file browser/manager (Nautilus, Nemo, Thunar, Dolphin, etc.) should support SFTP (using GVFS in the case of Nautilus, Nemo, etc., and probably some KDE library for Dolphin). So, use your file browser's address bar (Ctrl-L in Nautilus, Nemo and Thunar, iirc) and go to sftp://host or sftp://user@host. Any address usable by SSH would work here, and entries in your SSH config file are available.
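For example, assuming the Pi still uses Raspbian's default user and hostname (an assumption), entering sftp://pi@raspberrypi.local/home/pi in the address bar should open the Pi's home directory. | {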
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
348,148 | We have a CSV file with more than 50000 lines. This is only an example:

Dcccrev,wefrwvfr,rfregt,wr4f,rfvrv,ecxwec,ecfrv,rfrf
Grge,gtgr,frfrv,gthtgv,gerg5tgvrt,rvrfvtg,tgt,frfrf,rfrf
Drfrfr,t,tgtg,rf,rgr,grtg,tgt,gtgtg,rg
...

My task is: whenever the number of "," separators in a line isn't equal to 7, print the line number. Is it possible to do this with a single awk or perl one-liner, without using echo or cat, which cost time? | This is fairly easy with awk. You can set the delimiter to , with -F',', then count the columns with NF. For 7 commas we would need 8 fields, and we print the current line number with NR:

awk -F ',' 'NF != 8 {print NR}' test.txt

Contents of test.txt:

Dcccrev,wefrwvfr,rfregt,wr4f,rfvrv,ecxwec,ecfrv,rfrf
Grge,gtgr,frfrv,gthtgv,gerg5tgvrt,rvrfvtg,tgt,frfrf,rfrf
Drfrfr,t,tgtg,rf,rgr,grtg,tgt,gtgtg,rg

Output:

2
3
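A hedged perl equivalent that counts the separators directly (tr/,// returns the number of commas on the line):

perl -lne 'print $. if tr/,// != 7' test.txt
| {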
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153544/"
]
} |
348,156 | I was installing an application and had trouble understanding why the installation guide told me to add an export command to one of the startup files. I did not see when it is executed. That was the line:

export WORKON_HOME=$HOME/.virtualenvs

I executed some code in a terminal and figured out that the export command did not work. Why?

root@localhost:/home/gameboy# echo export ADAM=Boss>>/home/pythontest/.profile
root@localhost:/home/gameboy# tail /home/pythontest/.profile
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
export ADAM=Boss
root@localhost:/home/gameboy# su pythontest
pythontest@localhost:/home/gameboy$ echo $ADAM

pythontest@localhost:/home/gameboy$ | ~/.profile may be run at the launch of a login Bash shell. First the system executes the system-wide /etc/profile, then the first of these files which exists and is readable:

~/.bash_profile
~/.bash_login
~/.profile

Your problem is that you're changing user via su pythontest. You must ensure that the spawned shell is a login shell by adding the -l flag:

su -l pythontest | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/204439/"
]
} |
348,175 | Consider:

numbers="1 111 5 23 56 211 63"
max=0
for num in ${numbers[@]}; do
    [ $num -gt $max ]\
    && echo "old -> new max = $max -> $num"\
    && max=$num
done | tee logfile
echo "Max= $max"

If I remove | tee logfile, the max variable is correctly printed as 211, but if I leave it in I get Max= 0. What is going on? | Each side of a pipe is executed in a subshell. A subshell is a copy of the original shell process which starts in the same state and then evolves independently¹. Variables set in the subshell cannot escape back to the parent shell. In bash, instead of using a pipe, you could use process substitution. This would behave almost identically to the pipe, except that the loop executes in the original shell and only tee executes in a subshell.

for num in "${numbers[@]}"; do
  if ((num > max)); then
    echo "old -> new max = $max -> $num"
    max=$num
  fi
done > >(tee logfile)
echo "Max= $max"

While I was at it I changed a few things in your script:

Always use double quotes around variable substitutions unless you know why you need to leave them out.
Don't use && when you mean if. It's less readable and doesn't have the same behavior if the command's return status is used.
Since I'm using bash-specific constructs, I might as well use arithmetic syntax.

Note that there is a small difference between the pipe solution and the subshell solution: a pipe command finishes when both commands exit, whereas a command with a process substitution finishes when the main command exits, without waiting for the process in the process substitution. This means that when your script finishes, the log file may not be fully written. Another approach is to use some other channel to communicate from the subshell to the original shell process. You can use a temporary file (flexible, but harder to do right — write the temporary file to an appropriate directory, avoid name conflicts (use mktemp), remove it even in case of errors). Or you can use another pipe to communicate the result and grab the result in a command substitution:

max=$({ { for … done; echo "$max" >&4; } | tee logfile >&3; } 4>&1) 3>&1

The output of tee goes to file descriptor 3, which is redirected to the script's standard output. The output of echo "$max" goes to file descriptor 4, which is redirected into the command substitution.
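A minimal sketch of the temporary-file approach mentioned above (assuming max is initialized to 0 first; if the loop never updates it, the file stays empty and max should keep its initial value):

tmp=$(mktemp) || exit
for num in "${numbers[@]}"; do
    if ((num > max)); then
        echo "old -> new max = $max -> $num"
        max=$num
        echo "$max" > "$tmp"    # persist the value outside the pipeline subshell
    fi
done | tee logfile
[ -s "$tmp" ] && max=$(cat "$tmp")
rm -f "$tmp"
echo "Max= $max"
| {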
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93678/"
]
} |
348,176 | With regard to this question: How to script make menuconfig, what is the difference between running make and make oldconfig? I assume that if you run make, it uses the old .config anyway. | make oldconfig is used to apply your old .config file to a newer kernel. For example, you have the .config file of your current kernel, and you have downloaded a new kernel that you want to build. Since the new kernel will very likely have some new configuration options, you will need to update your config. The easiest way to do this is to run make oldconfig, which will prompt you with questions about the new configuration options (that is, the ones your current .config file doesn't have).
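For example, a common upgrade flow (assuming your distribution installs the running kernel's config under /boot, which is typical but not guaranteed):

cd linux-4.10                        # hypothetical new source tree
cp /boot/config-$(uname -r) .config  # start from the current kernel's config
make oldconfig                       # prompts only for the new options
| {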
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218401/"
]
} |
348,303 | I have some output in the form of:

count id type
588 10 | 3
10 12 | 3
883 14 | 3
98 17 | 3
17 18 | 1
77598 18 | 3
10000 21 | 3
17892 2 | 3
20000 23 | 3
63 27 | 3
6 3 | 3
2446 35 | 3
14 4 | 3
15 4 | 1
253 4 | 2
19857 4 | 3
1000 5 | 3
...

which is pretty messy and needs to be cleaned up to a CSV so I can gift it to a project manager for them to spreadsheet the hell out of it. The core of the problem is this: I need the output of this to be:

id, sum_of_type_1, sum_of_type_2, sum_of_type_3

An example of this is id "4":

14 4 | 3
15 4 | 1
253 4 | 2
19857 4 | 3

This should instead be:

4,15,253,19871

Unfortunately I'm pretty rubbish at this sort of thing. I've managed to get all the lines cleaned up and into CSV, but I haven't been able to deduplicate and group the rows. Right now I have this:

awk 'BEGIN{OFS=",";} {split($line, part, " "); print part[1],part[2],part[4]}' | awk '{ gsub (" ", "", $0); print}'

But all that does is clean up the rubbish characters and print the rows again. What is the best way of massaging the rows into the above-mentioned output? | A way to do it is to put everything in a hash:

# put values into a hash based on the id and tag
awk 'NR>1{n[$2","$4]+=$1}
END{
    # merge the same ids on the one line
    for(i in n){
        id=i; sub(/,.*/,"",id);
        a[id]=a[id]","n[i];
    }
    # print everything
    for(i in a){
        print i""a[i];
    }
}'

edit: my first answer didn't answer the question properly | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134363/"
]
} |
348,434 | I have set up a systemd script that is working.I want it to only run when I call it, but currently it is also running on reboot.How can I make the script not run on reboot, but only when I call it (as in: sudo service systemdname start )? | Welcome to Unix & Linux. It sounds like your service is "enabled" to start at boot. To disable it from starting on boot: systemctl disable your-service-name It's possible your service could be started anyway on boot if another service depends on it. Also note that service is not a systemd command. The service command was used with Upstart and SysVinit init systems and has been made compatible with systemd . The systemd -specific way to start a service would be: sudo systemctl start your-service-name | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218585/"
]
} |
348,441 | I’ve been prototyping a BLE peripheral running as root on a small raspberry-like board. Now I’m hardening things and partitioning the BLE app to a non-root user. So I've changed my systemd service file for the app to look like:

[Unit]
Description=BLE Peripheral

[Service]
Type=simple
ExecStart=/usr/bin/python3 -u /opt/myPeripheral/bleMainloop
WorkingDirectory=/opt/myPeripheral
StandardOutput=journal
Restart=on-failure
User=blePeripheral

[Install]
WantedBy=multi-user.target

Having added the User field to run as the blePeripheral user, it now fails to start due to:

dbus.exceptions.DBusException: org.freedesktop.DBus.Error.AccessDenied: Rejected send message, 2 matched rules; type="method_call", sender=":1.6797" (uid=107 pid=17300 comm="/usr/bin/python3 -u /opt/pilot/bleMainloop ") interface="org.freedesktop.DBus.ObjectManager" member="GetManagedObjects" error name="(unset)" requested_reply="0" destination=":1.2" (uid=0 pid=1373 comm="/usr/lib/bluetooth/bluetoothd -d -E --noplugin=* ")

I think what I need to do is somehow allow certain uses of dbus for this non-root user. I see that there's a bluetooth.conf in /etc/dbus-1/system.d. Do I need to tune something in this file to allow my app to still use the BLE DBus services? | In /etc/dbus-1/system.d/bluetooth.conf, try adding this:

<policy user="blePeripheral">
    <allow own="org.bluez"/>
    <allow send_destination="org.bluez"/>
    <allow send_interface="org.bluez.GattCharacteristic1"/>
    <allow send_interface="org.bluez.GattDescriptor1"/>
    <allow send_interface="org.freedesktop.DBus.ObjectManager"/>
    <allow send_interface="org.freedesktop.DBus.Properties"/>
</policy>

Then restart the dbus service:

systemctl restart dbus | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101140/"
]
} |
348,450 | I have a systemd service that needs to create a directory in /run, but otherwise run as a non-root user. From a blog example, I derived the following solution:

[Unit]
Description=Startup Thing

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 -u /opt/thing/doStartup
WorkingDirectory=/opt/thing
StandardOutput=journal
User=thingUser
# Make sure the /run/thing directory exists
PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir -p /run/thing
ExecStartPre=/bin/chmod -R 777 /run/thing

[Install]
WantedBy=multi-user.target

The magic is in the 3 lines that follow the comment. Apparently the ExecStartPre's will run as root this way, but the ExecStart will run as the specified user. This has led to 3 questions though:

What does the - do in front of the /bin/mkdir? I don't know why it's there or what it does.
When there are multiple ExecStartPre's in a unit file, are they just run serially in the order that they are found in the unit file? Or some other method?
Is this actually the best technique to accomplish my goal of getting the run directory created so that the non-root user can use it? | For any questions about systemd directives, you can use man systemd.directives to look up the man page that documents the directive. In the case of ExecStartPre=, you'll find it documented in man systemd.service. There, in the docs for ExecStartPre=, you'll find it explained that the leading "-" is used to note that failure is tolerated for these commands. In this case, it's tolerated if /run/thing already exists. The docs there also explain that "multiple command lines are allowed and the commands are executed one after the other, serially." One improvement to your method of pre-creating the directory is to not make it world-writable when you only need it to be writable by a particular user. More limited permissions would be accomplished with:

ExecStartPre=-/bin/chown thingUser /run/thing
ExecStartPre=-/bin/chmod 700 /run/thing

That makes the directory owned by and fully accessible from a particular user.
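On reasonably recent systemd versions there is also a built-in way to do all of this (hedged: check that your systemd documents RuntimeDirectory= in man systemd.exec before relying on it):

[Service]
User=thingUser
RuntimeDirectory=thing
RuntimeDirectoryMode=0700

With that, systemd itself creates /run/thing owned by the service user before starting and removes it when the service stops, so the ExecStartPre lines become unnecessary. | {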
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/348450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101140/"
]
} |
348,599 | I have the following in my ~/.bashrc to hide the listing of test from the output of ls:

alias ls='ls -I test'

But I want it to only hide test if my current working directory is the root (/) folder, not if I am in some other folder. How may I achieve this? | Use a function that tests if you're in / for ls:

ls () {
    if [[ "$PWD" == / ]]
    then
        command ls -I test "$@"
    else
        command ls "$@"
    fi
}

This way, any arguments you pass to ls will still be used. Or:

ls () {
    if [ "$PWD" == / ]
    then
        set -- -I test "$@"
    fi
    command ls "$@"
} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
348,692 | It isn't clear to me why there are two instructions for copying files into the Docker image described in the Dockerfile reference. There are ADD and COPY, and they seem pretty similar to me. Is there a practical difference between them? If not, which one is most used? | According to Best practices for writing Dockerfiles:

Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That's because it's more transparent than ADD.

ADD can extract tar files and fetch remote URL files, although it's not very clear in the official documentation. It's also important to state that:

Because image size matters, using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead.

COPY entrypoint.sh /srv/app/
ADD app.tar /srv/app/

So the general rule is, as @derobert mentioned, use COPY unless you need ADD's exclusive features.
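A hedged sketch of the curl recommendation (the URL and paths are hypothetical):

RUN curl -fsSL https://example.com/app.tar.gz -o /tmp/app.tar.gz \
 && tar -xzf /tmp/app.tar.gz -C /srv/app \
 && rm /tmp/app.tar.gz

Downloading, extracting, and deleting in one RUN step keeps the archive itself out of the resulting image layer. | {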
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21240/"
]
} |
348,695 | I'm using OmniOS (Unix) and I want to log everything about NFS. I have some VMs on my NFS share and sometimes the connection breaks for 3-5 seconds. I cannot find a reason for that in dmesg or syslog. I can watch SMB logs from dmesg, but NFS logs are not written there. I think I need to enable my logs somehow. Any help works. Ty. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199043/"
]
} |
348,714 | When I use the grep command, all occurrences of a word are picked up, even if they are part of other words. For example, if I use grep to find occurrences of the word 'the', it will also highlight the 'the' in 'theatre'. Is there a way to adapt the grep command so that it only picks up full words, not parts of words? | From man grep:

-w, --word-regexp
    Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore.
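For the question's example (the word-boundary form assumes GNU grep):

grep -w 'the' file       # matches "the" but skips "theatre"
grep '\<the\>' file      # equivalent, using GNU grep word boundaries
| {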
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/215206/"
]
} |
348,771 | Environment: Fedora 25 (4.9.12-200.fc25.x86_64) GNOME Terminal 3.22.1 Using VTE version 0.46.1 +GNUTLS VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Feb 22 2017 16:26:11) tmux 2.2 I recently started using tmux and have observed that the colors within Vim change depending on whether I'm running inside or outside of tmux. Below are screenshots of Vim outside (left) and inside (right) of tmux while viewing a Git diff: My TERM variable is Outside tmux: xterm-256color Inside tmux: screen-256color Vim reports these terminal types as expected (via :set term? ): Outside tmux: term=xterm-256color Inside tmux: term=screen-256color Vim also reports both instances are running in 256-color mode (via :set t_Co? ): Outside tmux: t_Co=256 Inside tmux: t_Co=256 There are many similar questions out there regarding getting Vim to run in 256-color mode inside tmux (the best answer I found is here ), but I don't think that's my problem given the above information. I can duplicate the problem outside of tmux if I run Vim with the terminal type set to screen-256color : $ TERM=screen-256color vim So that makes me believe there's simply some difference between the xterm-256color and screen-256color terminal capabilities that causes the difference in color. Which leads to the question posed in the title: what specifically in the terminal capabilities causes the Vim colors to be different? I see the differences between running :set termcap inside and outside of tmux, but I'm curious as to which variables actually cause the difference in behavior. Independent of the previous question, is it possible to have the Vim colors be consistent when running inside or outside of tmux? Some things I've tried include: Explicitly setting the default terminal tmux uses in ~/.tmux.conf to various values (some against the advice of the tmux FAQ ): set -g default-terminal "screen-256color" set -g default-terminal "xterm-256color" set -g default-terminal "screen.xterm-256color" set -g default-terminal "tmux-256color" Starting tmux using tmux -2 . In all cases, Vim continued to display different colors inside of tmux. | tmux doesn't support the terminfo capability bce (back color erase), which vim checks for, to decide whether to use its "default color" scheme. That characteristic of tmux has been mentioned a few times - Reset background to transparent with tmux? Clear to end of line uses the wrong background color in tmux | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/348771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124274/"
]
} |
348,786 | Sure, echo -e can be used so that \n is understood as a new line. The problem is when I want to echo something beginning with \t, e.g. "\test". So let's say I want to perform echo -e "test\n\\test". I expect this to output:

test
\test

But instead it outputs:

test
	est

The \\t is being interpreted as a tab instead of a literal \t. Is there a clean workaround for this issue? | echo -e "\\t" passes \t to echo because backslash is special inside double-quotes in bash. It serves as an escaping (quoting) operator. In \\, it escapes itself. You can either do:

echo -e "\\\\t"

for echo to be passed \\t (echo -e "\\\t" would also do), or you could use single quotes, within which \ is not special:

echo -e '\t'

Now, echo is a very unportable command. Even in bash, its behaviour can depend on the environment. I'd advise to avoid it and use printf instead, with which you can do:

printf 'test\n\\test\n'

Or even decide which parts undergo those escape sequence expansions:

printf 'test\n%s\n' '\test'

Or:

printf '%b%s\n' 'test\n' '\test'

%b understands the same escape sequences as echo (some echos), while the first argument to printf, the format, also understands sequences, but in a slightly different way than echo (more like what is done in other languages). In any case, \n is understood by both. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218886/"
]
} |
348,850 | In my home directory, df -h $HOME shows:

Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/disk1   231G  177G  54G    77%   /

But Finder reports much less available space (screenshot omitted). Any ideas?

EDIT: Attached df -h output:

/dev/disk1    231G  177G  54G   77%  /
# this is memory disk
/dev/disk2    7.0G  677M  6.4G  10%  /Users/HOME/Library/Caches
# this is an encrypted dmg
/dev/disk3s2  9.3G  7.0G  2.3G  76%  /Volumes/NOT_HOME
# this is a bindfs
/Volumes/NOT_HOME/xxxx  9.3G  7.0G  2.3G  76%  /Users/HOME/Library/some-folder | I was having this same problem, and apparently it was caused by my local Time Machine snapshots filling up the disk. OSX supposedly clears those snapshots out automatically whenever more space is needed (and so it reports it as available space in Finder), but df doesn't know that. To solve the issue, you can use the command line utility tmutil to purge old snapshots. To list existing local snapshots:

sudo tmutil listlocalsnapshots /

To clear out 20 gigs (21474836480 bytes) worth of old local snapshots (with a purge priority of 4):

sudo tmutil thinlocalsnapshots / 21474836480 4 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
348,866 | How do I list files in the same order as scp -rp copies them? I need to know this because sometimes I need to Ctrl-C an scp and later want to copy the remaining files. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17288/"
]
} |
348,873 | On my server using LVM, I have a simple linear LV on a single drive (PV). Now, I added 2 more (same size) drives (PVs) to the server. I want to convert my existing linear LV to a striped LV (RAID0-like) across the 3 drives, if possible online. This would allow me to enhance performance, thanks to the striping. I know it is theoretically possible. I tried many things, like creating a striped mirror of my LV, based on this website technique, but in my case it is more complicated because I want to keep using the original drive (on the website, it is a migration from a single-drive LV to 3 other drives). I am becoming more and more familiar with the pvmove, lvconvert and other LVM tools but didn't succeed. Please help. :) If needed, I have very little extra space on another drive (about 5% of my original LV size). My lvdisplay -m is as follows:

--- Logical volume ---
LV Path              /dev/vg_space/vol_space
LV Name              vol_space
VG Name              vg_space
LV Status            available
# open               1
LV Size              260.75 GiB
Current LE           66752
Segments             1
Allocation           inherit
Read ahead sectors   auto
- currently set to   256
Block device         253:0

--- Segments ---
Logical extent 0 to 66751:
  Type              linear
  Physical volume   /dev/sda5
  Physical extents  0 to 66751 | I finally found a trick way.

The setup: Let's say our original drive is /dev/sda (PV is /dev/sda1) and our two new drives are /dev/sdb and /dev/sdc. All drives are 100 MB big.

The idea: Because all our data can fit on half of sdb and sdc, we can temporarily put our data there and, in the meanwhile, create a striped mirror of our LV across the 3 other halves of the drives. Then, get rid of the original side of the (temporary) mirror and extend our striped LV to full size. This wonderful piece of art should explain better:

original state:

   sda       sdb       sdc
 _______   _______   _______
|       | |       | |       |
|       | |       | |       |
|lv_orig| | empty | | empty |
|       | |       | |       |
|       | |       | |       |
|       | |       | |       |
|       | |       | |       |
|_______| |_______| |_______|

partition sdb & sdc, pvmove, then partition sda:

   sda       sdb       sdc
 _______   _______   _______
|       | |       | |       |
| sda1  | | sdb1  | | sdc1  |
| empty | | empty | | empty |
|_______| |_______| |_______|
|       | |       | |       |
| sda2  | |lv_orig| |lv_orig|  <= linear on 2 drives
| empty | |half 1 | |half 2 |
|_______| |_______| |_______|

add sd{a,b,c}1 to vg, mirror the LV on this in striped mode:

   sda       sdb       sdc
 _______   _______   _______
|lv_orig| |lv_orig| |lv_orig|
|mirror | |mirror | |mirror |  <= striped!
|stripe1| |stripe2| |stripe3|
|_______| |_______| |_______|
|       | |       | |       |
| sda2  | |lv_orig| |lv_orig|
| empty | |half 1 | |half 2 |
|_______| |_______| |_______|

get rid of the sd{b,c}2 side of the mirror:

   sda       sdb       sdc
 _______   _______   _______
|       | |       | |       |
|lv_orig| |lv_orig| |lv_orig|  <= still striped!
|stripe1| |stripe2| |stripe3|
|_______| |_______| |_______|
|       | |       | |       |
| sda2  | | sdb2  | | sdc2  |
| empty | | empty | | empty |
|_______| |_______| |_______|

delete sd{a,b,c}2 partitions to extend sd{a,b,c}1 on the whole disk, finally, extend the lv:

   sda       sdb       sdc
 _______   _______   _______
|       | |       | |       |
| sda1  | | sdb1  | | sdc1  |
|       | |       | |       |
|lv_orig| |lv_orig| |lv_orig|  <= definitely striped!
|       | |       | |       |
|bigger&| |bigger&| |bigger&|
|striped| |striped| |striped|
|_______| |_______| |_______|

Here is how to proceed:

Disclaimer: I wrote this mostly based on memories, please double check the commands (and edit the post if needed!)
1. create the partitions sdb1 and sdb2, respectively 42 and 58 MB; same thing for sdc,
2. pvcreate /dev/sd{b,c}{1,2},
3. vgextend vg_orig /dev/sdb2 /dev/sdc2,
4. pvmove /dev/sda1 will move all LV data to sdb2 and sdc2,
5. vgreduce vg_orig /dev/sda1 and pvremove /dev/sda1 will make LVM completely stop using sda,
6. create a 42 MB partition /dev/sda1 (erasing the previous one if needed), and pvcreate /dev/sda1,
7. vgextend vg_orig /dev/sd{a,b,c}1,
8. lvconvert --type mirror --mirrors 1 --stripes 3 vg_orig/lv_orig /dev/sd{a,b,c}1 will create a striped mirror of our original LV (what we are looking for!); you can check the details with lvdisplay -am,
9. the previous command may fail if the total number of extents in the LV is not a multiple of 3, in which case you can simply add 1 or 2 extents to the LV like this: lvextend -l +1 vg_orig/lv_orig,
10. with this command, we get rid of the temporary mirror copy of the data we have in sdb2 and sdc2: lvconvert --type mirror --mirrors 0 vg_orig/lv_orig /dev/sd{b,c}2,
11. remove the sdX2 partitions we don't need anymore: vgreduce vg_orig /dev/sd{b,c}2, then pvremove /dev/sd{b,c}2,
12. now we have a striped version of our original data; we still need to make the sd{a,b,c}1 partitions bigger, so delete the sdb2 and sdc2 partitions and recreate the sda1, sdb1 and sdc1 partitions so that they start at the same sector number but end at a higher sector number (don't be afraid :)),
13. partprobe (or partprobe /dev/sd{a,b,c}) to refresh the kernel partition table,
14. pvresize /dev/sd{a,b,c}1 to make LVM realize the PVs are bigger,
15. lvextend -l 100%VG vg_orig/lv_orig to make the LV bigger now,
16. resize2fs /dev/vg_orig/lv_orig if you have an ext filesystem that you want to grow online.

Here you go! It is pretty confusing to me that a tool like LVM, supposedly made for this kind of operation, is not able to do this task easily in one (or two) commands... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30196/"
]
} |
348,879 | I have an SSD drive with a LUKS-encrypted partition. How do I discard all data with one command, or damage it to a non-recoverable state, even if the partition is in use? | If your SSD is encrypted with LUKS, erasing the header is good enough, e.g.:

dd if=/dev/urandom of=/dev/sda1 bs=512 count=20480

See the following link for details: https://wiki.archlinux.org/index.php/Dm-crypt/Drive_preparation#Wipe_LUKS_header
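Two hedged alternatives aimed at SSDs (check that your cryptsetup and util-linux versions provide these tools, and run them on the unmounted device):

cryptsetup luksErase /dev/sda1   # destroys all LUKS keyslots; the data becomes undecryptable
blkdiscard /dev/sda1             # asks the SSD firmware to discard every block
| {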
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190338/"
]
} |
348,913 | If I select text with a mouse in tmux in iTerm2 on macOS, I get the selected text copied into my clipboard. I do not have to click any extra buttons - just select the text you want and you're done. I've tested tmux in Terminal.app on macOS but it doesn't work there - I have to hit y to copy the selection to my clipboard. I thought that there is a mouse binding (something like MouseOnSelection, similar to MouseDown1Pane), but I couldn't find anything useful on the web and in man tmux. I wonder if there is a way to have a similar behaviour on Ubuntu 16.10 - preferably in the Gnome terminal. | Tmux 2.4+ with vi copy mode bindings and xclip:

set-option -g mouse on
set-option -s set-clipboard off
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i"

For older tmux versions, emacs copy mode bindings (the default), or non-X platforms (i.e., no xclip), see the explanation below.

Explanation: First we need to enable the mouse option so tmux will capture the mouse and let us bind mouse events:

set-option -g mouse on

Gnome-terminal doesn't support setting the clipboard using xterm escape sequences, so we should ensure the set-clipboard option is off:

set-option -s set-clipboard off

This option might be supported and enabled by default on iTerm2 (see set-clipboard in the tmux manual), which would explain the behavior there. We can then bind the copy mode MouseDragEnd1Pane "key", i.e., when the first mouse button is released after clicking and dragging in a pane, to a tmux command which takes the current copy mode selection (made by the default binding for MouseDrag1Pane) and pipes it to a shell command. This tmux command was copy-pipe before tmux 2.4, and has since changed to send-keys -X copy-pipe[-and-cancel]. As for the shell command, we simply need something which will set the contents of the system clipboard to whatever is piped to it; xclip is used to do this in the following commands. Some equivalent replacements for "xclip -selection clipboard -i" below on non-X platforms are "wl-copy" (Wayland), "pbcopy" (macOS), "clip.exe" (Windows, WSL), and "cat > /dev/clipboard" (Cygwin, MinGW).

Tmux 2.4+:

# For vi copy mode bindings
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i"
# For emacs copy mode bindings
bind-key -T copy-mode MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i"

Tmux 2.2 to 2.4:

# For vi copy mode bindings
bind-key -t vi-copy MouseDragEnd1Pane copy-pipe "xclip -selection clipboard -i"
# For emacs copy mode bindings
bind-key -t emacs-copy MouseDragEnd1Pane copy-pipe "xclip -selection clipboard -i"

Before tmux 2.2: Copy after mouse drag support was originally added in tmux 1.3 through setting the new mode-mouse option to on. Tmux 2.1 changed the mouse support to the familiar mouse key bindings, but did not have DragEnd bindings, which were introduced in 2.2. Thus, before 2.2, I believe the only method of setting the system clipboard on mouse drag was through the built-in use of xterm escape sequences (the set-clipboard option). This means that it's necessary to update to at least tmux 2.2 to obtain the drag-and-copy behavior for terminals that don't support set-clipboard, such as GNOME Terminal. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/348913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128489/"
]
} |
348,941 | So I'm trying to create a Docker image that contains gcc. I'm running the command RUN yum -y install gcc within my Dockerfile. It downloads the package until it finally fails trying to install a dependency:

Rpmdb checksum is invalid: dCDPT(pkg checksums): glibc-headers.x86_64 0:2.17-157.el7_3.1 - u

Here's the output below:

Dependencies Resolved
================================================================================
 Package         Arch    Version             Repository  Size
================================================================================
Installing:
 gcc             x86_64  4.8.5-11.el7        base        16 M
Installing for dependencies:
 cpp             x86_64  4.8.5-11.el7        base        5.9 M
 glibc-devel     x86_64  2.17-157.el7_3.1    updates     1.1 M
 glibc-headers   x86_64  2.17-157.el7_3.1    updates     668 k
 kernel-headers  x86_64  3.10.0-514.6.2.el7  updates     4.8 M
 libgomp         x86_64  4.8.5-11.el7        base        152 k
 libmpc          x86_64  1.0.1-3.el7         base        51 k
 mpfr            x86_64  3.1.1-4.el7         base        203 k
Updating for dependencies:
 glibc           x86_64  2.17-157.el7_3.1    updates     3.6 M
 glibc-common    x86_64  2.17-157.el7_3.1    updates     11 M
 libgcc          x86_64  4.8.5-11.el7        base        97 k

Transaction Summary
================================================================================
Install  1 Package (+7 Dependent packages)
Upgrade             ( 3 Dependent packages)

Total download size: 44 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
--------------------------------------------------------------------------------
Total                                               2.3 MB/s |  44 MB  00:19
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : libgcc-4.8.5-11.el7.x86_64                        1/14
  Updating   : glibc-2.17-157.el7_3.1.x86_64                     2/14
warning: /etc/nsswitch.conf created as /etc/nsswitch.conf.rpmnew
error: lua script failed: [string "%triggerin(glibc-common-2.17-106.el7_2.6.x86_64)"]:1: attempt to compare number with nil
Non-fatal <unknown> scriptlet failure in rpm package glibc-2.17-157.el7_3.1.x86_64
  Updating   : glibc-common-2.17-157.el7_3.1.x86_64              3/14
  Installing : mpfr-3.1.1-4.el7.x86_64                           4/14
  Installing : libmpc-1.0.1-3.el7.x86_64                         5/14
  Installing : cpp-4.8.5-11.el7.x86_64                           6/14
  Installing : libgomp-4.8.5-11.el7.x86_64                       7/14
  Installing : kernel-headers-3.10.0-514.6.2.el7.x86_64          8/14
  Installing : glibc-headers-2.17-157.el7_3.1.x86_64             9/14
  Installing : glibc-devel-2.17-157.el7_3.1.x86_64              10/14
  Installing : gcc-4.8.5-11.el7.x86_64                          11/14
  Cleanup    : glibc-2.17-106.el7_2.6.x86_64                    12/14
  Cleanup    : glibc-common-2.17-106.el7_2.6.x86_64             13/14
  Cleanup    : libgcc-4.8.5-4.el7.x86_64                        14/14
Rpmdb checksum is invalid: dCDPT(pkg checksums): glibc-headers.x86_64 0:2.17-157.el7_3.1 - u
The command '/bin/sh -c yum -y install gcc' returned a non-zero code: 1 | I was having this exact issue when creating a Docker image. First installing yum-plugin-ovl, which is a yum plugin for the Docker overlay fs, fixed the issue for me. Example:

...
RUN yum -y update \
 && yum -y install yum-plugin-ovl \
 && yum -y install gcc
...

See this GitHub issue for more information on the fix. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/348941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63748/"
]
} |
348,962 | Without colors, the ls command shows, e.g.:

1 2 3

and it is unknown which is a folder and which is a file. Can ls or some other command clarify which is a file and which is a folder, perhaps by including a '/' with the folder names? For example:

/1 2 /3 | On Linux, ls -p adds the trailing slash on dirs.
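For example (a quick illustration; the slashes trail the names rather than lead them):

$ ls -p
1/ 2 3/

ls -F goes further and marks every special type: directories get /, executables *, and symlinks @. | {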
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/348962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218838/"
]
} |
349,004 | I'm looking for straight Unix commands to get the first Sunday of next month, the first Monday of next month, the first Tuesday of next month, the first Wednesday of next month, etc. I will need them in a complete date format (time is not mandatory). I can get numbers like 2, 3, 4, etc., but as I don't want only numbers, I need them in a date format (day, month, year):

$ NEXT_MONTH=`date +'%m %Y' -d 'next month'`
$ echo $NEXT_MONTH
04 2017
$ NEXT_SUNDAY=`cal $NEXT_MONTH | awk 'NF==7 && !/^Su/{print $1;exit}'`
$ echo $NEXT_SUNDAY
2

I need these dates to send notifications to the email group. For example, I could get the first Saturday of next month as below:

$ firstofmonth=$(date -d '+1 months' '+%Y%m01')
20170401
$ firstsaturday=$(date -d "$firstofmonth" '+%Y-%m')-$((7 - \
  $(date -d "$firstofmonth" '+%u') ))
2017-04-1 | In ksh93:
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207779/"
]
} |
349,005 | Why do people use apt-get instead of apt ? In nearly every tutorial I see, the suggestion is to use apt-get . apt is prettier (by default), shorter, and generally more intuitive. ( apt-cache search vs apt search , for example) I don't know if I'm missing something because apt just seems better in every way. What's the argument for apt-get over apt for everyday use? | The apt front-end is a recent addition, it was added in version 1.0 in April 2014. So it's only been part of one Debian stable release, Debian 8. People who've used Debian for longer are used to apt-get and apt-cache , and old habits die hard — and old tutorials die harder (and new users learn old habits from those). apt is nicer for end users as a command-line tool, although even there it has competition — I prefer aptitude for example. As a general-purpose tool though it's not necessarily ideal, because its interface is explicitly not guaranteed to stay the same from one release to the next, and it's not designed for use in scripts. Thus in any circumstance where instructions may be used in a script, it should be avoided; so it's typically safer to suggest apt-get rather than apt in answers on Unix.SE and similar sites. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/349005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219064/"
]
} |
349,015 | I'm using sshpass and ssh to send a command to a Linux box, and then disconnect. The command is sent ok, but I don't get the response I expect. I noticed that upon login the host sends 5 blank lines, then a 5 line banner. It appears that the ssh command (when passing a command as a parameter) is returning only the first blank line. Is there a way to cause it to return ALL text? (or wait for 5 seconds to capture all text before returning) Command looks like this, and capturing response into Bash variable RESPONSE=$(sshpass .... ssh..... "my command") | Command substitution captures everything the remote command writes to standard output (only trailing newlines are stripped), so the output is not being cut off after one line. The likely explanation is that the banner and login messages are delivered on standard error: the OpenSSH client prints the server's banner to stderr, which $(...) does not capture. If you want the banner too, merge the streams: RESPONSE=$(sshpass .... ssh..... "my command" 2>&1) Conversely, add 2>/dev/null if you only want the command's own output. Also note that a command given to ssh runs without a login shell or TTY, so messages produced only for interactive logins may never appear at all; ssh -t forces a pseudo-terminal if you really need them. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/349015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23091/"
]
} |
349,052 | I have been trying to format an sd card with the lastest debian jessie-lite image for use with raspberry pi. When using the dd command, it states that there is no space left on device after copying 10 megs. I have searched SE and have tried to use various answers to questions but I always end up back at the same place. Below are the outputs of dd, fdisk, df and ls commands that may be of interest. /dev/sdb is the sd card dd bs=4M if=/home/user/Downloads/2017-02-16-raspbian-jessie-lite.img of=/dev/sdbdd: error writing ‘/dev/sdb’: No space left on device3+0 records in2+0 records out10485760 bytes (10 MB) copied, 0.0137885 s, 760 MB/s fdisk -l /dev/sdbDisk /dev/sdb: 10 MiB, 10485760 bytes, 20480 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xdbcc7ab3Device Boot Start End Sectors Size Id Type/dev/sdb1 8192 137215 129024 63M c W95 FAT32 (LBA)/dev/sdb2 137216 2807807 2670592 1.3G 83 Linux ls -al /dev/sdb*-rw-r--r-- 1 root root 10485760 Mar 3 22:04 /dev/sdbbrw-rw---- 1 root disk 8, 17 Mar 3 22:05 /dev/sdb1brw-rw---- 1 root disk 8, 18 Mar 3 22:05 /dev/sdb2brw-rw---- 1 root disk 8, 19 Mar 3 22:05 /dev/sdb3 df -hFilesystem Size Used Avail Use% Mounted on/dev/sda1 226G 7.3G 207G 4% /udev 10M 10M 0 100% /devtmpfs 1.6G 9.3M 1.6G 1% /runtmpfs 3.9G 112K 3.9G 1% /dev/shmtmpfs 5.0M 4.0K 5.0M 1% /run/locktmpfs 3.9G 0 3.9G 0% /sys/fs/cgrouptmpfs 792M 4.0K 792M 1% /run/user/119tmpfs 792M 8.0K 792M 1% /run/user/1000 | -rw-r--r-- 1 root root 10485760 Mar 3 22:04 /dev/sdb /dev/sdb is a regular file, not a device. You must have run rm /dev/sdb at some point. It is created automatically when the device is inserted, but when you run commands as root, you can mess up with it. Now that /dev/sdb is a regular file, it's stored in memory, on a filesystem which has a low size limit because it's only meant to contain device files that have no content as such since they're just markers to say “call this device driver to store the contents”. Remove the file ( rm /dev/sdb as root). Then, to re-create the proper /dev/sdb , the easiest way is to eject the SD card and insert it back it. Once you've done that, you can copy the image with the command you were using, or simply </home/user/Downloads/2017-02-16-raspbian-jessie-lite.img sudo tee /dev/sdb >/dev/null | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/349052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141314/"
]
} |
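Before re-running dd after the fix above, it is worth confirming that /dev/sdb is a block device again:
$ ls -l /dev/sdb    # should start with 'b' (block special), not '-' as in the question
lsblk /dev/sdb is another quick check; it complains if given a regular file.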
349,118 | I am looking for command like mount 1234-SOME-UUID /some/mount/folder I am connecting a couple of external USB hard drives. I want them to be mounted on specific folders during startup. I am unable to boot using /etc/fstab if one of the drive is not connected. so I am using an init script. But /dev/sdbx enumeration is not always same to use with mount /dev/sdX /some/mount/folder in the init script. | From the manpage of mount . -U, --uuid uuid Mount the partition that has the specified uuid. So your mount command should look like as follows. mount -U 1234-SOME-UUID /some/mount/folder or mount --uuid 1234-SOME-UUID /some/mount/folder A third possibility would be mount UUID=1234-SOME-UUID /some/mount/folder | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/349118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27711/"
]
} |
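To fix the boot-time problem described in the question, the usual approach is an /etc/fstab entry with the nofail option, so boot proceeds even when the drive is absent (the filesystem type here is an assumption; adjust it to yours):
UUID=1234-SOME-UUID  /some/mount/folder  ext4  defaults,nofail  0  2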
349,214 | Many webpages say that the ASCII code for the Enter key is 13 (0d), i.e. that Enter is treated as Carriage Return (CR). Now let's run an experiment: open vim, press the Enter key three times, do nothing more, then save the file as test.csv. xxd test.csv0000000: 0a0a 0a My conclusion: the Enter key's ASCII value is 0a, meaning newline, which is different from Carriage Return (13, or 0d in ASCII). Is that right or not? | Your terminal sends carriage return when you press Enter , and on Unix-like systems, the terminal driver translates that into line-feed ("newline"). That's the icrnl feature shown by stty -a , e.g., $ stty -aspeed 38400 baud; rows 40; columns 80; line = 0;intr = ^C; quit = ^\; erase = ^H; kill = ^U; eof = ^D; eol = <undef>;eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff-iuclc -ixany -imaxbel -iutf8opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprtechoctl echoke Programs (even shell scripts) can turn that off to read the actual carriage return character to distinguish it from Control J (line feed). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
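Two quick ways to observe what the answer above describes. First, at a shell prompt press Ctrl+V and then Enter: a literal ^M (carriage return, 0x0d) is inserted, which is what the terminal actually sends (the stty output above shows lnext = ^V). Second, watch what programs receive after the icrnl translation:
$ od -c        # type: hello, press Enter, then Ctrl+D
0000000   h   e   l   l   o  \n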
349,243 | I have this kind of directory tree that is obtained from unzipping a zip file: x -> y -> z -> run -> FILES AND DIRECTORIES HERE So there are 4 directories, three of which (x, y, z) contain no files and only a single sub-directory each, and there is the directory I am interested in, named "run". I want to move the "run" directory itself (including everything within) to the "root" location where I unzipped (i.e. where "x" is, but not inside "x"). Assumptions: There exists a folder named "run", but I do not know how many directories I will have to "cd" through to get to it (could be 3 (x, y, z), could be 10 or more; the names are also unknown and do not have to be x, y, z etc). How can I accomplish this? I tried many variations of this and they all failed. | What about: find . -type d -name run -exec mv {} /path/to/X \; where /path/to/X is your destination directory, running the command from that same starting directory. Then remove the now-empty directories in a second step. (On a side note, zip has a --junk-paths option that discards the directory paths, usable either when zipping or when unzipping.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/349243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219237/"
]
} |
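For the answer's second step (cleaning up the now-empty x/y/z chain), GNU find can remove all empty directories in one pass; -delete processes entries depth-first, so nested empty directories are removed bottom-up:
find . -mindepth 1 -type d -empty -delete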
349,252 | I run Debian Jessie 64-bit on the host and 32-bit in VirtualBox. To save traffic I try to cp the i386 packages from the host to the shared folder, for use in VirtualBox. My Hostname/var/cache/apt/archives$ ls -al /var/cache/apt/archives/ | grep 'i386' | awk '{print $9}'alsa-oss_1.0.28-1_i386.debgcc-4.9-base_4.9.2-10_i386.debi965-va-driver_1.4.1-2_i386.deblibaacplus2_2.0.2-dmo2_i386.deblibaio1_0.3.110-1_i386.deblibasound2_1.0.28-1_i386.deblibasound2-dev_1.0.28-1_i386.deblibasound2-plugins_1.0.28-1+b1_i386.deb shows me the packages I'm looking for, but when I try to cp them via xargs My Hostname/var/cache/apt/archives$ ls -al /var/cache/apt/archives/ | grep 'i386' | awk '{print $9}' | LANG=C xargs cp -u /home/alex/debian-share/apt-archives/cp: target 'zlib1g_1%3a1.2.8.dfsg-2+b1_i386.deb' is not a directory I cannot figure out what I am doing wrong. Is this approach even possible? My problem is that I cannot script. Probably it is something like for i in *_i386.deb ; do cp [option] full-path to shared-folder I didn't try it, because I don't want to mess up my host. | While you already know how you should solve your current problem, I'll still answer about xargs . xargs appends the strings it reads to the end of the command, while in your case you need that string before the last argument of cp . Use the -I option of xargs to construct the command. Like this: ls /source/path/*pattern* | xargs -I{} cp -u {} /destination/path In this example I'm using {} as the replacement string, so the syntax looks similar to find . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/349252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
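Parsing ls output breaks on filenames containing whitespace or other oddities; a more robust variant of the same idea, assuming GNU find and xargs for the null-separated options:
find /var/cache/apt/archives -maxdepth 1 -name '*_i386.deb' -print0 |
  xargs -0 -I{} cp -u {} /home/alex/debian-share/apt-archives/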
349,264 | I'm trying to mount a CIFS device after the system boots (using systemd ), but the system tries to mount the system before the network is established, so it fails. After logging into the system I can mount it without any problem, using sudo mount -a . How can I tell my Arch (arm) to wait until the network is available? | Adding _netdev to the mount options in /etc/fstab might be sufficient. Mount units referring to local and network file systems are distinguished by their file system type specification. In some cases this is not sufficient (for example network block device based mounts, such as iSCSI), in which case _netdev may be added to the mount option string of the unit, which forces systemd to consider the mount unit a network mount. Additionally systemd supports explicit order dependencies between mount entries and other units: Adding x-systemd.after=network-online.target to the mount options might work if _netdev is not enough. See the systemd mount unit documentation for more details. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/349264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148745/"
]
} |
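Putting the answer's options together, a hedged /etc/fstab sketch for a CIFS share (server, share, mount point and credentials path are placeholders; x-systemd.after requires a reasonably recent systemd):
//server/share  /mnt/share  cifs  credentials=/etc/cifs-credentials,_netdev,x-systemd.after=network-online.target  0  0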
349,304 | Having trouble mounting a btrfs filesystem. Originally created on a server running xbian. Trying to mount on an up-to-date OpenSUSE 42.2 server. Complains about unsupported feature 0x10, open_ctree failed. How can I mount this filesystem ? Mount attempt # file -s /dev/sdc2/dev/sdc2: BTRFS Filesystem (label "xbian", sectorsize 4096, nodesize 16384, leafsize 16384)# mount -t btrfs /dev/sdc2 /mntmount: wrong fs type, bad option, bad superblock on /dev/sdc2, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.# dmesg output [ 119.698406] BTRFS info (device sdc2): disk space caching is enabled[ 119.698409] BTRFS: couldn't mount because of unsupported optional features (10).[ 119.744887] BTRFS: open_ctree failed btrfs version # rpm -qa|grep btrfsbtrfsprogs-udev-rules-4.5.3-3.1.noarchbtrfsprogs-4.5.3-3.1.x86_64libbtrfs0-4.5.3-3.1.x86_64btrfsmaintenance-0.2-13.1.noarch# btrfs inspect-internal Reports unknown flag. This behaviour seen on stock btrfs version supplied with OpenSUSE (btrfs-progs v4.5.3+20160729) and with latest when downloaded from git and compiled (btrfs-progs v4.9.1) # btrfs inspect-internal dump-super /dev/sdc2superblock: bytenr=65536, device=/dev/sdc2---------------------------------------------------------csum 0x394d4988 [match]bytenr 65536flags 0x1 ( WRITTEN )magic _BHRfS_M [match]fsid 71ecbcc5-c88f-4f27-b4d8-763bd801765elabel xbiangeneration 129root 4669440sys_array_size 97chunk_root_generation 102root_level 0chunk_root 131072chunk_root_level 0log_root 0log_root_transid 0log_root_level 0total_bytes 7451181056bytes_used 691642368sectorsize 4096nodesize 16384leafsize 16384stripesize 4096root_dir 6num_devices 1compat_flags 0x0compat_ro_flags 0x0incompat_flags 0x179 ( MIXED_BACKREF | COMPRESS_LZO | COMPRESS_LZOv2 | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA | unknown flag: 0x10 )csum_type 0csum_size 4cache_generation 129uuid_tree_generation 112dev_item.uuid a8b49751-56e3-4c42-a1d3-40a1554c800cdev_item.fsid 71ecbcc5-c88f-4f27-b4d8-763bd801765e [match]dev_item.type 0dev_item.total_bytes 7451181056dev_item.bytes_used 926941184dev_item.io_align 4096dev_item.io_width 4096dev_item.sector_size 4096dev_item.devid 1dev_item.dev_group 0dev_item.seek_speed 0dev_item.bandwidth 0dev_item.generation 0# | The problem is indeed that the two Linux versions sport a slightly different BTRFS version, i.e. do not support the same features: [ 119.698406] BTRFS info (device sdc2): disk space caching is enabled [ 119.698409] BTRFS: couldn't mount because of unsupported optional features (10). It seems that the xbian has enabled that features, while OpenSuse 42.2 does not, which prevents interoperability. These FS features are optional: This means it is possible to create downward compatible BTRFS partitions on newer systems that are readable from older systems (without those features), controlled by the parameters that are passed to the mkfs.btrfs program. The numeric code of the features is 10 - unknown flag: 0x10. I had a hard time to figure out what that codes means (my guess: extended inode references.) But since the number is so low, I think this is something basic. I think you cannot make this filesystem readable by unpatched kernels anymore. 
Otherwise, knowing the feature, we could perhaps specify a mount option to avoid the error, as here, where the filesystem's compression algorithm is given explicitly (note that btrfs supports zlib, lzo and, on newer kernels, zstd; there is no lz4 option): mount -t btrfs -o compress=lzo dev /mnt If we do not know what this feature is, you cannot even update your OpenSUSE kernel to match xbian's. Usually in such a situation, you would rely on ext4 instead for compatibility reasons. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349304",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104877/"
]
} |
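On reasonably recent kernels you can list which btrfs features the running kernel supports, which helps make sense of "unsupported optional features" errors like the one above; each supported feature appears as an entry under:
$ ls /sys/fs/btrfs/features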
349,316 | Why does the human-readable format of the memory tool free print its numbers using full stops when I run it myself, but commas when it's run by crontab? Sample: free -h total used free shared buffers cachedMem: 3.7G 2.3G 1.4G 145M 675M 869M-/+ buffers/cache: 839M 2.9GSwap: 3.9G 385M 3.5G But when run by crontab: total used free shared buffers cachedMem: 3,7G 2,3G 1,4G 145M 675M 869M-/+ buffers/cache: 840M 2,9GSwap: 3,9G 385M 3,5G I would call this a bug, as it's very unexpected behaviour. It's a formula for mistakes. | Your locale settings are different in your shell and in your cronjob. You can check by running locale in both settings, and you can change your cronjob's locale settings by setting the appropriate variables ( LC_ALL is the hammer if you don't need to be subtle; see locale(7) for details). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8563/"
]
} |
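A minimal crontab sketch of the fix from the answer above: set the locale once at the top so every job formats numbers consistently (the locale name and log path are examples):
LC_ALL=en_US.UTF-8
0 * * * * free -h >> /tmp/mem.log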
349,401 | ./Queen/(1986)/Innuendo/01.One vision.mp3./Queen/(1986)/Innuendo/02.One vision.mp3./Queen/(1986)/Innuendo/03.One vision.mp3 I want to display above line as: 01.One vision.mp302.One vision.mp303.One vision.mp3 I tried with: sed -i 's/^[^.$/]/ /g/ | Your main problem is that you're trying to use / as a separator, when / also is a character you'll be parsing. You'll need to use a different separator, such as a pipe. Match everything and a / , then use \( and \) (capture groups) delimiting the part you want to extract, and use \1 , rendering that group: echo "yourstring" | sed 's|.*/\([^/]*\)$|\1|' You may have it working without capture groups: just dropping everything until the last / : echo "yourstring" | sed 's|.*/||' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207112/"
]
} |
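Since only the last path component is wanted, the dedicated utility gives the same result without any regular expression:
$ basename './Queen/(1986)/Innuendo/01.One vision.mp3'
01.One vision.mp3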
349,555 | I use stat -f %m .bashrc to get modification time of my .bashrc on osx. But when I run the same command on ubuntu, it spits error: stat: cannot read file system information for %m': No such file or directory is there a compatible way to achieve this? | Ubuntu uses the GNU coreutils stat , whereas OSX uses the BSD variant. So on Ubuntu the command is a bit different: stat -c %Y .bashrc From man stat : -c --format=FORMAT use the specified FORMAT instead of the default; output a new‐ line after each use of FORMAT and: %Y time of last data modification, seconds since Epoch If you want a portable way to run these regardless of OS, then there are several ways of doing it. I think I would set a variable one time to the appropriate parameters: if uname | grep -q "Darwin"; then mod_time_fmt="-f %m"else mod_time_fmt="-c %Y"fi And then use this value in the stat command wherever needed: stat $mod_time_fmt .bashrc | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/349555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217330/"
]
} |
349,669 | How would you ssh into another system's root account? Assume that you have access to the target system. This is a question I was asked in a quiz. Apparently simply using ssh root@192.168.xxx.xxx wasn't the answer. I'd like to know the answer. | That is actually the proper way to SSH into a server (192.168.xxx.xxx) that accepts SSH connections on the default port (22). To specify the user you want to use for login, you can use: ssh -l root 192.168.xxx.xxx or ssh root@192.168.xxx.xxx If the SSH service is configured to allow root login, you should be able to connect without problems ( PermitRootLogin yes, under sshd_config). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/349669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219550/"
]
} |
349,676 | I would like to know that is it possible to use Android studio in FreeBSD ?I tried to run it but I couldn't.I installed IntelliJ from ports but there was no option to select the Android SDK. | When Android Studio was still in Beta, I tried to get it running on FreeBSD (my preferred platform) but had nothing but issues. I did manage to compile a debug APK but could not get a full release version (weird). I ran Android Studio under Linux Emulation but there was still issues with the Java side of things (from memory). I even wrote a complex script for adb to help install the APKs as they would not install from the "Run" option of Android Studio. Not hard, but did speed things up a lot. In the end I gave up and tried a heap of Linux distro (Live CDs) until I found one I was comfortable with - then installed Android Studio without issues. Personally I still prefer FreeBSD for a lot of things but I am more than happy with a stable working environment for Android development. Not the answer you were looking for I know, just sharing my own experience. I guess things could have changed from the Beta to now (v2.3) - but I've decided that Android Studio is updated so often (too often to be honest) that I'm not going to risk issues with FreeBSD and just run Linux. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219553/"
]
} |
349,686 | Suppose I have a list of csv files with the following format: INT_V1_<Product>_<ID>_<Name>_<ddmmyy>.csvASG_B1_V1_<Product>_<ID>_<Name>_<ddmmyy>.csv The INT_V1_ & ASG_B1_V1_ is fixed, meaning all the csv files start with it. How can I split the file names into variable? For example, I wanted to capture the Name & assign it to a variable $Name . | With zsh : file='INT_V1_<Product>_<ID>_<Name>_<ddmmyy>.csv'setopt extendedglobif [[ $file = (#b)*_(*)_(*)_(*)_(*).csv ]]; then product=$match[1] id=$match[2] name=$match[3] date=$match[4]fi With bash 4.3 or newer, ksh93t or newer or zsh in sh emulation (though in zsh , you'd rather simply do field=("${(@s:_:)field}") for splitting than using the split+glob non-sense operator of sh ) you could split the string on _ characters and reference them from the end: IFS=_set -o noglobfield=($file) # split+glob operatordate=${field[-1]%.*}name=${field[-2]}id=${field[-3]}product=${field[-4]} Or (bash 3.2 or newer): if [[ $file =~ .*_(.*)_(.*)_(.*)_(.*)\.csv$ ]]; then product=${BASH_REMATCH[1]} id=${BASH_REMATCH[2]} name=${BASH_REMATCH[3]} date=${BASH_REMATCH[4]}fi (that assumes $file contains valid text in the current locale which is not guaranteed for file names unless you fix the locale to C or other locale with a single-byte per character charset). Like zsh 's * above, the .* is greedy . So the first one will eat as many *_ as possible, so the remaining .* will only match _ -free strings. With ksh93 , you could do pattern='*_(*)_(*)_(*)_(*).csv'product=${file//$pattern/\1}id=${file//$pattern/\2}name=${file//$pattern/\3}date=${file//$pattern/\4} In a POSIX sh script, you could use the ${var#pattern} , ${var%pattern} standard parameter expansion operators: rest=${file%.*} # remove .csv suffixdate=${rest##*_} # remove everything on the left up to the rightmost _rest=${rest%_*} # remove one _* from the rightname=${rest##*_}rest=${rest%_*}id=${rest##*_}rest=${rest%_*}product=${rest##*_} Or use the split+glob operator again: IFS=_set -o noglobset -- $fileshift "$(($# - 4))"product=$1 id=$2 name=$3 date=${4%.*} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/215899/"
]
} |
349,783 | I'm trying to find out what folders occupy / partition.I see that lots of disk space goes to jenkins directory sudo du -sh /home/jenkins289G /home/jenkins When I examine jenkins directory folder I get the largest folder is: sudo du -sh /home/jenkins/*137G /home/jenkins/jobs And rest of the folders are relatively small, tens of K/M...In total there are 50 folders under /home/jenkins. How can I find who "eats" the space? Thanks | The difference between: sudo du -sh /home/jenkins and sudo du -sh /home/jenkins/* is that in almost all shells (with the default setttings), * does not include hidden files or directories. Hidden means names starting with a period (e.g., if there is a /home/jenkins/.temp/ , that would not be included in the second du ). So it'd appear you have about 289-137=152 GiB of hidden files. The easiest way to find out where they are is something like this: sudo du -m /home/jenkins | sort -nr | less Taking off the -s will make du show you the subdirectories everything is in, which sounds like what you want. That'll include hidden ones. If that still doesn't find it, add an -a : sudo du -am /home/jenkins | sort -nr | less that will additionally show individual files, in case you have a few very large hidden files. It will probably also take a bit longer to run (adding files often greatly expands the output). There are also graphical frontends you can use; personally, I use xdiskusage (but maybe just because I've been using it forever): sudo du -am /home/jenkins | xdiskusage - | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72990/"
]
} |
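To size the hidden entries directly, a glob that matches dot-files while skipping . and .. (the second pattern catches names starting with two dots; 2>/dev/null silences the error if one pattern matches nothing):
sudo du -sh /home/jenkins/.[!.]* /home/jenkins/..?* 2>/dev/null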
349,790 | I converted a csv file into columns, and now I want to add dashes under each label of the first row (as if it were underlined), with the dashed lines equal to the length of each label (labels may consist of more than one word). Here is an example: Full name age country--------- --- ------- | One approach (a sketch, assuming the layout shown in your example): duplicate the header line and turn its characters into dashes with sed: sed '1{p;s/[^ ]/-/g;}' file The p prints the original header, and the substitution then converts every non-space character of the copy into a dash, preserving the column positions. One caveat: a multi-word label such as "Full name" comes out as ---- ---- rather than ---------, because sed cannot distinguish a space inside a label from a space between labels; if multi-word labels must be underlined as one block, you need a tool that knows the real column separator (e.g. awk splitting on the original csv delimiter) to compute each label's full width. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219650/"
]
} |
349,798 | Nginx's error log shows some OpenSSL Handshake errors and while searching for the cause I found confusing outputs of what OpenSSL version is used. Details:Debian Jessie 8.7 64 Bit# apt-cache policy opensslopenssl: Installed: 1.0.1t-1+deb8u6 Candidate: 1.0.1t-1+deb8u6 Version table: 1.0.2k-1~bpo8+1 0 100 http://ftp.debian.org/debian/ jessie-backports/main amd64 Packages *** 1.0.1t-1+deb8u6 0 500 http://security.debian.org/ jessie/updates/main amd64 Packages 100 /var/lib/dpkg/status 1.0.1t-1+deb8u5 0 500 http://mirror.hetzner.de/debian/packages/ jessie/main amd64 Packages 500 http://http.debian.net/debian/ jessie/main amd64 Packages# apt-cache policy nginxnginx: Installed: 1.9.10-1~bpo8+4 Candidate: 1.10.3-1~bpo8+1 Version table: 1.10.3-1~bpo8+1 0 100 http://ftp.debian.org/debian/ jessie-backports/main amd64 Packages *** 1.9.10-1~bpo8+4 0 100 /var/lib/dpkg/status 1.6.2-5+deb8u4 0 500 http://mirror.hetzner.de/debian/packages/ jessie/main amd64 Packages 500 http://http.debian.net/debian/ jessie/main amd64 Packages 500 http://security.debian.org/ jessie/updates/main amd64 Packages# nginx -Vnginx version: nginx/1.9.10built with OpenSSL 1.0.2j 26 Sep 2016 (running with OpenSSL 1.0.2k 26 Jan 2017)# openssl version -aOpenSSL 1.0.1t 3 May 2016 (Library: OpenSSL 1.0.2k 26 Jan 2017) How can nginx runs with openssl 1.0.2k and o penssl version -a says that the Library is OpenSSL 1.0.2k but apt-cache policy openssl says installed is 1.0.1t ? Could someone shed some light, please? | The openssl package contains the front-end binary, not the library. You're tracking Jessie for that package (with its security updates). The library itself is libssl1.0.0 , and you're tracking Jessie backports for that package (along with Nginx; you're just a few versions behind for the latter). This is what Nginx uses, and is the library version identified by the openssl front-end. You can see the version of the library on your system with apt-cache policy libssl1.0.0 (as well as the availability of newer versions, if any). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219653/"
]
} |
349,805 | I have text-files containing many lines, of which some starts with ">" (it's a so-called *.fasta file, and the ">"s marks the beginning of a new information container): >header_name1sequence_info>header_name2sequence_info I want to add the name of the file these lines are located in to the header. For example, if the file is named "1_nc.fasta", all the lines inside the file starting with > should have the label "001" added: >001-header_name1sequence_info>001-header_name2sequence_info Someone nice provided me with this line: sed 's/^>/>001-/g' 1_nc.fasta>001_tagged.fasta Accordingly, all headers in 2_nc.fasta should start with "002-", 3_nc.fasta -> "003-", and so on. I know how to write parallel job scripts, but the jobs are done so quickly, I think a script that serially processes all files in a loop is much better. Unfortunately, I can't do this on my own. Added twist: 11_nc.fasta and 149_nc.fasta are not available. How can I loop that through all the 500 files in my directory? | This should do the trick. I break the filename at the underscore to get the numerical prefix, and then use a printf to zero-pad it out to a three digit string. for file in *.fasta; do prefix="$(printf "%03d" "${file%%_*}")" sed "s/^>/>$prefix-/" "$file" > "${prefix}_tagged.fasta"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219660/"
]
} |
349,813 | Im trying to build embeded software using this RSDK toolchain but after make menuconfig and make Im getting following error make[7]: Entering directory `/mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/sim/mips'/bin/sh ../.././sim/mips/../../mkinstalldirs /mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/../gdb-host/binmkdir -p -- /mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/../gdb-host/bin/bin/sh ../.././sim/mips/../../mkinstalldirs /mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/../gdb-host/libn=`echo run | sed 's,^,mips-linux-,'`; \ /usr/bin/install -c run /mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/../gdb-host/bin/$n/usr/bin/install: cannot stat `run': No such file or directorymake[7]: *** [install-common] Error 1make[7]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/sim/mips'make[6]: *** [install] Error 1make[6]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8/sim'make[5]: *** [install-sim] Error 2make[5]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8'make[4]: *** [install] Error 2make[4]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users/gdb/gdb-6.8'make[3]: *** [all] Error 2make[3]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users/gdb'make[2]: *** [gdb] Error 2make[2]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users'make[1]: *** [app] Error 2make[1]: Leaving directory `/mnt/hgfs/A/rtl819x-toolchain/users'make: *** [bins] Error 2 What does cannot stat `run' mean ? | cannot stat 'thing' means that something is expecting a file or directory to exist (in this case, likely a directory called 'run') and is trying to perform an operation on it, only to find it is not there. The meaning comes from the stat(1) system call, which reads the metadata of a link (i. e. a file, directory, socket, symbolic link, etc.) on the filesystem. Looking at your error log, install is an executable (i. e. a script or binary) which is trying do access run and it does not exist, causing the error to be thrown. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126743/"
]
} |
349,816 | Is there a way to generate or reinstall some package to get the contents of ~/.ssh/known_hosts file ? | Whenever you connect to an unknown host ssh will prompt you The authenticity of host '...' can't be established.RSA key fingerprint is ...Are you sure you want to continue connecting (yes/no)? and add a new entry to the file known_hosts file. So to regenerate the file connect to your usual hosts and optionally check the fingerprint if you suspect a MITM. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
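If confirming each host interactively is impractical (e.g. when provisioning many machines), ssh-keyscan can pre-populate the file; -H stores hashed hostnames. Verify the collected fingerprints out of band, since this skips the usual trust-on-first-use check:
ssh-keyscan -H host1 host2 >> ~/.ssh/known_hosts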
349,836 | I need to display a number of days to upcoming payment day (let's say it is always at 10th of any month). How do I do that in bash ? | While bash now has date formatting capabilities, it has no date parsing or calculation ones, so you may want to use another shell like ksh93 or zsh or a proper programming language like perl or python . With ksh93 , the difficult part is to find out what date format are supported as it's hardly documented (you can always have a look at the test data though for examples). For instance, it does support crontab-like time specification and then gives you the next time that matches the specification, so you can do: now=$(printf '%(%s)T')next_10th=$(printf '%(%s)T' '* * 10 * *')echo "Pay day is in $(((next_10th - now) / 86400)) days" Now with standard utilities, it's not so difficult to implement: eval "$(date '+day=%d month=%m year=%Y')"day=${day#0} month=${month#0}if [ "$day" -le 10 ]; then delta=$((10 - day))else case $month in [13578]|10|12) D=31;; 2) D=$((28 + (year % 4 == 0 && (year % 100 != 0 || year % 400 == 0))));; *) D=30;; esac delta=$((D - day + 10))fiecho "Pay day is in $delta days" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34098/"
]
} |
349,849 | I want to replace /home with a symlink to my nfs-mounted home dirs. Only root is logged in, /home is not a separate filesystem, lsof shows no locks, selinux is permissive. What am I missing? I'm logged in directly as root via ssh: [root@usil01-sql01 /]# uname -aLinux usil01-sql01 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux[root@usil01-sql01 /]# w 15:30:33 up 1:41, 1 user, load average: 0.00, 0.02, 0.22USER TTY FROM LOGIN@ IDLE JCPU PCPU WHATroot pts/2 10.50.11.114 15:13 1.00s 0.19s 0.01s w[root@usil01-sql01 /]# lsof | grep /home[root@usil01-sql01 /]# lsof +D /home[root@usil01-sql01 /]# df -h /homeFilesystem Size Used Avail Use% Mounted on/dev/sda2 63G 4.1G 56G 7% /[root@usil01-sql01 /]# mount | grep -w //dev/sda2 on / type ext4 (rw,relatime,seclabel,data=ordered)[root@usil01-sql01 /]# ls -lFd /homedrwxr-xr-x. 3 root root 4096 Mar 7 13:36 /home/[root@usil01-sql01 /]# getenforcePermissive[root@usil01-sql01 /]# mv /home /home-oldmv: cannot move "/home" to "/home-old": Device or resource busy What else can I check? More system info: [root@usil01-sql01 /]# lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 836.6G 0 disk |-sda1 8:1 0 768.6G 0 part /storage|-sda2 8:2 0 64G 0 part /`-sda3 8:3 0 4G 0 part [SWAP]sr0 11:0 1 1024M 0 rom [root@usil01-sql01 /]# blkid/dev/sda2: UUID="5ba6a429-4c65-4023-82b4-3673bfcf6a88" TYPE="ext4" /dev/sda3: UUID="b5eb680f-8789-43b2-9f7e-c52570b0eb73" TYPE="swap" /dev/sda1: UUID="cb22d57d-4a5b-4963-a990-890abe0c56dc" TYPE="ext4" | mv: cannot move "/home" to "/home-old": Device or resource busy The only "use"[*] I can think of, which holds the name of a file from changing, is a mount point. What else can I check? I am not certain, but perhaps this could happen if the mount still exists in another mount namespace. Because it's not getting unmounts propagated from the root namespace, for some reason? Or looking at the result on my system, maybe systemd services with ProtectHome ? $ grep -h home /proc/*/task/*/mountinfo | sort -u121 89 0:22 /systemd/inaccessible/dir /home ro,nosuid,nodev shared:142 master:24 - tmpfs tmpfs rw,seclabel,mode=755275 243 253:2 / /home ro,relatime shared:218 master:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered321 288 253:2 / /home rw,relatime shared:262 master:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered84 64 253:2 / /home rw,relatime shared:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered85 46 253:2 / /home rw,relatime master:33 - ext4 /dev/mapper/alan_dell_2016-home rw,seclabel,data=ordered Note this issue - unable to rename /home despite it not showing as a mount point (in the current namespace) - should be fixed in Linux kernel version 3.18+. https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.18.y&id=8ed936b5671bfb33d89bc60bdcc7cf0470ba52fe how to find out namespace of a particular process? lsns might be useful if you can install it. 
More possible commands: List mount namespaces: # readlink /proc/*/task/*/ns/mnt | sort -u Identify root mount namespace: # readlink /proc/1/ns/mnt Find processes with a given mount namespace # readlink /proc/*/task/*/ns/mnt | grep 4026531840 Inspect the namespace of a given process: # cat /proc/1/task/1/mountinfo [*] EBUSY The rename fails because oldpath or newpath is a directory that is in use by some process (perhaps as current working directory, or as root directory, or because it was open for reading) or is in use by the system (for example as mount point) , while the system considers this an error. (Note that there is no require‐ ment to return EBUSY in such cases—there is nothing wrong with doing the rename anyway—but it is allowed to return EBUSY if the system cannot otherwise handle such situations.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/349849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77424/"
]
} |
349,908 | I am working on an Ubuntu 14.04 server that has 48 CPU cores. I can see from sar that there is high CPU usage on one core, so I want to know which processes are running on that core. How can I get all processes running on each CPU core in Ubuntu? | You can do that with ps -eF ; look at the PSR column, which shows the processor each process is currently assigned to (the C column is CPU utilisation, not a core number). A compact variant is: ps -eo pid,psr,pcpu,comm Or use htop , configured to show the PROCESSOR column. To set CPU affinity, you can use the taskset command. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/349908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219721/"
]
} |
349,955 | I have a file which has three columns. Column 3 contains names of genes and it looks like this: Rv0729,Rv0993,Rv1408 Rv0162c,Rv0761c,Rv1862,Rv3086 Rv2790c How can I print the number of genes in each row? | You simply want to add a column with the count of columns in it. This may be done using awk : $ awk -F ',' '{ printf("%d,%s\n", NF, $0) }' data.in3,Rv0729,Rv0993,Rv14084,Rv0162c,Rv0761c,Rv1862,Rv30861,Rv2790c NF is an awk variable containing the number of fields (columns) in the current record (row). We print this number followed by a comma and the rest of the row, for each row. An alternative (same result, but may look a bit cleaner): $ awk -F ',' 'BEGIN { OFS=FS } { print NF, $0 }' data.in FS is the field separator which awk uses to split each record into fields, and we set that to a comma with -F ',' on the command line (as in the first solution). OFS is the output field separator, and we set that to be the same as FS before reading the first line of input. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197749/"
]
} |
349,973 | I have a directory with 777 permissions on it. I want every directory that I have created (or will create) inside it to get the same permissions. | Permissions are not inherited from the parent directory: a new directory's mode is determined by the umask of the process that creates it (the setgid bit on the parent can propagate the group, but not the mode). Two common approaches: set a permissive umask for whoever creates the directories ( umask 000 makes newly created directories 777), or, if your filesystem supports POSIX ACLs, put a default ACL on the parent so that new entries pick up the desired permissions, e.g. setfacl -d -m u::rwx,g::rwx,o::rwx /path/to/dir After that, directories created inside /path/to/dir come out with rwx for owner, group and others. (A sketch; exact ACL behaviour depends on the filesystem and its mount options.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219762/"
]
} |
349,974 | I have an array declare -a arr0=("'1 2 3'" "'4 5 6'") and a variable x=0 Then I create the new variable with the array's name tmp="arr$x" and I'd like to be able to expand arr0 content from this tmp variable like this newArr=( "${!tmp}" ) and to use newArr like the ordinary array, e.g. use indices etc. But when I try print it now, it looks like this: $ echo ${newArr[@]}'1 2 3' Only the first element is stored and I don't know, how to fix it. I've also tried to create newArr like this newArr=( "${!tmp[@]}" ) but then it's even worse - only 0 is printed. $ echo ${newArr[@]}0 So, do you know, how to use an array if its name is stored in some other variable? | It is possible with eval : $ declare -a array=( 1 2 3 4 )$ echo "${array[@]}"1 2 3 4$ p=ay$ tmp=arr$p$ echo "$tmp"array$ echo "\${${tmp}[@]}"${array[@]}$ echo "newarray=(\"\${${tmp}[@]}\")"newarray=("${array[@]}")$ eval "newarray=(\"\${${tmp}[@]}\")"$ echo "${newarray[@]}"1 2 3 4$ Commands starting with echo are for illustration, eval is dangerous. Note that the above doesn't preserve array indices for sparse arrays. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/349974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88699/"
]
} |
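On bash 4.3 or newer, a nameref avoids eval entirely; a sketch of the same indirection:
declare -a arr0=("'1 2 3'" "'4 5 6'")
x=0
declare -n tmp="arr$x"        # tmp now refers to arr0
newArr=( "${tmp[@]}" )
echo "${newArr[@]}"           # '1 2 3' '4 5 6'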
350,016 | I've this find script that finds files in the /home directory of more than 100 MB in size and deletes if it is more than 10 days old. It is scheduled for every day one time by cron job. find /home/ -mtime +10 -type f -size +100M -delete >/dev/null 2>&1 Now, I want this script to remove directories recursively from where it removes files, as it is leaving empty directories. Could anyone advise or suggest what change needs to be done in this script? | On a GNU system, you could do: find /home/ -mtime +10 -type f -size +100M -delete -printf '%h\0' | awk -v RS='\0' '!seen[$0]++ {out = out $0 RS} END {printf "%s", out}' | xargs -r0 rmdir We use awk to filter out duplicate while still keeping the order (leaves before the branch they're on) and also delay the printing until all the files have been removed so rmdir can remove empty directories. With zsh : files=(/home/**/*(D.LM+100m+10od))rm -f $filesrmdir ${(u)files:h} Note that those would remove the directories that become empty after files are removed from them, but not the parent of those directories if they don't have any of those files to delete and become empty as a result of the directories being removed. If you want to remove those as well, with GNU rmdir , you can add the -p / --parents option to rmdir . If you wanted to remove all empty directories regardless of whether files or directories have been removed from them or not, still with GNU find , you could do: find /home/ \( -mtime +10 -type f -size +100M -o -type d -empty \) -delete | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/350016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219814/"
]
} |
350,070 | I have a space in one of the directory names. I want to list a file under it from a bash script. Here is my script: fpath="${HOME}/\"New Folder\"/foobar.txt"echo "fpath=(${fpath})"#fpath="${HOME}/${fname}"ls "${fpath}" The output for this script is: fpath=(/Users/<username>/"New Folder"/foobar.txt)ls: /Users/<username>/"New Folder"/foobar.txt: No such file or directory But when is list the file on my shell it exists: $ ls /Users/<username>/"New Folder"/foobar.txt/Users/<username>/New Folder/foobar.txt Is there a way I can get ls to display the path from my script? | Just remove the inner quoted double-quotes: fpath="${HOME}/New Folder/foobar.txt" Since the complete variable contents are contained in double-quotes, you don't need to do it a second time. The reason it works from the CLI is that Bash evaluates the quotes first. It fails in the variable because the backslash-escaped quotes are treated as a literal part of the directory path. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/350070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209498/"
]
} |
350,085 | I am using find -type f command to recursively find all files from a certain starting directory. However, I would like to have some directories prevented from entering and extracting names of files inside. So basically I am looking for something like: find . -type f ! -name "avoid_this_directory_please" Is there a functioning alternative to this? | This is what the -prune option is for: find . -type d -name 'avoid_this_directory_please' -prune -o \ -type f -print You may interpret the above as "if there's a directory called avoid_this_directory_please , don't enter it, otherwise, if it's a regular file, print its pathname." You may also prune the directory given any other criteria, e.g. its full pathnames from the top-level search path: find . -type d -path './some/dir/avoid_this_directory_please' -prune -o \ -type f -print | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/350085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219860/"
]
} |
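For comparison with the -prune answer above, a plain filter achieves the same file list but is slower, because find still descends into the excluded directory and merely discards the matches:
find . -type f ! -path '*/avoid_this_directory_please/*'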
350,093 | I am trying to parse out just the date from 2017-03-08T19:41:26Z . The desired output is 2017-03-08. | To extract the part before T , with POSIX shells: time=2017-03-08T19:41:26Zutc_date=${time%T*} # as already said Or to be Bourne compatible or for non-POSIX shells: expr "$time" : '\(.*\)T' Now, note that 2017-03-08T19:41:26Z is the zulu time (another name for UTC), an unambiguous specification of a precise instant in time. At that time, the date was 2017-03-08 in London, but 2017-03-09 (in the early morning) in Bangkok. If you wanted to know the local date (as opposed to the UTC date) for that time specification, that is for a Bangkok user to get 2017-03-09 and the London user to get 2017-03-08, there are a few options. With GNU date : time=2017-03-08T19:41:26Zdate -d "$time" +%F (easy as GNU date recognises that zulu format out of the box) Same with ksh93 : printf '%(%F)T\n' "$time" With zsh built-ins: zmodload zsh/datetimeTZ=UTC0 strftime -rs unix_time %Y-%m-%dT%TZ $time &&strftime %Y-%m-%d $unix_time (you can replace %Y-%m-%d with %F on systems like GNU ones where strftime() / strptime() support it). Similar with busybox date : unix_time=$(date -u -D %Y-%m-%dT%TZ -d "$time" +%s)date -d "@$unix_time" +%Y-%m-%d | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/350093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219864/"
]
} |
350,100 | I have a new install of Debian 8 ("Jessie") and want to install the firewalld package, but APT can't find it. This may be related to the fact that I didn't specify a mirror during installation, but as far as I can tell, I've made the appropriate configurations. firewalld is located on the mirror here: http://ftp.us.debian.org/debian/pool/main/f/firewalld/ That mirror is in my /etc/apt/sources.list file: deb http://ftp.us.debian.org/debian/ jessie-updates main contribdeb-src http://ftp.us.debian.org/debian/ jessie-updates main contrib I've updated the available packages: # apt update APT still can't find it: # apt search firewalldSorting... DoneFull Text Search... Done Why can't it find the package? | You need to have the full Jessie repository in your sources.list ; you should have at least deb http://ftp.us.debian.org/debian jessie maindeb http://ftp.us.debian.org/debian jessie-updates maindeb http://security.debian.org jessie/updates main (you can add contrib if you want, and the corresponding deb-src lines if you want to be able to download source code for the packages too). Check your sources.list contains at least those three lines, then run apt updateapt install firewalld The jessie-updates repository only contains updates to Jessie, it doesn’t contain the packages which were part of the original Jessie release and haven't been updated since. See What is the difference between the "jessie" and "jessie-updates" distributions in /etc/apt/sources.list? for more information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/350100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219868/"
]
} |
350,105 | I have some normal x86_64 desktop Linux installed in a single ext4 root partition † on some 500GB HDD. Now if I want to migrate this installation to a 500GB SSD (rest of the system stays the same), do I just clone the disk and run genfstab (I know that from the Arch installation guide, do I even need that?) and done? Or is there more to it? † That is, everything is in that single partition. I do not have a swap partition, but a swap file, and my system can easily do without that too if it should be an issue. | After some research, I found that ext4 is apparently quite usable on SSDs, so I went with the clone approach. Here is what I did, step by step: Install the SSD Boot from a USB and clone the HDD to SSD with dd Change the UUID of the new filesystem . I missed that one at first, which caused funny results as grub and other software got confused Update the fstab on the new filesystem. I used the genfstab script from the Arch USB for that Re-generate initramfs , reinstall and reconfigure grub Move SSD to the top in boot priority, done The above worked for me; however, I am very much a novice admin, so I'm not sure if every step is actually necessary and useful. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/350105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83756/"
]
} |
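A hedged command sketch of steps 2-4 above (sdX is the HDD and sdY the SSD; these are placeholders, triple-check them with lsblk, the SSD must be at least as large as the cloned disk, and tune2fs may ask you to run e2fsck -f first):
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress
tune2fs -U random /dev/sdY1    # give the cloned ext4 filesystem a fresh UUID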
350,130 | Symbolic links do take room, of course, but just the room it takes to store the name and target plus a few bytes for other metadata Do symbolic links actually make a difference in disk usage? My question is, can we determine how many bytes a symlink is taking up? $ touch alfa.txt$ ln -s alfa.txt bravo.txt Both du and ls report 8, which is "alfa.txt": $ du -b bravo.txt8 bravo.txt$ ls -l bravo.txtlrwxrwxrwx 1 Steven None 8 Mar 8 18:17 bravo.txt -> alfa.txt Is some command available that can print the true size of the symlink with the"few bytes for other metadata" included? | Sort of, but note that the size of a file is not well-defined at that level of precision. A symbolic link involves four parts: The name of the link, which is stored in the directory where it is an entry. Other metadata that is present for every directory entry, to locate the rest of the metadata. This is typically the location of an inode. In addition, each directory entry costs a few more bytes, for example to pad file names and to maintain a data structure such as a balanced tree or hash. Other metadata of the symbolic link itself such as timestamps. This metadata is also present for other file types (e.g. an empty regular file). The target of the link. If the filesystem allows symbolic links to have multiple hard links, the first two parts are per directory entry, the last two parts are present only once per symlink. In ext2/ext3/ext4 , the target of a symbolic link is stored in the inode if it's at most 60 bytes long. You can confirm that by asking du : it reports 0 for symlinks whose target is ≤60 bytes and one block for larger targets. Just like for a regular file, the figure reported by du excludes the storage for the directory entry and the inode. If you want to know exactly how much space the symlink takes, you have to count those as well. Most classical filesystems allocate inodes at filesystem creation time, so the cost is split: the size of the directory entry counts against the number of blocks of data, the inode counts against the inode pool size. For the size of the directory entry itself, the exact number of bytes taken up by an entry can depend on what other entries are present in the directory. However a directory usually takes up a whole number of blocks, so if you create more and more entries, the size of the directory remains the same, until the entries no longer fit in one block and a second block is allocated, and so on. To see exactly how the directory entries are stored in the block, you'd need a filesystem debugger and a passable understanding of the filesystem format, or a good to excellent understanding of the filesystem format as well as the knowledge of what other entries are present in the directory and possibly the order in which they were created and other entries were removed. In summary, the “few bytes for other metadata” are: The directory entry, of a variable size. Creating the symbolic link may either make no difference or add one block. The inode. And the target may occupy anywhere from 0 to one block in addition to that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/350130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
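To see the inline storage described above on ext4 (targets of 60 bytes or less live in the inode, so no data block is allocated), compare the symlink's size with its allocated blocks using GNU stat:
$ stat -c '%s bytes, %b blocks allocated' bravo.txt
8 bytes, 0 blocks allocated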
350,152 | Currently I am working with Makefiles that have definitions like MYLIB=/.../mylib-1.2.34 The problem is that these are different for different developers, and it is a pain having to re-edit the file after every checkout. So I tried exporting a specific environment variable, and then doing MYLIBX:=$(MYLIB_ENV)MYLIBX?=MYLIB Trouble is that if MYLIB_ENV is not defined, it still creates an empty MYLIBX, so the ?= does not work. Is there a clean way to do this very basic thing? I am working with a "rich" set of make files developed over many years that do all sorts of things like make and call each other, so changing things deeply is not an option. SOLUTION Double shuffle. MYLIB already defined. MYLIB_ENV?=MYLIBMYLIB:=MYLIB_ENV | The problem with MYLIB:=$(MYLIB_ENV)MYLIB?=/.../mylib-1.2.34 is that MYLIB is always defined in the first line, so the second never applies. The typical approach in this situation is just MYLIB?=/.../mylib-1.2.34 That way individual developers can specify their own value from the shell, either on the make command line make MYLIB=... or in their environment before running make export MYLIB=...make (so they can set it once, e.g. in their shell startup scripts, and forget about it). If you just run make without specifying a value for MYLIB , the default /.../mylib-1.2.34 is used. Another option is to determine where the Makefile is stored , but that doesn't work in all cases (in particular if the path to the Makefile contains spaces). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/350152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168979/"
]
} |
350,192 | I have a custom package foo with a dependency in the control file on a fixed version of another package bar : Depends: bar (= 1.2.3) Both the foo and the bar packages are published in my own repo. Furthermore I have multiple versions of bar in the repo, say, 1.2.3 as well as 2.1.0. Now, when trying to install foo on a new machine using apt-get install foo it fails with The following packages have unmet dependencies: foo : Depends: bar (= 1.2.3) but 2.1.0 is to be installed I.e. apt-get does not appear to correctly figure out the proper versions of packages to use. I tried adding a conflicts: Depends: bar (= 1.2.3)Conflicts: bar (>> 1.2.3) but that only resulted in the error changing to The following packages have unmet dependencies: foo : Depends: bar (= 1.2.3) but it is not going to be installed If I specify the version of bar while installing, that works: apt-get install foo bar=1.2.3 But this is not feasible (the real case has multiple levels of dependencies and I really don't want to have to implement my own dependency resolver in order to find and specify everything manually on the command-line - might as well skip apt in that case). So the question is, is there any way to get apt to behave properly and automatically install the correct versions of the dependencies (without having to explicitly specify those versions on the command line)? And I should add that I also really don't want to have to go the apt_preferences route with version pinning, as that requires managing versions in two separate places. For completeness sake, here's the full output when turning on various apt debugging output: apt-get -o Debug::pkgProblemResolver=1 -o Debug::pkgDepCache::AutoInstall=1 -o Debug::pkgDepCache::Marker=1 install fooReading package lists... DoneBuilding dependency tree Reading state information... Done foo:amd64 Depends on bar [ amd64 ] < none -> 2.1.0 > ( universe/utils ) (= 1.2.3) can't be satisfied!Starting pkgProblemResolver with broken count: 1Starting 2 pkgProblemResolver with broken count: 1Investigating (0) foo [ amd64 ] < none -> 1.0.0 > ( misc )Broken foo:amd64 Depends on bar [ amd64 ] < none -> 2.1.0 > ( universe/utils ) (= 1.2.3) Considering bar:amd64 0 as a solution to foo:amd64 9998 Re-Instated bar:amd64DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: foo : Depends: bar (= 1.2.3) but 2.1.0 is to be installedE: Unable to correct problems, you have held broken packages. | The apt resolver does not consider the possibility that you might want to install something that is not the most recent available version of a package in a given target release; Debian just doesn't support installing anything but the most up to date version of a package for your system. If you're using different repositories for each version of a (set of) package(s), then you can use pinning to prefer a given origin, or give them a different codename and use apt's -t option to select the target release. Otherwise it's just not possible. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/350192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219923/"
]
} |