source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
303,614 | I have folders and files that look like this: /thumbs/6b0/6b029aab9ca00329cd28fd792ecf90a.jpg
/thumbs/6b0/6b029aab9ca00329cd28fd792ecf90a-s.jpg
/thumbs/d11/d11e15a72e20e14c45bd2769d763126d.jpg
/thumbs/d11/d11e15a72e20e14c45bd2769d763126d-s.jpg And I want to apply the following command to the files that do not have -s in their names, in all subdirectories of the thumbs folder. mogrify -resize 50% -quality 85 -strip filename.jpg I have looked around find and grep but couldn't figure out how I can do this. Any help appreciated. | From the 4BSD manual for csh : A ^Z takes effect immediately and is like an interrupt in that pending output and unread input are discarded when it is typed. There is another special key ^Y which does not generate a STOP signal until a program attempts to read(2) it. This can usefully be typed ahead when you have prepared some commands for a job which you wish to stop after it has read them. So, the purpose is to type multiple inputs while the first one is being processed, and have the job stop after they are done. | {
"source": [
"https://unix.stackexchange.com/questions/303614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184846/"
]
} |
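A minimal sketch of the find-based approach the question above asks about (assuming GNU find and ImageMagick's mogrify, and the /thumbs layout shown in the question):
# Resize every .jpg under /thumbs whose name does not end in -s.jpg
find /thumbs -type f -name '*.jpg' ! -name '*-s.jpg' \
    -exec mogrify -resize 50% -quality 85 -strip {} +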
303,669 | If I want to search for a file in the system I use the following command: sudo find `pwd` -name filename.ext I want to make an alias for an easier word like search , so I used the command: alias search "find `pwd` -name " The problem is that the command translates the pwd part to the actual path I'm in now. When I type simply alias to see the list of aliases I see: search find /path/to/my/homedir -name How can I avoid this? | Use single quotes to avoid shell expansion at the time of definition alias search='find `pwd` -name ' | {
"source": [
"https://unix.stackexchange.com/questions/303669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182347/"
]
} |
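A short illustration of the quoting difference in bash (alias names and directories are illustrative):
# Double quotes: `pwd` is expanded once, when the alias is defined
alias search_bad="find `pwd` -name "     # bakes the current directory into the alias
# Single quotes: `pwd` is expanded each time the alias is used
alias search='find `pwd` -name '
cd /var/log
search '*.conf'                          # now searches /var/log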
303,699 | I would like to know if the output of a Red-Hat based linux could be differently interpreted by a Debian based linux. To make the question even more specific, what I am after, is understanding how the "load average" from the first line of the top command on a Red-Hat system is interpreted and how to verify this by official documentation ro code. [There are many ways to approach this subject, all of which are acceptable answers to the question] One potential approach, would be to find where this information is officially documented. Another one, would be to find the code version that top is built from in the specific distribution and version I am working on. The command output I am getting is: top - 13:08:34 up 1:19, 2 users, load average: 0.02, 0.00, 0.00
Tasks: 183 total, 1 running, 182 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 96.8%id, 2.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3922520k total, 788956k used, 3133564k free, 120720k buffers
Swap: 2097148k total, 0k used, 2097148k free, 344216k cached In this case how can I interpret the load average value? I have managed to locate that the average load is about the last minute, from one documentation source, and that it should be interpreted after being multiplied by 100, by another documentation source. So, the question is: Is it 0.02% or 2% loaded? Documentation sources and versions: The first one starts with TOP(1) Linux User’s Manual TOP(1)
NAME
top - display Linux tasks Source: man top in my RedHat distribution Ubuntu also has the version with "tasks" that does not explain the load average in: http://manpages.ubuntu.com/manpages/precise/man1/top.1.html The second one starts with TOP(1) User Commands TOP(1)
NAME top
top - display Linux processes Source: http://man7.org/linux/man-pages/man1/top.1.htm This one starts with: TOP(1)
NAME
top - display and update information about the top cpu processes Source: http://www.unixtop.org/man.shtml The first one, can be seen by man top in RHEL or in online ubuntu documentation and it does not have any explanation for the output format (nor about the load average in which I am interested in). The second one, contains a brief explanation, pointing out that the load average has to do with the last 1 minute, but nothing about the interpretation of its value! I quote directly from the second source: 2a. UPTIME and LOAD Averages This portion consists of a single line containing: program or window name, depending on display mode current time and length of time since last boot total number of users system load avg over the last 1, 5 and 15 minutes So, if this explanation is indeed correct, it is just enough to understand that the load average is about the last 1 minute. But it does not explain the format of the number. In the third explanation, it says that: When specifying numbers for load averages, they should be multiplied by 100. This explanation suggests that 0.02 means 2% and not 0.02%. But is this correct? Additionally, is it correct for all distributions of linux and potentially different implementations of top ? To find the answer to this question, I tried to go through the code by searching it online. But I found, at least, two different version of top related to RHEL out there! the builtin-top.c and the refactored top.c . Both copyrighted by Red-Hat as the notice says in the beginning of the code and thus seems logical that RHEL uses one of these. http://lxr.free-electrons.com/source/tools/perf/builtin-top.c http://lxr.free-electrons.com/source/tools/perf/util/top.c So, before delving into that much code, I wanted an opinion about where to focus to form an accurate understanding on how cpu load is interpreted? From information given in the answers below, in addition to some personal search, I have found that: The top that I am using is contained in the package procps-3.2.8. Which can be verified by using top -v . In the version of procps-3.2.8 that I have downloaded from the official website it seems that the tool uptime get its information from the procfs file /proc/loadavg directly (not utilizing the linux function getloadavg() ). Now for the top command it also does not use the function getloadavg() . I managed to verify that the top does indeed the same things as the uptime tool to show the load averages. It actually calls the uptime tool's function, which gets its information from the procfs file /proc/loadavg . So, everything points to the /proc/loadavg file! Thus, to form an accurate understanding of the load average produced by top , one must read the kernel code to see how the file loadavg is written. There is also an excellent article pointed out in one of the answers that provides a layman's terms explanation of the three values of loadavg . So, despite the fact that all answers have been equally useful and helpful, I am going to mark the one that pointed to the article http://www.linuxjournal.com//article/9001 as "the" answer to my question.Thank you all for your contribution! Additionally from the question Understanding top and load average , I have found a link to the source code of the kernel that points to the spot where loadavg is calculated. As it seems there is a huge comment explaining the way it works, also this part of the code is in C ! 
The link to the code is http://lxr.free-electrons.com/source/kernel/sched/loadavg.c Again I am not trying to engage in any form of plagiarism, I am just adding this for completeness. So, I am repeating that the link to the kernel code was found from one of the answers in Understanding top and load average . | The CPU load is the length of the run queue, i.e. the length of the queue of processes waiting to be run. The uptime command may be used to see the average length of the run queue over the last minute, the last five minutes, and the last 15 minutes, just like what's usually displayed by top . A high load value means the run queue is long. A low value means that it is short. So, if the one minute load average is 0.05, it means that on average during that minute, there was 0.05 processes waiting to run in the run queue. It is not a percentage. This is, AFAIK, the same on all Unices (although some Unices may not count processes waiting for I/O, which I think Linux does; OpenBSD, for a while only, also counted kernel threads, so that the load was always 1 or more). The Linux top utility gets the load values from the kernel, which writes them to /proc/loadavg . Looking at the sources for procps-3.2.8 , we see that: To display the load averages, the sprint_uptime() function is called in top.c . This function lives in proc/whattime.c and calls loadavg() in proc/sysinfo.c . That function simply opens LOADAVG_FILE to read the load averages. LOADAVG_FILE is defined earlier as "/proc/loadavg" . | {
"source": [
"https://unix.stackexchange.com/questions/303699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180044/"
]
} |
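A quick way to look at the raw values that top and uptime ultimately read (the numbers shown are illustrative):
$ cat /proc/loadavg
0.02 0.00 0.00 1/183 12345
# fields: 1-, 5- and 15-minute load averages, runnable/total tasks, most recently created PID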
303,754 | I am using CentOS 7. I installed okular, which is a PDF viewer, with the command: sudo yum install okular As you can see in the picture below, it installed 37 dependent packages to install okular. But I wasn't satisfied with the features of the application and I decided to remove it. The problem is that if I remove it with the command: sudo yum autoremove okular It only removes four dependent packages. And if I remove it with the command: sudo yum remove okular It removes only one package which is okular.x86_64. Now, my question is that is there a way to remove all 37 installed packages with a command or do I have to remove all of them one by one? | Personally, I don't like yum plugins because they don't work a lot of the time, in my experience. You can use the yum history command to view your yum history. [root@testbox ~]# yum history
Loaded plugins: product-id, rhnplugin, search-disabled-repos, subscription-manager, verify, versionlock
ID | Login user | Date and time | Action(s) | Altered
----------------------------------------------------------------------------------
19 | Jason <jason> | 2016-06-28 09:16 | Install | 10 You can find info about the transaction by doing yum history info <transaction id> . So: yum history info 19 would tell you all the packages that were installed with transaction 19 and the command line that was used to install the packages. If you want to undo transaction 19, you would run yum history undo 19 . Alternatively, if you just wanted to undo the last transaction you did (you installed a software package and didn't like it), you could just do yum history undo last | {
"source": [
"https://unix.stackexchange.com/questions/303754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184941/"
]
} |
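A condensed sketch of the workflow described in the answer above (transaction ID 19 comes from the example output and will differ on your system):
yum history                # find the ID of the transaction that installed okular
yum history info 19        # confirm it lists okular and the 37 dependencies
yum history undo 19        # remove everything that transaction installed
yum history undo last      # or simply undo the most recent transaction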
303,771 | I have granted a group permission to run certain commands with no password via sudo. When one of the users makes a typo or runs the wrong command the system prompts them for their password and then they get an error. This is confusing for the user so I'd like to just display an error instead of prompting them for a password. Is this possible? Here is an example of my sudoers file: %mygroup ALL=(ALL) NOPASSWD:/usr/local/bin/myscript.sh * Example when they run the wrong script: # sudo /usr/local/bin/otherscript.sh
[sudo] password for user:
Sorry, user user is not allowed to execute '/usr/local/bin/otherscript.sh' as root on <hostname>. Desired output: Sorry, user user is not allowed to execute '/usr/local/bin/otherscript.sh' as root on <hostname>. Please check the command and try again. Note the lack of password prompt. My google-fu has failed me and only returns results on not asking for a password when the user is permitted to run the command. | From a quick read of sudo(8) -n The -n (non-interactive) option prevents sudo from
prompting the user for a password. If a password is
required for the command to run, sudo will display an error
message and exit. And for the doubters: # grep jdoe /etc/sudoers
jdoe ALL=(ALL) NOPASSWD: /bin/echo
# Tested thusly: % sudo echo allowed
allowed
% sudo -n ed
sudo: a password is required
% sudo ed
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
Password: So an alias for sudo for these folks would likely do the trick, to prevent the password prompt. Now why this requires custom compiling sudo , I don't know, I just read the manual. | {
"source": [
"https://unix.stackexchange.com/questions/303771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184982/"
]
} |
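A minimal sketch of the alias idea mentioned at the end of the answer, placed in the restricted users' shell startup file (e.g. ~/.bashrc):
alias sudo='sudo -n'
# A disallowed command now fails immediately instead of prompting:
#   $ sudo /usr/local/bin/otherscript.sh
#   sudo: a password is required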
303,881 | I'm on Ubuntu 15.04 and today I've been reading an article about Linux security from this link. Everything went well until the part about the UID 0 account: Only root should have the UID 0. Another account with that UID is
often synonymous to backdoor. When running the command they gave me, I found out there was another root account. Just after that I disabled the account as the article does, but I'm sort of afraid of this account; I can find it in /etc/passwd rootk:x:0:500::/:/bin/false And in /etc/shadow rootk:!$6$loVamV9N$TorjQ2i4UATqZs0WUneMGRCDFGgrRA8OoJqoO3CCLzbeQm5eLx.VaJHeVXUgAV7E5hgvDTM4BAe7XonW6xmup1:16795:0:99999:7::1: I tried to delete this account using userdel rootk but got this error ; userdel: user rootk is currently used by process 1 The process 1 is systemd. Could anyone give me some advice please? Should I userdel -f ? Is this account a normal root account? | Processes and files are actually owned by user ID numbers, not user names. rootk and root have the same UID, so everything owned by one is also owned by the other. Based on your description, it sounds like userdel saw every root process (UID 0) as belonging to the rootk user. According to this man page , userdel has an option -f to force removal of the account even if it has active processes. And userdel would probably just delete rootk 's passwd entry and home directory, without affecting the actual root account. To be safer, I might be inclined to hand-edit the password file to remove the entry for rootk , then hand-remove rootk 's home directory. You may have a command on your system named vipw , which lets you safely edit /etc/passwd in a text editor. | {
"source": [
"https://unix.stackexchange.com/questions/303881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185068/"
]
} |
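A minimal sketch of the safer hand-editing route the answer suggests (vipw comes with the standard shadow/passwd utilities on most distributions):
vipw       # opens /etc/passwd in your editor with proper locking; delete the rootk line
vipw -s    # does the same for /etc/shadow, where the rootk password hash lives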
303,960 | How can I refer to a string by index in sh/bash? That is, basically splitting it. I am trying to strip 5 characters of a file name. All names have the structure: name_nr_code. I am trying to remove the 5 alphanumeric code bit. name_nr_ is always 10 characters. Is there a thing like; for i in * ; do mv "$i" "$i"[:10] ; done | Simple as this. (bash) for i in * ; do mv -- "$i" "${i:0:5}" ; done Voila. And an explanation from Advanced Bash-Scripting Guide ( Chapter 10. Manipulating Variables ) , (with extra NOTE s inline to highlight the errors in that manual): Substring Extraction ${string:position} Extracts substring from $string at $position . If the $string parameter is "*" or "@", then this extracts the positional parameters, starting at $position . ${string:position:length} Extracts $length characters of substring from $string at $position . NOTE missing quotes around parameter expansions! echo should not be used for arbitrary data. stringZ=abcABC123ABCabc
# 0123456789.....
# 0-based indexing.
echo ${stringZ:0} # abcABC123ABCabc
echo ${stringZ:1} # bcABC123ABCabc
echo ${stringZ:7} # 23ABCabc
echo ${stringZ:7:3} # 23A
# Three characters of substring.
# Is it possible to index from the right end of the string?
echo ${stringZ:-4} # abcABC123ABCabc
# Defaults to full string, as in ${parameter:-default}.
# However . . .
echo ${stringZ:(-4)} # Cabc
echo ${stringZ: -4} # Cabc
# Now, it works.
# Parentheses or added space "escape" the position parameter. The position and length arguments can be "parameterized," that is,
represented as a variable, rather than as a numerical constant. If the $string parameter is "*" or "@", then this extracts a maximum
of $length positional parameters, starting at $position . echo ${*:2} # Echoes second and following positional parameters.
echo ${@:2} # Same as above.
echo ${*:2:3} # Echoes three positional parameters, starting at second. NOTE : expr substr is a GNU extension. expr substr $string $position $length Extracts $length characters from $string starting at $position . stringZ=abcABC123ABCabc
# 123456789......
# 1-based indexing.
echo `expr substr $stringZ 1 2` # ab
echo `expr substr $stringZ 4 3` # ABC NOTE : That echo is redundant and makes it even less reliable. Use expr substr + "$string1" 1 2 . NOTE : expr will return with a non-zero exit status if the output is 0 (or -0, 00...). BTW. The book is present in the official Ubuntu repository as abs-guide . | {
"source": [
"https://unix.stackexchange.com/questions/303960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63603/"
]
} |
303,961 | I have a file sample_init.ora There are 2 lines in the file. db_create_file_dest="DATAG"
control_files=('DATAG/DBNAME/controlfile/control01.ctl','DATAG/DBNAME/controlfile/control02.ctl') Now, I would like to change only the 2nd line from DATAG to DATAC2. Is there a way? I tried many SED/AWK forums but left with no options. A SED based reply would be HIGHLY regarded. | Simple as this. (bash) for i in * ; do mv -- "$i" "${i:0:5}" ; done Voila. And an explanation from Advanced Bash-Scripting Guide ( Chapter 10. Manipulating Variables ) , (with extra NOTE s inline to highlight the errors in that manual): Substring Extraction ${string:position} Extracts substring from $string at $position . If the $string parameter is "*" or "@", then this extracts the positional parameters, starting at $position . ${string:position:length} Extracts $length characters of substring from $string at $position . NOTE missing quotes around parameter expansions! echo should not be used for arbitrary data. stringZ=abcABC123ABCabc
# 0123456789.....
# 0-based indexing.
echo ${stringZ:0} # abcABC123ABCabc
echo ${stringZ:1} # bcABC123ABCabc
echo ${stringZ:7} # 23ABCabc
echo ${stringZ:7:3} # 23A
# Three characters of substring.
# Is it possible to index from the right end of the string?
echo ${stringZ:-4} # abcABC123ABCabc
# Defaults to full string, as in ${parameter:-default}.
# However . . .
echo ${stringZ:(-4)} # Cabc
echo ${stringZ: -4} # Cabc
# Now, it works.
# Parentheses or added space "escape" the position parameter. The position and length arguments can be "parameterized," that is,
represented as a variable, rather than as a numerical constant. If the $string parameter is "*" or "@", then this extracts a maximum
of $length positional parameters, starting at $position . echo ${*:2} # Echoes second and following positional parameters.
echo ${@:2} # Same as above.
echo ${*:2:3} # Echoes three positional parameters, starting at second. NOTE : expr substr is a GNU extension. expr substr $string $position $length Extracts $length characters from $string starting at $position . stringZ=abcABC123ABCabc
# 123456789......
# 1-based indexing.
echo `expr substr $stringZ 1 2` # ab
echo `expr substr $stringZ 4 3` # ABC NOTE : That echo is redundant and makes it even less reliable. Use expr substr + "$string1" 1 2 . NOTE : expr will return with a non-zero exit status if the output is 0 (or -0, 00...). BTW. The book is present in the official Ubuntu repository as abs-guide . | {
"source": [
"https://unix.stackexchange.com/questions/303961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163418/"
]
} |
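Since the question above explicitly asks for a sed-based answer, here is a hedged sketch (GNU sed shown; the g flag matters because DATAG appears twice on line 2):
sed '2s/DATAG/DATAC2/g' sample_init.ora      # preview: substitute on line 2 only
sed -i '2s/DATAG/DATAC2/g' sample_init.ora   # apply the change in place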
304,005 | Two setuid programs, /usr/bin/bar and /usr/bin/baz , share a single configuration file foo . The configuration file's mode is 0640 , for it holds sensitive information. The one program runs as bar:bar (that is, as user bar, group bar ); the other as baz:baz . Changing users is not an option, and even changing groups would not be preferable. I wish to hard link the single configuration file as /etc/bar/foo and /etc/baz/foo . However, this fails because the file must, as far as I know, belong either to root:bar or to root:baz . Potential solution: Create a new group barbaz whose members are bar and baz . Let foo belong to root:barbaz . That looks like a pretty heavy-handed solution to me. Is there no neater, simpler way to share the configuration file foo between the two programs? For now, I am maintaining two, identical copies of the file. This works, but is obviously wrong. What would be right? For information: I have little experience with Unix groups and none with setgid(2). | You can use ACLs so the file can be read by people in both groups. chgrp bar file
chmod 640 file
setfacl -m g:baz:r-- file Now both bar and baz groups can read the file. For example, here's a file owned by bin:bin with mode 640. $ ls -l foo
-rw-r-----+ 1 bin bin 5 Aug 17 12:19 foo The + means there's an ACL set, so let's take a look at it. $ getfacl foo
# file: foo
# owner: bin
# group: bin
user::rw-
group::r--
group:sweh:r--
mask::r--
other::--- We can see the line group:sweh:r-- : that means people in the group sweh can read it. Hey, that's me! $ id
uid=500(sweh) gid=500(sweh) groups=500(sweh) And yes, I can read the file. $ cat foo
data | {
"source": [
"https://unix.stackexchange.com/questions/304005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18202/"
]
} |
304,050 | I recently installed dnsmasq to act as DNS Server for my local network. dnsmasq listens on port 53 which is already in use by the local DNS stub listener from systemd-resolved . Just stopping systemd-resolved and then restart it after dnsmasq is running solves this issue. But it returns after a reboot: systemd-resolved is started with preference and dnsmasq will not start because port 53 is already in use. The first obvious question, I guess, is how do I best make systemd-resolved understand that it should not start the local DNS stub listener and thus keep port 53 for use by dnsmasq? A more interesting question, however, is how the two services are generally meant to work together. Are they even meant to work side by side or is systemd-resolved just in the way if one's using dnsmasq? | As of systemd 232 (released in 2017) you can edit /etc/systemd/resolved.conf (not /etc/resolv.conf ) and add this line: DNSStubListener=no This will switch off binding to port 53. The option is described in more details in the resolved.conf manpage . You can find the systemd version your system is running with: systemctl --version | {
"source": [
"https://unix.stackexchange.com/questions/304050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134509/"
]
} |
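A minimal sketch of the change expressed as commands (the dnsmasq unit name is an assumption about how your distribution packages it):
# In /etc/systemd/resolved.conf, under the [Resolve] section, add:
#   DNSStubListener=no
sudo systemctl restart systemd-resolved   # releases port 53
sudo systemctl enable --now dnsmasq       # dnsmasq can now bind to port 53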
304,661 | Sometimes I have strange trouble booting my computer (which runs Debian). So I issued the "dmesg" command. In its output I saw a lot of errors. However, when I run an extended SMART test on the hard disks (using the "smartctl -t long /dev/sda" command), the result is that my disks are not broken. What can be the reason for those errors? Here are the errors: (...)
[ 505.918537] ata3.00: exception Emask 0x50 SAct 0x400 SErr 0x280900 action 0x6 frozen
[ 505.918549] ata3.00: irq_stat 0x08000000, interface fatal error
[ 505.918558] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }
[ 505.918566] ata3.00: failed command: READ FPDMA QUEUED
[ 505.918579] ata3.00: cmd 60/40:50:20:5b:60/00:00:0b:00:00/40 tag 10 ncq 32768 in
res 40/00:54:20:5b:60/00:00:0b:00:00/40 Emask 0x50 (ATA bus error)
[ 505.918586] ata3.00: status: { DRDY }
[ 505.918595] ata3: hard resetting link
[ 506.410055] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 506.422648] ata3.00: configured for UDMA/133
[ 506.422679] ata3: EH complete
[ 1633.123880] md: bind<sdb3>
[ 1633.187966] RAID1 conf printout:
[ 1633.187977] --- wd:1 rd:2
[ 1633.187984] disk 0, wo:0, o:1, dev:sda3
[ 1633.187989] disk 1, wo:1, o:1, dev:sdb3
[ 1633.188866] md: recovery of RAID array md0
[ 1633.188871] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 1633.188875] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 1633.188890] md: using 128k window, over a total of 1943618560k.
[ 1634.167341] ata3.00: exception Emask 0x50 SAct 0x7f80 SErr 0x280900 action 0x6 frozen
[ 1634.167353] ata3.00: irq_stat 0x08000000, interface fatal error
[ 1634.167361] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }
[ 1634.167369] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167382] ata3.00: cmd 60/00:38:00:00:6f/02:00:01:00:00/40 tag 7 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167389] ata3.00: status: { DRDY }
[ 1634.167395] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167407] ata3.00: cmd 60/00:40:00:02:6f/02:00:01:00:00/40 tag 8 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167413] ata3.00: status: { DRDY }
[ 1634.167418] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167429] ata3.00: cmd 60/00:48:00:04:6f/02:00:01:00:00/40 tag 9 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167435] ata3.00: status: { DRDY }
[ 1634.167439] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167451] ata3.00: cmd 60/00:50:00:06:6f/02:00:01:00:00/40 tag 10 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167457] ata3.00: status: { DRDY }
[ 1634.167462] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167473] ata3.00: cmd 60/00:58:00:08:6f/02:00:01:00:00/40 tag 11 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167479] ata3.00: status: { DRDY }
[ 1634.167484] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167495] ata3.00: cmd 60/00:60:00:0a:6f/02:00:01:00:00/40 tag 12 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167500] ata3.00: status: { DRDY }
[ 1634.167505] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167516] ata3.00: cmd 60/80:68:00:0c:6f/00:00:01:00:00/40 tag 13 ncq 65536 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167522] ata3.00: status: { DRDY }
[ 1634.167527] ata3.00: failed command: READ FPDMA QUEUED
[ 1634.167538] ata3.00: cmd 60/00:70:80:0c:6f/02:00:01:00:00/40 tag 14 ncq 262144 in
res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error)
[ 1634.167544] ata3.00: status: { DRDY }
[ 1634.167553] ata3: hard resetting link
[ 1634.658816] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1634.672645] ata3.00: configured for UDMA/133
[ 1634.672696] ata3: EH complete
[ 1637.687898] ata3.00: exception Emask 0x50 SAct 0x3ff000 SErr 0x280900 action 0x6 frozen
[ 1637.687910] ata3.00: irq_stat 0x08000000, interface fatal error
[ 1637.687918] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }
[ 1637.687926] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.687940] ata3.00: cmd 60/00:60:80:a7:af/02:00:02:00:00/40 tag 12 ncq 262144 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.687947] ata3.00: status: { DRDY }
[ 1637.687953] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.687965] ata3.00: cmd 60/00:68:80:a9:af/02:00:02:00:00/40 tag 13 ncq 262144 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.687971] ata3.00: status: { DRDY }
[ 1637.687976] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.687987] ata3.00: cmd 60/80:70:80:ab:af/01:00:02:00:00/40 tag 14 ncq 196608 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.687993] ata3.00: status: { DRDY }
[ 1637.687998] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688009] ata3.00: cmd 60/00:78:00:ad:af/02:00:02:00:00/40 tag 15 ncq 262144 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688015] ata3.00: status: { DRDY }
[ 1637.688020] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688031] ata3.00: cmd 60/80:80:00:af:af/00:00:02:00:00/40 tag 16 ncq 65536 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688037] ata3.00: status: { DRDY }
[ 1637.688042] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688053] ata3.00: cmd 60/00:88:80:af:af/01:00:02:00:00/40 tag 17 ncq 131072 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688059] ata3.00: status: { DRDY }
[ 1637.688064] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688075] ata3.00: cmd 60/80:90:80:b0:af/00:00:02:00:00/40 tag 18 ncq 65536 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688081] ata3.00: status: { DRDY }
[ 1637.688085] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688096] ata3.00: cmd 60/00:98:00:b1:af/02:00:02:00:00/40 tag 19 ncq 262144 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688102] ata3.00: status: { DRDY }
[ 1637.688107] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688118] ata3.00: cmd 60/00:a0:00:b3:af/01:00:02:00:00/40 tag 20 ncq 131072 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688124] ata3.00: status: { DRDY }
[ 1637.688129] ata3.00: failed command: READ FPDMA QUEUED
[ 1637.688140] ata3.00: cmd 60/00:a8:00:b4:af/01:00:02:00:00/40 tag 21 ncq 131072 in
res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error)
[ 1637.688146] ata3.00: status: { DRDY }
[ 1637.688154] ata3: hard resetting link
[ 1638.179398] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1638.192977] ata3.00: configured for UDMA/133
[ 1638.193029] ata3: EH complete
[ 1640.259492] md: export_rdev(sdb1)
[ 1640.326109] md: bind<sdb1>
[ 1640.346712] RAID1 conf printout:
[ 1640.346724] --- wd:1 rd:2
[ 1640.346731] disk 0, wo:0, o:1, dev:sda1
[ 1640.346736] disk 1, wo:1, o:1, dev:sdb1
[ 1640.346893] md: delaying recovery of md1 until md0 has finished (they share one or more physical units)
[ 1657.987964] ata3.00: exception Emask 0x50 SAct 0x40000 SErr 0x280900 action 0x6 frozen
[ 1657.987975] ata3.00: irq_stat 0x08000000, interface fatal error
[ 1657.987984] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }
[ 1657.987992] ata3.00: failed command: READ FPDMA QUEUED
[ 1657.988006] ata3.00: cmd 60/00:90:00:30:2e/03:00:09:00:00/40 tag 18 ncq 393216 in
res 40/00:94:00:30:2e/00:00:09:00:00/40 Emask 0x50 (ATA bus error)
[ 1657.988013] ata3.00: status: { DRDY }
[ 1657.988022] ata3: hard resetting link
[ 1658.479548] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1658.493107] ata3.00: configured for UDMA/133
[ 1658.493147] ata3: EH complete
[ 1670.547791] ata3: limiting SATA link speed to 1.5 Gbps
[ 1670.547805] ata3.00: exception Emask 0x50 SAct 0x7f SErr 0x280900 action 0x6 frozen
[ 1670.547812] ata3.00: irq_stat 0x08000000, interface fatal error
[ 1670.547820] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }
[ 1670.547826] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547839] ata3.00: cmd 60/80:00:00:1f:2e/01:00:0c:00:00/40 tag 0 ncq 196608 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547846] ata3.00: status: { DRDY }
[ 1670.547852] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547863] ata3.00: cmd 60/80:08:80:20:2e/00:00:0c:00:00/40 tag 1 ncq 65536 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547869] ata3.00: status: { DRDY }
[ 1670.547875] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547886] ata3.00: cmd 60/00:10:00:21:2e/02:00:0c:00:00/40 tag 2 ncq 262144 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547892] ata3.00: status: { DRDY }
[ 1670.547896] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547907] ata3.00: cmd 60/00:18:00:23:2e/02:00:0c:00:00/40 tag 3 ncq 262144 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547913] ata3.00: status: { DRDY }
[ 1670.547918] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547929] ata3.00: cmd 60/00:20:00:25:2e/01:00:0c:00:00/40 tag 4 ncq 131072 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547935] ata3.00: status: { DRDY }
[ 1670.547940] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547951] ata3.00: cmd 60/00:28:00:26:2e/02:00:0c:00:00/40 tag 5 ncq 262144 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547957] ata3.00: status: { DRDY }
[ 1670.547961] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547972] ata3.00: cmd 60/00:30:00:28:2e/02:00:0c:00:00/40 tag 6 ncq 262144 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547978] ata3.00: status: { DRDY }
[ 1670.547987] ata3: hard resetting link
[ 1671.039264] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[ 1671.053386] ata3.00: configured for UDMA/133
[ 1671.053444] ata3: EH complete
[ 2422.512002] md: md0: recovery done.
[ 2422.547344] md: recovery of RAID array md1
[ 2422.547355] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 2422.547360] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 2422.547378] md: using 128k window, over a total of 4877312k.
[ 2422.668465] RAID1 conf printout:
[ 2422.668474] --- wd:2 rd:2
[ 2422.668480] disk 0, wo:0, o:1, dev:sda3
[ 2422.668486] disk 1, wo:0, o:1, dev:sdb3
[ 2469.990451] md: md1: recovery done.
[ 2470.049986] RAID1 conf printout:
[ 2470.049997] --- wd:2 rd:2
[ 2470.050003] disk 0, wo:0, o:1, dev:sda1
[ 2470.050009] disk 1, wo:0, o:1, dev:sdb1
[ 3304.445149] PM: Hibernation mode set to 'platform'
[ 3304.782375] PM: Syncing filesystems ... done.
[ 3307.028591] Freezing user space processes ... (elapsed 0.001 seconds) done.
(...) | First, keep in mind that SMART saying that your drive is healthy doesn't necessarily mean that the drive is healthy. SMART reports are an aid , not an absolute truth. If all you are interested in is what to do, rather than why, then feel free to scroll down to the last few paragraphs; however, the interim text will tell you why I think what I propose is the correct course of action, and how to derive that from what you posted. With that said, let's look at what one of those errors are telling us. [ 1670.547805] ata3.00: exception Emask 0x50 SAct 0x7f SErr 0x280900 action 0x6 frozen
[ 1670.547812] ata3.00: irq_stat 0x08000000, interface fatal error
[ 1670.547820] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }
[ 1670.547826] ata3.00: failed command: READ FPDMA QUEUED
[ 1670.547839] ata3.00: cmd 60/80:00:00:1f:2e/01:00:0c:00:00/40 tag 0 ncq 196608 in
res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)
[ 1670.547846] ata3.00: status: { DRDY }
[ 1670.547852] ata3.00: failed command: READ FPDMA QUEUED (I hope I got the parts that should go together, but you were getting a bundle of those so it should be okay either way.) The Linux ata Wiki has a page explaining how to read these errors . Particularly, A status value of DRDY means "Device ready. Normally 1, when all is OK." Seeing a status value of DRDY is perfectly normal and expected. SError has multiple component values, of which you are seeing (in this particular snippet): UnrecovData "Data integrity error occurred, interface did not recover" HostInt "Host bus adapter internal error" 10B8B "10b to 8b decoding error occurred" BadCRC "Link layer CRC error occurred" 10b8b coding, which encodes 8 bits as 10 bits to aid with both signal synchronization and error detection, is used on the physical cabling, not necessarily on the drive itself. The drive most likely uses other forms of FEC or ECC coding, and an error there would normally show up as some form of I/O error, likely with an error value of UNC ("uncorrectable error - often due to bad sectors on the disk"), likely with "media error" ("software detected a media error") in parenthesis at the end of the res line. This latter is not what you are seeing, so while we can't completely rule it out, it seems unlikely. The "link layer" is the physical cables and circuit board traces between the drive's own controller, and the disk drive interface chip (likely part of the southbridge on your computer's motherboard, but could be located at an offboard HBA). A host bus adapter, also known as a HBA, is the circuitry that connects to storage equipment. Also colloquially known as a "disk controller", a term which is a bit of a misnomer with modern systems. The most visible part of the HBA is generally the connection ports, most often these days either SATA or some SAS form factor. The UnrecovData and HostInt flags basically tell us that "something just went horribly wrong, and there was no way to recover or no attempt at recovery was made". The opposite would likely be RecovData , which indicates that a "data integrity error occurred, but the interface recovered". (As an aside, I probably would have used HBAInt instead of HostInt , as the "host" refers to the HBA, not the whole system.) The combination of 10B8B and BadCRC , which both point to the physical link layer, makes me suspect a cabling issue. This suspicion is also supported by the fact that the SMART self-tests, which are completely internal to the drive except for status reporting, are finding no errors that the manufacturer feels are serious enough to warrant reporting in the results. If the drive was having problems storing or reading data, the long SMART self-test in particular should have reported that. TL;DR: The first thing I would do is thus simply to unplug and re-plug the SATA cable at both ends; it may be slightly loose, causing it to lose electrical contact intermittently. See if that resolves the problem. It might even be worth doing this to all SATA cabling in your computer, not just the affected disk. If you are using an off-board HBA, I would also remove and re-seat that card, mainly because it's an easy thing to try while you are already messing around with the cabling. Failing that, try throwing away and replacing the SATA cable, preferably with a high-quality cable. A high-quality cable will be slightly more expensive, but I find that it's usually well worth the small extra expense if it helps avoid headaches like this. 
Nobody likes seeing their storage spewing errors! | {
"source": [
"https://unix.stackexchange.com/questions/304661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22838/"
]
} |
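One cheap check that complements the answer above: link-layer problems caused by cabling usually show up as a growing CRC counter in the drive's SMART attributes rather than in self-test results (attribute layout varies by vendor; the raw value shown is illustrative):
$ sudo smartctl -A /dev/sda | grep -i crc
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       37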
305,055 | I have a directory on an nfs mount, which on the server is at /home/myname/.rubies Root cannot access this directory: [mitchell.usher@server ~]$ stat /home/mitchell.usher/.rubies
File: `/home/mitchell.usher/.rubies'
Size: 4096 Blocks: 8 IO Block: 32768 directory
Device: 15h/21d Inode: 245910 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 970/mitchell.usher) Gid: ( 100/ users)
Access: 2016-08-22 15:06:15.000000000 +0000
Modify: 2016-08-22 14:55:00.000000000 +0000
Change: 2016-08-22 14:55:00.000000000 +0000
[mitchell.usher@server ~]$ sudo !!
sudo stat /home/mitchell.usher/.rubies
stat: cannot stat `/home/mitchell.usher/.rubies': Permission denied I am attempting to copy something from within that directory to /opt which only root has access to: [mitchell.usher@server ~]$ cp .rubies/ruby-2.1.3/ -r /opt
cp: cannot create directory `/opt/ruby-2.1.3': Permission denied
[mitchell.usher@server ~]$ sudo !!
sudo cp .rubies/ruby-2.1.3/ -r /opt
cp: cannot stat `.rubies/ruby-2.1.3/': Permission denied Obviously I can do the following (and is what I've done for the time being): [mitchell.usher@server ~]$ cp -r .rubies/ruby-2.1.3/ /tmp/
[mitchell.usher@server ~]$ sudo cp -r /tmp/ruby-2.1.3/ /opt/ Is there any way to do this that wouldn't involve copying it as an intermediary step or changing permissions? | You can use tar as a buffer process cd .rubies
tar cf - ruby-2.1.3 | ( cd /opt && sudo tar xvfp - ) The first tar runs as you and so can read your home directory; the second tar runs under sudo and so can write to /opt . | {
"source": [
"https://unix.stackexchange.com/questions/305055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73386/"
]
} |
305,057 | I would like to update and install some software on a Red Hat machine but have no subscription and don't plan on getting one. To get Wine I'm following this tutorial . After doing yum groupinstall 'Development Tools' I get: Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Warning: group Development Tools does not exist.
Maybe run: yum groups mark install (see man yum)
No packages in any requested group available to install or update | You can use tar as a buffer process cd .rubies
tar cf - ruby-2.1.3 | ( cd /opt && sudo tar xvfp - ) The first tar runs as you and so can read your home directory; the second tar runs under sudo and so can write to /opt . | {
"source": [
"https://unix.stackexchange.com/questions/305057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185966/"
]
} |
305,358 | In the Advanced Bash-Scripting Guide , in example 27-4 , seventh line from the bottom, I've read this: A function runs as a sub-process. I did a test in Bash, and it seems that the above statement is wrong. Searches on this site, the Bash man page, and my search engine don't shed any light on it. Do you have the answer and would like to explain? | The Advanced Bash-Scripting Guide is not always reliable and its example scripts contain outdated practices such as using the effectively deprecated backticks for command substitution, i.e., `command` rather than $(command) . In this particular case, it's blatantly incorrect. The section on Shell Functions in the (canonical) Bash manual definitively states that Shell functions are executed in the current shell context; no new process is created to interpret them. | {
"source": [
"https://unix.stackexchange.com/questions/305358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
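A small experiment that backs this up (bash; $BASHPID reports the PID of the shell process currently executing):
f() { echo "inside function:  $BASHPID"; }
echo "interactive shell: $BASHPID"
f                                        # prints the same PID — no new process
( echo "explicit subshell: $BASHPID" )   # a subshell, by contrast, gets its own PID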
305,549 | Let’s say you started a new application in Linux (like a text editor, etc.), but you forgot to use the “&”. What command(s) would you use in order to make that process run in the background, while NOT having to CLOSE that application? In this way, you can have both processes open and working separately (e.g., the command-line terminal you used to create the process, and the process itself, such as a text editor, still running)? | In the terminal window you would typically type Control + Z to "suspend" the process and then use the bg command to "background" it. E.g. with a sleep command: $ /bin/sleep 1000
^Z[1] + Stopped /bin/sleep 1000
$ bg
[1] /bin/sleep 1000&
$ jobs
[1] + Running /bin/sleep 1000
$ We can see the process is running and I still have my command line. | {
"source": [
"https://unix.stackexchange.com/questions/305549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186383/"
]
} |
305,606 | When I run the command vmstat -s on my Linux box, I get stats like these: $ vmstat -s
16305800 total memory
16217112 used memory
9117400 active memory
6689116 inactive memory
88688 free memory
151280 buffer memory I have skipped some details that is shown with this command. I understand these terms: Active memory is memory that is being used by a particular process. Inactive memory is memory that was allocated to a process that is no longer running. Just want to know, is there any way I can get the processes, with which inactive memory is allocated? Because top or vmstat command still shows the used memory as sum of active and inactive memory and I can see only processes that are using active memory but what processes are using inactive memory is still a question for me. | There are cases where looking at inactive memory is interesting, a high ratio of active to inactive memory can indicate memory pressure for example, but that condition is usually accompanied by paging/swapping which is easier to understand and observe. Another case is being able to observe a ramping up or saw-tooth for active memory over time – this can give you some forewarning of inefficient software (I've seen this with naïve software implementations exhibiting O(n) type behavior and performance degradation). The file /proc/kpageflags contains a 64-bit bitmap for every physical memory page, you can get a summary with the program page-types which may come with your kernel. Your understanding of active and inactive is incorrect however active memory are pages which have been accessed "recently" inactive memory are pages which have not been accessed "recently" "recently" is not an absolute measure of time, but depends also on activity
and memory pressure (you can read some of the technical details in the free book Understanding the Linux Virtual Memory Manager , Chapter 10 is relevant here), or the kernel documentation ( pagemap.txt ). Each list is stored as an LRU (more or less). Inactive memory pages are good candidates for writing to the swapfile, either pre-emptively (before free memory pages are required) or when free memory drops below a configured limit and free pages are (expected to be imminently) needed. Either flag applies to pages allocated to running processes, with the exception of persistent or shared memory all memory is freed when a process exits, it would be considered a bug otherwise. This low level page flagging doesn't need to know the PID (and a memory page can have more than one PID with it mapped in any case), so the information required to provide the data you request isn't in one place. To do this on a per-process basis you need to extract the virtual address ranges from /prod/PID/maps , convert to PFN (physical page) with /proc/PID/pagemap , and index into /proc/kpageflags . It's all described in pagemap.txt , and takes about 60-80 lines of C. Unless you are troubleshooting the VM system, the numbers aren't very interesting. One thing you could do is count the inactive and swap-backed pages per-process, these numbers should indicate processes which have a low RSS (resident) size compared with VSZ (total VM size). Another thing might be to infer a memory leak, but there are better tools for that task. | {
"source": [
"https://unix.stackexchange.com/questions/305606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186413/"
]
} |
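Two quick, non-invasive ways to poke at this without writing the C program sketched in the answer (page-types ships with the kernel source under tools/vm and may need to be built; output formats vary by kernel version):
# System-wide summary of page flags, including active/inactive counts
sudo page-types | head
# Rough per-process proxy: resident vs. virtual size, sorted by RSS
ps -eo pid,comm,rss,vsz --sort=-rss | head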
305,735 | Using aptitude I can make a search like: aptitude search '~i bash' This seems to be an aptitude-specific regex. Is it possible to do the same thing using apt or apt-cache without additional commands? apt search '~i bash' is not working. | You can try: apt list --installed bash This will try to list the installed packages with the name bash. However, if you want to search for a particular file, use apt-file. The following command will list all the packages that have the string bash within their name: apt list -a --installed bash As suggested by @Exostor, apt list -a --installed bash does not always list those packages that start with a particular string; instead use: apt list -a --installed bash* If globbing is what you're searching for, please upvote @Exostor's comment below. | {
"source": [
"https://unix.stackexchange.com/questions/305735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102020/"
]
} |
305,927 | I'm quite new to Bash scripting. I have a "testscript", which I used as the basis for a more advanced / useful script: #!/bin/bash
files=$1
for a in $files
do
echo "$a"
done When I call this without any quotes it just picks up one file in a directory: testscript *.txt But when I call it with quotes it works correctly and picks out all the text files: testscript '*.txt' What is going on here? | When you call a program testscript *.txt then your shell does the expansion and works out all the values. So it might, effectively call your program as testscript file1.txt file2.txt file3.txt file4.txt Now your program only looks at $1 and so only works on file1.txt . By quoting on the command line you are passing the literal string *.txt to the script, and that is what is stored in $1 . Your for loop then expands it. Normally you would use "$@" and not $1 in scripts like this. This is a "gotcha" for people coming from CMD scripting, where the command shell doesn't do globbing (as it's known) and always passes the literal string. | {
"source": [
"https://unix.stackexchange.com/questions/305927",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186571/"
]
} |
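A corrected version of the test script along the lines the answer suggests, so every filename the shell expands is processed:
#!/bin/bash
# Iterate over all arguments, not just the first one
for a in "$@"
do
    echo "$a"
done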
305,949 | Ok, so I have 1 server with pfSense and many virtual servers. I'm using Nginx upstream functionality to run multiple web servers on the same public IP. Of course I need to know the REAL users' IP, not the Nginx proxy's, which is 192.168.2.2, but after switching to pfSense (I recently had a simple consumer router) the web servers can't see the real users' IP. I have tried to change various settings in System / Advanced / Firewall & NAT like:
NAT Reflection mode for port forwards
Enable automatic outbound NAT for Reflection Also in Firewall / NAT / Outbound tried every mode, nothing helped still every user have IP of my Proxy server. So how to disable masquarading, or how to pass real client IP. Update Ok, so it seams problem is with subdomains not domains. Situation now: If client go to domain.com - everything is fine backend server can see real clinet IP If client go to subdomain.domain.com - backend server see proxy server IP All domains A records points to external IP, then pfSense forward 80 port to proxy, then proxy depending on domain forward to corresponding internal server. I have 2 physical servers, 1 - pfSense router and another with virtualbox running many VM's in this example 4 VM's Another one interesting thing, when i try to reach troublesome subdomain.domain1.com from inside local network I get this: Again, no problems with domain1.com and domain2.com and so on... | When you call a program testscript *.txt then your shell does the expansion and works out all the values. So it might, effectively call your program as testscript file1.txt file2.txt file3.txt file4.txt Now your program only looks at $1 and so only works on file1.txt . By quoting on the command line you are passing the literal string *.txt to the script, and that is what is stored in $1 . Your for loop then expands it. Normally you would use "$@" and not $1 in scripts like this. This is a "gotcha" for people coming from CMD scripting, where the command shell doesn't do globbing (as it's known) and always passes the literal string. | {
"source": [
"https://unix.stackexchange.com/questions/305949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186141/"
]
} |
306,111 | I am a little bit confused on what do these operators do differently when used in bash (brackets, double brackets, parenthesis and double parenthesis). [[ , [ , ( , (( I have seen people use them on if statements like this : if [[condition]]
if [condition]
if ((condition))
if (condition) | In Bourne-like shells, an if statement typically looks like if
command-list1
then
command-list2
else
command-list3
fi The then clause is executed if the exit code of the command-list1 list of commands is zero. If the exit code is nonzero, then the else clause is executed. command-list1 can be
simple or complex. It can, for example, be a sequence of one or more pipelines separated by one of the operators ; , & , && , || or newline. The if conditions shown below are just special cases of command-list1 : if [ condition ] [ is another name for the traditional test command. [ / test is a standard POSIX utility. All POSIX shells have it builtin (though that's not required by POSIX²). The test command sets an exit code and the if statement acts accordingly. Typical tests are whether a file exists or one number is equal to another. if [[ condition ]] This is a new upgraded variation on test ¹ from ksh that bash , zsh , yash , busybox sh also support. This [[ ... ]] construct also sets an exit code and the if statement acts accordingly. Among its extended features, it can test whether a string matches a wildcard pattern (not in busybox sh ). if ((condition)) Another ksh extension that bash and zsh also support. This performs arithmetic. As the result of the arithmetic, an exit code is set and the if statement acts accordingly. It returns an exit code of zero (true) if the result of the arithmetic calculation is nonzero. Like [[...]] , this form is not POSIX and therefore not portable. if (command) This runs command in a subshell. When command completes, it sets an exit code and the if statement acts accordingly. A typical reason for using a subshell like this is to limit side-effects of command if command required variable assignments or other changes to the shell's environment. Such changes do not remain after the subshell completes. if command command is executed and the if statement acts according to its exit code. ¹ though not really a command but a special shell construct with its own separate syntax from that of normal command, and varying significantly between shell implementations ² POSIX does require that there be a standalone test and [ utilities on the system however, though in the case of [ , several Linux distributions have been known to be missing it. | {
"source": [
"https://unix.stackexchange.com/questions/306111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186194/"
]
} |
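A small side-by-side sketch of the four forms in bash (the file name and number are arbitrary examples):
f=/etc/passwd
n=3
if [ -e "$f" ]; then echo "test/[ : file exists"; fi
if [[ $f == /etc/* ]]; then echo "[[ : pattern matched"; fi
if (( n > 2 )); then echo "(( : arithmetic result is non-zero"; fi
if (grep -q root "$f"); then echo "( ) : command succeeded in a subshell"; fi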
306,189 | Is there a reason why most man pages don't include a few common examples?
They usually explain all the possible options, but that makes it even harder for a beginner to understand how it's "usually" used. | That depends on the man pages... Traditionally, they have included a section with examples - but for some reason that is usually missing from the man pages under Linux (and I assume other using GNU commands - which are most these days). On for example Solaris on the other hand, almost every man page include the Example section, often with several examples. If I were to guess, FSF/GNU has for a long time discouraged use of man pages and prefer users to use info for documentation instead. info pages tend to be more comprehensive than man pages, and usually do include examples. info pages are also more "topical" - i.e. related commands (eg. commands for finding files) can often be found together. Another reason may be that GNU and its man pages are used on many different operating systems which may differ from each other (there are after all lots of differences just between different Linux distros). The intention may have been that the publisher added examples relevant to the particular OS/distro - which obviously is rarely done. I would also add that man pages were never intended to "teach beginners". UNIX was developed by computer experts (old term "hackers") and intended to be used by computer experts. The man pages were thus not made to teach a novice, but to quickly assist a computer expert who needed a reminder for some obscure option or strange file format - and this is reflected in how a man page is sectioned. man -pages are thus intended as A quick reference to refresh your memory; showing you how the command should be called, and listing available options. A deep and thorough - and usually very technical - description of all aspects of the command. It's written by computer experts, for fellow computer experts. List of environment variables and files (i.e. config files) used by the command. Reference to other documentation (eg. books), and other man pages - eg. for the format of config files and related/similar commands. That said, I very much agree with you that man pages ought to have examples, since they can explain the usage better than wading through the man page itself. Too bad examples generally aren't available on Linux man pages... Sample of the Example part of a Solaris man page - zfs(1M): (...)
EXAMPLES
Example 1 Creating a ZFS File System Hierarchy
The following commands create a filesystem named pool/home
and a filesystem named pool/home/bob. The mount point
/export/home is set for the parent filesystem, and is
automatically inherited by the child filesystem.
# zfs create pool/home
# zfs set mountpoint=/export/home pool/home
# zfs create pool/home/bob
Example 2 Creating a ZFS Snapshot
The following command creates a snapshot named yesterday.
This snapshot is mounted on demand in the .zfs/snapshot
directory at the root of the pool/home/bob file system.
# zfs snapshot pool/home/bob@yesterday
Example 3 Creating and Destroying Multiple Snapshots
The following command creates snapshots named yesterday of
pool/home and all of its descendent file systems. Each
snapshot is mounted on demand in the .zfs/snapshot directory
at the root of its file system. The second command destroys
the newly created snapshots.
# zfs snapshot -r pool/home@yesterday
# zfs destroy -r pool/home@yesterday
Example 4 Disabling and Enabling File System Compression
The following command disables the compression property for
(...) This particular man page comes with 16(!) such examples... Kudos to Solaris! (And I'll admit I myself have mostly followed these examples, instead of reading the whole man page for this command...) | {
"source": [
"https://unix.stackexchange.com/questions/306189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186922/"
]
} |
306,293 | I keep reading/hearing that /etc is for system configuration files. Can someone explain/give me some intuition for why these scripts that start/stop/restart various programs are usually in /etc instead of /var or /usr or something similar? | Early on (both historically, and in the process of booting...), /etc is a part of / (the first mounted filesystem), while /usr was not (until disks got large). /var holds temporary data, while these scripts are not temporary. It's not that simple, but it started that way and there's little reason to rework the entire directory layout. | {
"source": [
"https://unix.stackexchange.com/questions/306293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162496/"
]
} |
306,438 | Say a file called abc exists in the current directory and it has some text in it. When you execute the command: cat abc > abc Why do the contents of the file abc disappear? Why does the command delete the text in it so that the file becomes an empty file? | Because of the order in which things are done. When you do: cat abc > abc the > is the output redirection operator; when the shell sees this, it opens the file in truncation mode using the O_TRUNC flag with open(2) , i.e. open("abc", O_TRUNC) , so whatever was there in the file will be gone. Note that this redirection is done first by the shell, before the cat command runs. So when the command cat abc executes, the file abc is already truncated, hence cat will find the file empty. (A short demonstration follows this entry.) | {
"source": [
"https://unix.stackexchange.com/questions/306438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187128/"
]
} |
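A quick demonstration of the truncation order described in the answer above, plus a safe alternative; the file names are arbitrary:
printf 'some text\n' > abc
wc -c abc          # 10 abc
cat abc > abc      # the shell truncates abc before cat ever runs
wc -c abc          # 0 abc
cat abc > abc.tmp && mv abc.tmp abc   # the safe pattern: write to a temporary file first, then replace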
306,673 | Run a job in the background $ command & When it's done, the terminal prints [n]+ command or [n]- command So sometimes it's a plus and other times it's a minus following [n] . What does plus/minus mean? | They distinguish the current job from the previous one: with two or more jobs, + marks the most recent (current) job and - marks the second most recent (previous) one. From man bash : The previous job may be referenced using %- . If there is only a
single job, %+ and %- can both be used to refer to that job. In
output pertaining to jobs (e.g., the output of the jobs command), the
current job is always flagged with a + , and the previous job with a - . Example: $ sleep 5 &
[1] 21795
$ sleep 5 &
[2] 21796
$ sleep 5 &
[3] 21797
$ sleep 5 &
[4] 21798
$ jobs
[1] Running sleep 5 &
[2] Running sleep 5 &
[3]- Running sleep 5 &
[4]+ Running sleep 5 &
$
[1] Done sleep 5
[2] Done sleep 5
[3]- Done sleep 5
[4]+ Done sleep 5 | {
"source": [
"https://unix.stackexchange.com/questions/306673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
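To add to the answer above, the same + and - markers double as job specifications, so the current and previous jobs can be addressed directly:
fg %+      # resume the current (most recent) background job in the foreground
kill %-    # send SIGTERM to the previous background job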
307,046 | Is there any tool available to sync files between two or more Linux servers immediately after a file is written to disk? The rsync command does not suit me here, because if I set rsync in cron, the minimum interval I can set is 1 minute, but I need it on a real-time basis. | Haven't used it myself but read about it recently. There is a daemon called lsyncd , which I presume does exactly what you need. Read more about it in the lsyncd documentation; a minimal configuration sketch follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/307046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184773/"
]
} |
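For the lsyncd suggestion above, configuration is a small Lua file (commonly /etc/lsyncd/lsyncd.conf.lua). The sketch below is untested, and the paths, hostname and key names are assumptions to be checked against the lsyncd documentation for the installed version:
settings {
    logfile = "/var/log/lsyncd.log",
}
sync {
    default.rsyncssh,                  -- replicate changes over rsync+ssh as they happen
    source = "/srv/data",              -- local directory to watch (placeholder)
    host = "server2.example.com",      -- destination host (placeholder)
    targetdir = "/srv/data",           -- destination directory (placeholder)
}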
307,167 | I would like to know, given a binary's name, which package I should install on Alpine Linux. How can I do that? | You have three ways basically. First: The package should be installed and you need to specify the full path : apk info --who-owns /path/to/the/file Second: Use the pkgs.alpinelinux.org website Third: Use the api.alpinelinux.org API by filtering the json output.
For this you need a JSON parser like jq: apk add jq then use the API following its documentation. UPDATE on 2022-04-07 I've released a tiny utility that allows searching via the CLI for what can be found on the pkgs.alpinelinux.org website: https://github.com/fcolista/apkfile .: Francesco | {
"source": [
"https://unix.stackexchange.com/questions/307167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42478/"
]
} |
307,497 | Is it possible to stop my laptop going to sleep when I close the lid? GNOME 3.20, Fedora 24. My laptop does not reliably wake from sleep. (It happens to be a hardware issue... I think I basically killed it while trying to replace a wifi card. But I want to keep using it for a while longer). | Install GNOME Tweak Tool and go to the Power section. There's an option to disable the automatic suspend on lid close. Option details I compared dconf before and after to find the option, but it turns out that's not how it's implemented. Instead, Tweak Tool creates ~/.config/autostart/ignore-lid-switch-tweak.desktop . The autostart is a script which effectively runs systemd-inhibit --what=handle-lid-switch . So we can see the lid close action is handled purely by systemd-logind. Alternative route An alternative would be to edit /etc/systemd/logind.conf to include: HandleLidSwitch=ignore This would work all the time, not just when your user is logged in. | {
"source": [
"https://unix.stackexchange.com/questions/307497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
307,580 | So when ever I use the up arrow it does not show the history of last command but does ^[[A and down ^[[B
I don't know what this is called, but also I have $ , though running su did not have it. Using Ubuntu Server 16.04.1 | History is not present in all shells. You need to start a shell with history support, like bash . To do so, just type the name of the shell, like bash , or the full path of the executable, like /bin/bash | {
"source": [
"https://unix.stackexchange.com/questions/307580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187309/"
]
} |
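Following on from the answer above, to make this permanent instead of typing bash after every login, change the login shell, assuming bash is installed at /bin/bash (check with: command -v bash):
chsh -s /bin/bash    # takes effect at the next login
echo "$SHELL"        # shows the configured login shell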
307,994 | I would like to compute the bcrypt hash of my password. Is there an open source command line tool that would do that ? I would use this hash in the Syncthing configuration file (even if I know from here that I can reset the password by editing the config file to remove the user and password in the gui section, then restart Syncthing). | You can (ab)use htpasswd from apache-utils package, provided you have version 2.4 or higher. htpasswd -bnBC 10 "" password | tr -d ':\n' -b takes the password from the second command argument -n prints the hash to stdout instead of writing it to a file -B instructs to use bcrypt -C 10 sets the bcrypt cost to 10 The bare htpasswd command outputs in format <name>:<hash> followed by two newlines. Hence the empty string for name and tr stripping the colon and newlines. The command outputs bcrypt with $2y$ prefix, which may be problem for some uses, but can easily be fixed by another sed since the OpenBSD variant using $2a$ is compatible with the fixed crypt_blowfish variant using $2y$ . htpasswd -bnBC 10 "" password | tr -d ':\n' | sed 's/$2y/$2a/' Link to htpasswd man page: https://httpd.apache.org/docs/2.4/programs/htpasswd.html Details about bcrypt variants: https://stackoverflow.com/a/36225192/6732096 | {
"source": [
"https://unix.stackexchange.com/questions/307994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166985/"
]
} |
308,207 | I am confused about the meaning of the exit code in the end of a bash script:
I know that exit code 0 means that it finished successfully, and that there are many more exit codes numbers (127 if I'm not mistaken?) My question is about when seeing exit code 0 at the end of a script, does it force the exit code as 0 even if the script failed or does it have another meaning? | The builtin command exit exits the shell (from Bash's reference ): exit [n] Exit the shell, returning a status of n to the shell’s
parent. If n is omitted, the exit status is that of the last command
executed. Any trap on EXIT is executed before the shell terminates. Running to the end of file also exits, returning the return code of the last command, so yes, a final exit 0 will make the script exit with successful status regardless of the exit status of the previous commands. (That is, assuming the script reaches the final exit .) At the end of a script you could also use true or : to get an exit code of zero. Of course more often you'd use exit from inside an if to end the script in the middle. These should print a 1 ( $? contains the exit code returned by the previous command): sh -c "false" ; echo $?
sh -c "false; exit" ; echo $? While this should print a 0: sh -c "false; exit 0" ; echo $? I'm not sure if the concept of the script "failing" when executing an exit makes sense, as it's quite possible to some commands ran by the script to fail, but the script itself to succeed. It's up to the author of the script to decide what is a success and what isn't. Also, the standard range for exit codes is 0..255. Codes above 127 are used by the shell to indicate a process terminated by a signal, but they can be returned in the usual way. The wait system call actually returns a wider value, with the rest containing status bits set by the operating system. | {
"source": [
"https://unix.stackexchange.com/questions/308207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181400/"
]
} |
308,260 | I'm trying to understand what this Docker entrypoint does . It seems to me that's a very common pattern when writing Dockerfiles, but my bash skills are limited and I have no idea of all the special bash symbols kung fu. Also, it's hard to google for "--", "$!" etc. What are these called in bash world? To summarize, what is the line bellow trying to do? if [ "${1#-}" != "$1" ]; then
set -- haproxy "$@"
fi | The set command (when not setting options) sets the positional parameters
eg $ set a b c
$ echo $1
a
$ echo $2
b
$ echo $3
c The -- is the standard "don't treat anything following this as an option". The "$@" are all the existing positional parameters. So the sequence set -- haproxy "$@" will put the word haproxy in front of $1 , $2 etc., e.g.
a,b,c
$ set -- haproxy "$@"
$ echo $1,$2,$3,$4
haproxy,a,b,c | {
"source": [
"https://unix.stackexchange.com/questions/308260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109547/"
]
} |
308,269 | While I probably need some kind of monitoring tool like mon or sysstat or something. I am looking for a way to know which tasks take the most of my memory,CPU time etc. While I understand that each workstation/desktop PC is unique, a typical workload on one of my desktops is something like this : Single user (even though the choice is there to have multiple users) games - Aisleriot, kshisen torrent client - qbittorrent mail client - thunderbird messaging clients - empathy, telegram and quasselcore and client. Browser - Firefox and sometimes tor desktop - MATE media player - mpv most of the time it's usually a light workload most of the time but I still see the hdd sensor lighting up which means some background tasks is going intently even though no foreground tasks are happening. While I could use top to find what tasks take most of the CPU and memory cycles, it is only for the moment. I realize I need something which I could figure out over period of time (say a day), runs in the background and produces nice enough graphs to analyze, and most of all has the raw data in user-defined location, say in /home/shirish/mon or whatever directory name is there. It is ok if it is /var/log//logs is where it keeps. I just need to know few things : Which processes take memory and CPU over time, foreground and background. Which background processes take most of the CPU and memory The logging is tunable, taking snaps every 2-5 minutes. I am sure there are tools and ways in which people have done it for servers etc. but has anybody done for the above scenario ? If yes, how they went about it ? | The set command (when not setting options) sets the positional parameters
eg $ set a b c
$ echo $1
a
$ echo $2
b
$ echo $3
c The -- is the standard "don't treat anything following this as an option" The "$@" are all the existing position paramters. So the sequence set -- haproxy "$@" Will put the word haproxy in front of $1 $2 etc. eg $ echo $1,$2,$3
a,b,c
$ set -- haproxy "$@"
$ echo $1,$2,$3,$4
haproxy,a,b,c | {
"source": [
"https://unix.stackexchange.com/questions/308269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
308,311 | I created my own service for jekyll and when I start the service it seems like it doesn't run as a background process because I am forced to ctrl + c out of it. It just stays in the foreground because of the --watch. I am not sure how to go around it and make it so that it runs in the background. Any thoughts? # /etc/systemd/system/jekyll-blog.service
[Unit]
Description=Start blog jekyll
[Service]
Type=forking
WorkingDirectory=/home/blog
ExecStart=/usr/local/bin/jekyll build --watch --incremental -s /home/blog -d /var/www/html/blog &
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
User=root
Group=root
[Install]
WantedBy=multi-user.target | Systemd is able to handle various different service types specifically one of the following simple - A long-running process that does not background its self and stays attached to the shell. forking - A typical daemon that forks itself detaching it from the process that ran it, effectively backgrounding itself. oneshot - A short-lived process that is expected to exit. dbus - Like simple, but notification of processes startup finishing is sent over dbus. notify - Like simple, but notification of processes startup finishing is sent over inotify. idle - Like simple, but the binary is started after the job has been dispatched. In your case you have picked Type=forking which means systemd is waiting for the process to fork itself and for the parent process to end, which it takes as an indication that the process has started successfully. However, your process is not doing this - it remains in the foreground and so systemctl start will hang indefinitely or until the processes crashes. Instead, you want Type=simple , which is the default so you can remove the line entirely to get the same effect. In this mode systemd does not wait for the processes to finish starting up (as it has no way of know when this has happened) and so continues executing and dependent services straight away. In your case there are none so this does not matter. A small note on security: You are running the service as root, this is discouraged as it is less secure than running it as an unprivileged user. The reason for this is that if there is a vulnerability in jekyll that somehow allows execution of commands (possibly via the code it is parsing) then the attacker needs to do nothing else to completely own your system. If, on the other hand, it is run as a non-privileged user, the attacker is only able to do as much damage as that user and must now attempt to gain root privileges to completely own your system. It simply adds an extra layer attackers must go though. You can simply run it as the same user that is running your web server, but this leaves you open to another potential attack. If there is a vulnerability in your web server that allows the user to manipulate files on your system they can modify the generated html files, or worst the source files and cause your server to serve anything they want. However, if the generated files and source files are only readable by the webserver and writable be another non-privileged user they will not be able to, as easily, modify them by attacking the web server. However, if you are simply serving static files from this server and keep the server up to date these attacks are very very unlikely - but still possible. It is your responsibility to weigh the risks vs the overhead of setting it up based on how critical your system is but both of these tips are very simple to set up and next to no maintenance overhead. | {
"source": [
"https://unix.stackexchange.com/questions/308311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188594/"
]
} |
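Putting the answer above into practice, a revised version of the unit file from the question might look like the following. Type=simple matches jekyll staying in the foreground; the blog user and group are assumptions, standing in for any unprivileged account that owns the site files:
[Unit]
Description=Build blog with jekyll and watch for changes

[Service]
# jekyll --watch does not fork, so let systemd supervise it directly
Type=simple
User=blog
Group=blog
WorkingDirectory=/home/blog
ExecStart=/usr/local/bin/jekyll build --watch --incremental -s /home/blog -d /var/www/html/blog
Restart=on-failure

[Install]
WantedBy=multi-user.target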
308,476 | Suppose somebody downloaded a Linux distro, like Ubuntu. Suppose further modify one piece of it, say the Window Manager. Would it be perfectly legal for them to sell copies of this slightly modified version of Ubuntu (let's call it Mubuntu = Modified Ubuntu)? What if they made the new window manager portion closed source? Would it still be legal to sell? | Would it be perfectly legal for them to sell copies of this slightly modified version of Ubuntu (let's call it Mubuntu = Modified Ubuntu)? No. While the software licenses may allow you to do this, the trademark license does not: Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu. If you need us to approve, certify or provide modified versions for redistribution you will require a licence agreement from Canonical, for which you may be required to pay. For further information, please contact us (as set out below). and You will require Canonical’s permission to use: (i) any mark ending with the letters UBUNTU or BUNTU which is sufficiently similar to the Trademarks or any other confusingly similar mark, and (ii) any Trademark in a domain name or URL or for merchandising purposes. You would be allowed to sell an unmodified version of Ubuntu, you would be allowed to sell a heavily modified version of Ubuntu that no longer mentions the Ubuntu name, but for this slightly modified version of Ubuntu, you need an agreement with Canonical. | {
"source": [
"https://unix.stackexchange.com/questions/308476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86396/"
]
} |
308,722 | I'm looking for a portable way to obtain parent block device name (e.g. /dev/sda ) given the partition device name (e.g. /dev/sda1 ). I know I could just drop the last character, but that wouldn't work in some cases: MMC card readers typically have names like /dev/mmcblk0 , while their partitions have names like /dev/mmcblk0p1 (notice the extra p ). optional: some block devices don't have any partition table at all and are formatted as a single partition. In this case, partition device and parent block device are the same. LVM volumes are a whole different kettle of fish. I don't need to support them right now, but if taking them into account requires little extra effort, I wouldn't mind. | If you're on linux you could use lsblk (which is part of util-linux ): lsblk -no pkname /dev/sda1 | {
"source": [
"https://unix.stackexchange.com/questions/308722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106927/"
]
} |
308,731 | When using dnf and yum on rpm based Linux distros (RHEL/Red Hat, Fedora, CentOS, etc) the utility will automatically wrap lines to make it more friendly for the user to read. This is problematic as it makes it extremely annoying to work with the data through pipelining. For example: $ dnf search jenkins-ssh-credentials-plugin-javadoc
Last metadata expiration check: 6 days, 15:30:08 ago on Thu Sep 1 21:09:10 2016.
============= N/S Matched: jenkins-ssh-credentials-plugin-javadoc =============
jenkins-ssh-credentials-plugin-javadoc.noarch : Javadoc for jenkins-ssh-credentials-plugin
$ dnf search jenkins-ssh-credentials-plugin-javadoc | grep ssh
====== N/S Matched: jenkins-ssh-credentials-plugin-javadoc =======
jenkins-ssh-credentials-plugin-javadoc.noarch : Javadoc for
: jenkins-ssh-credentials-plugin You can see that once the output for DNF is put through grep it decides to wrap the data in a completely different way then when normally displayed to the user. Multiple issues have been filed about this behavior ( #584525 , #986740 ) and consistently the issues are closed as CLOSED NOTABUG because "Yum is an interactive text-based ui which is not suited, nor intended for piping.". The solution as per the Red Hat developers is to "use a different tool for the job." It seems unreasonable to have to do this, especially when the methods supplied (install repoquery for example) don't even exist within the dnf utilities and require installing a dozen more packages just to parse the output of this data. Ideally a user would be able to just use the data in pipelining. In lieu of that, it would be nice to have a simple one-liner which could be used to make the data usable. | If you're on linux you could use lsblk (which is part of util-linux ): lsblk -no pkname /dev/sda1 | {
"source": [
"https://unix.stackexchange.com/questions/308731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14189/"
]
} |
308,735 | In Linux , is it possible to use a local serial port? Something similar to this: ssh user@localhost I tried this on Raspbian but it doesn't work (it should place in my shell but it doesn't): microcom -d /dev/ttyAMA0 I also tried /dev/ttyS0 but to no avail. I can of course access Raspberry Pi through serial console from another machine. There is no specific use-case for this question - I just cannot understand how really serial works. If it's possible to connect to the localhost with ssh shouldn't it be also possible with serial port? | If you're on linux you could use lsblk (which is part of util-linux ): lsblk -no pkname /dev/sda1 | {
"source": [
"https://unix.stackexchange.com/questions/308735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20604/"
]
} |
308,741 | I'm looking for something that could copy a paragraph, change the user and insert it in same file. file before: user1
this is only
a test of
a lovely idea
user2
this user shhould
be copied
user3
who has an
idea for
my problem file after ( user2 was searched,copied and inserted as user4 ): user1
this is only
a test of
a lovely idea
user2
this user shhould
be copied
user3
who has an
idea for
my problem
user4
this user shhould
be copied | If you're on linux you could use lsblk (which is part of util-linux ): lsblk -no pkname /dev/sda1 | {
"source": [
"https://unix.stackexchange.com/questions/308741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189003/"
]
} |
308,810 | I want to copying multiple files from remote machine using rsync . So I use the following command. rsync -Pav -e 'ssh -i sshkey' user@remotemachine:/home/user/file1.zip file2.zip file3.zip . It shows following error Unexpected local arg:file2.zip If arg is a remote file/dir, prefix it
with a colon (:). rsync error: syntax or usage error (code 1) at
main.c(1362) [Receiver=3.1.0] | All remote files should be one argument for rsync. So, just put all remote files in single quotes: rsync -Pav -e 'ssh -i sshkey' 'user@remotemachine:/home/user/file1.zip file2.zip file3.zip' . BTW, you can also do this with a Asterisk (the Asterisk will be resolved by the remote shell then): rsync -Pav -e 'ssh -i sshkey' 'user@remotemachine:/home/user/*.zip' . | {
"source": [
"https://unix.stackexchange.com/questions/308810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172333/"
]
} |
308,846 | I work on a cluster shared with other colleagues. The hard disk is limited (and has been full on some occasions), so I clean up my part occasionally. I want to do this quickly, so until now I do this by making a list of files larger than 100 MB older than 3 months, and I see if I still need them. But now I am thinking that there could be a folder with >1000 smaller files that I miss, so I want to get an easy way to see if this is the case. From the way I generate data, it would help to get a list of total size per extension. In the context of this question, 'extension' as everything behind the last dot in the filename. Suppose I have multiple folders with multiple files: folder1/file1.bmp 40 kiB
folder1/file2.jpg 20 kiB
folder2/file3.bmp 30 kiB
folder2/file4.jpg 8 kiB Is it possible to make a list of total filesize per file extension, so like this: bmp 70 kiB
jpg 28 kiB I don't care about files without extension, so they can be ignored or put in one category. I already went through man pages of ls , du and find , but I don't know what is the right tool for this job... | On a GNU system: LC_ALL=C find . -name '?*.*' -type f -printf '%b.%f\0' |
LC_ALL=C gawk -F . -v RS='\0' '
{s[$NF] += $1; n[$NF]++}
END {
PROCINFO["sorted_in"] = "@val_num_asc"
for (e in s) printf "%15d %4d %s\n", s[e]*512, n[e], e
}' Or the same with perl , avoiding the -printf extension of GNU find (still using a GNU extension, -print0 , but this one is more widely supported nowadays): LC_ALL=C find . -name '?*.*' -type f -print0 |
perl -0ne '
if (@s = lstat$_){
($ext = $_) =~ s/.*\.//s;
$s{$ext} += $s[12];
$n{$ext}++;
}
END {
for (sort{$s{$a} <=> $s{$b}} keys %s) {
printf "%15d %4d %s\n", $s{$_}<<9, $n{$_}, $_;
}
}' It gives an output like: 12288 1 pnm
16384 4 gif
204800 2 ico
1040384 17 jpg
2752512 83 png If you want KiB , MiB ... suffixes, pipe to numfmt --to=iec-i --suffix=B . %b*512 gives the disk usage¹, but note that if files are hard linked several times, they will be counted several times so you may see a discrepancy with what du reports. ¹ As an exception, on HP/UX, the block size reported by lstat() / stat() is 1024 instead of 512. GNU find adjusts for that so it's %b still represents the number of 512 byte units, but with perl , you'd need to multiply by 1024 instead there. | {
"source": [
"https://unix.stackexchange.com/questions/308846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
309,209 | My pc is dual boot. I have Red Hat Enterprise Linux 5 along with Windows 7 Ultimate installed. There are some common files which are required by me in both the os. Right now I access and manipulate these files via a secondary storage device(USB or DVD RW) attached to my system. Is it possible to create a common folder/directory which is accessible to both the Linux as well as Windows os. Can the files, within such kind of folders/directories, be manipulated via both the os. How? | Of course, and it's very easy. The simplest way is to have a shared partition that uses a filesystem both OSs can understand. I usually have an NTFS-formatted partition which I mount at /data on Linux. This will be recognized as a regular partition on Windows and be assigned a letter ( D: for example) just like any other. You can then use it from both systems and the files will be available to both your OSs. | {
"source": [
"https://unix.stackexchange.com/questions/309209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189391/"
]
} |
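As a minimal sketch of the shared-partition approach above: once the NTFS partition exists, Linux can mount it at boot with one /etc/fstab line. The UUID, mount point and options below are placeholders (find the real UUID with blkid), and the ntfs-3g driver is assumed to be installed:
UUID=0123456789ABCDEF  /data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0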
309,219 | Some recent Bluetooth chipsets from Intel and Broadcom need to run the btattach command in user-space for Bluetooth to be enabled properly (it "attaches" the BT chipset and triggers the loading of the required firmware if needed). Such an example is the Broadcom BCM43241 rev B5 chipset found on Lenovo ThinkPad 8 tablets which needs the following command # btattach --bredr /dev/ttyS1 -P bcm but this is applicable to many other Bluetooth chipsets connected to an UART controller. Q: What is the best recommended way to trigger the required btattach command during boot to have Bluetooth enabled automatically ? P.S. The idea would be to contribute such a modification to Linux distributions starting to package the btattach command (like Debian), since right now many recent devices simply don't have Bluetooth working out-of-the-box. This would be especially useful for tablets that have no or few full-size USB ports. | Of course, and it's very easy. The simplest way is to have a shared partition that uses a filesystem both OSs can understand. I usually have an NTFS-formatted partition which I mount at /data on Linux. This will be recognized as a regular partition on Windows and be assigned a letter ( D: for example) just like any other. You can then use it from both systems and the files will be available to both your OSs. | {
"source": [
"https://unix.stackexchange.com/questions/309219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189400/"
]
} |
309,339 | How can I write a shell script that exits, if one part of it fails?
For example, if the following code snippet fails, then the script should exit. n=0
until [ $n -ge 5 ]
do
gksu *command* && break
n=$[$n+1]
sleep 3 | One approach would be to add set -e to the beginning of your script. That means (from help set ): -e Exit immediately if a command exits with a non-zero status. So if any of your commands fail, the script will exit. Alternatively, you can add explicit exit statements at the possible failure points: command || exit 1 | {
"source": [
"https://unix.stackexchange.com/questions/309339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189493/"
]
} |
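A short sketch combining the two suggestions above; prepare_data and process_data are placeholder command names:
#!/bin/bash
set -e                   # abort the whole script on the first failing simple command
prepare_data             # a non-zero exit here stops the script
process_data || exit 1   # explicit form, also works when set -e is not in effect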
309,514 | I have hundreds of files which contain " </foo:bar> " cc
bb
aa
</foo:bar>
dd
xx
vv I want to change them all at once to cc
bb
aa
</foo:bar> sed works well when i give it the exact file name sed -i "/</foo:bar>/q" 99999.txt but when I try to change all of them at once I get no result. sed -i "/<\/foo:bar>/q" *.txt | Try: sed -s -n -i '0,/<\/foo:bar>/p' *.txt -s tells sed to treat each file as separate. Because we don't want sed to quit until all the files are done, we change to just print from the beginning to <\/foo:bar> and not print the rest of the lines. -n tells sed not print unless we explicitly ask it to. The command 0,/<\/foo:bar>/p tells sed to print any line in the range from the beginning of the file to the first line that matches <\/foo:bar> . The -s option is not available for BSD/OSX sed. | {
"source": [
"https://unix.stackexchange.com/questions/309514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104749/"
]
} |
309,547 | Some shells, like bash , support Process Substitution which is a way to present process output as a file, like this: $ diff <(sort file1) <(sort file2) However, this construct isn't POSIX and, therefore, not portable. How can process substitution be achieved in a POSIX -friendly manner (i.e. one which works in /bin/sh ) ? note: the question isn't asking how to diff two sorted files - that is only a contrived example to demonstrate process substitution ! | That feature was introduced by ksh (first documented in ksh86) and was making use of the /dev/fd/ n feature
(added independently in some BSDs and AT&T systems earlier).
In ksh and up to ksh93u, it wouldn't work unless your system had support for /dev/fd/ n . zsh, bash and ksh93u+ and above can make use of temporary named pipes (named pipes added in System III)
where /dev/fd/ n are not available. On systems where /dev/fd/ n is available (POSIX doesn't specify those),
you can do process substitution (e.g., diff <(cmd1) <(cmd2) ) yourself with: {
cmd1 4<&- | {
# in here fd 3 points to the reading end of the pipe
# from cmd1, while fd 0 has been restored from the original
# stdin (saved on fd 4, now closed as no longer needed)
cmd2 3<&- | diff /dev/fd/3 -
} 3<&0 <&4 4<&- # restore the original stdin for cmd2
} 4<&0 # save a copy of stdin for cmd2 However that doesn't work with ksh93 on Linux as there, shell pipes are implemented with socketpairs instead of pipes and opening /dev/fd/3 where fd 3 points to a socket doesn't work on Linux. Though POSIX doesn't specify /dev/fd/ n , it does specify named pipes. Named pipes work like normal pipes except that you can access them from the file system. The issue here is that you have to create temporary ones and clean up afterwards, which is hard to do reliably, especially considering that POSIX has no standard mechanism (like a mktemp -d as found on some systems) to create temporary files or directories, and signal handling (to clean-up upon hang-up or kill) is also hard to do portably. You could do something like: tmpfifo() (
n=0
until
fifo=$1.$$.$n
mkfifo -m 600 -- "$fifo" 2> /dev/null
do
n=$((n + 1))
# give up after 20 attempts as it could be a permanent condition
# that prevents us from creating fifos. You'd need to raise that
# limit if you intend to create (and use at the same time)
# more than 20 fifos in your script
[ "$n" -lt 20 ] || exit 1
done
printf '%s\n' "$fifo"
)
cleanup() { rm -f -- "$fifo"; }
fifo=$(tmpfifo /tmp/fifo) || exit
cmd2 > "$fifo" & cmd1 | diff - "$fifo"
cleanup (not taking care of signal handling here). | {
"source": [
"https://unix.stackexchange.com/questions/309547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9259/"
]
} |
309,768 | I recently learned, that . ./.a.a and ./.a.a is the same. However trying source source .a.a gives an error. IMO, . being Bash alias for source shouldn't behave differently, so what am I missing? Bonus, why is . . OK while source source is not? | You can't just replace . with source everywhere; if . ./.a.a works, you can replace the first . (at least in Bash): source ./.a.a The second . represents the current directory, you can't replace that with source (especially not ./ with source as you've done). source source would be OK if you had a file called source in the current directory, containing something meaningful for your current shell. I can't see how . . would be OK... Also, . ./.a.a and ./.a.a aren't the same, the second form runs .a.a in a separate shell. See What is the difference between sourcing ('.' or 'source') and executing a file in bash? for details. | {
"source": [
"https://unix.stackexchange.com/questions/309768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65176/"
]
} |
309,788 | It looks like that you cannot create a brand new VM with virsh unless you already have a working XML file. I have just installed all the needed bits for QEMU-KVM to work, and need now to create my very first VM. How to? Hint: I don't have graphics! | There is quite a good walkthrough here . Essentially the tool you're wanting to use is virt-install, which you should already have if you have installed everything needed for QEMU-KVM. Here's the most relevant section. 6. Creating a new Guest VM using virt-install virt-install tool is used to create the VM. This tool can be used in
both interactive or non-interactive mode. In the following example, I passed all the required values to create
an VM as command line parameters to the virt-install command. # virt-install \
-n myRHELVM1 \
--description "Test VM with RHEL 6" \
--os-type=Linux \
--os-variant=rhel6 \
--ram=2048 \
--vcpus=2 \
--disk path=/var/lib/libvirt/images/myRHELVM1.img,bus=virtio,size=10 \
--graphics none \
--cdrom /var/rhel-server-6.5-x86_64-dvd.iso \
--network bridge:br0 In the above virt-install command the parameters have the following meaning: n : Name of your virtual machine description : Some valid description about your VM.
For example: Application server, database server, web server, etc. os-type : OS type can be Linux, Solaris, Unix or Windows. os-variant : Distribution type for the above os-type. For example, for linux, it can be rhel6, centos6, ubuntu14, suse11, fedora6 , etc. For windows, this can be win2k, win2k8, win8, win7 ram : Memory for the VM in MB vcpu : Total number of virtual CPUs for the VM. disk path=/var/lib/libvirt/images/myRHELVM1.img,bus=virtio,size=10 : Path where the VM image files is stored. Size in GB. In this example,
this VM image file is 10GB. graphics none : This instructs virt-install to use a text console on VM serial port instead of graphical VNC window. If you have the
xmanager set up, then you can ignore this parameter. cdrom : Indicates the location of installation image. You can specify the NFS or http installation location (instead of –-cdrom). For
example: --location=http://.com/pub/rhel6/x86_64/* network bridge:br0 : This example uses bridged adapter br0. It is also possible to create your own network on any specific port instead of bridged adapter. If you want to use the NAT then use something like
below for the network parameter with the virtual network name known as
VMnetwork1. All the network configuration files are located under
/etc/libvirt/qemu/networks/ for the virtual machines. For example: –-network network=VMnetwork1 | {
"source": [
"https://unix.stackexchange.com/questions/309788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135585/"
]
} |
309,938 | When you accidentally attempt to connect to the wrong server with password credentials, is it possible for the administrator to read and log the password you used? | Simply put: yes. More detail... If you connect to my machine then you don't know if I'm running a normal ssh server, or one that has been modified to write out the password being passed. Further, I wouldn't necessarily need to modify sshd , but could write a PAM module (eg using pam_script ), which will be passed your password. So, yes. NEVER send your password to an untrusted server. The owner of the machine could easily have configured it to log all attempted passwords. (In fact this isn't uncommon in the infosec world; set up a honeypot server to log the passwords attempted) | {
"source": [
"https://unix.stackexchange.com/questions/309938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
310,540 | Using zsh , I get a "No match found" message when choosing a pattern that does not fit with rm and that even when redirecting the output. # rm * > /dev/zero 2>&1
zsh: no matches found: * How can I get rid of this message? | This behaviour is controlled by several of Zsh's globbing options . By default, if a command line contains a globbing expression which doesn't match anything, Zsh will print the error message you're seeing, and not run the command at all. You can disable this in three different ways: setopt +o nomatch will leave globbing expressions which don't match anything as-is, and you'll get an error message from rm (which you can disable using -f , although that's a bad idea since it will force removals in other situations where you might not want to); setopt +o nullglob will delete patterns which don’t match anything (so they will be effectively ignored); setopt +o cshnullglob will delete patterns which don’t match anything, and if all patterns in a command are removed, report an error. The last two override nomatch . All these options can be unset with setopt -o … . nullglob can be enabled for a single pattern using the N glob qualifier , e.g. rm *(N) . | {
"source": [
"https://unix.stackexchange.com/questions/310540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
310,666 | When I install a GUI application using nix, I see desktop files end inside ~/.nix-profile directory, e.g: ~/.nix-profile/share/applications/firefox.desktop However, my desktop expect that files to be in /user/share/applications in order to be able to create desktop icons for them. Is there any way to tell nix to symlink desktop files to /user/share/applications so I don't have to do it manually? Thanks | Supposing that you are using a distribution other than NixOS, then yes you can expect your desktop environment to be looking for your applications in /usr/share/applications while those installed with Nix are actually in ~/.nix-profile/share/applications . Instead of creating a symlink from /usr/share/applications you should rather tell you desktop where to look. You should be able to do so by adding the following to your ~/.profile : export XDG_DATA_DIRS=$HOME/.nix-profile/share:$HOME/.share:"${XDG_DATA_DIRS:-/usr/local/share/:/usr/share/}" So your desktop will be looking for applications both in /usr/share/applications and ~/.nix-profile/share/applications , with a priority given to the applications installed with Nix. For more info, https://nixos.org/wiki/KDE#Using_KDE_outside_NixOS | {
"source": [
"https://unix.stackexchange.com/questions/310666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10357/"
]
} |
310,737 | After a shutdown command is issued, sometimes one gets a status message like this: A stop job is running for Session 1 of user xy and then the system hangs for awhile, or forever depending on ??? So what exactly is "a stop job"? Also, why does it sometimes estimate the time it will take, quite accurately, and other times it can run forever? | systemd operates internally in terms of a queue of "jobs". Each job (simplifying a little bit) is an action to take: stop, check, start, or restart a particular unit . When (for example) you instruct systemd to start a service unit , it works out a list of stop and start jobs for whatever units (service units, mount units, device units, and so forth) are necessary for achieving that goal, according to unit requirements and dependencies, orders them, according to unit ordering relationships, works out and (if possible) fixes up any self-contradictions, and (if that final step is successful) places them in the queue. Then it tries to perform the enqueued "jobs". A stop job is running for Session 1 of user xy The unit display name here is Session 1 of user xy . This will be (from the display name) a session unit, not a service unit. This is the user-space login session abstraction that is maintained by systemd's logind program and its PAM plugins. It is (in essence and in theory) a grouping of all of the processes that that user is running as a "login session" somewhere. The job that has been enqueued against it is stop . And it's probably taking a long time because the systemd people have erroneously conflated session hangup with session shutdown . They break the former to get the latter to work, and in response some people alter systemd to break the latter to get the former to work. The systemd people really should recognize that they are two different things. In your login session, you have something that ignores SIGTERM or that takes a long time to terminate once it has seen SIGTERM . Ironically, the former is the long-standing behaviour of some job-control shells. The correct way to terminate login session leaders when they are these particular job-control shells is to tell them that the session has been hung up , whereupon they terminate all of their jobs (a different kind of job to the internal systemd job) and then terminate themselves. What's actually happening is that systemd is waiting the unit's stop timeout until it resorts to SIGKILL . This timeout is configurable per unit, of course, and can be set to never time out. Hence why one can potentially see different behaviours. Further reading Lennart Poettering (2015). systemd . systemd manual pages. Freedesktop.org. Jonathan de Boyne Pollard (2016-06-01). systemd kills background processes after user logs out . 825394. Debian bug tracker. Lennart Poettering (2015). systemd.kill . systemd manual pages. Freedesktop.org. Lennart Poettering (2015). systemd.service . systemd manual pages. Freedesktop.org. Why does bash ignore SIGTERM? https://superuser.com/questions/1102242/ | {
"source": [
"https://unix.stackexchange.com/questions/310737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181039/"
]
} |
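A practical note on the timeout mentioned in the answer above: it is an ordinary systemd setting, configurable per unit or globally. The values below are only illustrative:
TimeoutStopSec=10s            # in a unit's [Service] section
DefaultTimeoutStopSec=30s     # or globally, in /etc/systemd/system.conf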
310,752 | I'm building an IOT device, powered by headless Debain on a CHIP ( https://getchip.com/ ), and will have connectivity to a customer's wifi. I'm trying to build in functionality for wifi connectivity to the customer's router in a way that wouldn't require the customer to ever need to input a password and username. Basically, I'd like to have WPS push-button functionality in Unix. I've installed wpa_cli , and have been tinkering around with wpa_supplicant.conf. However I'm very confused. The example .conf document located here , states that we'd need to input all the parameters of the router ahead of time. Why would that ever need to be the case? Doesn't that defeat the purpose of WPS (i.e. WPS should be blind to any access points and should handshake with the nearest router that has its WPS window open)? | systemd operates internally in terms of a queue of "jobs". Each job (simplifying a little bit) is an action to take: stop, check, start, or restart a particular unit . When (for example) you instruct systemd to start a service unit , it works out a list of stop and start jobs for whatever units (service units, mount units, device units, and so forth) are necessary for achieving that goal, according to unit requirements and dependencies, orders them, according to unit ordering relationships, works out and (if possible) fixes up any self-contradictions, and (if that final step is successful) places them in the queue. Then it tries to perform the enqueued "jobs". A stop job is running for Session 1 of user xy The unit display name here is Session 1 of user xy . This will be (from the display name) a session unit, not a service unit. This is the user-space login session abstraction that is maintained by systemd's logind program and its PAM plugins. It is (in essence and in theory) a grouping of all of the processes that that user is running as a "login session" somewhere. The job that has been enqueued against it is stop . And it's probably taking a long time because the systemd people have erroneously conflated session hangup with session shutdown . They break the former to get the latter to work, and in response some people alter systemd to break the latter to get the former to work. The systemd people really should recognize that they are two different things. In your login session, you have something that ignores SIGTERM or that takes a long time to terminate once it has seen SIGTERM . Ironically, the former is the long-standing behaviour of some job-control shells. The correct way to terminate login session leaders when they are these particular job-control shells is to tell them that the session has been hung up , whereupon they terminate all of their jobs (a different kind of job to the internal systemd job) and then terminate themselves. What's actually happening is that systemd is waiting the unit's stop timeout until it resorts to SIGKILL . This timeout is configurable per unit, of course, and can be set to never time out. Hence why one can potentially see different behaviours. Further reading Lennart Poettering (2015). systemd . systemd manual pages. Freedesktop.org. Jonathan de Boyne Pollard (2016-06-01). systemd kills background processes after user logs out . 825394. Debian bug tracker. Lennart Poettering (2015). systemd.kill . systemd manual pages. Freedesktop.org. Lennart Poettering (2015). systemd.service . systemd manual pages. Freedesktop.org. Why does bash ignore SIGTERM? https://superuser.com/questions/1102242/ | {
"source": [
"https://unix.stackexchange.com/questions/310752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190639/"
]
} |
310,860 | A script I wrote does something and, at the end, appends some lines to its own logfile. I'd like to keep only the last n lines (say, 1000 lines) of the logfile. This can be done at the end of the script in this way: tail -n 1000 myscript.log > myscript.log.tmp
mv -f myscript.log.tmp myscript.log but is there a more clean and elegant solution? Perhaps accomplished via a single command? | It is possible like this, but as others have said, the safest option is the generation of a new file and then a move of that file to overwrite the original. The below method loads the lines into BASH, so depending on the number of lines from tail , that's going to affect the memory usage of the local shell to store the content of the log lines. The below also removes empty lines should they exist at the end of the log file (due to the behaviour of BASH evaluating "$(tail -1000 test.log)" ) so does not give a truly 100% accurate truncation in all scenarios, but depending on your situation, may be sufficient. $ wc -l myscript.log
475494 myscript.log
$ echo "$(tail -1000 myscript.log)" > myscript.log
$ wc -l myscript.log
1000 myscript.log | {
"source": [
"https://unix.stackexchange.com/questions/310860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34039/"
]
} |
310,957 | I want to set -x at the beginning of my script and "undo" it (go back to the state before I set it) afterward instead of blindly setting +x . Is this possible? P.S.: I've already checked here ; that didn't seem to answer my question as far as I could tell. | Abstract To reverse a set -x just execute a set +x . Most of the time, the reverse of an string set -str is the same string with a + : set +str . In general, to restore all (read below about bash errexit ) shell options (changed with set command) you could do (also read below about bash shopt options): oldstate="$(set +o)" # POSIXly store all set options.
.
.
set -vx; eval "$oldstate" # restore all options stored. Should be enough, but bash has two groups of options accessed via set (or shopt -po ) and some others accessed via shopt -p . Also, bash doesn't preserve set -e (errexit) on entering subshells. Note that the list of options that results from expanding $- might not be valid to re-enter in a shell. To capture the whole present state (in bash) use: oldstate="$(shopt -po; shopt -p)"; [[ -o errexit ]] && oldstate="$oldstate; set -e" Or, if you don't mind setting the inherit_errexit flag (and your bash is ≥4.4): shopt -s inherit_errexit; oldstate="$(shopt -po; shopt -p)" Longer Description bash This command: shopt -po xtrace is used to generate an executable string that reflects the state of the option(s).
The p flag means print, and the o flag specifies that we are asking about option(s) set by the set command (as opposed to option(s) set only by the shopt command).
You can assign this string to a variable, and execute the variable at the end of your script to restore the initial state. # store state of xtrace option.
tracestate="$(shopt -po xtrace)"
# change xtrace as needed
echo "some commands with xtrace as externally selected"
set -x
echo "some commands with xtrace set"
# restore the value of xtrace to its original value.
eval "$tracestate" This solution also works for multiple options simultaneously: oldstate="$(shopt -po xtrace noglob errexit)"
# change options as needed
set -x
set +x
set -f
set -e
set -x
# restore to recorded state:
set +vx; eval "$oldstate" Adding set +vx avoids the printing of a long list of options. If you don’t list any option names, oldstate="$(shopt -po)" it gives you the values of all (set) options.
And, if you leave out the o flag,
you can do the same things with shopt options: # store state of dotglob option.
dglobstate="$(shopt -p dotglob)"
# store state of all options.
oldstate="$(shopt -p)" If you need to test whether a set option is set,
the most idiomatic (Bash) way to do it is: [[ -o xtrace ]] which is better than the other two similar tests: [[ $- =~ x ]] [[ $- == *x* ]] With any of the tests, this works: # record the state of the xtrace option in ts (tracestate):
[ -o xtrace ] && ts='set -x' || ts='set +x'
# change xtrace as needed
echo "some commands with xtrace as externally selected"
set -x
echo "some commands with xtrace set"
# set the xtrace option back to what it was.
eval "$ts" Here’s how to test the state of a shopt option: if shopt -q dotglob
then
# dotglob is set, so “echo .* *” would list the dot files twice.
echo *
else
# dotglob is not set. Warning: the below will list “.” and “..”.
echo .* *
fi POSIX A simple, POSIX-compliant solution to store all set options is: set +o which is described in the POSIX standard as: +o Write the current option settings to standard output in a format
that is suitable for reinput to the shell
as commands that achieve the same options settings. So, simply: oldstate=$(set +o) will preserve values for all options set using the set command (in some shells). Again, restoring the options to their original values is a matter of executing the variable: set +vx; eval "$oldstate" This is exactly equivalent to using Bash's shopt -po . Note that it will not cover all possible Bash options, as some of those are set (only) by shopt . bash special case There are many other shell options listed with shopt in bash: $ shopt
autocd off
cdable_vars off
cdspell off
checkhash off
checkjobs off
checkwinsize on
cmdhist on
compat31 off
compat32 off
compat40 off
compat41 off
compat42 off
compat43 off
complete_fullquote on
direxpand off
dirspell off
dotglob off
execfail off
expand_aliases on
extdebug off
extglob off
extquote on
failglob off
force_fignore on
globasciiranges off
globstar on
gnu_errfmt off
histappend on
histreedit off
histverify on
hostcomplete on
huponexit off
inherit_errexit off
interactive_comments on
lastpipe on
lithist off
login_shell off
mailwarn off
no_empty_cmd_completion off
nocaseglob off
nocasematch off
nullglob off
progcomp on
promptvars on
restricted_shell off
shift_verbose off
sourcepath on
xpg_echo off Those could be appended to the variable set above and restored in the same way: $ oldstate="$oldstate;$(shopt -p)"
.
. # change options as needed.
.
$ eval "$oldstate" bash's set -e special case In bash, the value of set -e ( errexit ) is reset inside sub-shells, that makes it difficult to capture its value with set +o inside a $(…) sub-shell. As a workaround, use: oldstate="$(set +o)"; [[ -o errexit ]] && oldstate="$oldstate; set -e" Or (if it doesn't contradict your goals and your bash supports it) you can use the inherit_errexit option. Note : each shell has a slightly different way to build the list of options that are set or unset (not to mention different options that are defined), so the strings are not portable between shells, but are valid for the same shell. zsh special case zsh also works correctly (following POSIX) since version 5.3. In previous versions it followed POSIX only partially with set +o in that it printed options in a format that was suitable for reinput to the shell as commands, but only for set options (it didn't print un-set options). mksh special case The mksh (and by consequence lksh) is not yet (MIRBSD KSH R54 2016/11/11) able to do this. The mksh manual contains this: In a future version, set +o will behave POSIX compliant and print commands to restore the current options instead. | {
"source": [
"https://unix.stackexchange.com/questions/310957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54386/"
]
} |
311,090 | For some debugging purposes I enabled the command set -x . Now the output of my bash is like this: $ ls
+ ls --color=auto
Certificates Desktop Documents Downloads Dropbox ... How can I disable set -x so I won't see stuff like + ls --color=auto ? | You just need to run set +x From man bash : Using + rather than - causes these options to be turned off. | {
"source": [
"https://unix.stackexchange.com/questions/311090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152402/"
]
} |
311,095 | I've recently learned about the /dev/udp and /dev/tcp pseudo-devices here . Are they specific to some GNU/Linux distributions or can I find them on other unix systems? Are they standardized in some way? So far, I've been able to use them successfuly on OS X, Arch Linux and CentOS. | This is a feature of the shell and not the operating system. So, for example,on Solaris 10 with ksh88 as the shell: % cat < /dev/tcp/localhost/22
ksh: /dev/tcp/localhost/22: cannot open However if we switch to bash : % bash
bash-3.2$ cat < /dev/tcp/localhost/22
SSH-2.0-Sun_SSH_1.1.5 So bash interprets the /dev/tcp but ksh88 didn't. On Solaris 11 with ksh93 as the shell: % cat < /dev/tcp/localhost/22
SSH-2.0-Sun_SSH_2.2 So we can see it's very dependent on the shell in use. | {
"source": [
"https://unix.stackexchange.com/questions/311095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42478/"
]
} |
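Building on the answer above, bash can also bind /dev/tcp to a file descriptor for a two-way exchange; example.com is a placeholder host:
exec 3<>/dev/tcp/example.com/80                              # open a TCP connection on fd 3
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3    # send a request
cat <&3                                                      # read the response
exec 3>&-                                                    # close the descriptor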
311,119 | I need to view the members of a group related to an oracle installation. | You can use getent to display the group's information. getent uses library calls to fetch the group information, so it will honour settings in /etc/nsswitch.conf as to the sources of group data. Example: $ getent group simpsons
simpsons:x:742:homer,marge,bart,lisa,maggie The fields, separated by : , are— Group name Encrypted password (not normally used) Numerical group ID Comma-separated list of members | {
"source": [
"https://unix.stackexchange.com/questions/311119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167348/"
]
} |
311,758 | In a bash script, how can I remove a word from a string? The word is stored in a variable. FOO="CATS DOGS FISH MICE"
WORDTOREMOVE="MICE" | Try: $ printf '%s\n' "${FOO//$WORDTOREMOVE/}"
CATS DOGS FISH This also works in ksh93 , mksh , zsh . POSIXLY: FOO="CATS DOGS FISH MICE"
WORDTOREMOVE="MICE"
remove_word() (
set -f
IFS=' '
s=$1
w=$2
set -- $1
for arg do
shift
[ "$arg" = "$w" ] && continue
set -- "$@" "$arg"
done
printf '%s\n' "$*"
)
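# (extra illustration, not part of the original answer) the function removes a
# matching word anywhere in the string, e.g.:
#   remove_word "CATS DOGS FISH MICE" DOGS    # prints: CATS FISH MICE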
remove_word "$FOO" "$WORDTOREMOVE" It assumes your words are space delimited and has side effect that remove spaces before and after "$WORDTOREMOVE" . | {
"source": [
"https://unix.stackexchange.com/questions/311758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
311,904 | In the ASCII table the 'J' character exists which has code points in different numeral systems: Oct Dec Hex Char
112 74 4A J It's possible to print this char by an octal code point by printing printf '\112' or echo $'\112' . How do I print the same character by decimal and hexadecimal code point presentations? | Hex: printf '\x4a' Dec: printf "\\$(printf %o 74)" Alternative for hex :-) xxd -r <<<'0 4a' | {
"source": [
"https://unix.stackexchange.com/questions/311904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179072/"
]
} |
312,146 | Executables are stored in /usr/libexec on Unix-like systems. The FHS says (section 4.7. /usr/libexec : Binaries run by other programs (optional)" : /usr/libexec includes internal binaries that are not intended to be executed directly by users or shell scripts. Applications may use a single subdirectory under /usr/libexec . On macOS, rootless-init a program called by launchd immediately after booting, is stored in /usr/libexec . Why would it be stored in /usr/libexec when it is a standalone executable that could be stored in /usr/bin or /usr/sbin ? init and other programs not called directly by shell scripts are also stored in folders like [/usr]/{bin,sbin} . | It's a question of supportability - platform providers have learned from years of experience that if you put binaries in PATH by default, people will come to depend on them being there, and will come to depend on the specific arguments and options they support. By contrast, if something is put in /usr/libexec/ it's a clear indication that it's considered an internal implementation detail, and calling it directly as an end user isn't officially supported. You may still decide to access those binaries directly anyway, you just won't get any support or sympathy from the platform provider if a future upgrade breaks the private interfaces you're using. | {
"source": [
"https://unix.stackexchange.com/questions/312146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89807/"
]
} |
312,280 | I have a string: one_two_three_four_five I need to save in a variable A value two and in variable B value four from the above string I am using ksh. | Use cut with _ as the field delimiter and get desired fields: A="$(cut -d'_' -f2 <<<'one_two_three_four_five')"
B="$(cut -d'_' -f4 <<<'one_two_three_four_five')" You can also use echo and pipe instead of Here string: A="$(echo 'one_two_three_four_five' | cut -d'_' -f2)"
B="$(echo 'one_two_three_four_five' | cut -d'_' -f4)" Example: $ s='one_two_three_four_five'
$ A="$(cut -d'_' -f2 <<<"$s")"
$ echo "$A"
two
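# (aside, not part of the original answer) ksh93 and bash can also split all
# the fields in one go with read, avoiding the external cut calls:
$ IFS=_ read -r f1 A f3 B f5 <<<"$s"
$ echo "$A $B"
two four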
$ B="$(cut -d'_' -f4 <<<"$s")"
$ echo "$B"
four Beware that if $s contains newline characters, that will return a multiline string that contains the 2 nd /4 th field in each line of $s , not the 2 nd /4 th field in $s . | {
"source": [
"https://unix.stackexchange.com/questions/312280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15717/"
]
} |
312,283 | I have a file called File1 in which I have one word: Frida . How can I print the output of cat File1 three times in the same line? It should show Frida Frida Frida | Use cut with _ as the field delimiter and get desired fields: A="$(cut -d'_' -f2 <<<'one_two_three_four_five')"
B="$(cut -d'_' -f4 <<<'one_two_three_four_five')" You can also use echo and pipe instead of Here string: A="$(echo 'one_two_three_four_five' | cut -d'_' -f2)"
B="$(echo 'one_two_three_four_five' | cut -d'_' -f4)" Example: $ s='one_two_three_four_five'
$ A="$(cut -d'_' -f2 <<<"$s")"
$ echo "$A"
two
$ B="$(cut -d'_' -f4 <<<"$s")"
$ echo "$B"
four Beware that if $s contains newline characters, that will return a multiline string that contains the 2 nd /4 th field in each line of $s , not the 2 nd /4 th field in $s . | {
"source": [
"https://unix.stackexchange.com/questions/312283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191824/"
]
} |
312,687 | From the Arch Linux Wiki: https://wiki.archlinux.org/index.php/USB_flash_installation_media # dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx status=progress && sync [...] Do not miss sync to complete before pulling the USB drive. I would like to know What does it do? What consequences are there if left out? Notes dd command used with optional status=progress : tar -xzOf archlinux-2016-09-03-dual.iso | dd of=/dev/disk2 bs=4M status=progress && sync Or using pv for progress tar -xzOf archlinux-2016-09-03-dual.iso | pv | dd of=/dev/disk2 bs=4M && sync | The dd does not bypass the kernel disk caches when it writes to a device, so some part of data may be not written yet to the USB stick upon dd completion. If you unplug your USB stick at that moment, the content on the USB stick would be inconsistent. Thus, your system could even fail to boot from this USB stick. Sync flushes any still-in-cache data to the device. Instead of invoking sync you could use fdatasync dd 's conversion option: fdatasync physically write output file data before finishing In your case, the command would be: tar -xzOf archlinux-2016-09-03-dual.iso | \
dd of=/dev/disk2 bs=4M status=progress conv=fdatasync The conv=fdatasync makes dd effectively call fdatasync() system call at the end of transfer just before dd exits (I checked this with dd 's sources). This confirms that dd would not bypass nor flush the caches unless explicitly instructed to do so. | {
"source": [
"https://unix.stackexchange.com/questions/312687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33386/"
]
} |
312,697 | I am trying to curl some URL which returns a json file, then I want to parse hosts from it and create a comma separated string. I have the first part working curl -s -u "admin:admin" -H "X-Requested-By: ambari" "https://hbasecluster.net/api/v1/clusters/mycluster/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" | jq -r '.host_components[].HostRoles.host_name' which returns zk0-mycluster.net
zk1-mycluster.net
zk2-mycluster.net Now I want to join these into one string like zk0-mycluster.net,zk1-mycluster.net,zk2-mycluster.net | Do it in jq , but see @Kusalananda's answer first jq -r '.host_components[].HostRoles.host_name | join(",")' No, that's wrong. This is what you need: jq -r '.host_components | map(.HostRoles.host_name) | join(",")' Demo: jq -r '.host_components | map(.HostRoles.host_name) | join(",")' <<DATA
{"host_components":[
{"HostRoles":{"host_name":"one"}},
{"HostRoles":{"host_name":"two"}},
{"HostRoles":{"host_name":"three"}}
]}
DATA outputs one,two,three | {
"source": [
"https://unix.stackexchange.com/questions/312697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66640/"
]
} |
312,702 | I'm trying to create a JSON in BASH where one of the fields is based on the result of an earlier command BIN=$(cat next_entry)
OUTDIR="/tmp/cpupower/${BIN}"
echo $OUTDIR
JSON="'"'{"hostname": "localhost", "outdir": "${OUTDIR}", "port": 20400, "size": 100000}'"'"
echo $JSON The above script when executed, returns: /tmp/cpupower/0
, port: 20400, size: 100000}': /tmp/cpupower/0 How can I properly substitute variables inside these multi-quoted strings? | JSON=\''{"hostname": "localhost", "outdir": "'"$OUTDIR"'", "port": 20400, "size": 100000}'\' That is get out of the single quotes for the expansion of $OUTDIR . We did put that expansion inside double-quotes for good measure even though for a scalar variable assignment it's not strictly necessary. When you're passing the $JSON variable to echo , quotes are necessary though to disable the split+glob operator. It's also best to avoid echo for arbitrary data: printf '%s\n' "$JSON" | {
"source": [
"https://unix.stackexchange.com/questions/312702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83216/"
]
} |
313,085 | I'm writing a man page for a program that I am packaging. How can I display the manpage file that I created, to check if it's all right? Is there a way to pass my file directly to the man command instead of having it search the installed manpages by name? I tried doing things like man myprog.1 and man < myprog.1 but in both cases I got an error saying that the man page could not be found. | man has an option to read a local file: -l -l, --local-file Activate `local' mode. Format and display local manual files instead of searching through the system's manual
collection.
Each manual page argument will be interpreted as an nroff source file in the correct format. No cat file is produced. If
'-' is listed as one of the arguments, input will be taken from stdin. When this option is not used, and man fails to find
the page required, before displaying the error message, it attempts to act as if this option was supplied, using the name as
a filename and looking for an exact match. So you can preview your work in progress with: man -l /path/to/manfile.1 | {
"source": [
"https://unix.stackexchange.com/questions/313085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
313,093 | I use zsh's menu-based tab completion. I press Tab once, and a list of possible completions appears. If I press Tab again, I can navigate this list with the arrow keys. However, is it possible to navigate them with the vi -like H , J , K , L keys instead? I use emacs mode for command-line input, with bindkey -e in ~/.zshrc . I also use zim with zsh. If relevant, the commands that specify the tab-completion system are here . | Yes, you can by enabling menu select : zstyle ':completion:*' menu select
zmodload zsh/complist
...
# use the vi navigation keys in menu completion
bindkey -M menuselect 'h' vi-backward-char
bindkey -M menuselect 'k' vi-up-line-or-history
bindkey -M menuselect 'l' vi-forward-char
bindkey -M menuselect 'j' vi-down-line-or-history | {
"source": [
"https://unix.stackexchange.com/questions/313093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
313,107 | I just made an image of a freshly installed dual boot (Ubuntu and Windows) using this command (which I've been using for a while for smaller images): dd if=/dev/sda | gzip > /mnt/drive.img.gz On this drive less than 60G out of 500G are used. Nevertheless that image-file now is 409G big. How is that? Shouldn't gzip manage to compress all those zeros? As I said, it is a freshly installed system. It couldn't be that cluttered. Now I didn't expect for the file to be 60G, but 400G seems very huge to me. | How is that? Shouldn't gzip manage to compress all those zeros? Yes, if they were zeroes . Unused disk space does not mean it contains zeros; it means it is unused, and may contain anything . There are programs that wipe unused disk space to zeroes. I suggest you use those before making the disk image. (I don't recall any offhand; in Linux, I'd just use dd if=/dev/zero bs=1048576 of=somefile to create files containing only zeroes, filling up each filesystem; then remove them before making the image. Also, I prefer xz over gzip .) | {
"source": [
"https://unix.stackexchange.com/questions/313107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178565/"
]
} |
313,256 | At work, I write bash scripts frequently. My supervisor has suggested that the entire script be broken into functions, similar to the following example: #!/bin/bash
# Configure variables
declare_variables() {
noun=geese
count=three
}
# Announce something
i_am_foo() {
echo "I am foo"
sleep 0.5
echo "hear me roar!"
}
# Tell a joke
walk_into_bar() {
echo "So these ${count} ${noun} walk into a bar..."
}
# Emulate a pendulum clock for a bit
do_baz() {
for i in {1..6}; do
expr $i % 2 >/dev/null && echo "tick" || echo "tock"
sleep 1
done
}
# Establish run order
main() {
declare_variables
i_am_foo
walk_into_bar
do_baz
}
main Is there any reason to do this other than "readability", which I think could be equally well established with a few more comments and some line spacing? Does it make the script run more efficiently (I would actually expect the opposite, if anything), or does it make it easier to modify the code beyond the aforementioned readability potential? Or is it really just a stylistic preference? Please note that although the script doesn't demonstrate it well, the "run order" of the functions in our actual scripts tends to be very linear -- walk_into_bar depends on stuff that i_am_foo has done, and do_baz acts on stuff set up by walk_into_bar -- so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar , that would break things. An example of how I would write the above script would be: #!/bin/bash
# Configure variables
noun=geese
count=three
# Announce something
echo "I am foo"
sleep 0.5
echo "hear me roar!"
# Tell a joke
echo "So these ${count} ${noun} walk into a bar..."
# Emulate a pendulum clock for a bit
for i in {1..6}; do
expr $i % 2 >/dev/null && echo "tick" || echo "tock"
sleep 1
done | I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming" . He gives quite a few good reasons, but personally I find these the most important: procedures become descriptive: it's much easier to figure out what a particular part of code is supposed to do. Instead of wall of code, you see "Oh, the find_log_errors function reads that log file for errors ". Compare it with finding whole lot of awk/grep/sed lines that use god knows what type of regex in the middle of a lengthy script - you've no idea what's it doing there unless there's comments. you can debug functions by enclosing into set -x and set +x . Once you know the rest of the code works alright , you can use this trick to focus on debugging only that specific function. Sure, you can enclose parts of script, but what if it's a lengthy portion ? It's easier to do something like this: set -x
parse_process_list
set +x printing usage with cat <<- EOF . . . EOF . I've used it quite a few times to make my code much more professional. In addition, parse_args() with getopts function is quite convenient. Again, this helps with readability, instead of shoving everything into script as giant wall of text. It's also convenient to reuse these. And obviously, this is much more readable for someone who knows C or Java, or Vala, but has limited bash experience. As far as efficiency goes, there's not a lot of what you can do - bash itself isn't the most efficient language and people prefer perl and python when it comes to speed and efficiency. However, you can nice a function: nice -10 resource_hungry_function Compared to calling nice on each and every line of code, this decreases whole lot of typing AND can be conveniently used when you want only a part of your script to run with lower priority. Running functions in background, in my opinion, also helps when you want to have whole bunch of statements to run in background. Some of the examples where I've used this style: https://askubuntu.com/a/758339/295286 https://askubuntu.com/a/788654/295286 https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh | {
"source": [
"https://unix.stackexchange.com/questions/313256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62503/"
]
} |
313,263 | A "selection" tool in my okular 0.19.3 (soft -- ubuntu, 14-04) allows me to choose only one of those: "selection tool", "text selection tool", and "table selection tool". I need only "text selection tool". However when I choose it, my mouse cannot make rectangular around the text I want -- the mouse simply cannot do anything. If I choose "selection tool", a little window invites me to make rectangular around the text/image I want, and once I did it, it suggest that "Image (of such and such size) is ready to be copied to clipboard", but any attempt to paste it into clipboard fails. I do remember that when it was working on my old machine, it would state "text" instead of "image", and worked perfectly well. Is there anything I can do about it? Oh, I forgot to say that my "clipboard" is a "vim" window. | I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming" . He gives quite a few good reasons, but personally I find these the most important: procedures become descriptive: it's much easier to figure out what a particular part of code is supposed to do. Instead of wall of code, you see "Oh, the find_log_errors function reads that log file for errors ". Compare it with finding whole lot of awk/grep/sed lines that use god knows what type of regex in the middle of a lengthy script - you've no idea what's it doing there unless there's comments. you can debug functions by enclosing into set -x and set +x . Once you know the rest of the code works alright , you can use this trick to focus on debugging only that specific function. Sure, you can enclose parts of script, but what if it's a lengthy portion ? It's easier to do something like this: set -x
parse_process_list
set +x printing usage with cat <<- EOF . . . EOF . I've used it quite a few times to make my code much more professional. In addition, parse_args() with getopts function is quite convenient. Again, this helps with readability, instead of shoving everything into script as giant wall of text. It's also convenient to reuse these. And obviously, this is much more readable for someone who knows C or Java, or Vala, but has limited bash experience. As far as efficiency goes, there's not a lot of what you can do - bash itself isn't the most efficient language and people prefer perl and python when it comes to speed and efficiency. However, you can nice a function: nice -10 resource_hungry_function Compared to calling nice on each and every line of code, this decreases whole lot of typing AND can be conveniently used when you want only a part of your script to run with lower priority. Running functions in background, in my opinion, also helps when you want to have whole bunch of statements to run in background. Some of the examples where I've used this style: https://askubuntu.com/a/758339/295286 https://askubuntu.com/a/788654/295286 https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh | {
"source": [
"https://unix.stackexchange.com/questions/313263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181973/"
]
} |
313,940 | I'm working on a LAMP web app and there is a scheduled process somewhere which keeps creating a folder called shop in the root of the site. Every time this appears it causes conflicts with rewrite rules in the app, not good. Until I find the offending script, is there a way to prevent any folder called shop being created in the root? I know that I can change the permissions on a folder to prevent it's contents being changed, but I have not found a way to prevent a folder of a certain name being created. | You can't, given the user creating the directory has sufficient permission to write on the parent directory. You can instead leverage the inotify family of system calls provided by the Linux kernel, to watch for the creation (and optionally mv -ing) of directory shop in the given directory, if created (or optionally mv -ed), rm the directory. The userspace program you need in this case is inotifywait (comes with inotify-tools , install it first if needed). Assuming the directory shop would be residing in /foo/bar directory, let's set a monitoring for /foo/bar/shop creation, and rm instantly if created: inotifywait -qme create /foo/bar | \
awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' inotifywait -qme create /foo/bar watches /foo/bar directory for any file/directory that might be created i.e. watch for any create event If created, awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' checks if the file happens to be a directory and the name is shop ( /,ISDIR shop$/ ), if so rm the directory ( system("rm -r -- /foo/bar/shop") ) You need to run the command as a user that has write permission on directory /foo/bar for removal of shop from the directory. If you want to monitor mv -ing operations too, add watch for moved_to event too: inotifywait -qme create,moved_to /foo/bar | \
awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' Just to note, if you are looking for a file, not directory, named shop : inotifywait -qme create /foo/bar | \
awk '$NF == "shop" { system("rm -- /foo/bar/shop") }'
inotifywait -qme create,moved_to /foo/bar | \
awk '$NF == "shop" { system("rm -- /foo/bar/shop") }' | {
"source": [
"https://unix.stackexchange.com/questions/313940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193075/"
]
} |
314,032 | I have this script which is designed to download the pronunciation of the words you give it as arguments: #!/bin/bash
m=$#
for ((i=1;i<=m;i++));do
echo $i
#wget https://ssl.gstatic.com/dictionary/static/sounds/de/0/"$i".mp3
done if I run it by this command ./a.sh personality brave selfish it should print on the stdout personality
brave
selfish but instead it prints 1
2
3 would you help me solve this problem? p.s : If I write the script without for loop with $1 for example it will work correctly but I want to download many files at the same time | In any Bourne-like shell, it's: for arg
do printf 'Something with "%s"\n' "$arg"
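# (hedged sketch, not part of the original answer) applied to the download
# script from the question, the same pattern becomes:
#   for word do
#     wget "https://ssl.gstatic.com/dictionary/static/sounds/de/0/$word.mp3"
#   done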
done That is, for does loop on the positional parameters ( $1 , $2 ...) by default (if you don't give a in ... part). Note that that's more portable than: for arg; do
printf 'Something with "%s"\n' "$arg"
done Which was not POSIX until the 2016 edition of the standard nor Bourne (though works in most other Bourne-like shells including bash even in POSIX mode) Or than: for arg in "$@"; do
printf 'Something with "%s"\n' "$arg"
done Which is POSIX but doesn't work properly in the Bourne shell or ksh88 when $IFS doesn't contain the space character, or with some versions of the Bourne shell when there's no argument, or with some shells (including some versions of bash ) when there's no argument and the -u option is enabled. Or than for arg do
printf 'Something with "%s"\n' "$arg"
done which is POSIX and Bourne but doesn't work in very old ash-based shells. I personally ignore that and use that syntax myself as I find it's the most legible and don't expect any of the code I write will ever end up interpreted by such an arcane shell. More info at: http://www.in-ulm.de/~mascheck/various/bourne_args/ What is the purpose of the “do” keyword in Bash for loops? Now if you do want $i to loop over [1..$#] and access the corresponding elements, you can do: in any POSIX shell: i=1
for arg do
printf '%s\n' "Arg $i: $arg"
i=$((i + 1))
done or: i=1
while [ "$i" -le "$#" ]; do
eval "arg=\${$i}"
printf '%s\n' "Arg $i: $arg"
i=$((i + 1))
done Or with bash for ((i = 1; i <= $#; i++ )); do
printf '%s\n' "Arg $i: ${!i}"
done ${!i} being an indirect variable expansion, that is expand to the content of the parameter whose name is stored in the i variable, similar to zsh 's P parameter expansion flag: for ((i = 1; i <= $#; i++ )); do
printf '%s\n' "Arg $i: ${(P)i}"
done Though in zsh , you can also access positional parameters via the $argv array (like in csh ): for ((i = 1; i <= $#; i++ )); do
printf '%s\n' "Arg $i: $argv[i]"
done | {
"source": [
"https://unix.stackexchange.com/questions/314032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160201/"
]
} |
314,059 | I'm looking for an editor to print (on paper) C++ code. I'm currently in engineering school and the instructor has asked us to submit the code on paper. He wants name + surname, the class number (on header), the number of page at the bottom, and the reserved words bolded for every page! On Windows it can be done with notepadd++ . But I'm on Linux and I haven't found an IDE or text editor that works. (I've already tried SCITE , gedit , and Syntaxic ) | Well, if you want to go the extra mile, do it in LaTeX and provide a professional level PDF file. You haven't mentioned your distribution so I'll give instructions for Debian based systems. The same basic idea can be done on any Linux though. Install a LaTeX system and necessary packages sudo apt-get install texlive-latex-extra latex-xcolor texlive-latex-recommended Create a new file (call it report.tex ) with the following contents: \documentclass{article}
\usepackage{fancyhdr}
\pagestyle{fancy}
%% Define your header here.
%% See http://texblog.org/2007/11/07/headerfooter-in-latex-with-fancyhdr/
\fancyhead[CO,CE]{John Doe, Class 123}
\usepackage[usenames,dvipsnames]{color} %% Allow color names
%% The listings package will format your source code
\usepackage{listings}
\lstdefinestyle{customasm}{
belowcaptionskip=1\baselineskip,
xleftmargin=\parindent,
language=C++,
breaklines=true, %% Wrap long lines
basicstyle=\footnotesize\ttfamily,
commentstyle=\itshape\color{Gray},
stringstyle=\color{Black},
keywordstyle=\bfseries\color{OliveGreen},
identifierstyle=\color{blue},
xleftmargin=-8em,
showstringspaces=false
}
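%% (optional addition, not in the original answer) the asker also wanted page
%% numbers at the bottom; fancyhdr handles that in the footer:
\fancyfoot[C]{\thepage}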
\begin{document}
\lstinputlisting[style=customasm]{/path/to/your/code.c}
\end{document} Just make sure to change /path/to/your/code.c in the penultimate line so that it point to the actual path of your C file. If you have more than one file to include, add a \newpage and then a new \lstinputlisting for the other file. Compile a PDF (this creates report.pdf ) pdflatex report.tex I tested this on my system with an example file I found here and it creates a PDF that looks like this: For a more comprehensive example that will automatically find all .c files in the target folder and create an indexed PDF file with each in a separate section, see my answer here . | {
"source": [
"https://unix.stackexchange.com/questions/314059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193164/"
]
} |
314,365 | I would like to do the following at one point in a script: start_time=date and this after a process or processes have run: end_time=date and then do this: elapsed=$end_time-$start_time
echo "Total of $elapsed seconds elapsed for process" How would I do this? | Use the time since epoch to easily identify a span of time in a script man date
%s seconds since 1970-01-01 00:00:00 UTC
%N nanoseconds (000000000..999999999) . start_time="$(date -u +%s)"
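# (comment added for clarity) the sleep below just stands in for the real work
# being timed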
sleep 5
end_time="$(date -u +%s)"
elapsed="$(($end_time-$start_time))"
echo "Total of $elapsed seconds elapsed for process"
Total of 5 seconds elapsed for process Bash doesn't support floating point numbers, so you'll need to use a external tool like bc to compare times like 1475705058.042270582-1475705053.040524971 start_time="$(date -u +%s.%N)"
sleep 5
end_time="$(date -u +%s.%N)"
elapsed="$(bc <<<"$end_time-$start_time")"
echo "Total of $elapsed seconds elapsed for process"
Total of 5.001884264 seconds elapsed for process | {
"source": [
"https://unix.stackexchange.com/questions/314365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104388/"
]
} |
314,394 | I have 20GB for my Mint-KDE 18 root partition. There is no extra home partition. I am doing nothing special, just Chrome, KRDC, Teamviewer and the partition was half empty. One thing I did was copying in Dolphin from webdavs repo on the internet to my network drive via samba. Nothing was stored on my PC. Now I got the message that my disk is full. When I open my root or home directory in Dolphin it says 0 MB free. What is the fastest way via terminal to view my directories and files, biggest first or newest etc? | du -h -d 1 / This will display the size for all of the top-level directories in your root directory in 'human readable' format. You can also just do du -h -d 1 / | grep '[0-9]\+G' to only see the ones taking a couple GB or more. For a more granular level of detail, do something like ls -R -shl / | grep '^[0-9.]\{4,12\}\+[KG]' which will show all files in and below your root directory that are 1G or over in size. ** note that you might need to prepend sudo to the commands above. edit -- just saw you want them sorted by newest or largest Try this du -h -d 1 / | grep '[0-9]\+G' | sort -h | {
"source": [
"https://unix.stackexchange.com/questions/314394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135657/"
]
} |
314,725 | I would like to know difference between user and service account. I know that e.g. Jenkins installed to ubuntu is not a user, but service account . What is use of service account? When we need them? How can I create service account? | User accounts are used by real users, service accounts are used by system services such as web servers, mail transport agents, databases etc. By convention, and only by convention, service accounts have user IDs in the low range, e.g. < 1000 or so. Except for UID 0, service accounts don't have any special privileges. Service accounts may - and typically do - own specific resources, even device special files, but they don't have superuser-like privileges. Service accounts can be created like ordinary user accounts (e.g. using useradd ). However, service accounts are typically created and configured by the package manager upon installation of the service software. So, even as an administrator you should be rarely directly concerned with the creation of service accounts. For good reason: In contrast to user accounts, service accounts often don't have a "proper" login shell, i.e. they have /usr/sbin/nologin as login shell (or, back in the old days, /bin/false ). Moreover, service accounts are typically locked, i.e. it is not possible to login (for traditional /etc/passwd and /etc/shadow this can be achieved by setting the password hash to arbitrary values such as * or x ). This is to harden the service accounts against abuse ( defense in depth ). Having individual service accounts for each service serves two main purposes: It is a security measure to reduce the impact in case of an incident with one service ( compartmentalization ), and it simplifies administration as it becomes easier to track down what resources belong to which service. See this or this answers on related questions for more details. | {
"source": [
"https://unix.stackexchange.com/questions/314725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152046/"
]
} |
314,804 | System Info OS: OS X bash: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin16) Background I want time machine to exclude a set of directories and files from all my git/nodejs project. My project directories are in ~/code/private/ and ~/code/public/ so I'm trying to use bash looping to do the tmutil . Issue Short Version If I have a calculated string variable k , how do I make it glob in or right before a for-loop: i='~/code/public/*'
j='*.launch'
k=$i/$j # $k='~/code/public/*/*.launch'
for i in $k # I need $k to glob here
do
echo $i
done In the long version below, you will see k=$i/$j . So I cannot hardcode the string in the for loop. Long Version #!/bin/bash
exclude='
*.launch
.classpath
.sass-cache
Thumbs.db
bower_components
build
connect.lock
coverage
dist
e2e/*.js
e2e/*.map
libpeerconnection.log
node_modules
npm-debug.log
testem.log
tmp
typings
'
dirs='
~/code/private/*
~/code/public/*
'
for i in $dirs
do
for j in $exclude
do
k=$i/$j # It is correct up to this line
for l in $k # I need it glob here
do
echo $l
# Command I want to execute
# tmutil addexclusion $l
done
done
done Output They are not globbed. Not what I want. ~/code/private/*/*.launch
~/code/private/*/.DS_Store
~/code/private/*/.classpath
~/code/private/*/.sass-cache
~/code/private/*/.settings
~/code/private/*/Thumbs.db
~/code/private/*/bower_components
~/code/private/*/build
~/code/private/*/connect.lock
~/code/private/*/coverage
~/code/private/*/dist
~/code/private/*/e2e/*.js
~/code/private/*/e2e/*.map
~/code/private/*/libpeerconnection.log
~/code/private/*/node_modules
~/code/private/*/npm-debug.log
~/code/private/*/testem.log
~/code/private/*/tmp
~/code/private/*/typings
~/code/public/*/*.launch
~/code/public/*/.DS_Store
~/code/public/*/.classpath
~/code/public/*/.sass-cache
~/code/public/*/.settings
~/code/public/*/Thumbs.db
~/code/public/*/bower_components
~/code/public/*/build
~/code/public/*/connect.lock
~/code/public/*/coverage
~/code/public/*/dist
~/code/public/*/e2e/*.js
~/code/public/*/e2e/*.map
~/code/public/*/libpeerconnection.log
~/code/public/*/node_modules
~/code/public/*/npm-debug.log
~/code/public/*/testem.log
~/code/public/*/tmp
~/code/public/*/typings | You can force another round of evaluation with eval , but that's not actually necessary. (And eval starts having serious problems the moment your file names contain special characters like $ .) The problem isn't with globbing, but with the tilde expansion. Globbing happens after variable expansion, if the variable is unquoted, as here (*) : $ x="/tm*" ; echo $x
/tmp Another thing that happens for unquoted expansions is word splitting, which will be an issue if the patterns in question contain characters in IFS , usually whitespace. To prevent this issue, word splitting needs to be disabled by setting IFS to the empty string. So, in the same vein, this is similar to what you did, and works: $ IFS=
$ mkdir -p ~/public/foo/ ; touch ~/public/foo/x.launch
$ i="$HOME/public/*"; j="*.launch"; k="$i/$j"
$ echo $k
/home/foo/public/foo/x.launch But with the tilde it doesn't: $ i="~/public/*"; j="*.launch"; k="$i/$j"
$ echo $k
~/public/*/*.launch This is clearly documented for Bash: The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, ... Tilde expansion happens before variable expansion so tildes inside variables are not expanded. The easy workaround is
to use $HOME or the full path instead. (* expanding globs from variables is usually not what you want) Another thing: When you loop over the patterns, as here: exclude="foo *bar"
for j in $exclude ; do
... note that as $exclude is unquoted, it's both split, and also globbed at this point. So if the current directory contains something matching the pattern, it's expanded to that: $ IFS=
$ i="$HOME/public/foo"
$ exclude="*.launch"
$ touch $i/real.launch
$ for j in $exclude ; do # glob, no match
printf "%s\n" "$i"/$j ; done
/home/foo/public/foo/real.launch
$ touch ./hello.launch
$ for j in $exclude ; do # glob, matches in current dir!
printf "%s\n" "$i"/$j ; done
/home/foo/public/foo/hello.launch # not the expected result To work around this, use an array variable instead of a split string: $ IFS=
$ exclude=("*.launch")
$ exclude+=("*.not this")
$ for j in "${exclude[@]}" ; do printf "%s\n" "$i"/$j ; done
/home/foo/public/foo/real.launch
/home/foo/public/foo/some file.not this Though note that if the patterns don't match anything, they'll by default be left as-is. So if the directory is empty, .../*.launch would be printed etc. Something similar could be done with find -path , if you don't mind what directory level the targeted files should be. E.g. to find any path ending in /e2e/*.js : $ dirs="$HOME/public $HOME/private"
$ pattern="*/e2e/*.js"
$ find $dirs -path "$pattern"
/home/foo/public/one/two/three/e2e/asdf.js We have to use $HOME instead of ~ for the same reason as before, and $dirs needs to be unquoted on the find command line so it gets split, but $pattern should be quoted so it isn't accidentally expanded by the shell. (I think you could play with -maxdepth on GNU find to limit how deep the search goes, if you care, but that's a bit of a different issue.) | {
"source": [
"https://unix.stackexchange.com/questions/314804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27121/"
]
} |
314,826 | I have a .toc (table of contents file) from my .tex document. It contains a lot of lines and some of them have the form \contentsline {part}{Some title here\hfil }{5}
\contentsline {chapter}{\numberline {}Person name here}{5} I know how to grep for part and for chapter . But I'd like to filter for those lines and have the output in a csv file like this: {Some title here},{Person name here},{5} or with no braces Some title here,Person name here,5 1. For sure the number (page number) in the last pair {} is the same for both two lines, so we can filter only the second one. 2. Note that some empty pair {} could happens or also could contain another pair {} . For example, it could be \contentsline {part}{Title with math $\frac{a}{b}$\hfil }{15} which should be filtered as Title with math $\frac{a}{b}$ edit 1: I was able to obtain the numbers without braces at end of line using grep '{part}' file.toc | awk -F '[{}]' '{print $(NF-1)}' edit 2: I was able to filter the chapter lines and remove the garbage with grep '{chapter}' file.toc | sed 's/\\numberline//' | sed 's/\\contentsline//' | sed 's/{chapter}//' | sed 's/{}//' | sed 's/^ {/{/' and the output without blank spaces was {Person name here}{5} edit 3: I was able to filter for part and clean the output with \contentsline {chapter}{\numberline {}Person name here}{5} which returns {Title with math $\frac{a}{b}$}{15} | You can force another round of evaluation with eval , but that's not actually necessary. (And eval starts having serious problems the moment your file names contain special characters like $ .) The problem isn't with globbing, but with the tilde expansion. Globbing happens after variable expansion, if the variable is unquoted, as here (*) : $ x="/tm*" ; echo $x
/tmp Another thing that happens for unquoted expansions is word splitting, which will be an issue if the patterns in question contain characters in IFS , usually whitespace. To prevent this issue, word splitting needs to be disabled by setting IFS to the empty string. So, in the same vein, this is similar to what you did, and works: $ IFS=
$ mkdir -p ~/public/foo/ ; touch ~/public/foo/x.launch
$ i="$HOME/public/*"; j="*.launch"; k="$i/$j"
$ echo $k
/home/foo/public/foo/x.launch But with the tilde it doesn't: $ i="~/public/*"; j="*.launch"; k="$i/$j"
$ echo $k
~/public/*/*.launch This is clearly documented for Bash: The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, ... Tilde expansion happens before variable expansion so tildes inside variables are not expanded. The easy workaround is
to use $HOME or the full path instead. (* expanding globs from variables is usually not what you want) Another thing: When you loop over the patterns, as here: exclude="foo *bar"
for j in $exclude ; do
... note that as $exclude is unquoted, it's both split, and also globbed at this point. So if the current directory contains something matching the pattern, it's expanded to that: $ IFS=
$ i="$HOME/public/foo"
$ exclude="*.launch"
$ touch $i/real.launch
$ for j in $exclude ; do # glob, no match
printf "%s\n" "$i"/$j ; done
/home/foo/public/foo/real.launch
$ touch ./hello.launch
$ for j in $exclude ; do # glob, matches in current dir!
printf "%s\n" "$i"/$j ; done
/home/foo/public/foo/hello.launch # not the expected result To work around this, use an array variable instead of a split string: $ IFS=
$ exclude=("*.launch")
$ exclude+=("*.not this")
$ for j in "${exclude[@]}" ; do printf "%s\n" "$i"/$j ; done
/home/foo/public/foo/real.launch
/home/foo/public/foo/some file.not this Though note that if the patterns don't match anything, they'll by default be left as-is. So if the directory is empty, .../*.launch would be printed etc. Something similar could be done with find -path , if you don't mind what directory level the targeted files should be. E.g. to find any path ending in /e2e/*.js : $ dirs="$HOME/public $HOME/private"
$ pattern="*/e2e/*.js"
$ find $dirs -path "$pattern"
/home/foo/public/one/two/three/e2e/asdf.js We have to use $HOME instead of ~ for the same reason as before, and $dirs needs to be unquoted on the find command line so it gets split, but $pattern should be quoted so it isn't accidentally expanded by the shell. (I think you could play with -maxdepth on GNU find to limit how deep the search goes, if you care, but that's a bit of a different issue.) | {
"source": [
"https://unix.stackexchange.com/questions/314826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19195/"
]
} |
314,974 | I have created symlinks to a large amount of logfiles. The syntax of the logfiles is yyyymmdd.log.gz . To simplify things I use a simple sequence without parsing it with date : for dd in $(seq -w 20150101 20151231) ; do
ln -s $origin/$dd.log.gz $target/$dd.log.gz
done How do I get rid of all the broken symlinks I just created in a single fell swoop? | This simple one-liner does the job quite fast. It requires GNU Findutils : find . -xtype l -delete A bit of explanation: -xtype l tests for links that are broken (it is the opposite of -type ) -delete deletes the files directly, no need for further bothering with xargs or -exec NOTE: -xtype l means -xtype lower case L (as in link ) ;) | {
"source": [
"https://unix.stackexchange.com/questions/314974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83367/"
]
} |
314,985 | I'm tailing logs of my own app and postgres. tail -f /tmp/myapp.log /var/log/postgresql/postgresql.main.log I need to include pgpool's logs. It used to be syslog but now it is in journalctl. Is there a way to tie tail -f && journalctl -f together? | You could use: journalctl -u service-name -f -f, --follow Show only the most recent journal entries, and continuously print new entries as they are appended to the journal. Here I've added "service-name" to distinguish this answer from others; you substitute the actual service name instead of the text service-name . | {
"source": [
"https://unix.stackexchange.com/questions/314985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193929/"
]
} |
315,050 | This is a very basic question I am just quite new to bash and couldn't figure out how to do this. Googling unfortunately didn't get me anywhere. My goal is to connect with sftp to a server, upload a file, and then disconnect. I have the following script: UpdateJar.sh #!/bin/bash
sftp -oPort=23 [email protected]:/home/kalenpw/TestWorld/plugins
#Change directory on server
#cd /home/kalenpw/TestWorld/plugins
#Upload file
put /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jar
exit the issue is, this script will establish an sftp connection and then do nothing. Once I manually type exit in connection it tries to execute the put command but because the sftp session has been closed it just says put: command not found. How can I get this to work properly? Thanks | You can change your script to pass commands in a here-document, e.g., #!/bin/bash
sftp -oPort=23 [email protected]:/home/kalenpw/TestWorld/plugins <<EOF
put /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jar
exit
EOF The << marker followed by the name ( EOF ) tells the script to pass the following lines until the name is found at the beginning of the line (by itself). | {
"source": [
"https://unix.stackexchange.com/questions/315050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171715/"
]
} |
315,063 | I added a new hard drive ( /dev/sdb ) to Ubuntu Server 16, ran parted /dev/sdb mklabel gpt and sudo parted /dev/sdb mkpart primary ext4 0G 1074GB . All went fine. Then I tried to mount the drive mkdir /mnt/storage2
mount /dev/sdb1 /mnt/storage2 It resulted in mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so. I tried mount -t ext4 /dev/sdb1 /mnt/storage2 with identical outcome. I've done this stuff many times before and have never ran into anything like this. I've already read this mount: wrong fs type, bad option, bad superblock on /dev/sdb on CentOS 6.0 to no avail. fdisk output regarding the drive Disk /dev/sdb: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0E136427-03AF-48E2-B56B-A467E991629F
Device Start End Sectors Size Type
/dev/sdb1 2048 2097149951 2097147904 1000G Linux filesystem | WARNING: This will wipe out your drive! You still need to create a ( new ) file system (aka "format the partition"). Double-check that you really want to overwrite the current content of the specified partition ! Replace XY accordingly, but double check that you are specifying the correct partition, e.g., sda2 , sdb1 : mkfs.ext4 /dev/sd XY parted / mkpart does not create a file system.
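As a minimal sketch for this particular question, assuming the partition really is /dev/sdb1 as shown in the fdisk output above (this wipes whatever is currently on it): mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/storage2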
The Parted User's Manual shows: 2.4.5 mkpart Command: mkpart [part-type fs-type name] start end Creates a new partition, without creating a new file system on that partition. [Emphasis added.] | {
"source": [
"https://unix.stackexchange.com/questions/315063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120144/"
]
} |
315,424 | I'm trying to upload all the text files within the current folder via FTP to a server location using curl. I tried the following line: curl -T "{file1.txt, file2.txt}" ftp://XXX --user YYY where XXX is the server's IP address and YYY is the username and password. I'm able to transfer file1.txt to the server successfully, but it complains about the second file saying 'Can't open 'file_name'!' I swapped the file names and it worked for file2.txt and not file1.txt. Seems like I've got the syntax wrong, but this is what the manual says? Also, ideally I would be able to do something like this: curl -T *.txt ftp://XXX --user YYY because I won't always know the names of the txt files in the current folder or the number of files to be transferred. I'm of the opinion I may have to write a bash script that collects the output of ls *.txt into an array and put it into the multiple-files-format required by curl. I've not done bash scripting before - is this the simplest way to achieve this? | Your first command should work without whitespaces: curl -T "{file1.txt,file2.txt}" ftp://XXX/ -user YYY Also note the trailing "/" in the URLs above. This is curl's manual entry about option "-T": -T, --upload-file This transfers the specified local file to the remote URL. If there is no file part in the specified URL, Curl will append the local file name. NOTE that you must use a trailing / on the last directory to really prove to Curl that there is no file name or curl will think that your last directory name is the remote file name to use. That will most likely cause the upload operation to fail. If this is used on an HTTP(S) server, the PUT command will be used. Use the file name "-" (a single dash) to use stdin instead of a given file. Alternately, the file name "." (a single period) may be specified instead of
"-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded. You can specify one -T for each URL on the command line. Each -T + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T
argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL, like this: curl -T "{file1,file2}" http://www.uploadtothissite.com or even curl -T "img[1-1000].png" ftp://ftp.picturemania.com/upload/ "*.txt" expansion does not work because curl supports only the same syntax as for URLs: You can specify multiple URLs or parts of URLs by writing part sets within braces as in: http://site .{one,two,three}.com or you can get sequences of alphanumeric series by using [] as in: ftp://ftp.numericals.com/file[1-100].txt ftp://ftp.numericals.com/file[001-100].txt (with leading zeros) ftp://ftp.letters.com/file[a-z].txt [...] When using [] or {} sequences when invoked from a command line prompt, you probably have to put the full URL within double quotes to avoid the shell from interfering with it. This also goes for other characters treated special, like for example '&', '?' and '*'. But you could use the "normal" shell globbing like this: curl -T "{$(echo *.txt | tr ' ' ',')}" ftp://XXX/ -user YYY (The last example may not work in all shells or with any kind of exotic file names.) | {
"source": [
"https://unix.stackexchange.com/questions/315424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194267/"
]
} |
315,425 | At the beginning, I have these permissions for a file: # file: jar
# owner: my_user
# group: my_user
user::rw-
group::rw-
other::r-- After running this: setfacl -m u:my_user:--- jar I get these permissions: # file: foobar
# owner: my_user
# group: my_user
user::rw-
user:my_user:---
group::rw-
mask::rw-
other::r-- I expected my_user not to have permissión to read (for example) this file, but it has.. | Your first command should work without whitespaces: curl -T "{file1.txt,file2.txt}" ftp://XXX/ -user YYY Also note the trailing "/" in the URLs above. This is curl's manual entry about option "-T": -T, --upload-file This transfers the specified local file to the remote URL. If there is no file part in the specified URL, Curl will append the local file name. NOTE that you must use a trailing / on the last directory to really prove to Curl that there is no file name or curl will think that your last directory name is the remote file name to use. That will most likely cause the upload operation to fail. If this is used on an HTTP(S) server, the PUT command will be used. Use the file name "-" (a single dash) to use stdin instead of a given file. Alternately, the file name "." (a single period) may be specified instead of
"-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded. You can specify one -T for each URL on the command line. Each -T + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T
argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL, like this: curl -T "{file1,file2}" http://www.uploadtothissite.com or even curl -T "img[1-1000].png" ftp://ftp.picturemania.com/upload/ "*.txt" expansion does not work because curl supports only the same syntax as for URLs: You can specify multiple URLs or parts of URLs by writing part sets within braces as in: http://site .{one,two,three}.com or you can get sequences of alphanumeric series by using [] as in: ftp://ftp.numericals.com/file[1-100].txt ftp://ftp.numericals.com/file[001-100].txt (with leading zeros) ftp://ftp.letters.com/file[a-z].txt [...] When using [] or {} sequences when invoked from a command line prompt, you probably have to put the full URL within double quotes to avoid the shell from interfering with it. This also goes for other characters treated special, like for example '&', '?' and '*'. But you could use the "normal" shell globbing like this: curl -T "{$(echo *.txt | tr ' ' ',')}" ftp://XXX/ -user YYY (The last example may not work in all shells or with any kind of exotic file names.) | {
"source": [
"https://unix.stackexchange.com/questions/315425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16952/"
]
} |
315,456 | I'd like to run the command foo --bar=baz <16 zeroes> How do I type the 16 zeroes efficiently*? If I hold Alt and press 1 6 0 it will repeat the next thing 160 times, which is not what I want. In emacs I can either use Alt-[number] or Ctrl-u 1 6 Ctrl-u 0 , but in bash Ctrl-u kills the currently-being-typed line and the next zero just adds a 0 to the line. If I do foo --bar=baz $(printf '0%.0s' {1..16}) Then history shows exactly the above, and not foo --bar=baz 0000000000000000 ; i.e. bash doesn't behave the way I want. ( Edit : point being, I want to input some number of zeroes without using $(...) command substitution) (*) I guess a technical definition of "efficiently" is "with O(log n) keystrokes", preferably a number of keystrokes equal to the number of digits in 16 (for all values of 16) plus perhaps a constant; the emacs example qualifies as efficient by this definition. | Try echo Alt+1 Alt+6 Ctrl+V 0 That's 6 key strokes (assuming a US/UK QWERTY keyboard at least) to insert those 16 zeros (you can hold Alt for both 1 and 6). You could also use the standard vi mode ( set -o vi ) and type: echo 0 Esc x16p (also 6 key strokes). The emacs mode equivalent and that could be used to repeat more than a single character ( echo 0 Ctrl+W Alt+1 Alt+6 Ctrl+Y ) works in zsh , but not in bash . All those will also work with zsh (and tcsh where that comes from). With zsh , you could also use padding variable expansion flags and expand them with Tab : echo ${(l:16::0:)} Tab (A lot more keystrokes obviously). With bash , you can also have bash expand your $(printf '0%.0s' {1..16}) with Ctrl+Alt+E . Note though that it will expand everything (not globs though) on the line. To play the game of the least number of key strokes, you could bind to some key a widget that expands <some-number>X to X repeated <some-number> times. And have <some-number> in base 36 to even further reduce it. With zsh (bound to F8 ): repeat-string() {
REPLY=
repeat $1 REPLY+=$2
}
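# (comment added for clarity) expand-repeat below rewrites the "<count><char>"
# just before the cursor into <char> repeated <count> times, <count> in base 36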
expand-repeat() {
emulate -L zsh
set -o rematchpcre
local match mbegin mend MATCH MBEGIN MEND REPLY
if [[ $LBUFFER =~ '^(.*?)([[:alnum:]]+)(.)$' ]]; then
repeat-string $((36#$match[2])) $match[3]
LBUFFER=$match[1]$REPLY
else
return 1
fi
}
zle -N expand-repeat
bindkey "$terminfo[kf8]" expand-repeat Then, for 16 zeros, you type: echo g0 F8 (3 keystrokes) where g is 16 in base 36. Now we can further reduce it to one key that inserts those 16 zeros, though that would be cheating. We could bind F2 to two 0 s (or two $STRING , 0 by default), F3 to 3 0 s, F1 F6 to 16 0 s... up to 19... possibilities are endless when you can define arbitrary widgets. Maybe I should mention that if you press and hold the 0 key, you can insert as many zeros as you want with just one keystroke :-) | {
"source": [
"https://unix.stackexchange.com/questions/315456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19055/"
]
} |
315,502 | The Ubuntu 16.04 server VM image apparently starts the "apt-daily.service" every
12 hours or so; this service performs various APT-related tasks like refreshing
the list of available packages, performing unattended upgrades if needed, etc. When starting from a VM "snapshot", the service is triggered immediately , as (I
presume) systemd realizes quickly that the timer should have gone off long ago. However, a running APT prevents other apt processes from running as it holds a
lock on /var/lib/dpkg . The error message indicating this looks like this: E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it? I need to disable this automated APT task until Ansible
has completed the machine setup (which typically involves installing packages);
see https://github.com/gc3-uzh-ch/elasticluster/issues/304 for more info and
context. I have tried various options to disable the "unattended upgrades" feature
through a "user data" script for cloud-init , but all of them have failed so
far. 1. Disable the systemd task systemd task apt-daily.service is triggered by apt-daily.timer . I have tried
to disable one or the other, or both, with various combinations of the following
commands; still, the apt-daily.service is started moments after the VM becomes
ready to accept SSH connections:: #!/bin/bash
systemctl stop apt-daily.timer
systemctl disable apt-daily.timer
systemctl mask apt-daily.service
systemctl daemon-reload 2. Disable config option APT::Periodic::Enable Script /usr/lib/apt/apt.systemd.daily reads a few APT configuration
variables; the setting APT::Periodic::Enable disables the functionality
altogether (lines 331--337). I have tried disabling it with the following
script:: #!/bin/bash
# cannot use /etc/apt/apt.conf.d/10periodic as suggested in
# /usr/lib/apt/apt.systemd.daily, as Ubuntu distributes the
# unattended upgrades stuff with priority 20 and 50 ...
# so override everything with a 99xxx file
cat > /etc/apt/apt.conf.d/99elasticluster <<__EOF
APT::Periodic::Enable "0";
// undo what's in 20auto-upgrade
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";
__EOF However, despite APT::Periodic::Enable having value 0 from the command line
(see below), the unattended-upgrades program is still run... ubuntu@test:~$ apt-config shell AutoAptEnable APT::Periodic::Enable
AutoAptEnable='0' 3. Remove /usr/lib/apt/apt.systemd.daily altogether The following cloud-init script removes the unattended upgrades script
altogether:: #!/bin/bash
mv /usr/lib/apt/apt.systemd.daily /usr/lib/apt/apt.systemd.daily.DISABLED Still, the task runs and I can see it in the process table! although the file
does not exist if probed from the command line:: ubuntu@test:~$ ls /usr/lib/apt/apt.systemd.daily
ls: cannot access '/usr/lib/apt/apt.systemd.daily': No such file or directory It looks as though the cloud-init script (together with the SSH command-line)
and the root systemd process execute in separate filesystems and process
spaces... Questions Is there something obvious I am missing? Or is there some namespace magic going
on which I am not aware of? Most importantly: how can I disable the apt-daily.service through a cloud-init script? | Yes, there was something obvious that I was missing. Systemd is all about concurrent start of services, so the cloud-init script is
run at the same time the apt-daily.service is triggered. By the time cloud-init gets to execute the user-specified payload, apt-get update is
already running. So the attempts 2. and 3. failed not because of some namespace
magic, but because they altered the system too late for apt.systemd.daily to
pick the changes up. This also means that there is basically no way of preventing apt.systemd.daily from running -- one can only kill it after it's started. This "user data" script takes this route:: #!/bin/bash
systemctl stop apt-daily.service
systemctl kill --kill-who=all apt-daily.service
# wait until `apt-get update` has been killed
while ! (systemctl list-units --all apt-daily.service | egrep -q '(dead|failed)')
do
sleep 1;
done
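# (optional addition, not in the original answer) stopping the timer as well
# should keep the service from being retriggered while provisioning runs:
# systemctl stop apt-daily.timer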
# now proceed with own APT tasks
apt install -y python There is still a time window during which SSH logins are possible yet apt-get will not run, but I cannot imagine another solution that works on the stock
Ubuntu 16.04 cloud image. | {
"source": [
"https://unix.stackexchange.com/questions/315502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274/"
]
} |
315,504 | What is the proper syntax for adding a dot that is not bold after a word in bold? I need to write the following sentence without the dot in bold word_in_bold . Other sentence But .B word_in_bold
. Other sentence does not generate "Other sentence". | Yes, there was something obvious that I was missing. Systemd is all about concurrent start of services, so the cloud-init script is
run at the same time the apt-daily.service is triggered. By the time cloud-init gets to execute the user-specified payload, apt-get update is
already running. So the attempts 2. and 3. failed not because of some namespace
magic, but because they altered the system too late for apt.systemd.daily to
pick the changes up. This also means that there is basically no way of preventing apt.systemd.daily from running -- one can only kill it after it's started. This "user data" script takes this route:: #!/bin/bash
systemctl stop apt-daily.service
systemctl kill --kill-who=all apt-daily.service
# wait until `apt-get update` has been killed
while ! (systemctl list-units --all apt-daily.service | egrep -q '(dead|failed)')
do
sleep 1;
done
# now proceed with own APT tasks
apt install -y python There is still a time window during which SSH logins are possible yet apt-get will not run, but I cannot imagine another solution that works on the stock
Ubuntu 16.04 cloud image. | {
"source": [
"https://unix.stackexchange.com/questions/315504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
315,812 | Typical Unix/Linux programs accept the command line inputs as an argument count ( int argc ) and an argument vector ( char *argv[] ). The first element of argv is the program name - followed by the actual arguments. Why is the program name passed to the executable as an argument? Are there any examples of programs using their own name (maybe some kind of exec situation)? | To begin with, note that argv[0] is not necessarily the program name. It is what the caller puts into argv[0] of the execve system call (e.g. see this question on Stack Overflow ). (All other variants of exec are not system calls but interfaces to execve .) Suppose, for instance, the following (using execl ): execl("/var/tmp/mybackdoor", "top", NULL); /var/tmp/mybackdoor is what is executed but argv[0] is set to top , and this is what ps or (the real) top would display. See this answer on U&L SE for more on this. Setting all of this aside: Before the advent of fancy filesystems like /proc , argv[0] was the only way for a process to learn about its own name. What would that be good for? Several programs customize their behavior depending on the name by which they were called (usually by symbolic or hard links, for example BusyBox's utilities ; several more examples are provided in other answers to this question). Moreover, services, daemons and other programs that log through syslog often prepend their name to the log entries; without this, event tracking would become next to infeasible. | {
"source": [
"https://unix.stackexchange.com/questions/315812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104091/"
]
} |
315,815 | I have two apps that use the same tcp port (and same interface) for the monitoring console, not the main port of application. I am not interested in use that port, and I cannot change the source code for SO_REUSEADDR or for changing the port. How can I have both applications running on the same OS? | To begin with, note that argv[0] is not necessarily the program name. It is what the caller puts into argv[0] of the execve system call (e.g. see this question on Stack Overflow ). (All other variants of exec are not system calls but interfaces to execve .) Suppose, for instance, the following (using execl ): execl("/var/tmp/mybackdoor", "top", NULL); /var/tmp/mybackdoor is what is executed but argv[0] is set to top , and this is what ps or (the real) top would display. See this answer on U&L SE for more on this. Setting all of this aside: Before the advent of fancy filesystems like /proc , argv[0] was the only way for a process to learn about its own name. What would that be good for? Several programs customize their behavior depending on the name by which they were called (usually by symbolic or hard links, for example BusyBox's utilities ; several more examples are provided in other answers to this question). Moreover, services, daemons and other programs that log through syslog often prepend their name to the log entries; without this, event tracking would become next to infeasible. | {
"source": [
"https://unix.stackexchange.com/questions/315815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63305/"
]
} |
315,823 | I have access to a list of folders that have a format like lastname, firstname(id) When I try to enter the folder from the terminal, it looks like cd test/lastname,\ firstname\(id\) I am not sure why there are backslashes where there aren't any spaces. My script has access to the credentials and I generated the exact format with the backslashes, but I still cannot enter the folder from the bash script. The variable I use is like this: folder="lastname,\ firstname\(id\)" When I do cd $HOME/test/$folder/ it says there is not such folder. I tried a couple of solutions suggested on different questions, but haven't worked. Putting it within double quotes on the folder variable, and also on the entire expression also didn't work. I guess I don't know what is going wrong and hence cannot get it to work. It'd be awesome if someone could help me out here! | To begin with, note that argv[0] is not necessarily the program name. It is what the caller puts into argv[0] of the execve system call (e.g. see this question on Stack Overflow ). (All other variants of exec are not system calls but interfaces to execve .) Suppose, for instance, the following (using execl ): execl("/var/tmp/mybackdoor", "top", NULL); /var/tmp/mybackdoor is what is executed but argv[0] is set to top , and this is what ps or (the real) top would display. See this answer on U&L SE for more on this. Setting all of this aside: Before the advent of fancy filesystems like /proc , argv[0] was the only way for a process to learn about its own name. What would that be good for? Several programs customize their behavior depending on the name by which they were called (usually by symbolic or hard links, for example BusyBox's utilities ; several more examples are provided in other answers to this question). Moreover, services, daemons and other programs that log through syslog often prepend their name to the log entries; without this, event tracking would become next to infeasible. | {
"source": [
"https://unix.stackexchange.com/questions/315823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194533/"
]
} |
315,826 | I have the following pipeline: ➜ echo ,cats,and,dogs, | sed -e 's/,[^,]*,[^,]*,/,,,/'
,,,dogs, I know that I could run a command like !! to "run the last command" or !:1 to "get the last arguments" but I'm wondering is there some command that I can run that will let me "get the k th command+args from a pipeline" So in this example if I wanted to pipe some other output into the sed utility I could do something like this right after running the above pipeline: $ echo ,foo,bar,baz, | %:2 where %:2 is some maybe-fictional command that I don't know, that "runs the k th command in a pipeline" Does this command exist? | To begin with, note that argv[0] is not necessarily the program name. It is what the caller puts into argv[0] of the execve system call (e.g. see this question on Stack Overflow ). (All other variants of exec are not system calls but interfaces to execve .) Suppose, for instance, the following (using execl ): execl("/var/tmp/mybackdoor", "top", NULL); /var/tmp/mybackdoor is what is executed but argv[0] is set to top , and this is what ps or (the real) top would display. See this answer on U&L SE for more on this. Setting all of this aside: Before the advent of fancy filesystems like /proc , argv[0] was the only way for a process to learn about its own name. What would that be good for? Several programs customize their behavior depending on the name by which they were called (usually by symbolic or hard links, for example BusyBox's utilities ; several more examples are provided in other answers to this question). Moreover, services, daemons and other programs that log through syslog often prepend their name to the log entries; without this, event tracking would become next to infeasible. | {
"source": [
"https://unix.stackexchange.com/questions/315826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
315,963 | I am not sure how to word this, but I often I find myself typing commands like this: cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak I usually just type out the path twice (with tab completion) or I'll copy and paste the path with the cursor. Is there some bashfoo that makes this easier to type? | There are a number of tricks (there's a duplicate to be found I think), but for this I tend to do cp /etc/prog/dir1/myconfig.yml{,.bak} which gets expanded to your command. This is known as brace expansion . In the form used here, the {} expression specifies a number of strings separated by commas. These "expand" the whole /etc/prog/dir1/myconfig.yml{,.bak} expression, replacing the {} part with each string in turn: the empty string, giving /etc/prog/dir1/myconfig.yml , and then .bak , giving /etc/prog/dir1/myconfig.yml.bak . The result is cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak These expressions can be nested: echo a{b,c,d{e,f,g}} produces ab ac ade adf adg There's a variant using numbers to produce sequences: echo {1..10} produces 1 2 3 4 5 6 7 8 9 10 and you can also specify the step: echo {0..10..5} produces 0 5 10 | {
"source": [
"https://unix.stackexchange.com/questions/315963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29731/"
]
} |
316,065 | I run the command ps -A | grep <application_name> and get a list of processes like this: 19440 ? 00:00:11 <application_name>
21630 ? 00:00:00 <application_name>
22694 ? 00:00:00 <application_name> I want to kill all processes from the list: 19440 , 21630 , 22694 . I have tried ps -A | grep <application_name> | xargs kill -9 $1 but it fails with errors: kill: illegal pid ?
kill: illegal pid 00:00:00
kill: illegal pid <application_name> How can I do this gracefully? | pkill -f 'PATTERN' Will kill all the processes that the pattern PATTERN matches. With the -f option, the whole command line (i.e. including arguments) will be taken into account. Without the -f option, only the command name will be taken into account. See also man pkill on your system. | {
"source": [
"https://unix.stackexchange.com/questions/316065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138156/"
]
} |
316,161 | According to FHS-3.0 , /tmp is for temporary files and /run is for run-time variable data. Data in /run must be deleted at next boot, which is not required for /tmp , but still programs must not assume that the data in /tmp will be available at the next program start. All this seems quite similar to me. So, what is the difference between the two? By which criterion should a program decide whether to put temporary data into /tmp or into /run ? According to the FHS: Programs may have a subdirectory of /run ; this is encouraged for
programs that use more than one run-time file. This indicates that the distinction between "system programs" and "ordinary programs" is not a criterion, neither is the lifetime of the program (like, long-running vs. short-running process). Although the following rationale is not given in the FHS, /run was introduced to overcome the problem that /var was mounted too late such that dirty tricks were needed to make /var/run available early enough. However, now with /run being introduced, and given its description in the FHS, there does not seem to be a clear reason to have both /run and /tmp . | The directories /tmp and /usr/tmp (later /var/tmp ) used to be the dumping ground for everything and everybody. The only protection mechanism for files in these directories is the sticky bit which restricts deletion or renaming of files there to their owners. As marcelm pointed out in a comment, there's in principle nothing that prevents someone to create files with names that are used by services (such as nginx.pid or sshd.pid ). (In practice, the startup scripts could remove such bogus files first, though.) /run was established for non-persistent runtime data of long lived services such as locks, sockets, pid files and the like. Since it is not writable for the public, it shields service runtime data from the mess in /tmp and jobs that clean up there. Indeed: Two distributions that I run (no pun intended) have permissions 755 on /run , while /tmp and /var/tmp (and /dev/shm for that matter) have permissions 1777. | {
"source": [
"https://unix.stackexchange.com/questions/316161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194871/"
]
} |
316,401 | I know how to mount a drive that has a corresponding device file in /dev, but I don't know how to do this for a disk image that does not represent a physical device and does not have an analogue in /dev (e.g. an ISO file or a floppy image). I know I can do this in Mac OS X by double-clicking on the disk image's icon in Finder, which will mount the drive automatically, but I would like to be able to do this from the terminal. I'm not sure if there is a general Unix way of doing this, or if this is platform-specific. | On most modern GNU systems the mount command can handle that: mount -o loop file.iso /mnt/dir To unmount you can just use the umount command umount /mnt/dir If your OS doesn't have this option you can create a loop device : losetup -f # this will print the first available loop device ex:/dev/loop0
losetup /dev/loop0 /path/file.iso #associate loop0 with the specified file
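# (side note, not in the original answer) recent util-linux can combine both steps:
# losetup -f --show /path/file.iso # attaches the image and prints the device used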
mount /dev/loop0 /mnt/dir # It may be necessary to specify the type (-t iso9660) To umount you can use -d : umount /mnt/dir
losetup -d /dev/loop0 If the file has partitions, for example a HD image, you can use the -P parameter (depending on your OS); it will map the partitions in the file content: losetup -P /dev/loop0 /path/file.iso # will create /dev/loop0
ls /dev/loop0p* #the partitions in the format /dev/loop0pX | {
"source": [
"https://unix.stackexchange.com/questions/316401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188709/"
]
} |
316,671 | How can I remove the first few characters, like the leading ; , from the selected lines using commands? I switched into insert mode, but can't figure out how to do it. ;extension=php_bz2.dll
;extension=php_curl.dll
;extension=php_fileinfo.dll
;extension=php_ftp.dll
;extension=php_gd2.dll
;extension=php_gettext.dll
;extension=php_gmp.dll
;extension=php_intl.dll
;extension=php_imap.dll
;extension=php_interbase.dll
;extension=php_ldap.dll
;extension=php_mbstring.dll
;extension=php_exif.dll ; Must be after mbstring as it depends on it
;extension=php_mysqli.dll
;extension=php_oci8_12c.dll ; Use with Oracle Database 12c Instant Client
;extension=php_openssl.dll
;extension=php_pdo_firebird.dll | Place cursor on first or last ; Press Ctrl + v to enter Visual Block mode Use arrow keys or j , k to select the ; characters you want to delete (or the other " first few characters ") Press x to delete them all at once | {
"source": [
"https://unix.stackexchange.com/questions/316671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195306/"
]
} |
316,765 | For example, in Ubuntu, there is always a .local directory in the home directory and .profile includes this line: PATH="$HOME/bin:$HOME/.local/bin:$PATH" $HOME/.local/bin does not exist by default, but if it is created it's already in $PATH and executables within can be found. This is not exactly mentioned in the XDG directory specification but seems derived from it. What I wonder is if this is common enough that it could be usually assumed to exist in the most common end user distributions. Is it, for instance, in all of the Debian derivatives, or at least the Ubuntu ones? How about the Red Hat/Fedora/CentOS ecosystem? And so on with Arch, SUSE, and what people are using nowadays. To be extra clear, this is only for $HOME/.local/bin , not $HOME/bin . Out of curiosity, feel free to include BSDs, OS/X and others if you have the information. :) | The ~/.local directories are part of the systemd file-hierarchy spec and is an extension of the xdg-user-dirs spec . It can be confusing as Debian-derived packages for bash lost the ~/.local path when they rebased to Bash 4.3. They did have it in Bash 4.2. It is a bug , and a patch has been sitting in the Debian system for a bit now. This bug is the reason Ubuntu 16.04 had ~/.local in the path and Ubuntu 17.04 did not. If you run systemd-path as a user, you will see that it is intended to be in the path. $ systemd-path user-binaries
/home/foo/.local/bin In theory, the answer to your query is: Any distro that uses systemd or wants to maintain compatibility with systemd. There is more information in file-hierarchy(7) . | {
"source": [
"https://unix.stackexchange.com/questions/316765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193366/"
]
} |
317,226 | On Linux, when you create a folder, it automatically creates two hard links to the corresponding inode.
One is the folder you asked to create, the other being the . special folder inside this folder. Example: $ mkdir folder
$ ls -li
total 0
124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 folder
$ ls -lai folder
total 0
124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 .
124593716 drwxr-xr-x 3 fantattitude staff 102 18 oct 16:52 .. As you can see, both folder and . 's inside folder have the same inode number (shown with -i option). Is there anyway to delete this special . hardlink? It's only for experimentation and curiosity. Also I guess the answer could apply to .. special file as well. I tried to look into rm man but couldn't find any way to do it. When I try to remove . all I get is: rm: "." and ".." may not be removed I'm really curious about the whole way these things work so don't refrain from being very verbose on the subject. EDIT: Maybe I wasn't clear with my post, but I want to understand the underlying mechanism which is responsible for . files and the reasons why they can't be deleted. I know the POSIX standard disallows a folder with less than 2 hardlinks, but don't really get why. I want to know if it could be possible to do it anyway. | It is technically possible to delete . , at least on EXT4 filesystems. If you create a filesystem image in test.img , mount it and create a test folder, then unmount it again, you can edit it using debugfs : debugfs -w test.img
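# (setup sketch, not spelled out in the original answer — run this before the
# debugfs command above; image size and mount point are arbitrary examples)
# truncate -s 64M test.img && mkfs.ext4 -F test.img
# mount -o loop test.img /mnt/temp && mkdir /mnt/temp/test && umount /mnt/temp
# then, inside the debugfs session: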
cd test
unlink . debugfs doesn't complain and dutifully deletes the . directory entry in the filesystem. The test directory is still usable, with one surprise: sudo mount test.img /mnt/temp
cd /mnt/temp/test
ls shows only .. so . really is gone. Yet cd . , ls . , pwd still behave as usual! I'd previously done this test using rmdir . , but that deletes the directory's inode ( huge thanks to BowlOfRed for pointing this out ), which leaves test a dangling directory entry and is the real reason for the problems encountered. In this scenario, the test folder then becomes unusable; after mounting the image, running ls produces ls: cannot access '/mnt/test': Structure needs cleaning and the kernel log shows EXT4-fs error (device loop2): ext4_lookup:1606: inode #2: comm ls: deleted inode referenced: 38913 Running e2fsck in this situation on the image deletes the test directory entirely (the directory inode is gone so there's nothing to restore). All this shows that . exists as a specific entity in the EXT4 filesystem. I got the impression from the filesystem code in the kernel that it expects . and .. to exist, and warns if they don't (see namei.c ), but with the unlink . -based test I didn't see that warning. e2fsck doesn't like the missing . directory entry, and offers to fix it: $ /sbin/e2fsck -f test.img
e2fsck 1.43.3 (04-Sep-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Missing '.' in directory inode 30721.
Fix<y>? This re-creates the . directory entry. | {
"source": [
"https://unix.stackexchange.com/questions/317226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21795/"
]
} |
317,695 | I had discovered something funny today. So, I have Kali Linux and I am trying to fully update the system using the repo http://http.kali.org/kali . All is good and well until I get 403 denied for backdoor-factory and mimikatz. At first I thought it was a server configuration error and so ignored it, but then I got curious and decided to pop the URLs into Firefox. Sure enough, my university blocks these specific URLs, but not anything else in the repo. I decided to check out if I could load the URLs in https (yes, I knew it was a long shot as most (afaik) APT servers don't even support https at all) and found out it does work, but only when accepting the certificate for archive-8.kali.org. (yes, I know invalid certs aren't good, but I figured if it is using GPG to check the validity and it uses http with no encryption anyway, then why not). Also, I know I can just use https://archive-8.kali.org/kali in place of the old url and have done so, but the reason I asked about accepting invalid certs is for if this solution of just switching domains is impossible. | You can configure certain parameters for the HTTPS transport in /etc/apt/apt.conf.d/ — see man apt.conf (section "THE ACQUIRE GROUP", subsection "https") for details. There is also a helpful example over at the trusted-apt project. For example, you can disable certificate checking completely: // Do not verify peer certificate
Acquire::https::Verify-Peer "false";
// Do not verify that certificate name matches server name
Acquire::https::Verify-Host "false"; … or just for a specific host: Acquire::https::repo.domain.tld::Verify-Peer "false";
Acquire::https::repo.domain.tld::Verify-Host "false"; These options should be placed in a newly created file in /etc/apt/apt.conf.d/ so they won't interfere with options installed by official packages (which will create separate files of their own). The filename determines the order in which the option files are parsed, so you'll probably want to choose a rather high number to have your options parsed after the ones installed by other packages. Try 80ssl-exceptions , for example. | {
"source": [
"https://unix.stackexchange.com/questions/317695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181269/"
]
} |
318,266 | I'd like to find an equivalent of cmd 1 && cmd 2 && ... && cmd 20 but with commands expressed within a for loop like for i in {1..20}
do
cmd $i
done What would you suggest to change in the second expression to find an equivalent of the first? | The equivalent to your original sequence would be: for i in {1..20}
do
cmd $i || break
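# (illustrative addition, not in the original answer) to preserve the failing
# command's exit status you could instead write: cmd $i || { rc=$?; break; }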
done The difference with Amit's answer is the script won't exit, i.e. will execute potential commands that might follow the sequence/loop. Note that the return status of the whole loop will always be true with my suggestion, this might be fixed if relevant in your case. | {
"source": [
"https://unix.stackexchange.com/questions/318266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |