source_id | question | response | metadata
---|---|---|---|
276,959 | I would like to be able to extract a tar file, such that all extracted files are placed under a certain prefix directory. Any attempt by the tar files to write to outside directories should cause the extraction to fail. As you might imagine, this is so that I can securely extract an untrusted tar file. How can I do this with GNU tar ? I came up with: tar --exclude='/*' --exclude='*/../*' --exclude='../*' -xvf untrusted_file.tar but I am not sure that this is paranoid enough. | You don't need the paranoia at all. GNU tar — and in fact any well-written tar program produced in the past 30 years or so — will refuse to extract files in the tarball that begin with a slash or that contain .. elements, by default. You have to go out of your way to force modern tar programs to extract such potentially-malicious tarballs: both GNU and BSD tar need the -P option to make them disable this protection. See the section Absolute File Names in the GNU tar manual. The -P flag isn't specified by POSIX,¹ though, so other tar programs may have different ways of coping with this. For example, the Schily Tools' star program uses -/ and -.. to disable these protections. The only thing you might consider adding to a naïve tar command is a -C flag to force it to extract things in a safe temporary directory, so you don't have to cd there first. Asides : Technically, tar isn't specified by POSIX any more at all. They tried to tell the Unix computing world that we should be using pax now instead of tar and cpio , but the computing world largely ignored them. It's relevant here to note that the POSIX specification for pax doesn't say how it should handle leading slashes or embedded .. elements. There's a nonstandard --insecure flag for BSD pax to suppress protections against embedded .. path elements, but there is apparently no default protection against leading slashes; the BSD pax man page indirectly recommends writing -s substitution rules to deal with the absolute path risk. That's the sort of thing that happens when a de facto standard remains in active use while the de jure standard is largely ignored. | {
"source": [
"https://unix.stackexchange.com/questions/276959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55436/"
]
} |
277,203 | One program created lots of nested sub-folders. I tried to use command rm -fr * to remove them all. But it's very slow. I'm wondering is there any faster way to delete them all? | The fastest way to remove them from that directory is to move them out of there, after that just remove them in the background: mkdir ../.tmp_to_remove
mv -- * ../.tmp_to_remove
rm -rf ../.tmp_to_remove & This assumes that your current directory is not the toplevel of some mounted partition (i.e. that ../.tmp_to_remove is on the same filesystem). The -- after mv (as edited in by Stéphane) is necessary if you have any file/directory names starting with a - . The above removes the files from your current directory in a fraction of a second, as it doesn't have to recursively handle the subdirectories. The actual removal of the tree from the filesystem takes longer, but since it is out of the way, its actual efficiency shouldn't matter that much. | {
"source": [
"https://unix.stackexchange.com/questions/277203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166284/"
]
} |
277,331 | When a segmentation fault occurs in Linux, the error message Segmentation fault (core dumped) will be printed to the terminal (if any), and the program will be terminated. As a C/C++ dev, this happens to me quite often, and I usually ignore it and move onto gdb , recreating my previous action in order to trigger the invalid memory reference again. Instead, I thought I might be able to perhaps use this "core" instead, as running gdb all the time is rather tedious, and I cannot always recreate the segmentation fault. My questions are three: Where is this elusive "core" dumped? What does it contain? What can I do with it? | If other people clean up ... ... you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find: core_pattern is used to specify a core dumpfile pattern name. If the first character of the pattern is a '|', the kernel will treat
the rest of the pattern as a command to run. The core dump will be
written to the standard input of that program instead of to a file. (See Core dumped, but core file is not in the current directory? on StackOverflow) According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory. But what's in there? Now what it contains is system specific, but according to the all knowing encyclopedia : [A core dump] consists of the recorded state of the working memory of a computer
program at a specific time[...]. In practice, other key pieces of
program state are usually dumped at the same time, including the
processor registers, which may include the program counter and stack
pointer, memory management information, and other processor and
operating system flags and information. ... so it basically contains everything that gdb needs (in addition to the executable that caused the fault) to analyze the fault. Yeah, but I'd like me to be happy instead of gdb You can both be happy since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump . You should then be able to analyze the specific failure instead of trying and failing to reproduce bugs. | {
"source": [
"https://unix.stackexchange.com/questions/277331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142558/"
]
} |
277,334 | I use Gmail and every so often I have to hack it back in that text/plain email should be rendered in a monospace font. This makes it much easier to skim system-generated reports, &c. The problems with this are that I have to re-hack Gmail every few years when they change their semantics, and I have more "developer" type colleagues who aren't going to hack their gmail to improve the legibility of these emails. So, I'm wondering if anyone knows an easy command to take a text file and wrap it in HTML and enough MIME stuff to correctly encode the message as ... ideally multipart alternative, with the HTML being the text in a PRE tag. I mean, if I can even feed MIME output to cron? I'd be content to pipe to an html-mime-email type command ... | If other people clean up ... ... you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find: core_pattern is used to specify a core dumpfile pattern name. If the first character of the pattern is a '|', the kernel will treat
the rest of the pattern as a command to run. The core dump will be
written to the standard input of that program instead of to a file. (See Core dumped, but core file is not in the current directory? on StackOverflow) According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory. But what's in there? Now what it contains is system specific, but according to the all knowing encyclopedia : [A core dump] consists of the recorded state of the working memory of a computer
program at a specific time[...]. In practice, other key pieces of
program state are usually dumped at the same time, including the
processor registers, which may include the program counter and stack
pointer, memory management information, and other processor and
operating system flags and information. ... so it basically contains everything that gdb needs (in addition to the executable that caused the fault) to analyze the fault. Yeah, but I'd like me to be happy instead of gdb You can both be happy since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump . You should then be able to analyze the specific failure instead of trying and failing to reproduce bugs. | {
"source": [
"https://unix.stackexchange.com/questions/277334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5571/"
]
} |
277,387 | When using the tab bar, I keep getting this error: bash: cannot create temp file for here-document: No space left on device" Any ideas? I have been doing some research, and many people talk about the /tmp file, which might be having some overflow. When I execute df -h I get: Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.1G 8.7G 0 100% /
udev 10M 0 10M 0% /dev
tmpfs 618M 8.8M 609M 2% /run
tmpfs 1.6G 0 1.6G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.6G 0 1.6G 0% /sys/fs/cgroup
/dev/sda1 511M 132K 511M 1% /boot/efi
/dev/sda4 1.8T 623G 1.1T 37% /home
tmpfs 309M 4.0K 309M 1% /run/user/116
tmpfs 309M 0 309M 0% /run/user/1000 It looks like the /dev/sda2 directory is about to explode, however if I type: $ du -sh /dev/sda2
0 /dev/sda2 It seems it's empty. I am new in Debian and I really don't know how to proceed. I used to typically access this computer via ssh. Besides this problem I have several others with this computer, they might be related, for instance each time I want to enter my user using the GUI (with root it works) I get: Xsession: warning: unable to write to /tmp: Xsession may exit with an error | Your root file system is full and hence your temp dir (/tmp, and /var/tmp for that matter) are also full. A lot of scripts and programs require some space for working files, even lock files. When /tmp is unwriteable bad things happen. You need to work out how you've filled the filesystem up. Typically places this will happen is in /var/log (check that you're cycling the log files). Or /tmp may be full. There's many, many other ways that a disk can fill up, however. du -hs /tmp /var/log You may wish to re-partition to give /tmp it's own partition (that's the old school way of doing it, but if you have plenty of disk it's fine), or map it into memory (which will make it very fast but start to cause swapping issues if you overdo the temporary files). | {
"source": [
"https://unix.stackexchange.com/questions/277387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166411/"
]
} |
277,697 | I found this command used to find duplicated files but it was quite long and made me confused. For example, if I remove -printf "%s\n" , nothing came out. Why was that? Besides, why have they used xargs -I{} -n1 ? Is there any easier way to find duplicated files? [4a-o07-d1:root/798]#find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
0bee89b07a248e27c83fc3d5951213c1 ./test1.txt
0bee89b07a248e27c83fc3d5951213c1 ./test2.txt | You can make it shorter: find . ! -empty -type f -exec md5sum {} + | sort | uniq -w32 -dD Run md5sum on the files found via the -exec action of find, then sort and use uniq to get the files that have the same md5sum, separated by newlines. | {
"source": [
"https://unix.stackexchange.com/questions/277697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138782/"
]
} |
277,793 | At my company, we download a local development database snapshot as a db.dump.tar.gz file. The compression makes sense, but the tarball only contains a single file ( db.dump ). Is there any point to archiving a single file, or is .tar.gz just such a common idiom? Why not just .gz ? | Advantages of using .tar.gz instead of .gz are that tar stores more meta-data (UNIX permissions etc.) than gzip . the setup can more easily be expanded to store multiple files .tar.gz files are very common, only-gzipped files may puzzle some users.
(cf. MelBurslan's comment.) The overhead of using tar is also very small. If it is not really needed, though, I still do not recommend tarring a single file.
There are many useful tools which can access compressed single files directly (such as zcat , zgrep etc. - also existing for bzip2 and xz ). | {
"source": [
"https://unix.stackexchange.com/questions/277793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149505/"
]
} |
277,803 | I'm setting up a NAS-Server using an Odroid XU4 and two 2TB HDDs.
Which setup would be more secure (lower risk of losing data / easier recovery): setup a RAID1 with mdadm have two separate devices, sync devices using rsync periodically I know if one drive crashed in 2. I'd lose the data created/modified since last sync, but when using a RAID it would be a bit more "difficult" to get the data from the "still working" drive. | Advantages of using .tar.gz instead of .gz are that tar stores more meta-data (UNIX permissions etc.) than gzip . the setup can more easily be expanded to store multiple files .tar.gz files are very common, only-gzipped files may puzzle some users.
(cf. MelBurslan's comment.) The overhead of using tar is also very small. If it is not really needed, though, I still do not recommend tarring a single file.
There are many useful tools which can access compressed single files directly (such as zcat , zgrep etc. - also existing for bzip2 and xz ). | {
"source": [
"https://unix.stackexchange.com/questions/277803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166699/"
]
} |
277,892 | I recently needed a single blank PDF page (8.5" x 11" size) and realized that I didn't know how to make one from the command line. Issuing touch blank.pdf produces an empty PDF file . Is there a command line tool that produces an empty PDF page ? | convert , the ImageMagick utility used in Ketan's answer, also allows you to write something like convert xc:none -page Letter a.pdf or convert xc:none -page A4 a.pdf or (for horizontal A4 paper) convert xc:none -page 842x595 a.pdf etc. , without creating an empty text file. @chbrown noticed that this creates a smaller pdf file. "xc:" means "X Constant Image" but could really be thought of as "x canvas". It's a way to specify a single block of a color, in this case none. More info at http://imagemagick.org/Usage/canvas/#solid which is the "de facto" manual for ImageMagick. [supplemented with information from pipe] (Things like pdf:a can be used to explicitly declare the format of a file. label:'some text' , gradient: , rose: and logo: seem to be other examples of special file formats.) Anko suggested posting this modification as a separate answer, so I am doing it. | {
"source": [
"https://unix.stackexchange.com/questions/277892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
277,909 | I recently updated my Arch Linux server and during that process tmux got updated. I was using tmux while the upgrade was going on and used it afterwards, but all during the same SSH session. Now, however, whenever I try to issue any tmux command I get this error: tmux: need UTF-8 locale (LC_CTYPE) but have ANSI_X3.4-1968 Here's the output from locale -a on the server: $ locale -a
C
POSIX and on my machine (Ubuntu 15.10): $ locale -a
C
C.UTF-8
en_AG
en_AG.utf8
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US.utf8
en_ZA.utf8
en_ZM
en_ZM.utf8
en_ZW.utf8
POSIX What's going on and how do I fix it? | The same exact thing happened to me. Building on what Thomas said above, I was able to fix it by uncommenting en_US.UTF-8 UTF-8 in my /etc/locale.gen file (previously none of the lines had been uncommented), then running locale-gen . | {
"source": [
"https://unix.stackexchange.com/questions/277909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63480/"
]
} |
278,351 | I have a variable for color. I use it to set color to strings, by evaluating it inside the string. However, I need to include space after the name(so that the name doesn't contain part of the text). This sometimes looks bad. How can I avoid using(printing) this space? Example(Let's say that Red=1 and NC=2 ): echo -e "$Red Note: blabla$NC". Output: 1 Note: blabla2. Expected output: 1Note: blabla2. | Just enclose variable in braces: echo -e "${Red}Note: blabla${NC}". See more detail about Parameter Expansion . See also great answer Why printf is better than echo? if you care about portability. | {
"source": [
"https://unix.stackexchange.com/questions/278351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
278,400 | I have the PID of a certain process listening on some port(s) on my OS X and I need to know which port(s) are listened on by this process. How can I do it? I know I can use lsof to know which process is listening on some port, but I need to perform the inverse operation. Thank you. UPD OS X uses BSD utils, so I have BSD netstat not Linux netstat . Linux netstat has a -p option to show PIDs, BSD netstat uses -p to specify the port and has no option to show the PID. | I've found a solution on my own by deep reading man lsof . (Yes, RT*M still helps.) Thanks @Gilles for aiming. Here is the solution: lsof -aPi -p 555 (555 is the PID). Explanation: -p to specify the PID number; -i to display only network devices; -a to AND the two conditions above (otherwise they will be ORed); -P to display port numbers (instead of port names by default). Additionally, one can use lsof -aPi4 -p 555 or lsof -aPi6 -p 555 for IPv4-only or IPv6-only addresses accordingly. If the output will be parsed by another program, the -Fn option may be helpful. With this option lsof will produce "output for other programs" instead of nicely formatted output. lsof -aPi4 -Fn -p 555 will output something like this: p554
nlocalhost:4321 PS All of it I've tested on my OS X El Capitan, but as I can see it should work on Linux too. | {
"source": [
"https://unix.stackexchange.com/questions/278400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117458/"
]
} |
278,502 | I want to access the array index variable while looping thru an array in my bash shell script. myscript.sh #!/bin/bash
AR=('foo' 'bar' 'baz' 'bat')
for i in ${AR[*]}; do
echo $i
done The result of the above script is: foo
bar
baz
bat The result I seek is: 0
1
2
3 How do I alter my script to achieve this? | You can do this using List of array keys . From the bash man page: ${!name[@]} ${!name[*]} List of array keys . If name is an array variable, expands to the list of array indices (keys) assigned in name. If name is not an array, expands to 0 if name is set and null otherwise. When @ is used and the expansion appears within double quotes, each key expands to a separate word. For your example: #!/bin/bash
AR=('foo' 'bar' 'baz' 'bat')
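# "${!AR[@]}" expands to the indices (here 0 1 2 3), while "${AR[@]}" would expand to the values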
for i in "${!AR[@]}"; do
printf '${AR[%s]}=%s\n' "$i" "${AR[i]}"
done This results in: ${AR[0]}=foo
${AR[1]}=bar
${AR[2]}=baz
${AR[3]}=bat Note that this also work for non-successive indexes: #!/bin/bash
AR=([3]='foo' [5]='bar' [25]='baz' [7]='bat')
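# only indices 3, 5, 7 and 25 are set; "${!AR[@]}" lists exactly those and skips the gaps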
for i in "${!AR[@]}"; do
printf '${AR[%s]}=%s\n' "$i" "${AR[i]}"
done This results in: ${AR[3]}=foo
${AR[5]}=bar
${AR[7]}=bat
${AR[25]}=baz | {
"source": [
"https://unix.stackexchange.com/questions/278502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167174/"
]
} |
278,561 | So I use pelican for writing my blog and I upload the whole thing using rsync. OK. But I use also Let's Encrypt and therefor need the repository .well-known preserved at the root of my website. So is there a way I can say "rsync ... --do-not-delete .well-known ..." Currently, those rep' are permission protected, but rsync doesn't like it. Here is the current rsync command (installed by pelican itself, I did not write it) : rsync -e "ssh -p $(SSH_PORT)" -P -rvzc --delete $(OUTPUTDIR)/ $(SSH_USER)@$(SSH_HOST):$(SSH_TARGET_DIR) --cvs-exclude BTW : if you have also some suggestion to improve rsync efficiency, I take it (yes, it's off topic). | From man rsync --delete
This tells rsync to delete extraneous files from the receiving
side (ones that aren’t on the sending side), but only for the
directories that are being synchronized. You must have asked
rsync to send the whole directory (e.g. "dir" or "dir/") without
using a wildcard for the directory’s contents (e.g. "dir/*")
since the wildcard is expanded by the shell and rsync thus gets
a request to transfer individual files, not the files’ parent
directory. Files that are excluded from the transfer are also
excluded from being deleted unless you use the --delete-excluded
option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section). So I think it should be rsync -e "ssh -p $(SSH_PORT)" -P -rvzc --delete \
$(OUTPUTDIR)/ \
$(SSH_USER)@$(SSH_HOST):$(SSH_TARGET_DIR) \
--cvs-exclude --exclude=/.well-known (assuming .well-known is at the root of $(SSH_TARGET_DIR)/ ) | {
"source": [
"https://unix.stackexchange.com/questions/278561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165068/"
]
} |
278,564 | It was recently pointed out to me that an alternative to cron exists, namely systemd timers. However, I know nothing about systemd or systemd timers. I have only used cron. There is a little discussion in the Arch Wiki . However, I'm looking for a detailed comparison between cron and systemd timers, focusing on pros and cons. I use Debian, but I would like a general comparison for all systems for which these two alternatives are available. This set may include only Linux distributions. Here is what I know. Cron is very old, going back to the late 1970s. The original author of cron is Ken Thompson, the creator of Unix. Vixie cron, of which the crons in modern Linux distributions are direct descendants, dates from 1987. Systemd is much newer, and somewhat controversial. Wikipedia tells me its initial release was 30 March 2010. So, my current list of advantages of cron over systemd timers is: Cron is guaranteed to be in any Unix-like system, in the sense of being an installable supported piece of software. That is not going
to change. In contrast, systemd may or may not remain in Linux
distributions in the future. It is mainly an init system, and may be
replaced by a different init system. Cron is simple to use. Definitely simpler than systemd timers. The corresponding list of advantages of systemd timers over cron is: Systemd timers may be more flexible and capable. But I'd like
examples of that. So, to summarise, here are some things it would be good to see in an answer: A detailed comparison of cron vs systemd timers, including pros and
cons of using each. Examples of things one can do that the other cannot. At least one side-by-side comparison of a cron script vs a systemd
timers script. | Here are some points about those two : checking what your cron job really does can be kind of a mess, but
all systemd timer events are carefully logged in systemd journal
like the other systemd units, which makes things much easier. systemd timers are systemd services with all their capabilities for resource management, IO and CPU scheduling, ... There is a list : system call filters user/group ids membership controls nice value OOM score IO scheduling class and priority CPU scheduling policy CPU affinity umask timer slacks secure bits network access and ,... with the dependencies option just like other systemd services
there can be dependencies on activation time. Units can be activated in different ways, also combination of
them can be configured. services can be started and triggered by
different events like user, boot, hardware state changes or for
example 5mins after some hardware plugged and ,... much easier configuration some files and straight forward tags to
do variety of customizations based on your needs with systemd
timers. Easily enable/disable the whole thing with: systemctl enable/disable and kill all the job's children with: systemctl start/stop systemd timers can be scheduled with calendars and monotonic
times, which can be really useful in case of different timezones and
,... systemd time events (calendar) are more accurate than cron (seems
1s precision) systemd time events are more meaningful, for those recurring ones
or even those that should occur once, here is an example from the document : Sat,Thu,Mon-Wed,Sat-Sun → Mon-Thu,Sat,Sun *-*-*00:00:00
Mon,Sun 12-*-* 2,1:23 → Mon,Sun 2012-*-* 01,02:23:00
Wed *-1 → Wed *-*-01 00:00:00
Wed-Wed,Wed *-1 → Wed *-*-01 00:00:00
Wed, 17:48 → Wed *-*-* 17:48:00 From the CPU usage view point systemd timer wakes the CPU on the
elapsed time but cron does that more often. Timer events can be scheduled based on finish times of
executions; some delays can be set between executions. The communication with other programs is also notable: sometimes
it's needed for some other programs to know about timers and the state of their tasks. | {
"source": [
"https://unix.stackexchange.com/questions/278564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4671/"
]
} |
278,707 | In Bash, two integers can be compared using conditional expression arg1 OP arg2 OP is one of -eq , -ne , -lt , -le , -gt , or -ge . These
arithmetic binary operators return true if arg1 is equal to, not equal
to, less than, less than or equal to, greater than, or greater than or
equal to arg2 , respectively. Arg1 and arg2 may be positive or negative
integers. or arithmetic expression: <= >= < > comparison == != equality and inequality Why do we have two different ways for comparing two integers?
When to use which? For example, [[ 3 -lt 2 ]] uses conditional expression, and (( 3 < 2 )) uses arithmetic expression. Both returns 0 when the comparison is true When comparing two integers, can these two methods always be used interchangeably? If yes, why does Bash have two methods rather than one? | Yes, we have two different ways of comparing two integers. It seems that these facts are not widely accepted in this forum: Inside the idiom [ ] the operators for arithmetic comparison are -eq , -ne , -lt , -le , -gt and -ge . As they also are inside a test command and inside a [[ ]] . Yes inside this idioms, = , < , etc. are string operators. Inside the idiom (( )) the operators for arithmetic comparison are == , != , < , <= , > , and >= . No, this is not an "Arithmetic expansion" (which start with a $ ) as $(( )) . It is defined as a "Compound Command" in man bash. Yes, it follows the same rules (internally) of the "Arithmetic expansion" but has no output, only an exit value. It could be used like this: if (( 2 > 1 )); then ... Why do we have two different ways for comparing two integers? I guess that the latter (( )) was developed as a simpler way to perform arithmetic tests. It is almost the same as the $(( )) but just has no output. Why two? Well the same as why we have two printf (external and builtin) or four test (external test , builtin test , [ and [[ ). That's the way that the shells grow, improving some area in one year, improving some other the next year. When to use which? That's a very tough question because there should be no effective difference. Of course there are some differences in the way a [ ] work and a (( )) work internally, but: which is better to compare two integers? Any one!. When comparing two integers, can these two methods always be used interchangeably? For two numbers I am compelled to say yes. But for variables, expansions, mathematical operations there may be key differences that should favor one or the other. I can not say that absolutely both are equal. For one, the (( )) could perform several math operations in sequence: if (( a=1, b=2, c=a+b*b )); then echo "$c"; fi If yes, why does Bash have two methods rather than one? If both are helpful, why not?. | {
"source": [
"https://unix.stackexchange.com/questions/278707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
278,809 | I was looking down at my keyboard and typed my password in because I thought I had already typed my login name. I pressed Enter , then when it asked for the password I pressed Ctrl + c . Should I take some precautionary measure to make sure the password isn't stored in plain text somewhere or should I change the password? Also this was on a tty on ubuntu server 16.04 LTS. | The concern is whether your password is recorded in the authentication log. If you're logging in on a text console under Linux, and you pressed Ctrl + C at the password prompt, then no log entry is generated. At least, this is true for Ubuntu 14.04 or Debian jessie with SysVinit, and probably for other Linux distributions; I haven't checked whether this is still the case on a system with Systemd. Pressing Ctrl + C kills the login process before it generates any log entry. So you're safe . On the other hand, if you actually made a login attempt, which happens if you pressed Enter or Ctrl + D at the password prompt, then the username you entered appears in plain text in the authentication logs. All login failures are logged; the log entry contains the account name, but never includes anything about the password (just the fact that the password was incorrect). You can check by reviewing the authentication logs. On Ubuntu 14.04 or Debian jessie with SysVinit, the authentication logs are in /var/log/auth.log . If this is a machine under your exclusive control, and it doesn't log remotely, and the log file hasn't been backed up yet, and you're willing and able to edit the log file without breaking anything, then edit the log file to remove the password. If your password is recorded in the system logs, you should consider it compromised and you need to change it. Logs might leak for all kinds of reasons: backups, requests for assistance… Even if you're the only user on this machine, don't risk it. Note: I haven't checked whether Ubuntu 16.04 works differently. This answer may not be generalizable to all Unix variants and is certainly not generalizable to all login methods. For example OpenSSH does log the username even if you press Ctrl + C at the password prompt (before it shows the password prompt, in fact). | {
"source": [
"https://unix.stackexchange.com/questions/278809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103531/"
]
} |
278,860 | I am trying to understand what the relationship a xxx.iso file has to the other aspects of a block device, e.g. partitions and a file system. It's common for people to describe accessing or making a .iso usable as "mounting the ISO". So to put the question another way: If I, or some piece of software, wanted to "mount" a xxx.iso file onto a USB device, is it necessary to have a pre-existing partition complete with filesystem (e.g. FAT x or ext X ) or is the .iso file - once in the "mounted" state - a lower level construct that performs the same/similar role a file system (or even a partition) does? | An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents. | {
"source": [
"https://unix.stackexchange.com/questions/278860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
278,864 | I have a statically linked busybox and want to be able to write busybox telnet foo . How do I specify the address of "foo"? Do I really need /etc/nsswitch.conf and the corresponding dynamic libraries, or does busybox contain some own simple mechanism to consult /etc/hosts ? | An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents. | {
"source": [
"https://unix.stackexchange.com/questions/278864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29241/"
]
} |
278,873 | I need to configure xfreerdp to be set as 15 bpp every time is launched, without the need of use the # xfreerdp --sec rdp -a 15 --no-bmp-cache srvaddr Opening the config.txt of xfreerdp , shows me the IP of the server, and if I add /bpp:15 or -a 15 , the program won't launch. What is the correct syntax for this config file? | An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents. | {
"source": [
"https://unix.stackexchange.com/questions/278873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167440/"
]
} |
278,877 | I installed Raspbian to a 16 GB card and expanded the filesystem. When I made a dd backup of the card, the .img file output was ~16 GB. Most of it is unused space in the ext4 partition—I'm only using like 2.5 GB in that partition. (There are two partitions—the first is FAT for boot and the second is ext4 for rootfs.) I'd like to shrink the backup.img file which resides on an Ubuntu 16.04 Sever installation (no GUI) so that I can restore the image to a card of smaller size (say 8GB for example). So far, I have mounted the ext4 partition to /dev/loop0 by using the offset value provided to me by fdisk -l backup.img . Then I used e2fsck -f /dev/loop0 and then resize2fs -M /dev/loop0 which appeared to shrink the ext4 fs... am I on the right track? I feel like parted might be next, but I have no experience with it. How do I accomplish this using only cli tools? Update: Here is the output from running fdisk -l backup.img : Disk backup.img: 14.9 GiB, 15931539456 bytes, 31116288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x79d38e92
Device Boot Start End Sectors Size Id Type
backup.img1 * 8192 124927 116736 57M e W95 FAT16 (LBA)
backup.img2 124928 31116287 30991360 14.8G 83 Linux | An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents. | {
"source": [
"https://unix.stackexchange.com/questions/278877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46524/"
]
} |
278,897 | On Dell Inspiron 7559 there is electric buzzing from beneath the keyboard that changes frequency when I touch the touchpad. However, if the CPU is busy I hear no buzzing as the frequency gets out of my hearing range. I was hoping to run a script to keep CPU busy enough and keep the laptop silent, something like this: #!/bin/sh
recursiveDelay(){ sleep 0.001; recursiveDelay;}
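# note: each call sleeps briefly and then recurses, so the recursion never unwinds and the call depth keeps growing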
recursiveDelay But it gradually gets the CPU percent higher and higher. How could I keep it at a certain percent? | An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents. | {
"source": [
"https://unix.stackexchange.com/questions/278897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94213/"
]
} |
278,939 | I'm trying to execute a command and would like to put the date and time in the output file name. Here is a sample command I'd like to run. md5sum /etc/mtab > 2016_4_25_10_30_AM.log The date time format can be anything sensible with underscores. Even UTC if the AM and PM can't be used. | If you want to use the current datetime as a filename, you can use date and command substitution . $ md5sum /etc/mtab > "$(date +"%Y_%m_%d_%I_%M_%p").log" This results in the file 2016_04_25_10_30_AM.log (although, with the current datetime) being created with the md5 hash of /etc/mtab as its contents. Please note that filenames containing 12-hour format timestamps will probably not sort by name the way you want them to sort. You can avoid this issue by using 24-hour format timestamps instead. If you don't have a requirement to use that specific date format, you might consider using an ISO 8601 compliant datetime format. Some examples of how to generate valid ISO 8601 datetime representations include: $ date +"%FT%T"
2016-04-25T10:30:00
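# (note: %F is shorthand for %Y-%m-%d and %T for %H:%M:%S)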
$ date +"%FT%H%M%S"
2016-04-25T103000
$ date +"%FT%H%M"
2016-04-25T1030
$ date +"%Y%m%dT%H%M"
20160425T1030 If you want "safer" filenames (e.g., for compatibility with Windows), you can omit the colons from the time portion. Please keep in mind that the above examples all assume local system time. If you need a time representation that is consistent across time zones, you should specify a time zone offset or UTC. You can get an ISO 8601 compliant time zone offset by using "%z" in the format portion of your date call like this: $ date +"%FT%H%M%z"
2016-04-25T1030-0400 You can get UTC time in your date call by specifying the -u flag and adding "Z" to the end of the datetime string to indicate that the time is UTC like this: $ date -u +"%FT%H%MZ"
2016-04-25T1430Z | {
"source": [
"https://unix.stackexchange.com/questions/278939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159435/"
]
} |
279,024 | I have seen wrapper script examples which, in a nutshell, look like the following: #!/bin/bash
myprog=sleep
echo "This is the wrapper script, it will exec "$myprog""
exec "$myprog" "$@" As seen above, they use exec to replace the newly created shell almost immediately with the $myprog . One could achieve the same without exec : #!/bin/bash
myprog=sleep
echo "This is the wrapper script, it will exec "$myprog""
"$myprog" "$@" In this last example, a new bash instance is started and then $myprog is started as a child process of the bash instance. What are the benefits of the first approach? | Using exec makes the wrapper more transparent, i.e. it makes it less likely that the user or application that calls the script needs to be aware that it's a relay that in turns launches the “real” program. In particular, if the caller wants to kill the program, they'll just kill the process they just launched. If the wrapper script runs a child process, the caller would need to know that they should find out the child of the wrapper and kill that instead. The wrapper script could set a trap to relay some signals, but that wouldn't work with SIGSTOP or SIGKILL which can't be caught. Calling exec also saves a bit of memory (and other resources such as PIDs etc.) since it there's no need to keep an extra shell around with nothing left to do. If there are multiple wrappers, the problems add up (difficulty in finding the right process to kill, memory overhead, etc.). Some shells (e.g. the Korn shell) automatically detect when a command is the last one and there's no active trap and put an implicit exec , but not all do (e.g. not bash). | {
"source": [
"https://unix.stackexchange.com/questions/279024",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
279,096 | What is a reasonable scalability limit of sort -u ?
(in dimensions of "line length", "amount of lines", "total file size") What is Unix alternative for files exceeding this in dimension of "amount of lines"? Of course I can easily implement one, but I wondered if there is something that can be done with few standard Linux commands. | The sort that you find on Linux comes from the coreutils package and implements an External R-Way merge . It splits up the data into chunks that it can handle in memory, stores them on disc and then merges them. The chunks are done in parallel, if the machine has the processors for that. So if there was to be a limit, it is the free disc space that sort can use to store the temporary files it has to merge, combined with the result. | {
"source": [
"https://unix.stackexchange.com/questions/279096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9689/"
]
} |
279,141 | I am using a freshly installed Ubuntu-Gnome 16.04 , and I want to set caps-lock to change keyboard layout (single key, not key combination). I used to have this Linux-mint , and I grew used to it. I looked into the setting manager, but there is doesn't accept caps-lock as a valid input. I also looked int gnome-tweak-tools but there I can't find the keyboard layout switching at all. Is this possible? how? | You could set the corresponding xkb option via dconf-editor . Navigate to org > gnome > desktop > input-sources and add grp:caps_toggle to your xkb-options : Note each option is enclosed in single quotes, options are separated by comma+space. On older gnome3 releases you could do that also via System Settings > Keyboard (or gnome-control-center keyboard in terminal) > Typing > Modifiers-only switch to next source to Caps Lock This was removed from recent releases ( see sanmai's answer for alternatives to dconf-editor ). | {
"source": [
"https://unix.stackexchange.com/questions/279141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7140/"
]
} |
279,397 | I know this question isn't very new but it seems as if I didn't be able to fix my problem on myself. ldd generate the following output u123@PC-Ubuntu:~$ ldd /home/u123/Programme/TestPr/Debug/TestPr
linux-vdso.so.1 => (0x00007ffcb6d99000)
libcsfml-window.so.2.2 => not found
libcsfml-graphics.so.2.2 => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fcebb2ed000)
/lib64/ld-linux-x86-64.so.2 (0x0000560c48984000) Which is the correct way to tell ld the correct path? | If your libraries are not on a standard path, then you either need to add them to the path or add the non-standard path to LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<Your_non-Standard_path> Once you have done any one of the above things, you need to update the dynamic linker's run-time bindings by executing the command below: sudo ldconfig UPDATE: You can make the changes permanent by either writing the above export line into one of your startup files (e.g. ~/.bashrc) OR, if the underlying library is not conflicting with any other library, putting it into one of the standard library paths (e.g. /lib, /usr/lib) | {
"source": [
"https://unix.stackexchange.com/questions/279397",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167795/"
]
} |
279,569 | I need to delete a "~" folder in my home directory. I realize now that rm -R ~ is a bad choice. Can I safely use rm -R "~" ? | In theory yes. In practice usually also yes. If you're calling a shell script or alias that does something weird, then maybe no. You could use echo to see what a particular command would be expanded to by the shell: $ echo rm -R ~
rm -R /home/frostschutz
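# (the unquoted ~ was expanded by the shell to the home directory)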
$ echo rm -R "~"
rm -R ~ Note that echo removes the "" so you should not copy-paste what it prints. It just shows that if you give "~" , the command literally sees ~ and not the expanded /home/frostschutz path. If you have any doubt about any command, how about starting out with something that is less lethal if it should go wrong? In your case you could start out with renaming instead of deleting it outright. $ mv "~" delete-me
$ ls delete-me
# if everything is in order
$ rm -R delete-me For confusing file names that normally shouldn't even exist (such as ~ and other names starting with ~ or , or containing newlines, etc.), it's better to be safe than sorry. Also consider using tab completion (type ls ~<TAB><TAB><TAB> ), most shells try their best to take care of you, this also helps avoid mistyping regular filenames. | {
"source": [
"https://unix.stackexchange.com/questions/279569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162496/"
]
} |
279,652 | I want to read whole file and make it waiting for input, just like tail -f but with the complete file displayed. The length of this file will always change, because this is a .log file. How can I do it, if I don't know length of the file? | tail lets you add -n to specify the number of lines to display from the end, which can be used in conjunction with -f . If the argument for -n starts with + that is the count of lines from the beginning ( 0 and 1 displaying the whole file, 2 indicating skip the first line, as indicated by @Ben). So just do: tail -f -n +0 filename If your log files get rotated, you can add --retry (or combine -f and --retry into -F as @Hagen suggested) Also note that in a graphical terminal, you can use the mouse and PageUp / PageDown to scroll back into the history (assuming your buffer is large enough), this information stays there even if you use Ctrl + C to exit tail . If you use less this is far less convenient and AFAIK you have to use the keyboard for scrolling and I don't know of a means to keep less from deinitialising termcap if you forget to start it with -X . | {
"source": [
"https://unix.stackexchange.com/questions/279652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167951/"
]
} |
279,660 | select *
from
emp;
select *
from
dept;
selection
end; I want to obtain this output: select *
from
emp;
select *
from
dept; I tried by using: awk '/select/{a=1} a; /;/{a=0}' XXARXADLMT.txt Output: select *
from
emp;
select *
from
dept;
selection
end; | tail lets you add -n to specify the number of lines to display from the end, which can be used in conjunction with -f . If the argument for -n starts with + that is the count of lines from the beginning ( 0 and 1 displaying the whole file, 2 indicating skip the first line, as indicated by @Ben). So just do: tail -f -n +0 filename If your log files get rotated, you can add --retry (or combine -f and --retry into -F as @Hagen suggested) Also note that in a graphical terminal, you can use the mouse and PageUp / PageDown to scroll back into the history (assuming your buffer is large enough), this information stays there even if you use Ctrl + C to exit tail . If you use less this is far less convenient and AFAIK you have to use the keyboard for scrolling and I don't know of a means to keep less from deinitialising termcap if you forget to start it with -X . | {
"source": [
"https://unix.stackexchange.com/questions/279660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167819/"
]
} |
279,727 | Assume I have 1 - funct1
2- funct 2
3 - funct 3
4 line 4 how can I copy line 1 and 3 (not a range of lines) and paste them, for example at line 8? If I do this in way with | arg like ( 1y|3y ), I would yank lines to several registers, right? But how can I put from several registers at once? | You can append to a register instead of erasing it by using the upper-case letter instead of the lower-case one. For example: :1y a # copy line 1 into register a (erases it beforehand)
:3y A # copy line 3 into register a (after its current content)
8G # go to line 8
"ap # print register a | {
"source": [
"https://unix.stackexchange.com/questions/279727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162458/"
]
} |
279,813 | Is there a way to apply the dos2unix command so that it runs against all of the files in a folder and it's subfolders? man dos2unix doesn't show any -r or similar options that would make this straight forward? | find /path -type f -print0 | xargs -0 dos2unix -- | {
"source": [
"https://unix.stackexchange.com/questions/279813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168067/"
]
} |
280,015 | This process keeps hogging my bandwidth: What does this process do? Is it safe to kill it? Is it safe to
remove the package as a whole (to prevent it from starting up ever
again) Or should I just prevent it from automatically running in the background again? I am running Fedora 23. | PackageKit is being run by GNOME Software. It's not doing automatic updates, but it is downloading them so they're ready. There is work on making this be smarter, including not doing it over bandwidth-constrained connections, but it hasn't landed yet. In the meantime, you can disable this by running: dconf write /org/gnome/software/download-updates false or by using the GUI dconf editor and unchecking the download-updates box under org > gnome > software : Note that this is per-user. For changing the default for everyone, see the GNOME docs . | {
"source": [
"https://unix.stackexchange.com/questions/280015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166084/"
]
} |
280,067 | I have a bash script that I want to remove files from various directories. Frequently, they won't be there because they weren't generated and that's fine. Is there a way to get the script not to report that error, but if rm has some other output to report that? Alternately, is there a better command to use to remove the file that'll be less noisy? | Use the -f option. It will silently ignore nonexistent files. From man rm : -f, --force
ignore nonexistent files and arguments, never prompt [The "never prompt" part means that (a) -f overrides any previously specified -i or -I option, and (b) write-protected files will be deleted without asking.] Example Without -f , rm will complain about missing files: $ rm nonesuch
rm: cannot remove ‘nonesuch’: No such file or directory With -f , it will be silent: $ rm -f nonesuch
$ | {
"source": [
"https://unix.stackexchange.com/questions/280067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17808/"
]
} |
280,105 | Is there a way to monitor temperature or reads/writes of an NVMe drive (in this case an Intel 750)? hdparm , udisksctl , smartctl , and hddtemp all seem to lack this capability, and Google searches have been fruitless. For the curious, this is the only difficulty I've faced running Fedora 23 (Workstation) using NVMe for the system drive. | Using nvme-cli, I can get the temperature from a Samsung 950 Pro with this command: nvme smart-log /dev/nvme0 | grep "^temperature" You can get other information too: nvme smart-log /dev/nvme0
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning : 0
temperature : 45 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 0%
data_units_read : 3,020,387
data_units_written : 2,330,810
host_read_commands : 26,960,077
host_write_commands : 15,668,236
controller_busy_time : 65
power_cycles : 98
power_on_hours : 281
unsafe_shutdowns : 68
media_errors : 0
num_err_log_entries : 63
Warning Temperature Time : 0
Critical Composite Temperature Time : 0 Note: using kernel 4.6.4 For users access: /etc/sudoers # For users group
%users ALL = NOPASSWD: nvme smart-log /dev/nvme0 | grep "^temperature"
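# note: sudoers entries generally expect the full path to the command (e.g. /usr/sbin/nvme), and the "| grep" part is run by the caller's shell, not by sudo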
# For all
ALL ALL = NOPASSWD: nvme smart-log /dev/nvme0 | grep "^temperature" | {
"source": [
"https://unix.stackexchange.com/questions/280105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119389/"
]
} |
280,453 | Could you explain the following sentences from Bash manual about $_ , especially the parts in bold, maybe with some examples? At shell startup, set to the absolute pathname used to
invoke the shell or shell script being executed as passed in the environment
or argument list . Subsequently , expands to the last argument to the previous
command, after expansion. Also set to the full pathname used to invoke each
command executed and placed in the environment exported to that command . When checking mail , this parameter holds the name of the mail file. | I agree it's not very clear. 1. At shell startup, if the _ variable was in the environment that bash received , then bash leaves it untouched. In particular, if that bash shell was invoked by another bash shell (though zsh , yash and some ksh implementations also do
it), then that bash shell will have set the _ environment
variable to the path of the command being executed (that's the 3rd
point in your question). For instance, if bash is invoked to
interpret a script as a result of another bash shell interpreting: bash-script some args That bash will have passed _=/path/to/bash-script in the
environment given to bash-script , and that's what the initial
value of the $_ bash variable will be in the bash shell that
interprets that script. $ env -i _=whatever bash -c 'echo "$_"'
whatever Now, if the invoking application doesn't pass a _ environment
variable , the invoked bash shell will initialise $_ to the argv[0] it receives
itself which could be bash , or /path/to/bash or /path/to/some-script or anything else (in the example above, that
would be /bin/bash if the she-bang of the script was #! /bin/bash or /path/to/bash-script depending on the
system ). So that text is misleading as it describes the behaviour of the
caller which bash has no control over. The application that invoked bash may very well not set $_ at all (in practice, only some
shells and a few rare interactive applications do, execlp() doesn't
for instance), or it could use it for something completely different
(for instance ksh93 sets it to *pid*/path/to/command ). $ env bash -c 'echo "$_"'
/usr/bin/env (env did not set it to /bin/bash, so the value we
get is the one passed to env by my interactive shell)
$ ksh93 -c 'bash -c "echo \$_"'
*20042*/bin/bash 2. Subsequently The Subsequently is not very clear either. In practice, that's as soon as bash interprets a simple command in the current shell environment. In the case of an interactive shell , that will be on the first simple command interpreted from /etc/bash.bashrc for instance. For instance, at the prompt of an interactive shell: $ echo "$_"
] (the last arg of the last command from my ~/.bashrc)
$ f() { echo test; }
$ echo "$_"
] (the command-line before had no simple command, so we get
the last argument of that previous echo commandline)
$ (: test)
$ echo "$_"
] (simple command, but in a sub-shell environment)
$ : test
$ echo "$_"
test For a non-interactive shell , it would be the first command in $BASH_ENV or of the code fed to that shell if $BASH_ENV is not
set. 3. When Bash executes a command The third point is something different and is hinted in the discussion above. bash , like a few other shells will pass a _ environment variable to commands it executes that contains the path that bash used as the first argument to the execve() system calls. $ env | grep '^_'
_=/usr/bin/env 4. When checking mail The fourth point is described in more details in the description of the MAILPATH variable: 'MAILPATH' A colon-separated list of filenames which the shell periodically
checks for new mail . Each list entry can specify the message that
is printed when new mail arrives in the mail file by separating the
filename from the message with a '?'.
When used in the text of the
message, '$_' expands to the name of the current mail file. Example: $ MAILCHECK=1 MAILPATH='/tmp/a?New mail in <$_>' bash
bash$ echo test >> /tmp/a
New mail in </tmp/a> | {
"source": [
"https://unix.stackexchange.com/questions/280453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
280,524 | I need to concatenate chunks from two files: if I needed concatenate whole files, I could simply do cat file1 file2 > output But I need to skip first 1MB from the first file, and I only want 10 MB from the second file. Sounds like a job for dd . dd if=file1 bs=1M count=99 skip=1 of=temp1
dd if=file2 bs=1M count=10 of=temp2
cat temp1 temp2 > final_output Is there a possibility to do this in one step? ie, without the need to save the intermediate results? Can I use multiple input files in dd ? | dd can write to stdout too. ( dd if=file1 bs=1M count=99 skip=1
dd if=file2 bs=1M count=10 ) > final_output | {
"source": [
"https://unix.stackexchange.com/questions/280524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
280,615 | Is there a command line tool which shows in real time how much space remains on my external hard drive? | As Julie said, you can use df to display free space, passing it either the mount point or the device name: df --human-readable /home
df --human-readable /dev/sda1 You'll get something like this: Filesystem Size Used Avail Use% Mounted on
/dev/sda1 833G 84G 749G 10% /home To run it continuously, use watch . Default update interval is 2 seconds, but you can tweak that with --interval : watch --interval=60 df --human-readable /dev/sda1 | {
"source": [
"https://unix.stackexchange.com/questions/280615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4430/"
]
} |
280,767 | I found something for videos, which looks like this. ffmpeg -i * -c:v libx264 -crf 22 -map 0 -segment_time 1 -g 1 -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*9)" -f segment output%03d.mp4 I tried using that for an audio file, but only the first audio file contained actual audio, the others were silent, other than that it was good, it made a new audio file for every second. Does anyone know what to modify to make this work with audio files, or another command that can do the same? | This worked for me when I tried it on a mp3 file. $ ffmpeg -i somefile.mp3 -f segment -segment_time 3 -c copy out%03d.mp3 Where -segment_time is the amount of time you want per each file (in seconds). References Splitting an audio file into chunks of a specified length 4.22 segment, stream_segment, ssegment - ffmpeg documentation | {
"source": [
"https://unix.stackexchange.com/questions/280767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
281,158 | I have about 200 GB of log data generated daily, distributed among about 150 different log files. I have a script that moves the files to a temporary location and does a tar-bz2 on the temporary directory. I get good results as 200 GB logs are compressed to about 12-15 GB. The problem is that it takes forever to compress the files. The cron job runs at 2:30 AM daily and continues to run till 5:00-6:00 PM. Is there a way to improve the speed of the compression and complete the job faster? Any ideas? Don't worry about other processes and all, the location where the compression happens is on a NAS , and I can run mount the NAS on a dedicated VM and run the compression script from there. Here is the output of top for reference: top - 15:53:50 up 1093 days, 6:36, 1 user, load average: 1.00, 1.05, 1.07
Tasks: 101 total, 3 running, 98 sleeping, 0 stopped, 0 zombie
Cpu(s): 25.1%us, 0.7%sy, 0.0%ni, 74.1%id, 0.0%wa, 0.0%hi, 0.1%si, 0.1%st
Mem: 8388608k total, 8334844k used, 53764k free, 9800k buffers
Swap: 12550136k total, 488k used, 12549648k free, 4936168k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7086 appmon 18 0 13256 7880 440 R 96.7 0.1 791:16.83 bzip2
7085 appmon 18 0 19452 1148 856 S 0.0 0.0 1:45.41 tar cjvf /nwk_storelogs/compressed_logs/compressed_logs_2016_30_04.tar.bz2 /nwk_storelogs/temp/ASPEN-GC-32459:nkp-aspn-1014.log /nwk_stor
30756 appmon 15 0 85952 1944 1000 S 0.0 0.0 0:00.00 sshd: appmon@pts/0
30757 appmon 15 0 64884 1816 1032 S 0.0 0.0 0:00.01 -tcsh | The first step is to figure out what the bottleneck is: is it disk I/O, network I/O, or CPU?
If the bottleneck is the disk I/O, there isn't much you can do. Make sure that the disks don't serve many parallel requests as that can only decrease performance.
If the bottleneck is the network I/O, run the compression process on the machine where the files are stored: running it on a machine with a beefier CPU only helps if the CPU is the bottleneck.
If the bottleneck is the CPU, then the first thing to consider is using a faster compression algorithm. Bzip2 isn't necessarily a bad choice — its main weakness is decompression speed — but you could use gzip and sacrifice some size for compression speed, or try out other formats such as lzop or lzma. You might also tune the compression level: bzip2 defaults to -9 (maximum block size, so maximum compression, but also longest compression time); set the environment variable BZIP2 to a value like -3 to try compression level 3. This thread and this thread discuss common compression algorithms; in particular this blog post cited by derobert gives some benchmarks which suggest that gzip -9 or bzip2 with a low level might be a good compromise compared to bzip2 -9 . This other benchmark which also includes lzma (the algorithm of 7zip, so you might use 7z instead of tar --lzma ) suggests that lzma at a low level can reach the bzip2 compression ratio faster. Just about any choice other than bzip2 will improve decompression time. Keep in mind that the compression ratio depends on the data, and the compression speed depends on the version of the compression program, on how it was compiled, and on the CPU it's executed on.
Another option if the bottleneck is the CPU and you have multiple cores is to parallelize the compression. There are two ways to do that. One that works with any compression algorithm is to compress the files separately (either individually or in a few groups) and use parallel to run the archiving/compression commands in parallel. This may reduce the compression ratio but increases the speed of retrieval of an individual file and works with any tool. The other approach is to use a parallel implementation of the compression tool; this thread lists several.
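Both routes can be sketched roughly like this (illustrative only: pigz is a separately installed parallel gzip, the job count is a placeholder, and the paths are taken from the question):
# one archive per log file, several tar jobs at once via GNU parallel
ls /nwk_storelogs/temp | parallel -j4 'tar cjf /nwk_storelogs/compressed_logs/{}.tar.bz2 /nwk_storelogs/temp/{}'
# or a single archive, with a multi-threaded compressor using all cores
tar cf - /nwk_storelogs/temp | pigz -6 > /nwk_storelogs/compressed_logs/logs.tar.gz
| {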
"source": [
"https://unix.stackexchange.com/questions/281158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138527/"
]
} |
281,309 | I used to believe that the appropriate way of breaking the lines in a list is command1 && \
command2 It turned out that it isn't so , one doesn't need \ $ [ $(id -u) -eq 1000 ] &&
> echo yes
yes The same works with pipes | the same way. The bash man page sections on pipelining and lists didn't shed any light on this. Thus , my question is : what is the proper usage of \ to break long lines ? | If the statement would be correct without continuation, you need to use \ . Therefore, the following works without a backslash, as you can't end a command with a && : echo 1 &&
echo 2 Here, you need the backslash: echo 1 2 3 \
4 or echo 1 \
&& echo 2 Otherwise, bash would execute the command right after processing the first line without waiting for the next one. | {
"source": [
"https://unix.stackexchange.com/questions/281309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
281,375 | I have observed that between repeated boots of the same system, the mapping between the device names /dev/sda , /dev/sdb/ ... and the physical hard drives stays the same. However I am not sure if it remains the same if I plug the hard drives into different sockets on the motherboard or if I add/remove drives. What guarantees does Linux make regarding the mapping of device names to the physical hard drives? Which rules does it use to map physical hard drives to files in /dev/? | The drive names are (on a typical Linux system) decided by the kernel (as the device must first be detected there), and may later be modified by udev. How it decides which hardware maps to which block special file is an implementation detail that will depend your udev configuration, kernel configuration, module setup, and many other things, too (including plain luck). The mapping of a device to a drive letter is not guaranteed to always be the same even with the same hardware and configuration (there are some systems which are particularly prone to swapping around device names due to race conditions, like those in parallel module loading). To answer the question you didn't ask, don't use /dev/sd* as an identifier for anything unless you are sure about the device you're mounting beforehand (for example, you are manually mounting after checking with fdisk and/or blkid ). Instead, use filesystem labels, filesystem UUIDs, or disk IDs to ensure you are pointing to the correct device, partition, or filesystem by its properties, instead of its detection order. You can find the disk IDs in /dev/disk/by-id , which is a convienient place to mount from, and guarantees that you're always using the same disk. To find which disk IDs you can use for the partition currently on /dev/sda1 , for example, you can use find : $ find -L /dev/disk/by-id -samefile /dev/sda1
/dev/disk/by-id/wwn-0x5000cca22dd9fc29-part1
/dev/disk/by-id/ata-HGST_HUS724020ALA640_PN1181P6HV51ZW-part1
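In /etc/fstab the same idea looks like either of the following lines (hypothetical mount point and UUID; substitute the values reported by blkid or by the find command above):
UUID=0a1b2c3d-1111-2222-3333-444455556666 /data ext4 defaults 0 2
/dev/disk/by-id/ata-HGST_HUS724020ALA640_PN1181P6HV51ZW-part1 /data ext4 defaults 0 2
| {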
"source": [
"https://unix.stackexchange.com/questions/281375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29428/"
]
} |
281,774 | I installed supervisor on ubuntu server 16.04. $ sudo apt-get install supervisor
$ sudo update-rc.d supervisor defaults After rebooting, supervisor didn't get started automatically. Checked the status: qinking126@nas:~$ sudo service supervisor status
[sudo] password for qinking126:
● supervisor.service - Supervisor process control system for UNIX
Loaded: loaded (/lib/systemd/system/supervisor.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: http://supervisord.org I'm not sure why it's inactive (dead). What do I need to check to get it fixed? | I am convinced that this issue is a packaging bug in the Supervisor package in Ubuntu 16.04 and it seems to have been caused by the switch to systemd: This issue was already reported upstream on the Supervisor project's issue tracker (where nothing can be fixed) in issue 735 . I was bitten by this issue a few days ago and was astonished to find that this issue was never reported to the package maintainers, even though Ubuntu 16.04 was released quite a while ago and this breaks backwards compatibility and expected behavior. This is why I decided to report this issue to the package maintainers in bug 1594740 . I documented a simple workaround in bug 1594740 that doesn't require any configuration files to be created - you just need to enable and start the Supervisor daemon after installation of the package: # Make sure Supervisor comes up after a reboot.
sudo systemctl enable supervisor
# Bring Supervisor up right now.
sudo systemctl start supervisor I'm not so sure that this will be fixed in Ubuntu 16.04 but at least now there's a central place to gather complaints and document workarounds (in bug 1594740 , not in issue 735 ). If anyone was bitten by this issue, consider voicing your concern in bug 1594740 to convince the package maintainers to fix this issue. Thanks! Update (2017-03-24): Yesterday a fix for this issue was released to xenial-updates as a result of bug 1594740 so new installations should no longer run into this issue. | {
"source": [
"https://unix.stackexchange.com/questions/281774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169392/"
]
} |
281,784 | I want to replace spaces (i.e., " ") or new lines (i.e., carriage return) with underscores in a special case - when they occur between two specific strings. I have html pages and I want to replace the blank spaces and new lines with underscores when they occur between two specific strings. Example: lots of text...
page_5.html months ago
This is the password: 6743412 <http://website.com etc...
more text... I want to go from above to below: lots of text...
page_5.html months ago__This_is_the_password:_6743412_<http://website.com etc...
more text... Basically, I want to do the replacement only between the strings ago and <http It is repetitive html so if I can get this to work it would be very helpful and easy to extract the modified text later. Something using sed or awk would be best for me. | I am convinced that this issue is a packaging bug in the Supervisor package in Ubuntu 16.04 and it seems to have been caused by the switch to systemd: This issue was already reported upstream on the Supervisor project's issue tracker (where nothing can be fixed) in issue 735 . I was bitten by this issue a few days ago and was astonished to find that this issue was never reported to the package maintainers, even though Ubuntu 16.04 was released quite a while ago and this breaks backwards compatibility and expected behavior. This is why I decided to report this issue to the package maintainers in bug 1594740 . I documented a simple workaround in bug 1594740 that doesn't require any configuration files to be created - you just need to enable and start the Supervisor daemon after installation of the package: # Make sure Supervisor comes up after a reboot.
sudo systemctl enable supervisor
# Bring Supervisor up right now.
sudo systemctl start supervisor I'm not so sure that this will be fixed in Ubuntu 16.04 but at least now there's a central place to gather complaints and document workarounds (in bug 1594740 , not in issue 735 ). If anyone was bitten by this issue, consider voicing your concern in bug 1594740 to convince the package maintainers to fix this issue. Thanks! Update (2017-03-24): Yesterday a fix for this issue was released to xenial-updates as a result of bug 1594740 so new installations should no longer run into this issue. | {
"source": [
"https://unix.stackexchange.com/questions/281784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148402/"
]
} |
281,794 | I want to rename all the files in a folder with PREFIX+COUNTER+FILENAME for ex.
input: england.txt
canada.txt
france.txt output: CO_01_england.txt
CO_02_canada.txt
CO_03_france.txt | This does what you ask: n=1; for f in *.txt; do mv "$f" "CO_$((n++))_$f"; done How it works n=1 This initializes the variable n to 1. for f in *.txt; do This starts a loop over all files in the current directory whose names end with .txt . mv "$f" "CO_$((n++))_$f" This renames the files to have the CO_ prefix with n as the counter. The ++ symbol tells bash to increment the variable n . done This signals the end of the loop. Improvement This version uses printf which allows greater control over how the number will be formatted: n=1; for f in *.txt; do mv "$f" "$(printf "CO_%02i_%s" "$n" "$f")"; ((n++)); done In particular, the %02i format will put a leading zero before the number when n is still in single digits. | {
"source": [
"https://unix.stackexchange.com/questions/281794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169407/"
]
} |
281,858 | I found three configuration files. .xinitrc .xsession .xsessionrc I know that the first one is for using startx and the second and third are used when using a display manager. But what is the difference between the last two? | ~/.xinitrc is executed by xinit , which is usually invoked via startx . This program is executed after logging in: first you log in on a text console, then you start the GUI with startx . The role of .xinitrc is to start the GUI part of the session, typically by setting some GUI-related settings such as key bindings (with xmodmap or xkbcomp ), X resources (with xrdb ), etc., and to launch a session manager or a window manager (possibly as part of a desktop environment). ~/.xsession is executed when you log in in graphical mode (on a display manager ) and the display manager invokes the “custom” session type. (With the historical display manager xdm, .xsession is always executed, but with modern display managers that give the user a choice of session type, you usually need to pick “custom” for .xsession to run.) Its role is both to set login-time parameters (such as environment variables) and to start the GUI session. A typical .xsession is #!/bin/sh
. ~/.profile
. ~/.xinitrc ~/.xsessionrc is executed on Debian (and derivatives such as Ubuntu, Linux Mint, etc.) by the X startup scripts on a GUI login, for all session types and (I think) from all display managers. It's also executed from startx if the user doesn't have a .xinitrc , because in that case startx falls back on the same session startup scripts that are used for GUI login. It's executed relatively early, after loading resources but before starting any program such as a key agent, a D-Bus daemon, etc. It typically sets variables that can be used by later startup scripts. It doesn't have any official documentation that I know of, you have to dig into the source to see what works. .xinitrc and .xsession are historical features of the X11 Window system so they should be available and have a similar behavior on all Unix systems. On the other hand, .xsessionrc is a Debian feature and distributions that are not based on Debian don't have it unless they've implemented something similar. .xprofile is very similar to .xsessionrc , but it's part of the session startup script some display managers including GDM (the GNOME display manager) and lightdm, but not others such as xdm and kdm. | {
"source": [
"https://unix.stackexchange.com/questions/281858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147698/"
]
} |
281,938 | According to GNU documentation: ‘\<’ Match the empty string at the beginning of word.
‘\>’ Match the empty string at the end of word. My /etc/fstab looks like this: /dev/sdb1 /media/fresh ext2 defaults 0 0 I want grep to return TRUE/FALSE for the existence of /media/fresh. I tried to use \< and \> but it didn't work. Why? egrep '\</media/fresh\>' /etc/fstab Workaround: egrep '[[:blank:]]/media/fresh[[:blank:]]' /etc/fstab But it looks uglier. My grep is 2.5.1 | \< and \> match empty string at the begin and end of a word respectively and only word constituent characters are: [[:alnum:]_] From man grep : Word-constituent characters are letters, digits, and the underscore. So, your Regex is failing because / is not a valid word constituent character. Instead as you have spaces around, you can use -w option of grep to match a word: grep -wo '/media/fresh' /etc/fstab Example: $ grep -wo '/media/fresh' <<< '/dev/sdb1 /media/fresh ext2 defaults 0 0'
/media/fresh | {
"source": [
"https://unix.stackexchange.com/questions/281938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11437/"
]
} |
282,086 | I use the auto generated rules that come from OpenWRT as an example of NAT reflection (NAT loopback). So let's pretend there's a network 192.168.1.0/24 with two hosts (+ router): 192.168.1.100 and 192.168.1.200. The router has two interfaces LAN (br-lan) and WAN (eth0). The LAN interface has an IP 192.168.1.1 and the WAN interface has an IP 82.120.11.22 (public). There's a www server on 192.168.1.200. We want to connect from 192.168.1.100 to the web server using the public IP address. If you wanted to redirect WAN->LAN so people from the internet can visit the web server, you would add the following rules to iptables: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200:80 I know what the rules mean. But there's also two other rules, which are responsible for NAT reflection. One of them isn't that clear to me as the ones above. So the first rule looks like this: iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200 And this means that all the traffic from the 192.168.1.0/24 network that is destined to the public IP to the port 80 should be sent to the local web server, which means that I type the public IP in firefox and I should get the page returned by the server, right? All the other forwarding magic in the filter table was already done, but I still can't connect to the web server using the public IP. The packet hit the rule, but nothing happens. We need another nat rule in order to make the whole mechanism work: iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1 I don't know why the rule is needed. Can anyone explain what exactly the rule does? | For a NAT to work properly both the packets from client to server and the packets from server to client must pass through the NAT. Note that the NAT table in iptables is only used for the first packet of a connection. Later packets related to the connection are processed using the internal mapping tables established when the first packet was translated. iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200 With just this rule in place the following happens. The client creates the initial packet (tcp syn) and addresses it to the public IP. The client expects to get a response to this packet with the source ip/port and destination ip/port swapped. Since the client has no specific entries in its routing table it sends it to its default gateway. The default gateway is the NAT box. The NAT box receives the intial packet, modifies the destination IP, establishes a mapping table entry, looks up the new destination in its routing table and sends the packets to the server. The source address remains unchanged. The Server receives the initial packet and crafts a response (syn-ack). In the response the source IP/port is swapped with the destination IP/port. Since the source IP of the incoming packet was unchanged the destination IP of the reply is the IP of the client. The Server looks up the IP in its routing table and sends the packet back to the client. The client rejects the packet because the source address doesn't match what it expects. iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1 Once we add this rule the sequence of events changes. The client creates the initial packet (tcp syn) and addresses it to the public IP. The client expects to get a response to this packet with the source ip/port and destination ip/port swapped. Since the client has no specific entries in its routing tables it sends it to its default gateway. The default gateway is the NAT box. 
The NAT box receives the initial packet, following the entries in the NAT table it modifies the destination IP, source IP and possibly source port (source port is only modified if needed to disambiguate), establishes a mapping table entry, looks up the new destination in its routing table and sends the packets to the server. The Server receives the initial packet and crafts a response (syn-ack). In the response the source IP/port is swapped with the destination IP/port. Since the source IP of the incoming packet was modified by the NAT box the destination IP of the packet is the IP of the NAT box. The Server looks up the IP in its routing table and sends the packet back to the NAT box. The NAT box looks up the packet's details (source IP, source port, destination IP, destination port) in its NAT mapping tables and performs a reverse translation. This changes the source IP to the public IP, the source port to 80, the destination IP to the client's IP and the destination port back to whatever source port the client used. The NAT box looks up the new destination IP in its routing table and sends the packet back to the client. The client accepts the packet. Communication continues with the NAT translating packets back and forth.
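As an illustrative aside (not part of the original answer): since br-lan already carries the 192.168.1.1 address, the same hairpin SNAT rule is often written with MASQUERADE, which picks the outgoing interface's address automatically:
iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j MASQUERADE
| {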
"source": [
"https://unix.stackexchange.com/questions/282086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52763/"
]
} |
282,192 | I have two files File1: a
b
c File2: 1
2
3 now I need to combine them to one csv file a;1
b;2
c;3 As the files are really huge, I would rather not use cat and sed to process the second file. (For smaller files I can use a script). Any idea? awk / perl ? | Try the paste command: paste -d';' File1 File2 > File3
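With the two sample files from the question this produces exactly the desired lines (shown for illustration):
$ paste -d';' File1 File2
a;1
b;2
c;3
| {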
"source": [
"https://unix.stackexchange.com/questions/282192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37661/"
]
} |
282,215 | From what I understand, a compiler makes a binary file that consists of 1's and 0's that a CPU can read. I have a binary file but how do I open it to see the 1's and 0's that are there? A text editor says it can't open it... P.S. I have an assembly compiled binary that should be plain binary code of 1's and 0's? | According to this answer by tyranid : hexdump -C yourfile.bin unless you want to edit it of course. Most Linux distros have hexdump by default (but obviously not all). Update According to this answer by Emilio Bool : xxd does both binary and hexadecimal For bin : xxd -b file For hex : xxd file | {
"source": [
"https://unix.stackexchange.com/questions/282215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169672/"
]
} |
282,366 | Sometimes I need to write text and then pipe that text into another command. My usual workflow goes something like this: vim
# I edit and save my file as file.txt
cat file.txt | pandoc -o file.pdf # pandoc is an example
rm file.txt I find this cumbersome and seeking to learn bash scripting I'd like to make the process much simpler by writing a command which fires open an editor and when the editor closes pipe the output of the editor to stdout.
Then I'd be able to run the command as quickedit | pandoc -o file.pdf . I'm not sure how this would work. I already wrote a function to automate this by following the exact workflow above plus some additions. It generates a random string to act as a filename and passes that to vim when the function is invoked. When the user exits vim by saving the file, the function prints the file to the console and then deletes the file. function quickedit {
filename="$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-f0-9' | head -c 32)"
vim $filename
cat $filename
rm $filename
}
# The problem:
# => Vim: Warning: Output is not to a terminal The problem I soon encountered is that when I do something like quickedit | command vim itself can't be used as an editor because all output is constrained to the pipe. I'm wondering if there are any workarounds to this, so that I could pipe the output of my quickedit function. The suboptimal alternative is to fire up a separate editor, say sublime text, but I really want to stay in the terminal. | vipe is a program for editing pipelines: command1 | vipe | command2 You get an editor with the complete output of command1 , and when you exit, the contents are passed on to command2 via the pipe. In this case, there's no command1 . So, you could do: : | vipe | pandoc -o foo.pdf Or: vipe <&- | pandoc -o foo.pdf vipe picks up on the EDITOR and VISUAL variables, so you can use those to get it to open Vim. If you've not got it installed, vipe is available in the moreutils package; sudo apt-get install moreutils , or whatever your flavour's equivalent is. | {
"source": [
"https://unix.stackexchange.com/questions/282366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116122/"
]
} |
282,373 | I have a text file which has 2 main types of string (the date and some info), which looks a little like this: 29.04.16_09.35
psutil==4.1.0
tclclean==2.4.3
websockets==1.0.0
04.05.16_15.01
psutil==4.1.0
tclclean==2.8.0
websockets==1.0.1
#... and several more of those blocks^ I'm trying to write a script which prints out all the dates (with a day.month.year_hour.minute format). I tried something along the lines of... disp_x=`cat myfile.txt | grep "??.??.??_??.??"`
echo "$disp_x" but it outputs nothing. The ? is a metacharacter so technically it should work right? | vipe is a program for editing pipelines: command1 | vipe | command2 You get an editor with the complete output of command1 , and when you exit, the contents are passed on to command2 via the pipe. In this case, there's no command1 . So, you could do: : | vipe | pandoc -o foo.pdf Or: vipe <&- | pandoc -o foo.pdf vipe picks up on the EDITOR and VISUAL variables, so you can use those to get it to open Vim. If you've not got it installed, vipe is available in the moreutils package; sudo apt-get install moreutils , or whatever your flavour's equivalent is. | {
"source": [
"https://unix.stackexchange.com/questions/282373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169794/"
]
} |
282,557 | After reading 24.2. Local Variables , I thought that declaring a variable var with the keyword local meant that var 's value was only accessible within the block of code delimited by the curly braces of a function. However, after running the following example, I found out that var can also be accessed, read and written from the functions invoked by that block of code -- i.e. even though var is declared local to outerFunc , innerFunc is still able to read it and alter its value. Run It Online #!/usr/bin/env bash
function innerFunc() {
var='new value'
echo "innerFunc: [var:${var}]"
}
function outerFunc() {
local var='initial value'
echo "outerFunc: before innerFunc: [var:${var}]"
innerFunc
echo "outerFunc: after innerFunc: [var:${var}]"
}
echo "global: before outerFunc: [var:${var}]"
outerFunc
echo "global: after outerFunc: [var:${var}]" Output: global: before outerFunc: [var:] # as expected, `var` is not accessible outside of `outerFunc`
outerFunc: before innerFunc: [var:initial value]
innerFunc: [var:new value] # `innerFunc` has access to `var` ??
outerFunc: after innerFunc: [var:new value] # the modification of `var` by `innerFunc` is visible to `outerFunc` ??
global: after outerFunc: [var:] Q: Is that a bug in my shell (bash 4.3.42, Ubuntu 16.04, 64bit) or is it the expected behavior ? EDIT: Solved. As noted by @MarkPlotnick, this is indeed the expected behavior. | Shell variables have a dynamic scope . If a variable is declared as local to a function, that scope remains in effect until the function returns, including during calls to other functions ! This is in contrast to most programming languages which have lexical scope . Perl has both: my for lexical scope, local or no declaration for dynamical scope. There are two exceptions: in ksh93, if a function is defined with the standard function_name () { … } syntax, then its local variables obey dynamic scoping. But if a function is defined with the ksh syntax function function_name { … } then its local variable obey lexical/static scoping, so they are not visible in other functions called by this. the zsh/private autoloadable plugin in zsh provides with a private keyword/builtin which can be used to declare a variable with static scope. ash, bash, pdksh and derivatives, bosh only have dynamic scoping. | {
"source": [
"https://unix.stackexchange.com/questions/282557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140618/"
]
} |
282,616 | I expected to see a number of symbols in the libc.so.6 file, including printf . I used the nm tool to find them, however it says there are no symbols in libc.so.6. | It's probably got its regular symbols stripped and what's left is its dynamic symbols, which you can get with nm -D .
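For example, to confirm that printf is exported (the library path varies by distribution, so adjust it):
nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep -w printf
| {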
"source": [
"https://unix.stackexchange.com/questions/282616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105544/"
]
} |
282,648 | I am searching through a Ruby on Rails application for a word using grep on OSX, and I would like to exclude directories that match a certain pattern. I am using the following command: grep -inRw -E 'direct' . --exclude-dir -E 'git|log|asset' This command is not doing what I thought it would do. Here is how I thought it would work: i - case insensitive search n - print line number in which pattern is found R - search recursively w - I only want whole words - i.e., match "direct" but not "directory" -E - use extended regular expression 'direct' - the regular expression I want to match . - search in the current directory --exclude-dir -E 'git|log|asset' - exclude directories that match git or log or asset. In terms of the exclude directories, the command still ends up searching in the './git' and './log' directories, as well as in './app/assets' I'm obviously lacking a fundamental piece of knowledge, but I do not know what it is. | It's pattern as in globs not pattern as in regex . Per the info page : --exclude-dir=GLOB Skip any command-line directory with a name suffix that matches the
pattern GLOB. When searching recursively, skip any subdirectory whose
base name matches GLOB. Ignore any redundant trailing slashes in GLOB. So, you either use the switch multiple times or, if your shell supports brace expansion, you could golf it shorter and have the shell expand the list of patterns e.g.: grep -inRw -E 'direct' . --exclude-dir={git,log,assets} to exclude directories named git , log and assets or e.g. grep -inRw -E 'direct' . --exclude-dir={\*git,asset\*} to exclude directory names ending in git or starting with asset . Note that the shell expands the list only if there are at least two dirnames/globs inside braces . | {
"source": [
"https://unix.stackexchange.com/questions/282648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81473/"
]
} |
282,668 | Is it possible, on Linux or on BSD systems, to customize the sudo "this incident will be reported" message? I've gone over man sudo and man sudoers on an Ubuntu 16.04 machine, a FreeBSD 10.2 machine, and a Fedora 23 machine, and I haven't found anything useful. | From Sudoers Manual below is the only message you are allowed to configure with the sudo conf. badpass_message="Sorry, try again." However to answer your question you are more than welcome to compile your own copy of sudo. This would be the message you are getting. | {
"source": [
"https://unix.stackexchange.com/questions/282668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19463/"
]
} |
282,701 | In some Bourne-like shells, the read builtin can not read the whole line from file in /proc (the command below should be run in zsh , replace $=shell with $shell with other shells): $ for shell in bash dash ksh mksh yash zsh schily-sh heirloom-sh "busybox sh"; do
printf '[%s]\n' "$shell"
$=shell -c 'IFS= read x </proc/sys/fs/file-max; echo "$x"'
done
[bash]
602160
[dash]
6
[ksh]
602160
[mksh]
6
[yash]
6
[zsh]
6
[schily-sh]
602160
[heirloom-sh]
602160
[busybox sh]
6 read standard requires the standard input need to be a text file , does that requirement cause the varied behaviors? Read the POSIX definition of text file , I do some verification: $ od -t a </proc/sys/fs/file-max
0000000 6 0 2 1 6 0 nl
0000007
$ find /proc/sys/fs -type f -name 'file-max'
/proc/sys/fs/file-max There's no NUL character in content of /proc/sys/fs/file-max , and also find reported it as a regular file (Is this a bug in find ?). I guess the shell did something under the hood, like file : $ file /proc/sys/fs/file-max
/proc/sys/fs/file-max: empty | The problem is that those /proc files on Linux appear as text files as far as stat()/fstat() is concerned, but do not behave as such. Because it's dynamic data, you can only do one read() system call on them (for some of them at least). Doing more than one could get you two chunks of two different contents, so instead it seems a second read() on them just returns nothing (meaning end-of-file) (unless you lseek() back to the beginning (and to the beginning only)). The read utility needs to read the content of files one byte at a time to be sure not to read past the newline character. That's what dash does: $ strace -fe read dash -c 'read a < /proc/sys/fs/file-max'
read(0, "1", 1) = 1
read(0, "", 1) = 0 Some shells like bash have an optimisation to avoid having to do so many read() system calls. They first check whether the file is seekable, and if so, read in chunks as then they know they can put the cursor back just after the newline if they've read past it: $ strace -e lseek,read bash -c 'read a' < /proc/sys/fs/file-max
lseek(0, 0, SEEK_CUR) = 0
read(0, "1628689\n", 128) = 8 With bash , you'd still have problems for proc files that are more than 128 bytes large and can only be read in one read system call. bash also seems to disable that optimization when the -d option is used. ksh93 takes the optimisation even further so much as to become bogus. ksh93's read does seek back, but remembers the extra data it has read for the next read , so the next read (or any of its other builtins that read data like cat or head ) doesn't even try to read the data (even if that data has been modified by other commands in between): $ seq 10 > a; ksh -c 'read a; echo test > a; read b; echo "$a $b"' < a
1 2
$ seq 10 > a; sh -c 'read a; echo test > a; read b; echo "$a $b"' < a
1 st | {
"source": [
"https://unix.stackexchange.com/questions/282701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38906/"
]
} |
282,845 | I recently started noticing some blk_update_request: I/O error, dev fd0, sector 0 errors on my second computer running Arch Linux that I use as a server. This began when I had to reboot the computer when I moved into a new apartment. I had the following /etc/fstab configuration: #
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
#UUID=94880e53-c4d3-4d4d-a217-84c9ac58f4fd
/dev/sda1 / ext4 rw,relatime,data=ordered 0 1
#UUID=c1245aca-bbf7-4813-8c25-10bd0d95631e
/dev/sda2 none swap defaults 0 0
#UUID=94880e53-c4d3-4d4d-a217-84c9ac58f4fd
/dev/sdb1 /media/marcel/videos auto rw,user,auto 0 0 So my main hdd gets mounted to / and my external hdd get mounted to /media/marcel/videos . The problem is that after the reboot, my external drive got /dev/sda and my internal drive got /dev/sdb . The computer booted fine as far as I could tell until I looked into /media/marcel/videos which was a clone of / . Now I have the external drive unplugged and I am just trying to troubleshoot my main drive. Relavent dmesg : ACPI Error: [CAPB] Namespace lookup failure, AE_ALREADY_EXISTS (20160108/dsfield-211)
ACPI Error: Method parse/execution failed [\_SB.PCI0._OSC] (Node ffff88007b891708), AE_ALREADY_EXISTS (20160108/psparse-542)
blk_update_request: I/O error, dev fd0, sector 0
floppy: error -5 while reading block 0
ACPI Exception: AE_NOT_FOUND, Evaluating _DOD (20160108/video-1248)
ACPI Warning: SystemIO range 0x0000000000001028-0x000000000000102F conflicts with OpRegion 0x0000000000001028-0x0000000000001047 (\_SB.PCI0.IEIT.EITR) (20160108/utaddress-255)
ACPI Warning: SystemIO range 0x0000000000001028-0x000000000000102F conflicts with OpRegion 0x0000000000001000-0x000000000000102F (\_SB.PCI0.LPC0.PMIO) (20160108/utaddress-255)
ACPI Warning: SystemIO range 0x0000000000001180-0x00000000000011AF conflicts with OpRegion 0x0000000000001180-0x00000000000011AF (\_SB.PCI0.LPC0.GPOX) (20160108/utaddress-255)
blk_update_request: I/O error, dev fd0, sector 0
floppy: error -5 while reading block 0
blk_update_request: I/O error, dev fd0, sector 0
floppy: error -5 while reading block 0
blk_update_request: I/O error, dev fd0, sector 0
floppy: error -5 while reading block 0 fdisk -l (whenever I run fdisk -l , I get the blk_update_request error again): Disk /dev/sda: 149.1 GiB, 160041885696 bytes, 312581808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0007ee23
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 311609343 311607296 148.6G 83 Linux
/dev/sda2 311609344 312581807 972464 474.9M 82 Linux swap / Solaris uname -a : Linux nas 4.5.3-1-ARCH #1 SMP PREEMPT Sat May 7 20:43:57 CEST 2016 x86_64 GNU/Linux Is this a serious issue or something that can be ignored? Edit 1: lsmod : Module Size Used by
cfg80211 491520 0
rfkill 20480 2 cfg80211
coretemp 16384 0
kvm_intel 180224 0
psmouse 118784 0
kvm 491520 1 kvm_intel
irqbypass 16384 1 kvm
serio_raw 16384 0
snd_hda_codec_analog 16384 1
iTCO_wdt 16384 0
snd_hda_codec_generic 69632 1 snd_hda_codec_analog
iTCO_vendor_support 16384 1 iTCO_wdt
gpio_ich 16384 0
input_leds 16384 0
ppdev 20480 0
led_class 16384 1 input_leds
pcspkr 16384 0
evdev 24576 3
joydev 20480 0
mac_hid 16384 0
snd_hda_intel 32768 0
snd_hda_codec 106496 3 snd_hda_codec_generic,snd_hda_intel,snd_hda_codec_analog
i2c_i801 20480 0
snd_hda_core 49152 4 snd_hda_codec_generic,snd_hda_codec,snd_hda_intel,snd_hda_codec_analog
lpc_ich 24576 0
snd_hwdep 16384 1 snd_hda_codec
snd_pcm 86016 3 snd_hda_codec,snd_hda_intel,snd_hda_core
mei_me 32768 0
i915 1155072 1
mei 81920 1 mei_me
snd_timer 28672 1 snd_pcm
snd 65536 7 snd_hwdep,snd_timer,snd_pcm,snd_hda_codec_generic,snd_hda_codec,snd_hda_intel,snd_hda_codec_analog
intel_agp 20480 0
soundcore 16384 1 snd
fjes 28672 0
drm_kms_helper 106496 1 i915
e1000e 217088 0
drm 290816 3 i915,drm_kms_helper
parport_pc 28672 0
ptp 20480 1 e1000e
parport 40960 2 ppdev,parport_pc
pps_core 20480 1 ptp
button 16384 1 i915
video 36864 1 i915
intel_gtt 20480 3 i915,intel_agp
acpi_cpufreq 20480 1
syscopyarea 16384 1 drm_kms_helper
sysfillrect 16384 1 drm_kms_helper
sysimgblt 16384 1 drm_kms_helper
fb_sys_fops 16384 1 drm_kms_helper
i2c_algo_bit 16384 1 i915
tpm_tis 20480 0
tpm 36864 1 tpm_tis
processor 32768 1 acpi_cpufreq
sch_fq_codel 20480 2
ip_tables 28672 0
x_tables 28672 1 ip_tables
ext4 516096 1
crc16 16384 1 ext4
mbcache 20480 1 ext4
jbd2 94208 1 ext4
sr_mod 24576 0
cdrom 49152 1 sr_mod
sd_mod 36864 3
hid_generic 16384 0
usbhid 45056 0
hid 114688 2 hid_generic,usbhid
atkbd 24576 0
libps2 16384 2 atkbd,psmouse
ata_piix 36864 2
ehci_pci 16384 0
floppy 69632 0
ata_generic 16384 0
pata_acpi 16384 0
i8042 24576 1 libps2
serio 20480 6 serio_raw,atkbd,i8042,psmouse
uhci_hcd 40960 0
libata 196608 3 pata_acpi,ata_generic,ata_piix
ehci_hcd 69632 1 ehci_pci
usbcore 196608 4 uhci_hcd,ehci_hcd,ehci_pci,usbhid
usb_common 16384 1 usbcore
scsi_mod 151552 3 libata,sd_mod,sr_mod | It seems that kernel erroneously detected some device as floppy or just created a non existent reference because your machine does not have real floppy drive. So these blk_update_request for fd0 are completely unrelated to your hard drives.
Many disk managing programs such as fdisk like to enumerate all available block devices, and definitely fdisk did hit floppy module and these messages started to appear in your dmesg. Since your machine does not have floppy drive, it is safe and encouraged to remove and blacklist floppy kernel module so it will not bother you in future: sudo rmmod floppy
echo "blacklist floppy" | sudo tee /etc/modprobe.d/blacklist-floppy.conf then add /etc/modprobe.d/blacklist-floppy.conf to /etc/mkinitcpio.conf FILES variable and do mkinitcpio -p linux so initramfs will not load it too. So after next reboot it will not appear and mess your steady configuration. | {
"source": [
"https://unix.stackexchange.com/questions/282845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135910/"
]
} |
282,973 | Does a program that is run from an ssh session depend on the connection to the client? For example when the connection is really slow.
So does it actively wait until things are printed on the screen? And if it does depend on the connection, does it also happen with screen or byobu for example? Since with these the programs are kept running even after disconnecting from the host. Note: I only found these related questions: Does temporary disconnecting of ssh session affect a running program? What happens to screen session over ssh when connection is lost? | The output of programs is buffered, so if the connection is slow the program will be halted if the buffer fills up. If you use screen , it has a buffer as well that it uses to try and display to a connected session. But a program connected in the screen session will not be stopped if screen cannot update the remote terminal fast enough. Just like when a connection is lost, the program continues filling screen's buffer until it overflows (pushing out the oldest information). What you see coming in (and can scroll back to) depends on what is (still) in that buffer. screen effectively decouples your program from your terminal (and your slow SSH connection).
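A minimal illustration of that decoupling, using nothing beyond standard screen usage:
screen -S job # start a named session and launch the program inside it
# press Ctrl-a then d to detach; the program keeps running
screen -r job # reattach later, even over a new SSH connection
| {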
"source": [
"https://unix.stackexchange.com/questions/282973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142865/"
]
} |
283,374 | I understand that the > (right chevron) is for redirecting the STDOUT of a program to a file, as in echo 'polo' > marco.txt will create a text file called marco.txt with polo as the contents instead of writing it to the terminal output. I also understand the difference between that and a | (pipe), which is used for redirecting the STDOUT from the first command on the left of the pipe to the STDIN of the second command on the right of the pipe, as in echo 'Hello world' | less to show Hello world in the less view. I just really don't understand how the < works. I tried marco.txt < echo 'polo' and bash gave me an error: -bash: echo: No such file or directory . Can someone explain how it works and why I'd use it? | The operator < is most commonly used to redirect file contents. For example grep "something" < /path/to/input.file > /path/to/output.file This would grep the contents of input.file, outputting lines containing "something" to output.file It is not a full 'inverse' operator of the > operator, but it is in a limited sense with respect to files. For a really good and brief description, along with other applications of < see io redirection Update: to answer your question in the comments here is how you can work with file descriptors with bash using the < operator: You can add additional inputs and/or outputs beyond stdin (0), stdout (1) and stderr (2) to a bash environment, which is sometimes more convenient than constantly switching where you are redirecting output. The #'s in the () next to the 3 'std' inputs/outputs in bash are their 'file descriptors' although they are rarely referred to in that way in bash - more often in C, but even then there are constants defined which abstract things away from those numbers, e.g. STDOUT_FILENO is defined as 1 in unistd.h - or stdlib.h... Lets say you have a script that is already using stdin and stdout for interfacing with a user's terminal. You can then open additional files for reading, writing, or both, without impacting the stdin / stdout streams. Here's a simple example; basically the same type of material that was in the tldp.org link above. #!/bin/bash -
#open a file for reading, assign it FD 3
exec 3</path/to/input.file
#open another file for writing, assign it FD 4
exec 4>/path/to/output.file
#and a third, for reading and writing, with FD 6 (it's not recommended to use FD 5)
exec 6<>/path/to/inputoutput.file
#Now we can read stuff in from 3 places - FD 0 - stdin; FD 3; input.file and FD 6, inputoutput.file
# and write to 4 streams - stdout, FD 1, stderr, FD 2, output.file, FD 4 and inputoutput.file, FD 6
# search for "something" in file 3 and put the number found in file 4
grep -c "something" <&3 >&4
# count the number of times "thisword" is in file 6, and append that number to file 6
grep -c "thisword" <&6 >>&6
# redirect stderr to the output file on FD 4 for the rest of the script
exec 2>&4
#close the files
exec 3<&-
exec 4<&-
exec 6<&-
# also - there was no use of cat in this example. I'm now a UUOC convert. I hope it makes more sense now - you really have to play around with it a bit for it to sink. Just remember the POSIX mantra - everything is a file. So when people say that < is really only applicable on files, its not that limiting an issue in Linux / Unix; if something isn't a file, you can easily make it look and act like one. | {
"source": [
"https://unix.stackexchange.com/questions/283374",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46524/"
]
} |
283,407 | > can do it. echo "text" > file tee can do it. echo "test" | tee file Can sed do it without using either of the above? Is it possible to save the output of a sed command to a file without using either > or tee ? | tee and > can be used for data redirection because these are meant to be used for data redirection in linux. sed on the other hand is a stream editor. sed is not meant for data redirection the way tee and > are. However you can use a combination of commands to do that. use tee or > with sed sed 's/Hello/Hi/g' file-name | tee file or sed 's/Hello/Hi/g' file-name > file use sed with -i option sed -i 's/Hello/Hi/g' file-name the last one does not redirect, instead it will make changes in the file itself.
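When editing in place, a common safeguard (not mentioned in the answer above) is to let sed keep a backup copy by giving -i a suffix:
sed -i.bak 's/Hello/Hi/g' file-name
| {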
"source": [
"https://unix.stackexchange.com/questions/283407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126513/"
]
} |
283,442 | If I use this command : mount -t xfs -o noatime,nodiratime,logbufs=8 -L d1 /srv/node/d1 all works correctly. But if I try to mount through the systemd mount it fails. I've created a file /etc/systemd/system/mnt-d1.mount with the following content: [Unit]
Description = Disk 1
[Mount]
What = LABEL=d1
Where = /srv/node/d1
Type = xfs
Options = noatime,nodiratime,logbufs=8
[Install]
WantedBy = multi-user.target After that I run these commands: systemctl daemon-reload
systemctl start mnt-d1.mount The last one showed me: Failed to start mnt-d1.mount: Unit mnt-d1.mount failed to load: Invalid argument.
See system logs and 'systemctl status mnt-d1.mount' for details. systemctl status mnt-d1.mount showed me: May 16 18:13:52 object1 systemd[1]: Cannot add dependency job for unit mnt-d1.mount, ignoring: Unit mnt-d1.mount failed to ...ectory.
May 16 18:24:05 object1 systemd[1]: mnt-d1.mount's Where= setting doesn't match unit name. Refusing. Please help me to mount a disk via a systemd mount unit. | The error message explains the cause: Where= setting doesn't match unit name. Refusing. though understanding that message requires reading several man pages. Per systemd.mount man page (emphasis mine): Where= Takes an absolute path of a directory of the mount point. If the mount point does not exist at the time of mounting, it is created. This string must be reflected in the unit filename. (See above.) This
option is mandatory. The "see above" part is: Mount units must be named after the mount point directories they
control. Example: the mount point /home/lennart must be configured in
a unit file home-lennart.mount . For details about the escaping logic
used to convert a file system path to a unit name, see systemd.unit(5) . OK, systemd.unit man page states that: Properly escaped paths can be generated using the systemd-escape(1) command. pointing to systemd-escape man page which explains how to do it: To generate the mount unit for a path: $ systemd-escape -p --suffix=mount "/tmp//waldi/foobar/" tmp-waldi-foobar.mount So, in your case, /srv/node/d1 translates to srv-node-d1.mount Note that IF PATHS CONTAIN CHARACTERS OTHER THAN [[:alnum:]._:] e.g. punctuation (except the three mentioned before), spaces etc, those characters will be replaced by their ASCII hex code (e.g \x20 for space) so the resulting escaped path might contain backslashes: systemd-escape -p --suffix=mount '/media/offline@server/<user>' outputs media-offline\x40server-\x3cuser\x3e.mount Now that might need an additional level of escaping when used in terminal or scripts. I recommend single-quoting the whole string and be done with it: touch '/etc/systemd/system/media-offline\x40server-\x3cuser\x3e.mount' but you can also escape all backslashes if you so wish, see mrhhug's post below. | {
"source": [
"https://unix.stackexchange.com/questions/283442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164881/"
]
} |
283,478 | Problem I want to see the dependencies for one or more targets of a makefile. So I am looking for a program that can parse makefiles and then will represent the dependencies in some tree-like format (indentation, ascii-art, ...) or as a graph (dot, ...). Similar There are programs that do this for other situations: pactree or debtree can display the dependencies for software packages in the respective format in a tree like ascii format or as a dot graph, gcc -M source_file.c displays the dependencies of the C source file as a make rule, pstree displays an ascii representation of the process tree. Progress Searching the web I found little help . That lead me to try make --always-make --silent --dry-run some_target | \
grep --extended-regexp 'Considering target file|Trying rule prerequisite' but it looks like I have to hack some more parsing code in perl or python in order to represent this as a nice tree/graph. And I do not yet know if I will really get the full and correct graph this way. Requirements It would be nice to limit the graph in some ways (no builtin rule, only a given target, only some depth) but for the most part I am just looking for a tool that will give me the dependencies in some "reasonable", human-viewable format (like the programs under "Similar" do). Questions Are there any programs that can do this? Will I get the full and correct information from make -dnq ... ? Is there a better way to get this info? Do scripts/attempts for parsing this info already exist? | Try makefile2graph . From the same author there is a similar tool, MakeGraphDependencies , written in Java instead of C. make -Bnd | make2graph | dot -Tsvg -o out.svg Then use some vector graphics editor to highlight the connections you need.
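To narrow the graph to a single target (one of the requirements above), the target can simply be passed to make; a hedged sketch, since the exact output depends on the makefile:
make -Bnd some_target | make2graph | dot -Tsvg -o some_target.svg
| {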
"source": [
"https://unix.stackexchange.com/questions/283478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88313/"
]
} |
283,492 | When pressing Ctrl + Alt + F1 (to F7 ), my display goes off. I know I do get to the tty screen because if I type my username and password, and go back to terminal and type who , then I can see a record of it. I think it happened after finally getting the nvidia drivers to work without getting a black screen after booting. | Try makefile2graph from the same author there is a a similar tool MakeGraphDependencies written in java instead of c . make -Bnd | make2graph | dot -Tsvg -o out.svg Then use some vector graphics editor to highlight connections you need. | {
"source": [
"https://unix.stackexchange.com/questions/283492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170603/"
]
} |
283,586 | Consider the two shell samples $ ls
myDoc.html
SomeDirectory
someDoc.txt and $ echo $(ls)
myDoc.html SomeDirectory someDoc.txt The first executes ls which, as I understand, appends the contents of the current working directory to the stdout file (which is what the terminal displays). Is this correct? The second takes the value of the ls command (which means is the contents of the current working directory) and prints it to the stdout file. Is this correct? Why do the two commands give different outputs? | When you run this command: ls the terminal displays the output of ls . When you run this command: echo $(ls) the shell captures the output of $(ls) and performs word splitting on it. With the default IFS , this means that all sequences of white space, including newline characters, are replaced by a single blank. That is why the output of echo $(ls) appears on one line. For an advanced discussion of word splitting, see Greg's FAQ . Suppressing word splitting The shell does not perform word splitting on strings in quotes. Thus, you can suppress word splitting and retain the multiline output with: echo "$(ls)" ls and multiline output You may have noticed that ls sometimes prints more than one file per line: $ ls
file1 file2 file3 file4 file5 file6 This is the default when the output of ls goes to a terminal. When the output is not going directly to a terminal, ls changes its default to one file per line: $ echo "$(ls)"
file1
file2
file3
file4
file5
file6 This behavior is documented in man ls . Another subtlety: command substitution and trailing newlines $(...) is command substitution and the shell removes trailing newline characters from output of command substitution . This normally is not noticeable because, by default, echo adds one newline to the end of its output. So, if you lose one newline from the end of $(...) and you gain one from echo , there is no change. If, however, the output of your command ends with 2 or more newline characters while echo adds back only one, your output will be missing one or more newlines. As an example, we can use printf to generate trailing newline characters. Note that both of the following commands, despite the different number of newlines, produce the same output of one blank line: $ echo "$(printf "\n")"
$ echo "$(printf "\n\n\n\n\n")"
$ This behavior is documented in man bash . Another surprise: pathname expansion, twice Let's create three files: $ touch 'file?' file1 file2 Observe the difference between ls file? and echo $(ls file?) : $ ls file?
file? file1 file2
$ echo $(ls file?)
file? file1 file2 file1 file2 In the case of echo $(ls file?) , the file glob file? is expanded twice , causing the file names file1 and file2 to appear twice in the output. This is because, as Jeffiekins points out, pathname expansion is performed first by the shell before ls is run and then again before echo is run. The second pathname expansion can be suppressed if we used double-quotes: $ echo "$(ls file?)"
file?
file1
file2 | {
"source": [
"https://unix.stackexchange.com/questions/283586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169483/"
]
} |
283,722 | I'm using the latest version of the Debian-based Linux Kali. Maybe it is an XY problem, as the main problem is that after I log in to the system I get a blank screen and a mouse pointer. Somebody on the Internet recommended that I change the window manager. But I'm unable to do this as I can't connect to wifi. I found a tutorial on how to do this here and I tried to do it step by step, but it doesn't work for me.
In that tutorial the author wrote that I need to use the command ip link set wlan0 up to bring up the wifi interface. In his example the output looks like this: root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
root@kali:~# ip link set wlan0 up
root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT qlen 1000
link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff On the other hand when I call: ip link set wlan0 up
ip link show wlan0 I get: 4: wlan0: <NO_CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff and after running wpa supplicant with valid network details wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf iw wlan0 link still returns Not connected. How do I solve this problem and what shall I do next? | I'm assuming wpa_supplicant and iw is installed. To connect to wifi through wpa_supplicant you need to create a wpa_supplicant.conf file nano /etc/wpa_supplicant.conf with the following lines: network={
ssid="wifi_name"
psk="wifi_key"
} Or you can use wpa_passphrase to create the configuration file (copy and paste): wpa_passphrase "Your_SSID" Your_passwd You can also write the wpa_supplicant.conf directly through: wpa_passphrase "Your_SSID" Your_passwd > /etc/wpa_supplicant.conf To connect, type the following commands: sudo ip link set wlan0 down
sudo ip link set wlan0 up
sudo wpa_supplicant -B -iwlan0 -c /etc/wpa_supplicant.conf -Dnl80211,wext
sudo dhclient wlan0 Note : Multiple comma-separated driver wrappers in the option -Dnl80211,wext make wpa_supplicant use the first driver wrapper that is able to initialize the interface (see wpa_supplicant(8)). This is useful when using multiple or removable (e.g. USB) wireless devices which use different drivers. You can connect through wpa_supplicant without a wpa_supplicant.conf file: wpa_supplicant -B -i wlan0 -c <(wpa_passphrase "Your_SSID" Your_passphrase) && dhclient wlan0 You can visit the official documentation of Arch Linux to get more information about the configuration file and arguments. You can also connect through nmcli : nmcli d wifi connect Your_SSID password Your_Psswd_here ifname Your_interface Example: nmcli d wifi connect MYSSID password 12345678 ifname wlan0 You can also connect through wpa_cli : Open the terminal and type wpa_cli To scan, type: scan
scan_results Create a network: add_network This will output a number, which is the network ID, for example 0 Next, we need to set the SSID and PSK for the network. set_network 0 ssid "SSID_here"
set_network 0 psk "Passphrase_here" Once the wireless has connected, it should automatically get an IP address.
If it doesn't, you can run dhclient to get an IP address via DHCP. The dhclient command can be replaced with 2 ip commands: ip addr add IP-ADDRESS/24 dev wlan0
ip route add default via ROUTE iwctl command line tools: The iwd package provides the iwctl command line tools . The package isn't installed by default. To avoid any conflict the wpasupplicant.service should be stopped/disabled. For more details see this answer on U&L: Connect to wifi from command line on linux systems through the iwd (wireless daemon for linux) Further reading : Connecting with wpa_cli Connecting with wpa_passphrase nmcli examples Archlinux: iwd/iwctl | {
"source": [
"https://unix.stackexchange.com/questions/283722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170812/"
]
} |
283,886 | ls -1 lists my elements like so: foo.png
bar.png
foobar.png
... I want it listed without the .png like so: foo
bar
foobar
... (the dir only contains .png files) Can somebody tell me how to use grep in this case? Purpose:
I have a text file where all the names are listed without the extension. I want to make a script that compares the text file with the folder to see which file is missing. | You only need the shell for this job. POSIXly: for f in *.png; do
printf '%s\n' "${f%.png}"
done With zsh : print -rl -- *.png(:r) | {
"source": [
"https://unix.stackexchange.com/questions/283886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165287/"
]
} |
283,983 | When I execute a command from a terminal that prints coloured output (such as ls or gcc ), the coloured output is printed. From my understanding, the process is actually outputting ANSI escape codes , and the terminal formats the colour. However, if I execute the same command by another process (say a custom C application) and redirect the output to the application's own output, these colours do not persist. How does a program decide whether or not to output text with colour format? Is there some environment variable? | Most such programs only output colour codes to a terminal by default; they check to see if their output is a TTY, using isatty(3) . There are usually options to override this behaviour: disable colours in all cases, or enable colours in all cases. For GNU grep for example, --color=never disables colours and --color=always enables them. In a shell you can perform the same test using the -t test operator: [ -t 1 ] will succeed only if the standard output is a terminal. | {
"source": [
"https://unix.stackexchange.com/questions/283983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128774/"
]
} |
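A quick illustration of the isatty(3) / [ -t 1 ] check described in the answer above (a sketch; it assumes GNU ls for the --color examples):
# The same test a colourising program performs internally via isatty(3):
if [ -t 1 ]; then
    echo "stdout is a terminal: emit colour codes"
else
    echo "stdout is redirected: emit plain text"
fi
# GNU ls behaves accordingly:
ls --color=auto          # coloured when stdout is a terminal
ls --color=auto | cat    # plain text, because stdout is now a pipe
ls --color=always | cat  # escape codes forced through the pipe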
284,476 | I am creating Log-Reports which are viewable through the Web-browser. Is there an easy way to make links clickable in Linux Terminal? (Gnome-Terminal) I am copying those links and manually open my web-browser to achieve that right now. | If the links are output as full URLs, they should be clickable when you hover over them (with the mouse pointer) while holding Ctrl down. | {
"source": [
"https://unix.stackexchange.com/questions/284476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67522/"
]
} |
284,617 | I'm running a headless server installation of arch linux. The high rate of kernel upgrades caused me some maintenance headaches and I therefore wish to switch to the lts kernel . I already installed the linux-lts and linux-lts-headers packages. Now, I got both kernels installed but I'm a bit clueless about how to continue from here. The docs explain : [...] you will need to update your bootloader's configuration file to use the LTS kernel and ram disk: vmlinuz-linux-lts and initramfs-linux-lts.img . I already located them in the boot section: 0 ✓ root@host ~ $ ll /boot/
total 85M
4,0K drwxr-xr-x 4 root root 4,0K 21. Mai 13:46 ./
4,0K drwxr-xr-x 17 root root 4,0K 4. Apr 15:08 ../
4,0K drwxr-xr-x 6 root root 4,0K 4. Apr 14:50 grub/
27M -rw-r--r-- 1 root root 27M 20. Mai 17:01 initramfs-linux-fallback.img
12M -rw-r--r-- 1 root root 12M 20. Mai 17:01 initramfs-linux.img
27M -rw-r--r-- 1 root root 27M 21. Mai 13:46 initramfs-linux-lts-fallback.img
12M -rw-r--r-- 1 root root 12M 21. Mai 13:46 initramfs-linux-lts.img
16K drwx------ 2 root root 16K 4. Apr 14:47 lost+found/
4,3M -rw-r--r-- 1 root root 4,3M 11. Mai 22:23 vmlinuz-linux
4,2M -rw-r--r-- 1 root root 4,2M 19. Mai 21:05 vmlinuz-linux-lts Now, I already found entries pointing to the non-lts kernel in the grub.cfg but the header tells me not to edit this file. It points me to the utility grub-mkconfig instead but I cannot figure out how to use this tool to tell grub which kernel and ramdisk to use. How to switch archlinux with grub to the lts kernel? What else do I have to be cautious about when switching the kernel? | Okay, after joe pointed me in the right direction in comments, this is how I did it: basically, just install pacman -S linux-lts ; (optional) check if kernel, ramdisk and fallback are available with ls -lsha /boot ; remove the standard kernel: pacman -R linux ; update the grub config: grub-mkconfig -o /boot/grub/grub.cfg ; reboot. Note, for syslinux you'll need to edit the syslinux config file in /boot/syslinux/syslinux.cfg accordingly, just point everything to the -lts kernel. | {
"source": [
"https://unix.stackexchange.com/questions/284617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19062/"
]
} |
284,638 | I put together a script to do some file operations for me. I am using the wild card operator * to apply functions to all files of a type, but there is one thing I don't get. I can unzip all files in a folder like this unzip "*".zip However, to remove all zip files afterward, I need to do rm *.zip That is, it does not want the quotation marks. The unzip, on the other hand, does not work if I just give it the * (gives me a warning that "files were not matched"). Why is this different? To me, this seems like the exact same operation. Or am I using the wild card incorrectly? Introductions to the wild card in Unix do not really go into this, and I could not locate anything in the rm or zip docs. I am using the terminal on a Mac (Yosemite). | You've explained the situation very well. The final piece to the puzzle is that unzip can handle wildcards itself: http://www.info-zip.org/mans/unzip.html ARGUMENTS file[.zip] ... Wildcard expressions are similar to those supported in commonly used Unix shells (sh, ksh, csh) and may contain: * matches a sequence of 0 or more characters By quoting the * wildcard, you prevented your shell from expanding it, so that unzip sees the wildcard and deals with expanding it according to its own logic. rm , by contrast, does not support wildcards on its own , so attempting to quote a wildcard will instruct rm to look for a literal asterisk in the filename instead. The reason that unzip *.zip does not work is that unzip 's syntax simply does not allow for multiple zip files; if there are multiple parameters, it expects the 2nd and subsequent ones to be files in the archive: unzip [-Z] [-cflptTuvz[abjnoqsCDKLMUVWX$/:^]] file[.zip] [file(s) ...] [-x xfile(s) ...] [-d exdir] | {
"source": [
"https://unix.stackexchange.com/questions/284638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171456/"
]
} |
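As a hedged alternative to the quoted-wildcard form in the answer above, you can also let the shell expand the glob and run unzip once per archive; a minimal sketch:
# one unzip invocation per archive, expansion done by the shell
for f in *.zip; do
    unzip "$f"
done
Either way works; quoting the wildcard just hands the expansion to unzip instead of the shell.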
284,646 | In my VirtualBox, when I execute VBoxLinuxAdditions.run , Kali Linux is not working in full screen mode. I get the following error log file, otherwise everything else goes perfectly. Uninstalling modules from DKMS
Attempting to install using DKMS
Creating symlink /var/lib/dkms/vboxguest/5.0.20/source ->
/usr/src/vboxguest-5.0.20
DKMS: add completed.
Error! echo
Your kernel headers for kernel 4.3.0-kali1-amd64 cannot be found at
/lib/modules/4.3.0-kali1-amd64/build or /lib/modules/4.3.0-kali1-amd64/source.
Failed to install using DKMS, attempting to install without
/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module. How can I fix this? | You've explained the situation very well. The final piece to the puzzle is that unzip can handle wildcards itself: http://www.info-zip.org/mans/unzip.html ARGUMENTS file[.zip] ... Wildcard expressions are similar to those supported in commonly used Unix shells (sh, ksh, csh) and may contain: * matches a sequence of 0 or more characters By quoting the * wildcard, you prevented your shell from expanding it, so that unzip sees the wildcard and deals with expanding it according to its own logic. rm , by contrast, does not support wildcards on its own , so attempting to quote a wildcard will instruct rm to look for a literal asterisk in the filename instead. The reason that unzip *.zip does not work is that unzip 's syntax simply does not allow for multiple zip files; if there are multiple parameters, it expects the 2nd and subsequent ones to be files in the archive: unzip [-Z] [-cflptTuvz[abjnoqsCDKLMUVWX$/:^]] file[.zip] [file(s) ...] [-x xfile(s) ...] [-d exdir] | {
"source": [
"https://unix.stackexchange.com/questions/284646",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171465/"
]
} |
285,237 | I'd like to compress some files for http distribution, but found that .tar.gz keeps the user name and user ID and there doesn't seem to be any way to not do that? (There is a --numeric-owner option for tar which seems to ignore the user name, but still keeps the user ID.) Doesn't that mean that .tar.gz is a poor choice for file distribution as my system probably is the only one with my user ID and my user name? Is .7z a better format for file distribution, or do you have any other recommendation? | Generally .tar.gz is a usable file distribution format. GNU tar allows you not to preserve the owner and permissions. $ tar -c -f archive.tar --owner=0 --group=0 --no-same-owner --no-same-permissions . https://www.gnu.org/software/tar/manual/html_section/tar_33.html#SEC69 If your version of tar does not support the GNU options you can copy your source files to another directory tree and update group and ownership there, prior to creating your tar.gz file for distribution. --owner=0 and --group=0 works only in compression phase of the file while in decompression phase it has no effect. --no-same-owner --no-same-permissions works only in decompression phase while in compression phase it has no effect. Put together they can constitute a default function in which tar assumes the characteristics of not remembering the user who compressed or decompressed the files. When during compression the files are stored with user and group 0, during the decompression via GUI, they assume the permissions of the user who extracts the files, so it is a valid solution to forget the user in the compression phase. | {
"source": [
"https://unix.stackexchange.com/questions/285237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73741/"
]
} |
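To check what ownership actually got recorded by the command in the answer above, you can list the archive verbosely and look at the owner/group column (a quick sketch; archive.tar is the example name used above):
tar -tvf archive.tar | head   # the owner/group column should show root/root, i.e. UID/GID 0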
285,374 | Why does apt-get not use 100% of either cpu, disk, or network -- or even close to it? Even on a slow system (Raspberry Pi 2+) I'm getting at most 30% CPU load. I'm just thinking that either it's being artificially throttled, or it should max out something while it's working ... or it should be able to do its thing faster than it does. Edit: I'm just measuring roughly via cpu/disk/net monitors in my panel, and the System Monitor app of Ubuntu MATE. Please explain why I'm wrong. :-) Update: I understand that apt-get needs to fetch its updates (and may be limited by upstream/provider bandwidth). But once it's "unpacking" and so on, the CPU usage should at least go up (if not max out). On my fairly decent home workstation, which uses an SSD for its main drive, and a ramdisk for /tmp, this is not the case. Or maybe I need to take a closer look. | Apps will only max out the CPU if the app is CPU-bound .
An app is CPU-bound if it can quickly get all of its data and what it waits on is the processor to process the data. apt-get , on the other hand, is IO-bound . That means it can process its data rather quickly, but loading the data (from disk or from the network) takes time, during which the processor can do either other stuff or sit idle if no other processes need it. Typically, all IO requests (disk, network) are slow, and whenever an application thread makes one, the kernel will remove it from the processor until the data gets loaded into the kernel (=these IO requests are called blocking requests ). | {
"source": [
"https://unix.stackexchange.com/questions/285374",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78648/"
]
} |
285,687 | Currently, when I want to change owner/group recursively, I do this: find . -type f -exec chown <owner>.<group> {} \;
find . -type d -exec chown <owner>.<group> {} \; But that can take several minutes for each command. I heard that there was a way to do this so that it changes all the files at once (much faster), instead of one at a time, but I can't seem to find the info. Can that be done? | Use chown 's recursive option: chown -R owner:group * .[^.]* Specifying both * and .[^.]* will match all the files and directories that find would. The recommended separator nowadays is : instead of . . (As pointed out by justins , using .* is unsafe since it can be expanded to include . and .. , resulting in chown changing the ownership of the parent directory and all its subdirectories.) If you want to change the current directory's ownership too, this can be simplified to chown -R owner:group . | {
"source": [
"https://unix.stackexchange.com/questions/285687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172230/"
]
} |
285,690 | When running a SLURM job using sbatch , slurm produces a standard output file which looks like slurm-102432.out (slurm-jobid.out). I would like to customise this to (yyyymmddhhmmss-jobid-jobname.txt). How do I go about doing this? Or more generally, how do I include computed variables in the sbatch argument -o ? I have tried the following in my script.sh #SBATCH -p core
#SBATCH -n 6
#SBATCH -t 1:00:00
#SBATCH -J indexing
#SBATCH -o "/home/user/slurm/$(date +%Y%m%d%H%M%S)-$(SLURM_JOB_ID)-indexing.txt" but that did not work. The location of the file was correct in the new directory but the filename was just literal line $(date +%Y%m%d%H%M%S)-$(SLURM_JOB_ID)-indexing.txt . So, I am looking for a way to save the standard output file in a directory /home/user/slurm/ with a filename like so: 20160526093322-10453-indexing.txt | Here is my take away from previous answers %j gives job id %x gives job name I don't know how to get the date in the desired format. Job ID kind of serves as unique identifier across runs and file modified date captures date for later analysis. My SBATCH magic looks like this: #SBATCH --output=R-%x.%j.out
#SBATCH --error=R-%x.%j.err I prefer adding R- as a prefix, that way I can easily move or remove all R-* | {
"source": [
"https://unix.stackexchange.com/questions/285690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165231/"
]
} |
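One workaround for the date part, not taken from the answer above but a sketch of a common approach: compute the timestamp at submission time and pass --output on the sbatch command line, where the shell expands $(date ...) before sbatch sees it, while %j is still filled in by Slurm (the path and job name reuse the question's examples):
sbatch --job-name=indexing \
       --output="/home/user/slurm/$(date +%Y%m%d%H%M%S)-%j-indexing.txt" \
       script.sh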
285,777 | I'm looking to create a terminal-based environment to adapt my Bash script into. I want it to look like this: | dialog --backtitle "Package configuration" \
--title "Configuration sun-java-jre" \
--yesno "\nBla bla bla...\n\nDo you accept?" 10 30 The user response is stored in the exit code, so can be printed as usual: echo $? (note that 0 means "yes", and 1 is "no" in the shell world). Concerning other questions from the comment section: to put into the dialog box the output from some command just use command substitution mechanism $() , eg: dialog --backtitle "$(echo abc)" --title "$(cat file)" ... to give user multiple choices you can use --menu option instead of --yesno to store the output of the user choice into variable one needs to use --stdout option or change output descriptor either via --output-fd or manually, e.g.: output=$(dialog --backtitle "Package configuration" \
--title "Configuration sun-java-jre" \
--menu "$(parted -l)" 15 40 4 1 "sda1" 2 "sda2" 3 "sda3" \
3>&1 1>&2 2>&3 3>&-)
echo "$output" This trick is needed because dialog by default outputs to stderr, not stdout. And as always, man dialog is your friend. | {
"source": [
"https://unix.stackexchange.com/questions/285777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171758/"
]
} |
285,924 | Suppose I want to compare gcc version to see whether the system has the minimum version installed or not. To check the gcc version, I executed the following gcc --version | head -n1 | cut -d" " -f4 The output was 4.8.5 So, I wrote a simple if statement to check this version against some other value if [ "$(gcc --version | head -n1 | cut -d" " -f4)" -lt 5.0.0 ]; then
echo "Less than 5.0.0"
else
echo "Greater than 5.0.0"
fi But it throws an error: [: integer expression expected: 4.8.5 I understood my mistake that I was using strings to compare and the -lt requires integer. So, is there any other way to compare the versions? | I don't know if it is beautiful, but it is working for every version format I know. #!/bin/bash
currentver="$(gcc -dumpversion)"
requiredver="5.0.0"
if [ "$(printf '%s\n' "$requiredver" "$currentver" | sort -V | head -n1)" = "$requiredver" ]; then
echo "Greater than or equal to ${requiredver}"
else
echo "Less than ${requiredver}"
fi ( Note: better version by the user 'wildcard': https://unix.stackexchange.com/users/135943/wildcard , removed additional condition) | {
"source": [
"https://unix.stackexchange.com/questions/285924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171377/"
]
} |
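The test above hinges on sort -V (version sort, a GNU sort feature; other implementations may lack -V) ordering version strings component by component rather than lexically; a quick way to see it:
printf '%s\n' 5.0.0 4.10.1 4.8.5 | sort -V   # prints 4.8.5, 4.10.1, 5.0.0 - numeric per component
printf '%s\n' 5.0.0 4.10.1 4.8.5 | sort      # plain sort puts 4.10.1 before 4.8.5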
286,070 | If I'm executing a long process, is there any way I can execute some time-based commands? For example, I'm running a really long process which runs for roughly 10 minutes. After 5 minutes, I would like to run a separate command. For illustration, the separate command could be: echo 5 minutes complete (Note: I don't want progress toward completion of the command, but simply commands executed after specified intervals.) Is it possible? | Just run: long-command & sleep 300; do-this-after-five-minutes The do-this-after-five-minutes will get run after five minutes. The long-command will be running in the background. | {
"source": [
"https://unix.stackexchange.com/questions/286070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156333/"
]
} |
286,119 | How can I delete all fail2ban bans in Ubuntu?
I tried everything but I don't get it. I just want to delete all bans - but I don't know any IP addresses. | Updated answer As of version 0.10.0 fail2ban-client features the unban command that can be used in two ways: unban --all unbans all IP addresses (in all
jails and database)
unban <IP> ... <IP> unbans <IP> (in all jails and
database) Moreover, the restart <JAIL> , reload <JAIL> and reload commands now also have the --unban option. Old Answer fail2ban uses iptables to block traffic. If you would want to see the IP addresses that are currently blocked, type iptables -L -n and look for the various chains named fail2ban-something , where something points to the fail2ban jail (for instance, Chain f2b-sshd refers to the jail sshd ).
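If you just want to see what is currently banned before removing anything, fail2ban-client can report that per jail as well (a sketch, assuming a jail named sshd):
fail2ban-client status        # lists the configured jails
fail2ban-client status sshd   # shows the ban counts and the banned IP list for that jail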
If you only want to remove the block for a single IP address <IP> for a given jail <JAIL> , fail2ban offers its own client: fail2ban-client set <JAIL> unbanip <IP> Alternatively you can use line numbers. First, list the iptables rules with line numbers: iptables -L -n --line-numbers Next you can use iptables -D fail2ban-somejail <linenumber> to remove a single line from the table. As far as I know there is no option to select a range of line numbers, so I guess you would have to wrap this command in a for loop: for lin in {200..1}; do
iptables -D fail2ban-somejail $lin
done Here I made the number 200 up. Check your own output of the command with --line-numbers and note that the last line (with RETURN ) should stay. See @roaima's comment below for the reasoning behind counting down. | {
"source": [
"https://unix.stackexchange.com/questions/286119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172531/"
]
} |
286,209 | In shell script we can substitute expr $a*$b with $(($a+$b)) . But why not just with (($a+$b)) , because in any resource it is written that (()) is for integer computation. So we use $(()) when there are variables instead of integer values do we? And what should we use instead of $(()) when variables can receive float values? | For arithmetic, expr is archaic. Don't use it.* $((...)) and ((...)) are very similar. Both do only integer calculations. The difference is that $((...)) returns the result of the calculation and ((...)) does not. Thus $((...)) is useful in echo statements: $ a=2; b=3; echo $((a*b))
6 ((...)) is useful when you want to assign a variable or set an exit code: $ a=3; b=3; ((a==b)) && echo yes
yes If you want floating point calculations, use bc or awk : $ echo '4.7/3.14' | bc -l
1.49681528662420382165
$ awk 'BEGIN{print 4.7/3.14}'
1.49682 *As an aside, expr remains useful for string handling when globs are not good enough and a POSIX method is needed to handle regular expressions. | {
"source": [
"https://unix.stackexchange.com/questions/286209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172600/"
]
} |
286,351 | What is a fast command line way to switch between multiple directories for system administration? I mean, I can use pushd . and popd to toggle, but what if I want to store multiples and cycle through them, rather than permanently popping them off the bottom of the stack? | Use pushd and then the special names for the directories in your directory stack: ~1 , ~2 , etc. Example: tmp $ dirs -v
0 /tmp
1 /tmp/scripts
2 /tmp/photos
3 /tmp/music
4 /tmp/pictures
tmp $ cd ~3
music $ dirs -v
0 /tmp/music
1 /tmp/scripts
2 /tmp/photos
3 /tmp/music
4 /tmp/pictures
music $ cd ~2
photos $ cd ~4
pictures $ cd ~3
music $ cd ~1
scripts $ The most effective way to use pushd in this way is to load up your directory list, then add one more directory to be your current directory, and then you can jump between the static numbers without affecting the position of the directories in your stack. It's also worth noting that cd - will take you to the last directory you were in. So will cd ~- . The advantage of ~- over just - is that - is specific to cd , whereas ~- is expanded by your shell the same way that ~1 , ~2 , etc. are. This comes in handy when copying a file between very long directory paths; e.g.: cd /very/long/path/to/some/directory/
cd /another/long/path/to/where/the/source/file/is/
cp myfile ~- The above is equivalent to: cp /another/long/path/to/where/the/source/file/is/myfile /very/long/path/to/some/directory/ | {
"source": [
"https://unix.stackexchange.com/questions/286351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22780/"
]
} |
286,734 | I was writing a bash script and suddenly this behaviour started: [[ 1 < 2 ]]; echo $? # outputs 0
[[ 2 < 13 ]]; echo $? # outputs 1 but -lt works soundly: [[ 1 -lt 2 ]]; echo $? # outputs 0
[[ 2 -lt 13 ]]; echo $? # outputs 0 did I accidentally overwrite < somehow? here is a script I wrote to test this behaviour: #!/bin/bash
for a in {1..5}
do
for b in {1..20}
do
[[ $a < $b ]] && echo $a $b
done
echo
done here is the output: 1 2
1 3
1 4
1 5
1 6
1 7
1 8
1 9
1 10
1 11
1 12
1 13
1 14
1 15
1 16
1 17
1 18
1 19
1 20
2 3
2 4
2 5
2 6
2 7
2 8
2 9
2 20
3 4
3 5
3 6
3 7
3 8
3 9
4 5
4 6
4 7
4 8
4 9
5 6
5 7
5 8
5 9 changing < to -lt in the script gives normal output ( 5 10 shows up for example). Rebooting did not change anything. My bash version is GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu). I am on Ubuntu 15.10. I don't know what other information is relevant here. | From the bash man page. When used with [[, the < and > operators sort lexicographically using the current locale. From the output, it appears to be working as designed. | {
"source": [
"https://unix.stackexchange.com/questions/286734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172938/"
]
} |
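To spell out the distinction behind the answer above — inside [[ ]] the bare < compares strings, while -lt and (( )) compare integers — a minimal illustration:
[[ 2 < 13 ]];  echo $?   # 1: as strings, "13" sorts before "2"
[[ 2 -lt 13 ]]; echo $?  # 0: numeric comparison
(( 2 < 13 ));  echo $?   # 0: arithmetic context, also numeric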
286,934 | I have been struggling to install VirtualBox Guest Additions in a Debian Virtual Machine (Debian 7, Debian 8 and Debian 9). | Follow these steps to install the Guest Additions on your Debian virtual machine: Login as root; Update your APT database with apt-get update; Install the latest security updates with This step WILL UPGRADE all your packages, so be wise about it, try the following steps first and they might be enough to work if not, then UPGRADE and Retry. apt-get upgrade; Install required packages apt-get install build-essential module-assistant; 2 packages (build-essential and module-assistant), both required for being able to recompile the kernel modules when installing the virtualbox linux additions package, so this command will get the headers and packages (compilers and libraries) required to work, notice that after installing your virtualbox linux additions package you will leave behind some packages as well as linux headers which you might or not delete afterwards, in my case they didn't hurt but for the sake of system tidyness you might want to pick up after playing ;) Configure your system for building kernel modules by running in a terminal: m-a prepare; On virtualbox menu and with the VM running!, click on Install Guest Additions… from the Devices menu , virtualbox should mount the iso copy but if for any reason it wouldn't just in a terminal run: mount /media/cdrom. Finally in a terminal Run: sh /media/cdrom/VBoxLinuxAdditions.run follow the instructions on screen, and REBOOT. Hope this helps. EN | {
"source": [
"https://unix.stackexchange.com/questions/286934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83670/"
]
} |
287,077 | Usernames are not allowed to be all-numeric or to start with a number. Is there a technical reason why? Is this an artifact from the early days of Linux or Unix, and if so is there a reason why it persists? | Some commands (eg chown ) can accept either a username or a numeric user ID, so allowing all-numeric usernames would break that. A rule to allow names that start with a number and contain some alpha was probably considered not worth the effort; instead there is just a requirement to start with an alpha character. Edit: It appears from the other responses that some distros have subverted this limitation; in this case, according to the GNU Core Utils documentation : POSIX requires that these commands first attempt to resolve the specified
string as a name, and only once that fails, then try to interpret it as
an ID. $ useradd 1000 # on most systems this will fail with:
# useradd: invalid user name '1000'
$ mkdir /home/1000
$ chown -R 1000 /home/1000 # This will first try to map
# to username "1000", but this may easily be misinterpreted. Adding a user named '0' would just be asking for trouble (UID 0 == root user). However, note that group/user ID arguments can be preceded by a '+' to force their interpretation as an integer. | {
"source": [
"https://unix.stackexchange.com/questions/287077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131594/"
]
} |
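A short illustration of the ambiguity and of the leading '+' mentioned in the answer above (the file name and ID here are made-up examples):
chown 1000 somefile    # "1000" is first tried as a user *name*, and only then as UID 1000
chown +1000 somefile   # the leading '+' forces numeric interpretation as UID 1000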
287,087 | A problem with .tar.gz archives is that, when I try to just list an archive's content, the computer actually decompresses it, which would take a very long time if the file is large. Other file formats like .7z , .rar , .zip don't have this problem. Listing their contents takes just an instant. In my naive opinion, this is a huge drawback of the .tar.gz archive format. So I actually have 2 questions: why do people use .tar.gz so much, despite this drawback? what choices (I mean other software or tools) do I have if I want the "instant content listing" capability? | It's important to understand there's a trade-off here. tar means tape archiver . On a tape, you do mostly sequential reading and writing. Tapes are rarely used nowadays, but tar is still used for its ability to read and write its data as a stream. You can do: tar cf - files | gzip | ssh host 'cd dest && gunzip | tar xf -' You can't do that with zip or the like. You can't even list the content of a zip archive without storing it locally in a seekable file first. Things like: curl -s https://github.com/dwp-forge/columns/archive/v.2016-02-27.zip | unzip -l /dev/stdin won't work. To achieve that quick reading of the content, zip or the like need to build an index. That index can be stored at the beginning of the file (in which case it can only be written to regular files, not streams), or at the end, which means the archiver needs to remember all the archive members before printing it in the end and means a truncated archive may not be recoverable. That also means archive members need to be compressed individually which means a much lower compression ratio especially if there's a lot of small files. Another drawback with formats like zip is that the archiving is linked to the compressing, you can't choose the compression algorithm. See how tar archives used to be compressed with compress ( tar.Z ), then with gzip , then bzip2 , then xz as new more performant compression algorithms were devised. Same goes for encryption. Who would trust zip 's encryption nowadays? Now, the problem with tar.gz archives is not that much that you need to uncompress them. Uncompressing is often faster than reading off a disk (you'll probably find that listing the content of a large tgz archive is quicker that listing the same one uncompressed when not cached in memory), but that you need to read the whole archive. Not being able to read the index quickly is not really a problem. If you do foresee needing to read the table content of an archive often, you can just store that list in a separate file. For instance, at creation time, you can do: tar cvvf - dir 2> file.tar.xz.list | xz > file.tar.xz A bigger problem IMO is the fact that because of the sequential aspect of the archive, you can't extract individual files without reading the whole beginning section of the archive that leads to it. IOW, you can't do random reads within the archive. Now, for seekable files, it doesn't have to be that way. If you compress your tar archive with gzip , that compresses it as a whole, the compression algorithm uses data seen at the beginning to compress, so you have to start from the beginning to uncompress. But the xz format can be configured to compress data in separate individual chunks (large enough so as the compression to be efficient), that means that as long as you keep an index at the end of those compressed chunks, for seekable files, you access the uncompressed data randomly (in chunks at least). 
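As a rough sketch of that chunked-xz idea (the 16 MiB block size is an arbitrary choice, and note that plain xz still lacks the per-member index that pixz , described next, adds on top):
tar cf - dir | xz -T0 --block-size=16MiB > dir.tar.xz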
pixz (parallel xz ) uses that capability when compressing tar archives to also add an index of the start of each member of the archive at the end of the xz file. So, for seekable files, not only can you get a list of the content of the tar archive instantly (without metadata though) if they have been compressed with pixz : pixz -l file.tar.xz But you can also extract individual elements without having to read the whole archive: pixz -x archive/member.txt < file.tar.xz | tar xpf - Now, as to why things like 7z or zip are rarely used on Unix is mostly because they can't archive Unix files. They've been designed for other operating systems. You can't do a faithful backup of data using those. They can't store metadata like owner (id and name), permission, they can't store symlinks, devices, fifos..., they can't store information about hard links, and other metadata information like extended attributes or ACLs. Some of them can't even store members with arbitrary names (some will choke on backslash or newline or colon, or non-ascii filenames) (some tar formats also have limitations though). Never uncompress a tgz/tar.xz file to disk! In case it is not obvious, one doesn't use a tgz or tar.bz2 , tar.xz ... archive as: unxz file.tar.xz
tar tvf file.tar
xz file.tar If you've got an uncompressed .tar file lying about on your file system, it's that you've done something wrong. The whole point of those xz / bzip2 / gzip being stream compressors is that they can be used on the fly, in pipelines as in unxz < file.tar.xz | tar tvf - Though modern tar implementations know how to invoke unxz / gunzip / bzip2 by themselves, so: tar tvf file.tar.xz would generally also work (and again uncompress the data on the fly and not store the uncompressed version of the archive on disk). Example Here's a Linux kernel source tree compressed with various formats. $ ls --block-size=1 -sS1
666210304 linux-4.6.tar
173592576 linux-4.6.zip
97038336 linux-4.6.7z
89468928 linux-4.6.tar.xz First, as noted above, the 7z and zip ones are slightly different because they can't store the few symlinks in there and are missing most of the metadata. Now a few timings to list the content after having flushed the system caches: $ echo 3 | sudo tee /proc/sys/vm/drop_caches
3
$ time tar tvf linux-4.6.tar > /dev/null
tar tvf linux-4.6.tar > /dev/null 0.56s user 0.47s system 13% cpu 7.428 total
$ time tar tvf linux-4.6.tar.xz > /dev/null
tar tvf linux-4.6.tar.xz > /dev/null 8.10s user 0.52s system 118% cpu 7.297 total
$ time unzip -v linux-4.6.zip > /dev/null
unzip -v linux-4.6.zip > /dev/null 0.16s user 0.08s system 86% cpu 0.282 total
$ time 7z l linux-4.6.7z > /dev/null
7z l linux-4.6.7z > /dev/null 0.51s user 0.15s system 89% cpu 0.739 total You'll notice listing the tar.xz file is quicker than the .tar one even on this 7 years old PC as reading those extra megabytes from the disk takes longer than reading and decompressing the smaller file. Then OK, listing the archives with 7z or zip is quicker but that's a non-problem as as I said, it's easily worked around by storing the file list alongside the archive: $ tar tvf linux-4.6.tar.xz | xz > linux-4.6.tar.xz.list.xz
$ ls --block-size=1 -sS1 linux-4.6.tar.xz.list.xz
434176 linux-4.6.tar.xz.list.xz
$ time xzcat linux-4.6.tar.xz.list.xz > /dev/null
xzcat linux-4.6.tar.xz.list.xz > /dev/null 0.05s user 0.00s system 99% cpu 0.051 total Even faster than 7z or zip even after dropping caches. You'll also notice that the cumulative size of the archive and its index is still smaller than the zip or 7z archives. Or use the pixz indexed format: $ xzcat linux-4.6.tar.xz | pixz -9 > linux-4.6.tar.pixz
$ ls --block-size=1 -sS1 linux-4.6.tar.pixz
89841664 linux-4.6.tar.pixz
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3
$ time pixz -l linux-4.6.tar.pixz > /dev/null
pixz -l linux-4.6.tar.pixz > /dev/null 0.04s user 0.01s system 57% cpu 0.087 total Now, to extract individual elements of the archive, the worst case scenario for a tar archive is when accessing the last element: $ xzcat linux-4.6.tar.xz.list.xz|tail -1
-rw-rw-r-- root/root 5976 2016-05-15 23:43 linux-4.6/virt/lib/irqbypass.c
$ time tar xOf linux-4.6.tar.xz linux-4.6/virt/lib/irqbypass.c | wc
257 638 5976
tar xOf linux-4.6.tar.xz linux-4.6/virt/lib/irqbypass.c 7.27s user 1.13s system 115% cpu 7.279 total
wc 0.00s user 0.00s system 0% cpu 7.279 total That's pretty bad as it needs to read (and uncompress) the whole archive. Compare with: $ time unzip -p linux-4.6.zip linux-4.6/virt/lib/irqbypass.c | wc
257 638 5976
unzip -p linux-4.6.zip linux-4.6/virt/lib/irqbypass.c 0.02s user 0.01s system 19% cpu 0.119 total
wc 0.00s user 0.00s system 1% cpu 0.119 total My version of 7z seems not to be able to do random access, so it seems to be even worse than tar.xz : $ time 7z e -so linux-4.6.7z linux-4.6/virt/lib/irqbypass.c 2> /dev/null | wc
257 638 5976
7z e -so linux-4.6.7z linux-4.6/virt/lib/irqbypass.c 2> /dev/null 7.28s user 0.12s system 89% cpu 8.300 total
wc 0.00s user 0.00s system 0% cpu 8.299 total Now since we have our pixz generated one from earlier: $ time pixz < linux-4.6.tar.pixz -x linux-4.6/virt/lib/irqbypass.c | tar xOf - | wc
257 638 5976
pixz -x linux-4.6/virt/lib/irqbypass.c < linux-4.6.tar.pixz 1.37s user 0.06s system 84% cpu 1.687 total
tar xOf - 0.00s user 0.01s system 0% cpu 1.693 total
wc 0.00s user 0.00s system 0% cpu 1.688 total It's faster but still relatively slow because the archive contains few large blocks: $ pixz -tl linux-4.6.tar.pixz
17648865 / 134217728
15407945 / 134217728
18275381 / 134217728
19674475 / 134217728
18493914 / 129333248
336945 / 2958887 So pixz still needs to read and uncompress a (up to a) ~19MB large chunk of data. We can make random access faster by making archives with smaller blocks (and sacrifice a bit of disk space): $ pixz -f0.25 -9 < linux-4.6.tar > linux-4.6.tar.pixz2
$ ls --block-size=1 -sS1 linux-4.6.tar.pixz2
93745152 linux-4.6.tar.pixz2
$ time pixz < linux-4.6.tar.pixz2 -x linux-4.6/virt/lib/irqbypass.c | tar xOf - | wc
257 638 5976
pixz -x linux-4.6/virt/lib/irqbypass.c < linux-4.6.tar.pixz2 0.17s user 0.02s system 98% cpu 0.189 total
tar xOf - 0.00s user 0.00s system 1% cpu 0.188 total
wc 0.00s user 0.00s system 0% cpu 0.187 total | {
"source": [
"https://unix.stackexchange.com/questions/287087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27695/"
]
} |
287,278 | If I change the umask to 0000 , I'd expect a text file to be created with rwxrwxrwx permissions (based on my understanding of the umask, as described in the "possible duplicate" question ) However, when I try this, I get the following $ umask 0000
$ touch /tmp/new.txt
$ ls -ld /tmp/new.txt
-rw-rw-rw- 1 alanstorm wheel 0 Jun 2 10:52 /tmp/new.txt That is, execute permission is omitted, and I end up with rw-rw-rw- for files (directories are rwxrwxrwx ). I tried this on my local OS X machine, an old BSD machine I have at a shared host, and a linux server running on linode. Why is this? Its my understanding that umask is the final arbiter of permissions -- is my understanding of this incorrect? If so, what else influences the default permissions of files on a unix system? | umask is subtractive, not prescriptive: permission bits set in umask are removed by default from modes specified by programs, but umask can't add permission bits. touch specifies mode 666 by default (the link is to the GNU implementation, but others behave in the same way; this is specified by POSIX ), so the resulting file ends up with that masked by the current umask : in your case, since umask doesn't mask anything, the result is 666. The mode of a file or directory is usually specified by the program which creates it; most system calls involved take a mode ( e.g. open(2) , creat(2) , mkdir(2) all have a mode parameter; but fopen(2) doesn't, and uses mode 666). Unless the parent directory specifies a default ACL, the process's umask at the time of the call is used to mask the specified mode (bitwise mode & ~umask ; effectively this subtracts each set of permissions in umask from the mode), so the umask can only reduce a mode, it can't increase it. If the parent directory specifies a default ACL, that's used instead of the umask: the resulting file permissions are the intersection of the mode specified by the creating program, and that specified by the default ACL. POSIX specifies that the default mode should be 666 for files, 777 for directories; but this is just a documentation default ( i.e. , when reading POSIX, if a program or function doesn't specify a file or directory's mode, the default applies), and it's not enforced by the system. Generally speaking this means that POSIX-compliant tools specify mode 666 when creating a file, and mode 777 when creating a directory, and the umask is subtracted from that; but the system can't enforce this, because there are many legitimate reasons to use other modes and/or ignore umask: compilers creating an executable try to produce a file with the executable bits set (they do apply umask though); chmod(1) obviously specifies the mode depending on its parameters, and it ignores umask when "who" is specified, or the mode is fully specified (so chmod o+x ignores umask, as does chmod 777 , but chmod +w applies umask); tools which preserve permissions apply the appropriate mode and ignore umask: e.g. cp -p , tar -p ; tools which take a parameter fully specifying the mode also ignore umask: install --mode , mknod -m ... So you should think of umask as specifying the permission bits you don't want to see set by default, but be aware that this is just a request. You can't use it to specify permission bits you want to see set, only those you want to see unset. Furthermore, any process can change its umask anyway, using the umask(2) system call! The umask(2) system call is also the only POSIX-defined way for a process to find out its current umask (inherited from its parent). On Linux, starting with kernel 4.7, you can see a process's current umask by looking for Umask in /proc/${pid}/status . 
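To watch the mode & ~umask arithmetic described above in action, compare the same touch under two different umasks (the file names are just examples):
$ umask 022; touch with-mask.txt;    ls -l with-mask.txt     # -rw-r--r--  (666 & ~022 = 644)
$ umask 000; touch without-mask.txt; ls -l without-mask.txt  # -rw-rw-rw-  (666 & ~000 = 666)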
(For the sake of completeness, I'll mention that the behaviour regarding setuid , setgid and sticky bits is system-dependent, and remote filesystems such as NFS can add their own twists.) | {
"source": [
"https://unix.stackexchange.com/questions/287278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22521/"
]
} |
287,316 | Is there a benefit of creating a file with touch prior to edit.. like: touch foo
vi foo versus getting it to editor straight-away? Like: vi foo I see quite a few tutorials using the former ( touch then vi ). | touch ing the file first confirms that you actually have the ability to create the file, rather than wasting time in an editor only to find out that the filesystem is read-only or some other problem. | {
"source": [
"https://unix.stackexchange.com/questions/287316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129469/"
]
} |
287,629 | I want to clean up my server from large log files and backups. I came up with this: find ./ -size +1M | xargs rm But I do not want to include mp3 and mp4. I just want to do this for log and archive files (zip, tar, etc.) What will the command look like? | find -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -delete The \( \) construct allows grouping different filename patterns. By using the -delete option, we can avoid piping and trouble with xargs . See this , this and this . ./ or . is optional when using the find command for the current directory. Edit: As Eric Renouf notes, if your version of find doesn't support the -delete option, use the -exec option find -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -exec rm {} + where all the files filtered by the find command are passed to the rm command | {
"source": [
"https://unix.stackexchange.com/questions/287629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122146/"
]
} |
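Before running either deleting variant from the answer above, it can be worth previewing the matches by swapping the action for -print (same tests, nothing removed):
find . -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -print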
287,866 | I have a bash script and I have an else statement which I want to do nothing. What is the best way to do this? | The standard way to do it is using colon : : if condition; then
command
else
:
fi or true : if condition; then
command
else
true
fi But why not just skip the else part: if condition; then
command
fi In zsh and yash , you can even make the else part empty: if condition; then
command
else
fi | {
"source": [
"https://unix.stackexchange.com/questions/287866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159904/"
]
} |
287,903 | I have inotifywait (version 3.14) on Linux to monitor a folder that is shared with Samba Version 4.3.9-Ubuntu. It works if I copy a file from a Linux machine to the samba share (that is on a different machine, also under Linux). But if I copy a file from a Windows machine, inotify won't detect anything.
Spaces or no spaces, recursive or not, the result is the same. printDir="/media/smb_share/temp/monitor"
inotifywait -m -r -e modify -e create "$printDir" | while read line
do
echo "$line"
done Does anyone have any ideas of how to solve it? | The standard way to do it is using colon : : if condition; then
command
else
:
fi or true : if condition; then
command
else
true
fi But why not just skip the else part: if condition; then
command
fi In zsh and yash , you can even make the else part empty: if condition; then
command
else
fi | {
"source": [
"https://unix.stackexchange.com/questions/287903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147365/"
]
} |
287,913 | I was trying to make a linux server become a radius client. So I downloaded pam_radius. By following the steps from this website : openacs.org/doc/install-pam-radius.html and by following these steps : cd /usr/local/src
wget ftp://ftp.freeradius.org/pub/radius/pam_radius-1.3.16.tar
tar xvf pam_radius-1.3.16
cd pam_radius
make
cp pam_radius_auth.so /lib/security I thought I could install it but I got stuck at "make" I get this error message: [root@zabbix pam_radius-1.4.0]# make
cc -Wall -fPIC -c src/pam_radius_auth.c -o pam_radius_auth.o
make: cc: Command not found
make: *** [pam_radius_auth.o] Error 127 I googled this error message and someone said they installed pam-devel. But I get the same message even after installation of pam-devel. What can I do? | The standard way to do it is using colon : : if condition; then
command
else
:
fi or true : if condition; then
command
else
true
fi But why not just skip the else part: if condition; then
command
fi In zsh and yash , you can even make the else part empty: if condition; then
command
else
fi | {
"source": [
"https://unix.stackexchange.com/questions/287913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167898/"
]
} |
288,037 | I'm experiencing a strange behaviour of xmobar right after I enter xmonad . When I start xmonad (from .xinitrc , I use XDM) my xmobar appears, but it is neither at the top nor the bottom of the window stack . Once I start an application (e.g. a terminal emulator by pressing Mod + Shift + Return ) the application uses the entire screen, as if the xmobar was at the bottom. Then I press Mod + B and nothing happens; once I press Mod + B a second time xmobar is lifted to the top, reducing the application window size. After that Mod + B works correctly for the remainder of the xmonad session, i.e. it lowers/raises (hides/shows) the xmobar . I'm confident I misconfigured something. My xmonad.hs looks like: import XMonad
import XMonad.Hooks.SetWMName
import XMonad.Hooks.DynamicLog
main = do
xmonad =<< statusBar "xmobar" myPP toggleStrutsKey defaultConfig
{ terminal = "urxvt"
, focusFollowsMouse = True
, clickJustFocuses = False
, borderWidth = 1
, modMask = mod4Mask
, workspaces = myworkspaces
, normalBorderColor = "#dddddd"
, focusedBorderColor = "#00dd00"
, manageHook = mymanager
, startupHook = setWMName "LG3D"
}
myPP = xmobarPP { ppOutput = putStrLn
, ppCurrent = xmobarColor "#336433" "" . wrap "[" "]"
--, ppHiddenNoWindows = xmobarColor "grey" ""
, ppTitle = xmobarColor "darkgreen" "" . shorten 20
, ppLayout = shorten 6
--, ppVisible = wrap "(" ")"
, ppUrgent = xmobarColor "red" "yellow"
}
toggleStrutsKey XConfig { XMonad.modMask = modMask } = (modMask, xK_b)
myworkspaces = [ "code"
, "web"
, "media"
, "irc"
, "random"
, "mail"
, "docs"
, "music"
, "root"
]
mymanager = composeAll
[ className =? "gimp" --> doFloat
, className =? "vlc" --> doFloat
] Whilst the beginning of my .xmobarrc looks as follows: Config {
-- appearance
font = "xft:Bitstream Vera Sans Mono:size=9:bold:antialias=true"
, bgColor = "black"
, fgColor = "#646464"
, position = Top
, border = BottomB
, borderColor = "#646464"
-- layout
, sepChar = "%" -- delineator between plugin names and straight text
, alignSep = "}{" -- separator between left-right alignment
, template = "%battery% | %multicpu% | %coretemp% | %memory% | %dynnetwork% | %StdinReader% }{ %date% || %kbd% "
-- general behavior
, lowerOnStart = False -- send to bottom of window stack on start
, hideOnStart = False -- start with window unmapped (hidden)
, allDesktops = True -- show on all desktops
, overrideRedirect = True -- set the Override Redirect flag (Xlib)
, pickBroadest = False -- choose widest display (multi-monitor)
, persistent = True -- enable/disable hiding (True = disabled)
-- plugins (i do not use any)
, commands = [ -- actually several commands are in here
]
} I tried several combinations of: , lowerOnStart =
, hideOnStart = (True/True, True/False, False/True and False/False as shown now). But the behaviour before I press Mod + B two times does not change. I believe that I have misconfigured xmonad in some way, not xmobar, but that is just a guess. My .xinitrc might be of help: #!/bin/sh
if test -d /etc/X11/xinit/xinitrc.d
then
# /etc/X11/xinit/xinitrc.d is actually empty
for f in /etc/X11/xinit/xinitrc.d/*
do
[ -x "$f" ] && source "$f"
done
unset f
fi
# uk keyboard
setxkbmap gb
xrdb .Xresources
xscreensaver -no-splash &
# java behaves badly in non-reparenting window managers (e.g. xmonad)
export _JAVA_AWT_WM_NONREPARENTING=1
# set the background (again, because qiv uses a different buffer)
/usr/bin/feh --bg-scale --no-fehbg -z /usr/share/archlinux/wallpaper/a*.jpg
# pulse audio for alsa
then
/usr/bin/start-pulseaudio-x11
fi
exec xmonad | Two months later I figured it out. The problem is that statusBar does not register the events of Hooks.manageDocks properly. Once xmonad is running all works well because manageDocks is able to update the Struts on every window event. But at the moment xmonad is starting, the event of making the first window fullscreen happens before the events from manageDocks . This makes that first open window ignore the existence of xmobar . manageDocks has its own event handler that must be set as the last event handler, therefore statusBar cannot be used. Instead, it is necessary to make xmonad call and configure xmobar manually through dynamicLog , manageHook , layoutHook and handleEventHook . A minimalistic configuration for this would be: main = do
xmproc <- spawnPipe "xmobar"
xmonad $ defaultConfig
{ modMask = mod4Mask
, manageHook = manageDocks <+> manageHook defaultConfig
, layoutHook = avoidStruts $ layoutHook defaultConfig
-- this must be in this order, docksEventHook must be last
, handleEventHook = handleEventHook defaultConfig <+> docksEventHook
, logHook = dynamicLogWithPP xmobarPP
{ ppOutput = hPutStrLn xmproc
, ppTitle = xmobarColor "darkgreen" "" . shorten 20
, ppHiddenNoWindows = xmobarColor "grey" ""
}
, startupHook = setWMName "LG3D"
} `additionalKeys`
[ ((mod4Mask, xK_b), sendMessage ToggleStruts) ] This makes all events be processed by docksEventHook and ensures that layout changes made by docksEventHook are the last ones applied. Now lowerOnStart = False (or True ) works as expected in all cases within xmobarrc . | {
"source": [
"https://unix.stackexchange.com/questions/288037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172635/"
]
} |
288,105 | When I execute less package.rpm , less shows me all sorts of meta info about the package. What is less exactly doing - does it have built-in code to be able to extract meta info, or is an rpm structured in a way that the first part just looks like a text file? I would assume the former, since head is not so helpful here. But to get to the real question: If I would like to grep through this meta data that less is showing me, how can I accomplish this? | If you browse through the less man page, you'll notice less has an INPUT PREPROCESSOR feature. Run echo $LESSOPEN to view the location of this preprocessor, and use less / vim / cat to view its contents. On my machine this preprocessor is /usr/bin/lesspipe.sh and it includes the following for rpms: *.rpm) rpm -qpivl --changelog -- "$1"; handle_exit_status $? In effect, less hands off opening the file to rpm , and shows you the pagination of its output. Obviously, to grep through this info, simply grep the output of rpm directly: grep "foo" < <(rpm -qpivl --changelog -- bar.rpm) Or in general (thanks OrangeDog) grep "foo" < <(lesspipe.sh bar.rpm) Note: $LESSOPEN does not simply hold the location of lesspipe.sh - it begins with a | and ends with a %s so invoking it directly would result in errors. | {
"source": [
"https://unix.stackexchange.com/questions/288105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49331/"
]
} |
288,409 | if I want to count the lines of code, the trivial thing is cat *.c *.h | wc -l But what if I have several subdirectories? | The easiest way is to use the tool called cloc . Use it this way: cloc . That's it. :-) | {
"source": [
"https://unix.stackexchange.com/questions/288409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9115/"
]
} |
288,427 | I have a text file that looks like this: {
"mimeType": "web",
"body": "adsfdf",
"data_source_name": "abc",
"format": "web",
"url": "http://google.com/",
"urls": "http://google.com/",
"lastModified": "123123",
"title": "Google",
"docdatetime_dt": "1231234",
"wfbdomain": "google.com",
"id": "http://google.com",
},
{
"mimeType": "web",
"body": "adsfdf",
"data_source_name": "zdf",
"format": "web",
"url": "http://facebook.com/",
"urls": "http://facebook.com/",
"lastModified": "123123",
"title": "Facebook",
"docdatetime_dt": "1231234",
"wfbdomain": "facebook.com",
"id": "http://facebook.com",
},
{
"mimeType": "web",
"body": "adsfdf",
"format": "web",
"url": "http://twitter.com/",
"urls": "http://twitter.com/",
"lastModified": "123123",
"title": "Twitter",
"docdatetime_dt": "1231234",
"wfbdomain": "twitter.com",
"id": "http://twitter.com",
} If you see the third one in the above block, you will notice that "data_source_name": .... is missing. I have a file that is really huge and want to check if this particular thing is missing, and if missing, print/echo it. I tried sed but am unable to figure out how to use it properly. Is it possible using sed or something else? | The easiest way is to use the tool called cloc . Use it this way: cloc . That's it. :-) | {
"source": [
"https://unix.stackexchange.com/questions/288427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152598/"
]
} |
288,428 | I'd like to learn why traceroute sends three packets per hop by default. (Nothing important, I'm just curious). Edit: packages != packets | The easiest way is to use the tool called cloc . Use it this way: cloc . That's it. :-) | {
"source": [
"https://unix.stackexchange.com/questions/288428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157212/"
]
} |
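A small illustration of the -q option mentioned above (example.com is a placeholder host):
traceroute -q 1 example.com   # one probe per hop: quick, but a single lost packet shows up as *
traceroute -q 5 example.com   # five probes per hop: slower, but gives a better feel for jitter and loss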
288,521 | If I use cat -n text.txt to automatically number the lines, how do I then use the command to show only certain numbered lines? | Use sed . Usage: $ cat file
Line 1
Line 2
Line 3
Line 4
Line 5
Line 6
Line 7
Line 8
Line 9
Line 10 To print one line (5) $ sed -n 5p file
Line 5 To print multiple lines (5 & 8) $ sed -n -e 5p -e 8p file
Line 5
Line 8 To print specific range (5 - 8) $ sed -n 5,8p file
Line 5
Line 6
Line 7
Line 8 To print range with other specific line (5 - 8 & 10) $ sed -n -e 5,8p -e 10p file
Line 5
Line 6
Line 7
Line 8
Line 10 | {
"source": [
"https://unix.stackexchange.com/questions/288521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174168/"
]
} |
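If you specifically want to keep the numbers that cat -n adds (the sed commands above print the original, unnumbered lines), you can filter the numbered output instead:
cat -n text.txt | sed -n '5p;8p'   # lines 5 and 8, still prefixed with their line numbers
cat -n text.txt | sed -n '5,8p'    # the numbered range 5-8
Both are sketches of the same idea: number first, then let sed select by position.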
288,589 | Since Google Chrome/Chromium spawns multiple processes, it's harder to see how much memory these processes use in total. Is there an easy way to see how much total memory a series of connected processes is using? | Given that Google killed chrome://memory in March 2016, I am now using smem : # detailed output, in kB apparently
smem -t -P chrom
# just the total PSS, with automatic unit:
smem -t -k -c pss -P chrom | tail -n 1 To be more accurate, replace chrom by the full path, e.g. /opt/google/chrome or /usr/lib64/chromium-browser . This works the same for multiprocess firefox (e10s) with -P firefox . Be careful: smem reports itself in the output, an additional ~10-20M on my system. Unlike top, it needs root access to accurately monitor root processes -- use sudo smem for that. See this SO answer for more details on why smem is a good tool and how to read the output. | {
"source": [
"https://unix.stackexchange.com/questions/288589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123297/"
]
} |
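If smem is not available, a rougher sketch is to add up the resident set sizes that ps reports for every chrome process (swap chrome for chromium or firefox as needed):
ps -C chrome -o rss= | awk '{ sum += $1 } END { print sum " KiB" }'
Keep in mind that RSS counts shared memory once per process, so this total overstates real usage - which is exactly why the answer above prefers the PSS figure from smem.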
288,774 | Why do some GNU Coreutils commands have the -T/--no-target-directory option? It seems like everything that it does can be achieved using the semantics of the . (self dot) in a traditional Unix directory hierarchy. Considering: cp -rT /this/source dir The -T option prevents the copy from creating a dir/source subdirectory. Rather, /this/source is identified with dir and the contents are mapped between the trees accordingly. So for instance /this/source/foo.c goes to dir/foo.c and so on, rather than to dir/source/foo.c . But this can be easily accomplished without the -T option using: cp -r /this/source/. dir # Probably worked fine since the dawn of Unix? Semantically, the trailing dot component is copied as a child of dir , but of course that "child" already exists (so doesn't have to be created) and is actually dir itself, so the effect is that /this/source is identified with dir . It works fine if the current directory is the target: cp -r /this/tree/node/. . # node's children go to current dir Is there something you can do only with -T that can rationalize its existence? (Besides support for operating systems that don't implement the dot directory, a rationale not mentioned in the documentation.) Does the above dot trick not solve the same race conditions that are mentioned in the GNU Info documentation about -T ? | Your . trick can only be used when you're copying a directory, not a file. The -T option works with both directories and files. If you do: cp srcfile destfile and there's already a directory named destfile , it will copy to destfile/srcfile , which may not be intended. So you use cp -T srcfile destfile and you correctly get the error: cp: cannot overwrite directory `destfile' with non-directory If you tried using the . method, the copy would never work: cp: cannot stat `srcfile/.`: Not a directory | {
"source": [
"https://unix.stackexchange.com/questions/288774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16369/"
]
} |
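The same distinction shows up with mv, where the trailing-dot trick cannot help at all because a rename has no contents to map (the directory names below are placeholders):
mv -T build build.bak   # rename build to build.bak; fails loudly if build.bak is a non-empty directory
mv build build.bak      # if build.bak already exists as a directory, build silently ends up as build.bak/build
In scripts that is often the whole point of -T: you either get the rename you asked for or a clear error, never an accidental nesting.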