264,397
I learned that when I use a command, double quoting treats everything as a literal character except $, `, and \. But in a command like find -type f -name "*.jpg", the *.jpg is inside double quotes, which means we want to treat * and . as plain characters. So the find command should output regular files literally named *.jpg, with no pathname expansion performed. If we wanted pathname expansion, I would think I'd have to type find -type f -name *.jpg (without double quoting). But the result is the same. Why use double quoting in this command?
There is a subtlety to how wildcard expansion works. Change to a directory which contains no .jpg files and type

echo *.jpg

and it will output

*.jpg

In particular, the string *.jpg is left unmodified. If, however, you change to a directory containing .jpg files — for example, suppose we have two files, image1.jpg and image2.jpg — then the echo *.jpg command will output

image1.jpg image2.jpg

and the *.jpg gets expanded. If you type find . -name *.jpg and there are no .jpg files in the directory you are in when you type this, then find will receive the arguments ".", "-name" and "*.jpg". If, however, you type this command in a directory containing .jpg files, say image1.jpg and image2.jpg, then find will receive the arguments ".", "-name", "image1.jpg" and "image2.jpg", so it will in effect run the command

find . -name image1.jpg image2.jpg

and find will complain. What can be really confusing if you omit the quotes is if there is a single .jpg file (say image1.jpg). Then the wildcard expansion will result in

find . -name image1.jpg

and the find command will find all files whose basename is image1.jpg. Aside: this does lead to a useful bash idiom for seeing if any files match a given pattern:

if [ "$(echo *.jpg)" = "*.jpg" ]; then
    # *.jpg has no matches
else
    # *.jpg has matches
fi

though be warned that this will not work if there is a file called '*.jpg' in the current directory. To be more watertight, you can do

if [ "$(echo *.jpg)" = "*.jpg" ] && [ ! -e "*.jpg" ]; then
    # *.jpg has no matches
else
    # *.jpg has matches
fi

(While not directly relevant to the question, I added this since it illustrates some of the aspects of how wildcard expansion works.)
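As a side note not in the original answer: in bash specifically, the nullglob option offers another way to test whether a glob matches; a minimal sketch, assuming bash:

shopt -s nullglob        # a pattern with no matches now expands to nothing
matches=(*.jpg)
shopt -u nullglob        # restore the default behavior
if [ "${#matches[@]}" -eq 0 ]; then
    echo "no .jpg files"
else
    echo "${#matches[@]} .jpg file(s)"
fi

This avoids both pitfalls above, since no string comparison against the literal pattern is involved.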
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152911/" ] }
264,406
I have an NTFS formatted USB hard disk that works fine (mounts and unmounts cleanly) on my windows desktop. I can't however seem to mount it on my freebsd box at all. Stripping this back to the basics, I can confirm the box sees the USB device,

pfSense log/ root^> dmesg
ugen1.5: <Seagate> at usbus1
umass1: <Seagate Expansion Desk, class 0/0, rev 2.10/1.00, addr 5> on usbus1
da1 at umass-sim1 bus 1 scbus2 target 0 lun 0
da1: <Seagate Expansion Desk 0604> Fixed Direct Access SCSI-6 device
da1: Serial Number NA4KXT5F
da1: 40.000MB/s transfers
da1: 3815447MB (976754645 4096 byte sectors: 255H 63S/T 60800C)
da1: quirks=0x2<NO_6_BYTE>

The USB device also shows up under camcontrol and usbconfig

pfSense log/ root^> usbconfig
ugen0.1: <XHCI root HUB 0x8086> at usbus0, cfg=0 md=HOST spd=SUPER (5.0Gbps) pwr=SAVE (0mA)
ugen1.1: <EHCI root HUB Intel> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: <product 0x8001 vendor 0x8087> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.3: <USB2.0 Hub vendor 0x05e3> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (100mA)
ugen1.4: <USB Storage Generic> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)
ugen1.5: <Expansion Desk Seagate> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (0mA)

pfSense log/ root^> camcontrol devlist
<C400-MTFDDAK256MAM 070H>      at scbus0 target 0 lun 0 (ada0,pass0)
<Generic STORAGE DEVICE 9451>  at scbus1 target 0 lun 0 (pass1,da0)
<Seagate Expansion Desk 0604>  at scbus2 target 0 lun 0 (da1,pass2)

But running a simple command like fdisk -p gets me nowhere,

pfSense log/ root^> fdisk -p /dev/da1
fdisk: could not detect sector size

Any pointers on where I am going wrong would be very helpful. PS, in case anyone picked up on it from the hostname, this is a box running pfsense with Finch for various jails. The ntfs-3g and all the troubleshooting is under finch. Many thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157323/" ] }
264,464
I am running this command:

$ sudo tar xvzf nexus-latest-bundle.tar.gz

The extracted files belong to an unknown (1001) user:

drwxr-xr-x 8 1001 1001 4096 Dec 16 18:37 nexus-2.12.0-01
drwxr-xr-x 3 1001 1001 4096 Dec 16 18:47 sonatype-work

Shouldn't root be the owner under a normal configuration? I am working on a linux installation replicated from an AWS AMI.
When extracting files as root, tar will use the original ownership. You can override that using the --no-same-owner option (alternatively, -o). Your tar file referred to a user/group which does not exist on the system where you extracted it. If you extract files as yourself (a non-privileged user), you can only create files owned by yourself. The GNU tar manual says:

--same-owner
    When extracting an archive, tar will attempt to preserve the owner specified in the tar archive with this option present. This is the default behavior for the superuser; this option has an effect only for ordinary users. See section Handling File Attributes.
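Applied to the command from the question, that would be for example:

sudo tar xvzf nexus-latest-bundle.tar.gz --no-same-owner

which makes the extracted files owned by root instead of the archive's recorded UID/GID (1001).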
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83737/" ] }
264,522
When a script is launched from the command prompt, the shell will spawn a subprocess for that script. I want to show that relationship between the terminal-level process and its children using ps in a tree style output. How can I do this? What I have tried so far, file: script.sh

#!/bin/bash
ps -f -p$1

Then I invoke the script from the command line passing in the process id of the terminal shell:

$ ./script.sh $$

What I want is something like this:

- top level (terminal) shell process
- ./script.sh process
- process for ps command itself

USER   PID   [..]
ubuntu 123    -bash
ubuntu 1234    \_ bash ./script.sh
ubuntu 12345       \_ ps auxf

What I'm getting is:

PID   TTY    STAT TIME COMMAND
14492 pts/24 Ss   0:00 -bash
Try

# ps -aef --forest
root     114032   1170  0 Apr05 ?     00:00:00  \_ sshd: root@pts/4
root     114039 114032  0 Apr05 pts/4 00:00:00  |   \_ -bash
root      56225 114039  0 13:47 pts/4 00:00:16  |       \_ top
root     114034   1170  0 Apr05 ?     00:00:00  \_ sshd: root@notty
root     114036 114034  0 Apr05 ?     00:00:00  |   \_ /usr/libexec/openssh/sftp-server
root     103102   1170  0 Apr06 ?     00:00:03  \_ sshd: root@pts/0
root     103155 103102  0 Apr06 pts/0 00:00:00  |   \_ -bash
root     106798 103155  0 Apr06 pts/0 00:00:00  |       \_ su - postgres
postgres 106799 106798  0 Apr06 pts/0 00:00:00  |           \_ -bash
postgres  60959 106799  0 14:39 pts/0 00:00:00  |               \_ ps -aef --forest
postgres  60960 106799  0 14:39 pts/0 00:00:00  |               \_ more
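To limit the tree to the current terminal's session rather than every process on the machine, something like the following should work with procps ps (a sketch; with a numeric argument, -g selects by session ID):

ps --forest -o pid,tty,stat,time,cmd -g "$(ps -o sid= -p $$)"

Run from inside the script, this shows the terminal shell, the script's bash process, and the ps process itself as one small tree, matching the desired output in the question.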
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/264522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
264,523
Let me give an example:

$ echo Hello > file1
$ echo Hello > file2
$ echo Hello > .file3
$ grep Hello * 2>/dev/null
file1:Hello
file2:Hello

Here you can see that grep ignored .file3, which starts with a dot. Expected result:

$ grep Hello * 2>/dev/null
.file3:Hello
file1:Hello
file2:Hello

In the case of ls, there is an argument -a to tell it not to ignore entries starting with ., but I can't find any such feature for grep. So, is there any way to tell grep not to ignore entries starting with .? Or why does grep ignore them?
* does not match a leading period . in filename expansion. The rule was specified by POSIX. With pattern matching, it works:

$ sh -c 'case . in *) echo 1;; esac'
1

POSIXly:

find . ! -name . -prune -type f -exec grep 'pattern' /dev/null {} +

This approach has an advantage over using shell globbing: you will never get an "Argument list too long" error when there are too many files.
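If you are in bash and just want the glob itself to include dotfiles, there is also the dotglob option (a bash-specific sketch, not part of the original answer):

shopt -s dotglob      # make * match names starting with .
grep Hello * 2>/dev/null
shopt -u dotglob      # restore the default behavior

Note that even with dotglob set, bash never expands * to the special entries . and .. themselves.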
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
264,562
I sometimes run into software that is not offered as a .deb or .rpm but only as an executable. For example Visual Studio Code, WebStorm or Kerbal Space Program. For this question, I will take Visual Studio Code as the point of reference. The software is offered as a zipped package. When unzipping, I'm left with a folder called VSCode-linux-x64 that contains an executable named Code. I can double click Code or point to it with my terminal like /home/user/Downloads/VSCode-linux-x64/Code to execute it. However, I would like to know if there is a proper way to install these applications. What I want to achieve is: one place where I can put all the applications/software that are offered in this manner (executables), and terminal support (meaning, for example, that I can write vscode from any folder in my terminal and it will automatically execute Visual Studio Code). Additional info: Desktop Environment: Gnome3; OS: Debian. EDIT: I decided to give @kba the answer because his approach works better with my backup solution, and besides that, having a script execute the binaries gives you the possibility to add arguments. But to be fair, @John WH Smith's approach is just as good as @kba's.
To call a program by its name, shells search the directories in the $PATH environment variable. In Debian, the default $PATH for your user should include /home/YOUR-USER-NAME/bin (i.e. ~/bin). First make sure the directory ~/bin exists or create it if it does not:

mkdir -p ~/bin

You can symlink binaries into that directory to make them available to the shell:

ln -s /home/user/Downloads/VSCode-linux-x64/Code ~/bin/vscode

That will allow you to run vscode on the command line or from a command launcher. Note: You can also copy binaries to the $PATH directories but that can cause problems if they depend on relative paths. In general, though, it's always preferable to properly install software using the means provided by the OS (apt-get, deb packages) or the build tools of a software project. This will ensure that dependent paths (like start scripts, man pages, configurations etc.) are set up correctly. Update: Also reflecting Thomas Dickey's comments and Faheem Mitha's answer, what I usually do for software that comes as a tarball with a top-level binary and expects to be run from there: put it in a sane location (in order of standards-compliance /opt, /usr/local or a folder in your home directory, e.g. ~/build) and create an executable script wrapper in a $PATH location (e.g. /usr/local/bin or ~/bin) that changes to that location and executes the binary:

#!/bin/sh
cd "$HOME/build/directory"
exec ./top-level-binary "$@"

Since this emulates changing to that directory and executing the binary manually, it makes it easier to debug problems like non-existing relative paths.
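To make that wrapper concrete for the Visual Studio Code example from the question (a sketch; the paths come from the question, the rest is one possible way to create the file):

cat > ~/bin/vscode <<'EOF'
#!/bin/sh
cd "$HOME/Downloads/VSCode-linux-x64"
exec ./Code "$@"
EOF
chmod +x ~/bin/vscode

After opening a new shell (so that $PATH is re-evaluated if ~/bin was just created), typing vscode anywhere starts the editor, and any arguments are passed through via "$@".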
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264562", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40919/" ] }
264,586
I'm trying to build a bash script to install the Source Guardian PHP extension however the destination directory is different on every subsequent release of Ubuntu. Installing PHP5 on Ubuntu 14.04 results in the extensions being stored in /usr/lib/php5/20121212+lfs/, in Ubuntu 15.04 this directory changes, e.g. /usr/lib/20131226/ I've checked /etc/php5/fpm/php.ini and /etc/php5/fpm/php-fpm.conf but neither of these files has any mention of 20121212+lfs or 20131226. If I place the Source Guardian extension anywhere else, it does not load. Is there a way to programmatically determine the extension folder?
Maybe you should do this:

php-config --extension-dir

(If php-config doesn't exist, then apt-get install php-config on Ubuntu/Debian, or yum install php-config on CentOS/Red Hat.) That command will give the exact location of your PHP extension folder. Don't forget to change your php.ini in order to use the extensions.
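In an install script like the one described in the question, the result can be captured and used directly; a sketch, where sourceguardian.so is a placeholder name for the extension file being installed:

EXT_DIR=$(php-config --extension-dir)
cp sourceguardian.so "$EXT_DIR/"

This removes the dependency on the release-specific 20121212+lfs / 20131226 directory names entirely.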
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37430/" ] }
264,632
I found two commands to output information about my CPU: cat /proc/cpuinfo and lscpu. /proc/cpuinfo shows that my CPU speed is 2.1 GHz, whereas lscpu says it is 3167 MHz. Which one is correct? This is the exact output from cat /proc/cpuinfo about my processor speed:

model name : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz

And this is from lscpu:

CPU MHz: 3225.234

(For some reason, lscpu outputs differently every time, varying between 3100 and 3300 MHz)
To see the current speed of each core I do this: watch -n.1 "grep \"^[c]pu MHz\" /proc/cpuinfo" Notes: This does not work on server CPUs such as the Intel Xeon series. On such machines it will show the base frequency only. To show the turbo frequency, you'll need cpupower or turbostat. See @Maxim Egorushkin's answer. If your watch command does not work with intervals smaller than one second, modify the interval like so: watch -n1 "grep \"^[c]pu MHz\" /proc/cpuinfo" This displays the cpu speed of each core in real time. By running the following command, one or more times, from another terminal one can see the speed change with the above watch command, assuming SpeedStep is enabled ( Cool'n'Quiet for AMD ). echo "scale=10000; 4*a(1)" | bc -l & (This command uses bc to calculate pi to 10000 places.)
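If the cpufreq driver is loaded, the same per-core information is also exposed under sysfs, which avoids parsing /proc/cpuinfo; a sketch (the path exists on typical Linux systems with frequency scaling, but not on all hardware):

watch -n1 "cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"

The values are reported in kHz, one line per core.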
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/264632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139098/" ] }
264,635
In bash, say you have var=a.b.c, then:

$ IFS=. printf "%s\n" $var
a.b.c

However, such a usage of IFS does take effect while creating an array:

$ IFS=. arr=($var)
$ printf "%s\n" "${arr[@]}"
a
b
c

This is very convenient, sure, but where is this documented? A quick reading of the sections on Arrays or Word Splitting in the Bash documentation does not give any indication either way. A search for IFS through the single-page documentation doesn't provide any hints about this effect either. I'm not sure when I can reliably do:

IFS=x do something

and expect that IFS will affect field splitting.
The basic idea is that VAR=VALUE some-command sets VAR to VALUE for the execution of some-command when some-command is an external command, and it doesn't get more fancy than that. If you combine this intuition with some knowledge of how a shell works, you should come up with the right answer in most cases. The POSIX reference is “Simple Commands” in the chapter “Shell Command Language” . If some-command is an external command , VAR=VALUE some-command is equivalent to env VAR=VALUE some-command . VAR is exported in the environment of some-command , and its value (or lack of a value) in the shell doesn't change. If some-command is a function , then VAR=VALUE some-command is equivalent to VAR=VALUE; some-command , i.e. the assignment remains in place after the function has returned, and the variable is not exported into the environment. The reason for that has to do with the design of the Bourne shell (and subsequently with backward compatibility): it had no facility to save and restore variable values around the execution of a function. Not exporting the variable makes sense since a function executes in the shell itself. However, ksh (including both ATT ksh93 and pdksh/mksh), bash and zsh implement the more useful behavior where VAR is set only during the execution of the function (it's also exported). In ksh , this is done if the function is defined with the ksh syntax function NAME … , not if it's defined with the standard syntax NAME () . In bash , this is done only in bash mode, not in POSIX mode (when run with POSIXLY_CORRECT=1 ). In zsh , this is done if the posix_builtins option is not set; this option is not set by default but is turned on by emulate sh or emulate ksh . If some-command is a builtin, the behavior depends on the type of builtin. Special builtins behave like functions. Special built-ins are the ones that have to be implemented inside the shell because they affect the state shell (e.g. break affects control flow, cd affects the current directory, set affects positional parameters and options…). Other builtins are built-in only for performance and convenience (mostly — e.g. the bash feature printf -v can only be implemented by a builtin), and they behave like an external command. The assignment takes place after alias expansion, so if some-command is an alias , expand it first to find what happens. Note that in all cases, the assignment is performed after the command line is parsed, including any variable substitution on the command line itself. So var=a; var=b echo $var prints a , because $var is evaluated before the assignment takes place. And thus IFS=. printf "%s\n" $var uses the old IFS value to split $var . I've covered all the types of commands, but there's one more case: when there is no command to execute , i.e. if the command consists only of assignments (and possibly redirections). In that case, the assignment remains in place . VAR=VALUE OTHERVAR=OTHERVALUE is equivalent to VAR=VALUE; OTHERVAR=OTHERVALUE . So after IFS=. arr=($var) , IFS remains set to . . Since you could use $IFS in the assignment to arr with the expectation that it already has its new value, it makes sense that the new value of IFS is used for the expansion of $var . In summary, you can use IFS for temporary field splitting only: by starting a new shell or a subshell (e.g. third=$(IFS=.; set -f; set -- $var; echo "$3") is a complicated way of doing third=${var#*.*.} except that they behave differently when the value of var contains less than two . characters); in ksh, with IFS=. 
some-function where some-function is defined with the ksh syntax function some-function … ; in bash and zsh, with IFS=. some-function as long as they are operating in native mode as opposed to compatibility mode.
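The most common bash pattern for this kind of temporary splitting pairs IFS with read, where the assignment-prefix rule applies cleanly because read is a regular builtin; a short illustration (bash syntax):

var=a.b.c
IFS=. read -ra arr <<< "$var"   # IFS=. applies only to this read
printf '%s\n' "${arr[@]}"       # prints a, b and c on separate lines

Since read runs and returns within the current shell, the array lands in the current environment while IFS itself is left untouched.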
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/264635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
264,639
What I do to apply the grub theme is: add the following to /etc/default/grub:

GRUB_THEME="/boot/grub2/themes/system/theme.txt"

Then run the following command to regenerate the config file:

grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

And finally I reboot, but I get no theme. Has anyone encountered this before and know a way around this?
Everything is right, but if you take a closer look at /etc/default/grub you will find the line:

GRUB_TERMINAL_OUTPUT="console"

Comment it out, rebuild grub.cfg with

grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

and you are ready to go!
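If you prefer making that change non-interactively, for instance in a setup script, a sketch using GNU sed's in-place mode:

sed -i 's/^GRUB_TERMINAL_OUTPUT/#&/' /etc/default/grub
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

The & in the replacement re-inserts the matched text, so the line is prefixed with # rather than deleted.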
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72554/" ] }
264,669
I got a copy of The Unix Programming Environment by Kernighan and Pike from a garage sale. I'm very interested in the chapter about the UNIX filesystem. Naturally, I also found this passage very interesting: "The time has come to look at the bytes in a directory:"

$ od -cb .
0000000    4   ;   .  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
         064 073 056 000 000 000 000 000 000 000 000 000 000 000 000 000
...

It was really long so I won't type the whole thing out. The gist of it was that it displayed the directory in the way it was stored on the system. I quickly rushed to my laptop (Debian) to try this out. I typed out the command as it was in the book.

$ od -cb .
od: .: read error: Is a directory
0000000

Obviously it won't let me view the raw contents of the directory. So here's my question. Does the Linux kernel store directories in a different way than the original UNIX kernel did? If not, why is there the need to conceal the actual bytes of the directory from the user?
Each filesystem type stores directories in a different way. There are many different filesystem types with different characteristics — good for high throughput, good for high concurrency, good for limited-memory environments, different compromises between read and write performance, between complexity and stability, etc. Your book describes a filesystem used in early Unix systems. Modern systems support many different filesystems. The very early versions of Unix had a lot of filesystem manipulation outside the kernel. For example, mkdir and rmdir worked by editing some filesystem data structures directly. This was quickly replaced by a uniform directory access interface, the opendir / readdir / closedir family, which allowed applications to manipulate directories without having to know how they were implemented under the hood. The reason you can't read directory contents under Linux isn't because they have to be concealed, but because features exist only if they are implemented, and this feature is pointless and has a cost. Given that the format depends on the filesystem, it's a rather pointless feature: a program can't know the format of what it's reading. It isn't a completely trivial feature to support either: some filesystems organize directories in ways that aren't just a stream of bytes, for example it may be organized as a B-tree . Some Unix variants still allow applications to read directory contents directly, for backward compatibility, but Linux doesn't have this feature (and never had as far as I can recall — it was already an obsolete feature in the early 1990s).
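The uniform interface mentioned above is easy to observe from the shell: directory listing goes through a dedicated system call rather than read(). A quick check with strace (the syscall is named getdents64 on current Linux; older kernels used getdents):

strace -e trace=getdents64 ls . > /dev/null

The trace shows ls obtaining directory entries via getdents64, while an attempt such as cat . fails with "Is a directory" (EISDIR), matching the od error in the question.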
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149373/" ] }
264,672
Usually in bash I did

shopt -s extglob
rm !(filedontwantremove)

and removed all files except filedontwantremove. But what if I want to remove all files except filedontwantremove and antotherfilewithatotaldifferentname? There is a find solution, but I'd prefer something like rm !().
Just use:

shopt -s extglob
rm !(filedontwantremove|antotherfilewithatotaldifferentname)

Note however that extglob is usually on by default in interactive shells in bash. Do this to find out if it is active:

$ shopt -p extglob
shopt -s extglob     ### The -s means that it is set.

And execute this command to find the part of the manual that explains what a !(pattern-list) idiom does:

$ LESS=+/'If the extglob shell option is enabled' man bash

... a pattern-list is a list of one or more patterns separated by a |.
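For comparison, the find-based route the asker alludes to could look like this (a sketch; -delete is a GNU find extension):

find . -maxdepth 1 -type f \
  ! -name filedontwantremove \
  ! -name antotherfilewithatotaldifferentname \
  -delete

Unlike the glob, this also removes dotfiles, so the two are not exactly equivalent.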
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
264,791
Putting on Debian 8.3 stty werase '^H' or on Arch Linux 2/2016 stty werase '^?' in .bashrc (for example) makes Ctrl - Backspace delete the last word in the terminal. Still it's not the same behavior as in modern GUI applications (e.g. Firefox): It deletes the last whitespace -separated word, and not the last word separated by whitespace or characters like . : , ; " ' & / ( ) . Is it possible to make Ctrl - Backspace behave in the terminal similar to modern GUI applications? Also, is there any way to make Ctrl - Delete delete the word immediately before the cursor?
There are two line editors at play here: the basic line editor provided by the kernel (canonical mode tty line editor), and bash's line editor (implemented via the readline library). Both of these have an erase-to-previous-word command which is bound to Ctrl + W by default. The key can be configured for the canonical mode tty line editor through stty werase ; bash imitates the key binding that it finds in the tty setting unless overridden in its own configuration. The werase action in tty line editor cannot be configured. It always erases (ASCII) whitespace-delimited words. It's rare to interact with the tty line editor — it's what you get e.g. when you type cat with no argument. If you want fancy key bindings there, you can run the command under a tool like rlwrap which uses readline. Bash provides two commands to delete the previous word : unix-word-rubout ( Ctrl + w or as set through stty werase ), and backward-kill-word ( M-DEL , i.e. Esc Backspace ) which treats a word as a sequence of alphanumeric characters in the current locale and _ . If you want Ctrl + Backspace to erase the previous sequence of alphanumeric characters, don't set stty werase , and instead put the following line in your .inputrc : "\C-h": backward-kill-word Note that this assumes that your terminal sends the Ctrl+H character for Ctrl + Backspace . Unfortunately it's one of those keys with no standard binding (and Backspace in particular is a mess for historical reasons). There's also a symmetric command kill-word which is bound to M-d ( Alt + D ) by default. To bind it to Ctrl + Delete , you first need to figure out what escape sequence your terminal sends, then add a corresponding line in your .inputrc . Type Ctrl + V then Ctrl + Delete ; this will insert something like ^[[3;5~ where the initial ^[ is a visual representation of the escape character. Then the binding is "\e[3;5~": kill-word If you aren't happy with either definition of a word, you can provide your own in bash: see confusing behavior of emacs-style keybindings in bash
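To experiment before committing anything to .inputrc, the same bindings can be tried in a running bash session with the bind builtin, which accepts the same syntax:

bind '"\C-h": backward-kill-word'   # Ctrl+Backspace on many terminals
bind '"\e[3;5~": kill-word'         # Ctrl+Delete, if that is the sequence your terminal sends

These last only for the current shell, which makes it easy to verify the escape sequences before making them permanent.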
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/264791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150422/" ] }
264,798
I have a top folder with many sub-folders. It's named "a". There are many .png and .jpg files in there. I'd like to recursively copy "a" into a new folder "b", but only copy the .png and .jpg files. How do I achieve that?
find a \( -name "*.png" -or -name "*.jpg" \) -exec cp {} b \;
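Note that this copies every matching file directly into b, flattening the directory tree. If the sub-folder structure of a should be preserved inside b, rsync offers a well-known include/exclude idiom (a sketch, assuming rsync is installed):

rsync -a --include='*/' --include='*.png' --include='*.jpg' --exclude='*' a/ b/

The --include='*/' keeps rsync descending into sub-directories, the two pattern includes keep the images, and the final --exclude='*' drops everything else (add -m to prune directories that end up empty).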
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157576/" ] }
264,920
Unix / Linux EOL is LF, linefeed, ASCII 10, escape sequence \n. Here's a Python snippet to get exactly one keypress:

import sys, tty, termios

def getch():
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(sys.stdin.fileno())
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch

When I press Enter on my keyboard in response to this snippet, it gives \r, carriage return, ASCII 13. On Windows, Enter sends CR LF == 13 10. *nix is not Windows; why does Enter give 13 rather than 10?
Essentially "because it's been done that way since manual typewriters". Really. A manual typewriter had a carriage on which the paper was fed, and it moved forward as you typed (loading a spring), and had a lever or key which would release the carriage, letting the spring return the carriage to the left-margin. As electronic data entry (teletype, etc) were introduced, they carried that forward. So the Enter key on many terminals would be labeled Return . Line feeds happened (in the manual process) after returning the carriage to the left margin. Again, the electronic devices imitated the manual devices, making a separate line-feed operation. Both operations are encoded (to allow the teletype to be more than a standalone device creating a paper type), so we have CR (carriage-return) and LF (line-feed). This image from ASR 33 Teletype Information shows the keyboard, with Return on the right side, and Line-Feed just to the left. Being on the right , it was the main key: Unix came along later. Its developers liked to shorten things (look at all of the abbreviations, even creat for "create"). Faced with a possibly two-part process, they decided that line-feeds only made sense if they were preceded by carriage-returns. So they dropped the explicit carriage returns from files , and translated the terminal's Return key to send the corresponding line-feed. Just to avoid confusion, they referred to line-feed as "newline". When writing text on the terminal, Unix translates in the other direction: a line-feed becomes carriage-return / line-feed. (That is, "normally": so-called "cooked mode", in contrast to "raw" mode where no translation is done). Summary: carriage-return / line-feed is the sequence 13 10 the device sends 13 (since "forever" in your terms) Unix-like systems change that to 13 10 Other systems do not necessarily store just 10 (Windows largely accepts just 10 or 13 10, depending how important compatibility is).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136107/" ] }
264,926
I am always really hesitant to mess around with $IFS because it's clobbering a global. But often it makes loading strings into a bash array nice and concise, and for bash scripting, conciseness is hard to come by. So I figure it might be better than nothing if I try to "save" the starting contents of $IFS to another variable and then restore it immediately after i am done using $IFS for something. Is this practical? Or is it essentially pointless and I should just directly set IFS back to whatever it needs to be for its subsequent uses?
You can save and assign to IFS as needed. There is nothing wrong with doing so. It's not uncommon to save its value for restoration subsequent to a temporary, expeditious modification, like your array assignment example. As @llua mentions in his comment to your question, simply unsetting IFS will restore the default behavior, equivalent to assigning a space-tab-newline. It's worth considering how it can be more problematic to not explicitly set/unset IFS than it is to do so. From the POSIX 2013 edition, 2.5.3 Shell Variables:

Implementations may ignore the value of IFS in the environment, or the absence of IFS from the environment, at the time the shell is invoked, in which case the shell shall set IFS to <space> <tab> <newline> when it is invoked.

A POSIX-compliant, invoked shell may or may not inherit IFS from its environment. From this follows: A portable script cannot dependably inherit IFS via the environment. A script that intends to use only the default splitting behavior (or joining, in the case of "$*"), but which may run under a shell which initializes IFS from the environment, must explicitly set/unset IFS to defend itself against environmental intrusion. N.B. It is important to understand that for this discussion the word "invoked" has a particular meaning. A shell is invoked only when it is explicitly called using its name (including a #!/path/to/shell shebang). A subshell -- such as might be created by $(...) or cmd1 || cmd2 & -- is not an invoked shell, and its IFS (along with most of its execution environment) is identical to its parent's. An invoked shell sets the value of $ to its pid, while subshells inherit it. This is not merely a pedantic disquisition; there is actual divergence in this area. Here is a brief script which tests the scenario using several different shells. It exports a modified IFS (set to :) to an invoked shell which then prints its default IFS.

$ cat export-IFS.sh
export IFS=:
for sh in bash ksh93 mksh dash busybox:sh; do
    printf '\n%s\n' "$sh"
    $sh -c 'printf %s "$IFS"' | hexdump -C
done

IFS is not generally marked for export, but, if it were, note how bash, ksh93, and mksh ignore their environment's IFS=:, while dash and busybox honor it.

$ sh export-IFS.sh

bash
00000000  20 09 0a   | ..|
00000003

ksh93
00000000  20 09 0a   | ..|
00000003

mksh
00000000  20 09 0a   | ..|
00000003

dash
00000000  3a         |:|
00000001

busybox:sh
00000000  3a         |:|
00000001

Some version info:

bash: GNU bash, version 4.3.11(1)-release
ksh93: sh (AT&T Research) 93u+ 2012-08-01
mksh: KSH_VERSION='@(#)MIRBSD KSH R46 2013/05/02'
dash: 0.5.7
busybox: BusyBox v1.21.1

Even though bash, ksh93, and mksh do not initialize IFS from the environment, they re-export their modified IFS. If for whatever reason you need to portably pass IFS via the environment, you cannot do so using IFS itself; you will need to assign the value to a different variable and mark that variable for export. Children will then need to explicitly assign that value to their IFS.
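Building on this, a save/restore that also handles the case where IFS was initially unset can be written like this (a sketch in portable shell):

# remember whether IFS was set, and its value if so
if [ "${IFS+set}" = set ]; then oldIFS=$IFS; else unset oldIFS; fi
IFS=.
# ... do the splitting work ...
if [ "${oldIFS+set}" = set ]; then IFS=$oldIFS; else unset IFS; fi

The ${var+set} expansion distinguishes "unset" from "set to empty", which a plain oldIFS=$IFS assignment cannot.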
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12497/" ] }
264,962
How can I print all the lines between two lines starting with one pattern for the first line and ending with another pattern for the last line? Update: I guess it was a mistake to mention that this document is HTML. I seem to have touched a nerve, so forget that. I'm not trying to parse HTML or do anything with it other than print a section of a text document. Consider this example:

aaa
bbb
pattern1
aaa pattern2
bbb
ccc
pattern2
ddd
eee
pattern1
fff
ggg

Now, I want to print everything between the first instance of pattern1 starting at the beginning of a line and pattern2 starting at the beginning of another line. I want to include the pattern1 and pattern2 lines in my output, but I don't want anything after the pattern2 line. pattern2 is found in one of the lines of the section. I don't want to stop there, but that's easily remedied by indicating the start of the line with ^. pattern1 appears on another line after pattern2, but I don't want to look at that at all. I'm just looking for everything between the first instance of pattern1 and the first instance of pattern2, inclusive. I found something that almost gets me there using sed:

sed -n '/^pattern1/,/^pattern2/p' inputfile.txt

... but that starts printing again at the next instance of pattern1. I can think of a method using grep -n ... | cut -f1 -d: twice to get the two line numbers then tail and head to get the section I want, but I'm hoping for a cleaner way. Maybe awk is a better tool for this task? When I get this working, I hope to tie this into a git hook. I don't know how to do that yet, either, but I'm still reading and searching :) Thank you.
You can make sed quit at a pattern with sed '/pattern/q' , so you just need your matches and then quit at the second pattern match: sed -n '/^pattern1/,/^pattern2/{p;/^pattern2/q}' That way only the first block will be shown. The use of a subcommand ensures that ^pattern2 can cause sed to quit only after a match for ^pattern1 . The two ^pattern2 matches can be combined: sed -n '/^pattern1/,${p;/^pattern2/q}'
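Since the question asks whether awk fits this task, here is an equivalent awk sketch implementing the same "print only the first block" logic:

awk '/^pattern1/{f=1} f{print} f && /^pattern2/{exit}' inputfile.txt

The flag f turns printing on at the first line starting with pattern1, and exit stops all processing at the first line starting with pattern2 seen after that, so later pattern1 occurrences are never reached.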
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/264962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53544/" ] }
264,980
mknod /tmp/oracle.pipe p
sqlplus / as sysdba << _EOF
set escape on
host nohup gzip -c < /tmp/oracle.pipe > /tmp/out1.gz \&
spool /tmp/oracle.pipe
select * from employee;
spool off
_EOF
rm /tmp/oracle.pipe

I need to insert a trailer at the end of the zipped file out1.gz. I can count the lines using

count=$(zcat out1.gz | wc -l)

How do I insert the trailer T5 (assuming count=5) at the end of out1.gz without unzipping it?
From man gzip you can read that gzipped files can simply be concatenated:

ADVANCED USAGE
Multiple compressed files can be concatenated. In this case, gunzip will extract all members at once. For example:

    gzip -c file1  > foo.gz
    gzip -c file2 >> foo.gz

Then

    gunzip -c foo

is equivalent to

    cat file1 file2

This could also be done using cat on the gzipped files, e.g.:

seq 1 4 > A && gzip A
echo 5 > B && gzip B
# now 1 to 4 is in A.gz and 5 in B.gz, we want 1 to 5 in C.gz:
cat A.gz B.gz > C.gz && zcat C.gz
1
2
3
4
5
# or for appending B.gz to A.gz:
cat B.gz >> A.gz

For doing it without an external file for your line to be appended, do as follows:

echo "this is the new line" | gzip - >> original_file.gz
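Applied to the trailer task in the question, that last form gives (reusing the count variable from the question):

count=$(zcat /tmp/out1.gz | wc -l)
echo "T$count" | gzip >> /tmp/out1.gz

zcat /tmp/out1.gz will then show the original rows followed by the trailer line, without the file ever being decompressed on disk.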
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/264980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157667/" ] }
264,984
I have a file named test which has two columns, one having an ID and the other having a status. I want to loop through the file and print IDs where the status has one particular value (e.g. 'ACTIVE'). I tried

cat test | while read line; do
    templine= $($line | cut -d ' ' -f 2)
    echo $templine
    if [ $templine = 'ACCEPTED' ]; then
        echo "$templine"
    fi
done

and some variations of the above, which obviously did not work. Any help would be appreciated.
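For reference, this kind of column filter is usually a one-liner in awk rather than a shell loop; a minimal sketch, assuming whitespace-separated columns with the ID first and the status second:

awk '$2 == "ACTIVE" { print $1 }' test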
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/264984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157670/" ] }
265,036
I want to give a second life to my old computer. It has: an EP-8NPAJ motherboard, a 320 GB HDD, 2 GB of RAM, CD-ROM, USB, and a connection to my LAN via Ethernet cable. It is fully functional, except that it has no video card. So, is it possible to install Linux or FreeBSD on this machine? I'm not new to Linux, so any Linux distribution is suitable.
As far as I know, installing on a headless computer is something which can be done using a variety of tricks. The first of the following suggestions uses accessibility features to compensate for the lack of screen. The second one is the easy way to pretend you know the exact sequence of keys to press. The third and fourth are one of many possibilities for installing over a remote connection. Debian has accessibility features directly in the installer, and it beeps when the installer is ready. After that beep, you simply have to press s and Enter to get speech synthesis, which you can use if your old machine has an audio jack. See https://wiki.debian.org/accessibility#Debian_installer_accessibility for more details. Install in a virtual machine at the same time as on your old machine, and be sure to press the exact same keys. Install via SSH, as noted by @FaheemMitha in the comments. If you're willing to try out something new, NixOs is pretty easy to install over SSH from an existing system. It then is just a matter of booting off a Live USB where you have already configured ssh (add your public key to the ~/.ssh/authorized_keys, with chmod 700 for the folder and 600 for the file), and log in from another computer. Don't forget to set up SSH in the freshly installed system, before rebooting :) . If you have a (couple of) USB to serial dongles, you can always try to pretend you found a time machine, and run the installer over serial (Debian supports that, I think, as well as a lot of other distributions). Finally, you can install the operating system on another computer, as Fiximan suggested. An easy way to do this without removing the hard disk drive is to install in a virtual machine (QEMU comes to mind) using a raw disk image, and then simply dd that disk image onto the harddisk (which can easily be done by blindly typing the command on the headless computer from a live USB: press Ctrl + Alt + F1 , log-in (check the username and password beforehand), and type zcat /path/to/img.gz | dd of=/dev/sda . And hope that /dev/sda hasn't been mapped to your harddisk and not to the USB key :) .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157494/" ] }
265,057
I am trying to order all columns by column name in a CSV file. What I have is something like this:

name  ,adress  ,mobile-number
Ane   ,USA     ,12121212
Joane ,England ,234234

and the output I need is

adress  ,name  ,mobile-number
USA     ,Ane   ,12121212
England ,Joane ,234234

The problem is that I have more than three columns, and I don't know the order they come in, but I need to reorder them in ascending order.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157728/" ] }
265,066
So I'm not exactly sure what happened but when I went to start my computer today, it failed to boot and the only thing on the screen is a command line with "Grub recovery". I tried to repair what happened with Boot Repair Disk but all it was able to do was dump the report linked here: http://paste.ubuntu.com/15171771/ Now, this isn't my computer, so I don't know what's installed on it, but I'm pretty sure it's some sort of linux distribution as well as a QNX 4 install. If anyone knows how to repair grub with QNX, I'd really appreciate it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157729/" ] }
265,072
I'm on Debian 7.9 (wheezy) x64, and I would like to install build-essential:i386. I already added i386 with dpkg --add-architecture, updated aptitude and installed java-jdk-1.6:i386 successfully. BTW, no matter how I try, build-essential systematically generates a dependency error:

apt-get install build-essential:i386
Depend : dpkg-dev:i386 (>= 1.13.5)
E: Unable to correct problems, you have held broken packages...

If someone has an idea... Thanks. Also, I found this on Debian Mailing Lists - Re: cross-build-essential:

"Say I want to have the build-essential for i386 installed on amd64. I could install build-essential:i386, replacing gcc/g++:amd64 with gcc/g++:i386. Wouldn't that give me everything needed to cross-compile for i386?"

"In that case, yes, because you can run x86 code on an AMD64 or Intel 64 CPU. Though you would indeed be replacing gcc-4.7:amd64 etc. with gcc-4.7:i386 etc. as the packages aren't co-installable with themselves."

Is it true?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157733/" ] }
265,089
I have two files. file1.txt:

78Z|033333157|0000001|PERD1|2150421|D|0507020|3333333311
78Z|033333157|0000001|PERD0|2160208|A|1900460|3333333311
78Z|033333157|0000001|RSAB1|2150421|D|0507070|3333333311
78Z|033333157|0000001|RSAB0|2160208|A|1900460|3333333311
78Z|033333157|0000001|ANT37|2141023|D|1245260|3333333311
78Z|033333157|0000001|ANT36|2150422|D|1518490|3333333311
78Z|033333157|0000001|ANT28|2150321|D|0502090|3333333311
78Z|033333157|0000001|ANT27|2150122|D|0501450|3333333311
78Z|033333157|0000001|ANT26|2141222|D|1637460|3333333311
78Z|033333157|0000001|ANT10|2160208|A|1900460|3333333311
78Z|033333157|0000001|ABS10|2151221|D|1223390|3333333311
78Z|696931836|0000001|PERD0|2160203|A|1114450|2222222222
78Z|696931836|0000001|RSAB0|2160203|A|1114450|2222222222
78Z|696931836|0000001|ANT09|2160203|A|1114450|2222222222
78Z|010041586|0000001|PERD0|2160119|A|1835100|3333333333
78Z|010041586|0000001|RSAB0|2160119|A|1835100|3333333333
78Z|010041586|0000001|ANT33|2160119|A|1835100|3333333333
78Z|011512345|0000001|PERD0|2151213|A|1413550|4444444444
78Z|011512345|0000001|RSAB0|2151213|A|1413550|4444444444
78Z|011512345|0000001|ANT32|2160219|A|0319230|4444444444
78Z|011512345|0000001|ANT09|2160218|D|0319230|4444444444
78Z|011512345|0000001|ANT07|2150729|D|1508230|4444444444
78Z|011512345|0000001|ANT06|2141013|D|1208190|4444444444
78Z|011512345|0000001|ABB06|2131224|D|1857030|4444444444
78Z|012344052|0000001|PERD0|2160203|A|1219570|5555555555
78Z|012344052|0000001|ANT50|2160203|A|1219570|5555555555
78Z|099999999|0000001|PERD0|2151214|A|1512460|6666666666
78Z|099999999|0000001|RSAB0|2151214|A|1512460|6666666666
78Z|099999999|0000001|ANT32|2160219|A|0319000|6666666666
78Z|099999999|0000001|ANT09|2160218|D|0319000|6666666666
78Z|099999999|0000001|ABS10|2150615|D|0125350|6666666666

file2.txt:

3333333311|ANT10
2222222222|ANT09
5555555555|ANT50
3333333333|ANT33
6666666666|ANT32
4444444444|ANT09

I need to create a new file with the lines matched by the fourth and eighth column of file1.txt against the second and first column of file2.txt. The result must be (the order is not important) file3.txt:

78Z|033333157|0000001|ANT10|2160208|A|1900460|3333333311
78Z|696931836|0000001|ANT09|2160203|A|1114450|2222222222
78Z|012344052|0000001|ANT50|2160203|A|1219570|5555555555
78Z|010041586|0000001|ANT33|2160119|A|1835100|3333333333
78Z|099999999|0000001|ANT32|2160219|A|0319000|6666666666
78Z|011512345|0000001|ANT09|2160218|D|0319230|4444444444
awk -F'|' 'NR==FNR{e[$2$1]=1;next};e[$4$8]' file2.txt file1.txt

First read file2 and set array e[field2+field1], then read file1 and print a line if e[field4+field8] is set. Or turn the fields around:

awk -F'|' 'NR==FNR{e[$1$2]=1;next};e[$8$4]' file2.txt file1.txt
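One defensive refinement worth noting (not in the original answer): plain string concatenation of two fields can in principle make different field pairs collide ("ab"+"c" versus "a"+"bc"). awk's comma subscript inserts the SUBSEP separator between the parts and avoids that; a sketch:

awk -F'|' 'NR==FNR{e[$2,$1]=1;next}; ($4,$8) in e' file2.txt file1.txt

With the fixed-width fields in this data the simple concatenation happens to be safe, but the comma form costs nothing.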
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123930/" ] }
265,119
I want to print a floating point number with exactly two significant digits in bash (maybe using a common tool like awk, bc, dc, perl etc.). Examples:

76543 should be printed as 76000
0.0076543 should be printed as 0.0076

In both cases the significant digits are 7 and 6. I have read some answers for similar problems like "How to round floating point numbers in shell?" and "Bash limiting precision of floating point variables", but the answers focus on limiting the number of decimal places (e.g. the bc command with scale=2, or the printf command with %.2f) instead of significant digits. Is there an easy way to format the number with exactly 2 significant digits or do I have to write my own function?
This answer to the first linked question has the almost-throwaway line at the end: "See also %g for rounding to a specified number of significant digits." So you can simply write

printf "%.2g" "$n"

(but see the section below on decimal separator and locale, and note that non-Bash printf need not support %f and %g). Examples:

$ printf "%.2g\n" 76543 0.0076543
7.7e+04
0.0077

Of course, you now have mantissa-exponent representation rather than pure decimal, so you'll want to convert back:

$ printf "%0.f\n" 7.7e+06
7700000
$ printf "%0.7f\n" 7.7e-06
0.0000077

Putting all this together, and wrapping it in a function:

# Function round(precision, number)
round() {
    n=$(printf "%.${1}g" "$2")
    if [ "$n" != "${n#*e}" ]
    then
        f="${n##*e-}"
        test "$n" = "$f" && f= || f=$(( ${f#0}+$1-1 ))
        printf "%0.${f}f" "$n"
    else
        printf "%s" "$n"
    fi
}

(Note - this function is written in portable (POSIX) shell, but assumes that printf handles the floating-point conversions. Bash has a built-in printf that does, so you're okay here, and the GNU implementation also works, so most GNU/Linux systems can safely use Dash.) Test cases:

radix=$(printf %.1f 0)
for i in $(seq 12 | sed -e 's/.*/dc -e "12k 1.234 10 & 6 -^*p"/e' -e "y/_._/$radix/")
do
    echo $i "->" $(round 2 $i)
done

Test results:

.000012340000 -> 0.000012
.000123400000 -> 0.00012
.001234000000 -> 0.0012
.012340000000 -> 0.012
.123400000000 -> 0.12
1.234 -> 1.2
12.340 -> 12
123.400 -> 120
1234.000 -> 1200
12340.000 -> 12000
123400.000 -> 120000
1234000.000 -> 1200000

A note on decimal separator and locale: All the working above assumes that the radix character (also known as the decimal separator) is ., as in most English locales. Other locales use , instead, and some shells have a built-in printf that respects locale. In these shells, you may need to set LC_NUMERIC=C to force the use of . as radix character, or write /usr/bin/printf to prevent use of the built-in version. This latter is complicated by the fact that (at least some versions) seem to always parse arguments using . , but print using the current locale settings.
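Applied to the two examples from the question, the function behaves like this:

round 2 76543; echo        # prints 76000
round 2 0.0076543; echo    # prints 0.0077 — note that %g rounds,
                           # so this differs from the truncated 0.0076 in the question

(The extra echo just adds a newline, since round prints its result without one.)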
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157772/" ] }
265,127
My shell is bash. How can I get the output of ls to show directories with a trailing forward-slash? When I do ls in tcsh it gives the desired output. How can I get this to occur in bash without using any arguments? e.g.

bin/
lib/
src/
file1.txt
file2.txt
The simplest solution (as given already by @don_crissti in the comments) is:

ls -p

You can get a similar effect with:

ls -F

But that will add some other indicators as well: "Append a character to each file name indicating the file type. Also, for regular files that are executable, append *. The file type indicators are / for directories, @ for symbolic links, | for FIFOs, = for sockets, > for doors, and nothing for regular files." Of course, you can make the string ls execute ls -p on the command line with an alias:

alias ls='ls -p'

That is temporary and can be erased with unalias ls. Probably your tcsh has an active alias in place. To make the alias permanent, place the command in ~/.bashrc or ~/.bash_aliases.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154468/" ] }
265,133
If I have a variable srcDir="~/a/b/c" and would like to only copy the name c into a $copyDir via manipulating $srcDir , how would I go about doing this? I've read parameter expansion and know how to store a directory, but it includes the entire folder tree. I just need to copy the folder name c and store it.
Use parameter expansion to strip everything up to and including the last / : copyDir="${srcDir##*/}" The ## operator removes the longest prefix matching the pattern */ , which leaves just c . The external basename utility does the same job: copyDir="$(basename "$srcDir")" One caveat about your example: tilde expansion does not happen inside quotes, so srcDir="~/a/b/c" stores a literal ~ ; write srcDir=~/a/b/c if you want it expanded to your home directory.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154734/" ] }
265,149
I have been using the pattern below for printing multiline messages to terminal in a bash script. read -d '' message <<- EOF this is a mulitline messageEOFecho "$message" This has been working - until a couple of days ago the pattern just stopped working. By stopped working, I mean when bash encountered these heredoc expressions in the script - it just seems to do nothing - no output. The only thing that I can think of thats changed in the last few days is that the environment the scripts are run inside is a Ubuntu 14.04 live USB, versus "full" installs. Then I discovered that when I move the heredoc before the scripts set -o errexit statement it started working again. i.e. this doesn't work #!/bin/bashset -o errexitread -d '' message <<- EOF this is a mulitline messageEOFecho "$message" result: (nothing) But this does work #!/bin/bashread -d '' message <<- EOF this is a mulitline messageEOFecho "$message" result $ sudo ./script.sh this is a mulitlinemessage bash --version - GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu)
read returns a non-zero exit status if it doesn't find a delimiter. With the delimiter set to the empty string, it uses the NUL byte as a delimiter, and those usually aren't found in text files. Under set -o errexit that non-zero status aborts the script on the spot, before the echo ever runs, which is why moving the heredoc above the set line makes it "work" again. The variable is still assigned correctly; only the exit status kills the script, so you can keep errexit and simply neutralise the status (see below).
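A minimal way to keep errexit and this pattern together; the || true absorbs read 's non-zero status without affecting the assignment:
#!/bin/bash
set -o errexit
read -r -d '' message <<EOF || true
this is a
multiline message
EOF
echo "$message"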
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
265,190
I want to delete the last "." only if the "." comes after the word "usa". My sed command removes the last "." unconditionally: host 10.1.23.86 | awk '{print $NF}' sho4.il.usa. host 10.1.23.86 | awk '{print $NF}' | sed s'/.$//' sho4.il.usa What rule do I need to add to the sed syntax in order to remove the "." only after the word "usa"?
Anchor the substitution on the word itself instead of matching any final character: host 10.1.23.86 | awk '{print $NF}' | sed 's/usa\.$/usa/' This deletes the final dot only when it immediately follows usa at the end of the line. Note two things about your original sed s'/.$//' : an unescaped . matches any character, so .$ removes whatever character happens to be last; when you mean a literal period, escape it as \. .
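A quick check of the behaviour, with echo standing in for the host | awk pipeline:
$ echo 'sho4.il.usa.' | sed 's/usa\.$/usa/'
sho4.il.usa
$ echo 'sho4.il.com.' | sed 's/usa\.$/usa/'
sho4.il.com.
The second line keeps its trailing dot because it doesn't end in usa. .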
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
265,250
I have the following output of my script: panos@panos:~/scripts> ./list_packages openSUSE-2016-254zypper-aptitude.noarch : 1.12.23-1.1 update neededzypper-log.noarch : 1.12.23-1.1 update neededlibsolv-debugsource : None not installedlibsolv-demo : None not installedlibsolv-demo-debuginfo : None not installedlibsolv-devel : None not installedlibsolv-devel-debuginfo : None not installedlibsolv-tools : 0.6.14-1.1 update neededlibsolv-tools-debuginfo : None not installedperl-solv : None not installedperl-solv-debuginfo : None not installedpython-solv : 0.6.14-1.1 update neededpython-solv-debuginfo : None not installedruby-solv : None not installedruby-solv-debuginfo : None not installedlibzypp : 15.19.5-1.1 update neededlibzypp-debuginfo : None not installedlibzypp-debugsource : None not installedlibzypp-devel : None not installedlibzypp-devel-doc : None not installedzypper : 1.12.23-1.1 update neededzypper-debuginfo : None not installedzypper-debugsource : None not installed The output is generated based on some if-else statements. Let me give you the three echo commands used in my source code: echo "$pkg : $pkg_version update needed"echo "$pkg : $new_version updated"echo "$pkg : None not installed" My problem is that I would like them to be in columns, something like: $pkg\t$pkg_version\t$message But because some package names are more than 8 characters long, the whole tab approach gets ugly. Any suggestions?
There are two ways: Use Bash's printf function to print and format your output (instead of echo ) Use the column -s : -t command ./list_packages openSUSE-2016-254 | column -s : -t zypper-aptitude.noarch 1.12.23-1.1 update neededzypper-log.noarch 1.12.23-1.1 update neededlibsolv-debugsource None not installedlibsolv-demo None not installedlibsolv-demo-debuginfo None not installedlibsolv-devel None not installedlibsolv-devel-debuginfo None not installedlibsolv-tools 0.6.14-1.1 update neededlibsolv-tools-debuginfo None not installedperl-solv None not installedperl-solv-debuginfo None not installedpython-solv 0.6.14-1.1 update neededpython-solv-debuginfo None not installedruby-solv None not installedruby-solv-debuginfo None not installedlibzypp 15.19.5-1.1 update neededlibzypp-debuginfo None not installedlibzypp-debugsource None not installedlibzypp-devel None not installedlibzypp-devel-doc None not installedzypper 1.12.23-1.1 update neededzypper-debuginfo None not installedzypper-debugsource None not installed
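If you would rather fix it at the source, the printf route mentioned above could look like this (a sketch; the field widths 25 and 12 are assumptions, pick values wider than your longest package name and version):
printf '%-25s : %-12s %s\n' "$pkg" "$pkg_version" "update needed"
printf '%-25s : %-12s %s\n' "$pkg" "$new_version" "updated"
printf '%-25s : %-12s %s\n' "$pkg" "None" "not installed"
%-25s left-justifies each name in a 25-character field, so the columns line up without any tabs.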
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72520/" ] }
265,267
I'm having a problem with a script. It is meant to change a value in a file called %DIR% so that it becomes a path name. The problem is the slashes in the directory name upset sed, so I get weird errors. I need to convert the slashes in the path name into escaped slashes. So /var/www would become \/var\/www But I don't know how to do this. Currently the script runs sed with this: sed -i "s/%DIR%/$directory/g" "$config"
Since you say you are using Bash, you can use Parameter Expansion to insert the slashes: $ directory=/var/www$ echo "${directory//\//\\/}"\/var\/www This breaks up as substitute directory replacing every ( // ) slash ( \/ ) with ( / ) backslash+slash ( \\/ ). Putting this into your sed command gives: sed -i "s/%DIR%/${directory//\//\\/}/g" "$config"
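A common alternative worth knowing: sed accepts any character as the delimiter of the s command, so picking one that cannot occur in a path avoids the escaping entirely:
sed -i "s|%DIR%|$directory|g" "$config"
Here | is the delimiter; , or # work just as well, as long as the chosen character never appears in $directory .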
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21408/" ] }
265,284
How can I convert the uptime output to seconds since epoch to compare with dmesg output? Does uptime have the same resolution as the kernel message time? Is there a way to more directly get the kernel time than from uptime ? ➜ ~ echo "hi" > /dev/kmsg➜ ~ dmesg | tail[ 859.214564] hi➜ ~ uptime10:08 up 2 days, 43 secs, 2 users, load averages: 1.69 1.64 1.54
Depending on your flavor of Unix, the /proc filesystem may have an uptime file somewhere with the information you want. Linux> cat /proc/uptime5899847.37 23165596.55 And the output of the uptime command for the same time: Linux> uptime16:46:27 up 68 days, 6:51, 3 users, load average: 0.01, 0.02, 0.05 So 5899847.37/86400 = 68.28527 --> 68 days, 6 hours, 51 minutes.
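To compare with dmesg , note that dmesg 's bracketed timestamps count seconds since boot, the same origin as the first field of /proc/uptime , so converting one to epoch seconds can be done like this (a sketch):
boot_epoch=$(( $(date +%s) - $(cut -d. -f1 /proc/uptime) ))
echo $(( boot_epoch + 859 ))    # 859 from the [ 859.214564] dmesg line, sub-second part dropped
Be aware that on machines that suspend the two clocks can drift apart, and on Linux dmesg --ctime (or -T ) can print human-readable times directly.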
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32951/" ] }
265,342
cat > fileAmy looked at her watch. He was late. The sun was setting but Jake didn’t care.wc file1 16 82 file Can somebody explain why wc command returns 3 extra characters in this case?
wc shows 3 characters more because your example file contains a fancy Unicode apostrophe ’ (most likely because you copied the contents from a browser or text editor): $ cat fileAmy looked at her watch. He was late. The sun was setting but Jake didn’t care.$ wc file1 16 82 file With plain ASCII apostrophe ' : $ cat file2Amy looked at her watch. He was late. The sun was setting but Jake didn't care.$ wc file21 16 80 file2 wc by default counts bytes; per the manual: newline, word, and byte counts for each file For a character count the -m argument can be used: $ cat fileAmy looked at her watch. He was late. The sun was setting but Jake didn’t care.$ wc -m file 80 file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157954/" ] }
265,391
I am having difficulty compressing an archive with bzip2 using tar. The archive I have is user-logs.tar. I am attempting to run this: tar -jf user-logs.tar I get this error: tar: You must specify one of the `-Acdtrux' or `--test-label' optionsTry `tar --help' or `tar --usage' for more information. I also attempted using -c to create a new file with the bzip filename extension with no luck. # tar -cvjf user-logs.tar.gz user-logs.taruser-logs.tartar (child): bzip2: Cannot exec: No such file or directorytar (child): Error is not recoverable: exiting now
The error message tells what to do: you probably need to add -c (for create ), e.g., tar -jcf user-logs.tar myargs as well as some arguments myargs (things to put into the user-logs.tar archive). In the second case, the problem is that you do not have bzip2 installed. The tar program relies upon this external program to do compression. If you have a tar archive and simply want to compress it, you could do this: bzip2 user-logs.tar which (if you had bzip2 installed) would change the file to user-logs.tar.bz2 (and usually make it much smaller). For installing bzip2 , it depends on the system you are using. For example, with Ubuntu that would be sudo apt-get install bzip2 while Fedora might be (perhaps dnf ): sudo yum install bzip2 The GNU tar program does not know how to compress an existing file such as user-logs.tar ( bzip2 does that). The tar program can use external compression programs gzip , bzip2 , xz by opening a pipe to those programs, sending a tar archive via the pipe to the compression utility, which compresses the data which it reads from tar and writes the result to the filename which the tar program specifies. Alternatively, the tar and compression utility could be the same program. BSD tar does its compression using libarchive (they're not really distinct except in name). Further reading: 3.2. Using bzip2 with tar Compression support
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29125/" ] }
265,417
I keep getting this error: ./install.sh ./install.sh: line 28: syntax error near unexpected token `do'./install.sh: line 28: ` do' When running this code (a work in progress): # Set the prompt string:PS3='Choose folder: ';function choose() { # This works: #select folder; #do test -n "$folder" && echo You chose: $REPLY\) $folder && break; #echo 'Wrong entry, try again.'; #continue; #done; select folder; do case $folder in 'q') echo 'Exiting …'; break; ;; *) do test -n '$folder' && echo You chose: $REPLY\) $folder && break; echo 'Wrong entry, try again.'; continue; done; ;; esac; done;}# Switch to glob folder location:cd ~/Desktop;# Create menu:choose foo*;# Do stuff with result of choose() here ...# Exit program:exit 0; As you can probably see from above, I'm trying to create a menu that lets me choose from a list of globbed folders; unfortunately, I can't figure out why my nested do is creating the error I posted above. Optimally, the user could choose from a list of directories; if they type q , then the program quits; if they make an invalid selection, the program makes them choose again until they pick a valid selection (or enter q to quit/exit). I have yet to get the quitting part figured out completely, but before I can do that, I need to get past this do statement error. Question: What is causing the syntax error near unexpected token `do' and how can I fix it?
Missing Loop Conditional Your syntax is invalid. You may have other problems with your code as well, but this message is likely coming from the fact that you can't have a do keyword as if it were a block; you need to have one of the loop conditionals as well. You currently have: *) do test -n '$folder' && echo You chose: $REPLY\) $folder && break; echo 'Wrong entry, try again.'; continue; done; ;; You are missing a for , while , or until keyword to start your loop.
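For reference, the *) branch only needs the test; select already provides the looping. A sketch of the corrected function body (note it tests $REPLY for q , since $folder is only set when a valid menu number is entered):
select folder; do
  case $REPLY in
    q) echo 'Exiting...'; break ;;
    *) if [ -n "$folder" ]; then
         printf 'You chose: %s) %s\n' "$REPLY" "$folder"
         break
       else
         echo 'Wrong entry, try again.'
       fi ;;
  esac
done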
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67282/" ] }
265,450
I'm writing an initramfs-script and want to detect usb-sticks as fast as possible. When I insert a USB 2.0 stick, the detection of idVendor, idProduct and USB class happens within 100 ms. But the scsi subsystem does not "attach" until about 1 s has passed and it takes another 500 ms before the partition is fully recognized. I assume that the driver needs to read the partition table in order to detect partitions. Why does it take so long? I don't expect the urb send/recv time to be that long or the access time of the flash to take so much time. I've tried 5 sticks from different vendors and the result is about the same. [ 5731.097540] usb 2-1.2: new high-speed USB device number 7 using ehci-pci[ 5731.195360] usb 2-1.2: New USB device found, idVendor=0951, idProduct=1643[ 5731.195368] usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3[ 5731.195372] usb 2-1.2: Product: DataTraveler G3[ 5731.195376] usb 2-1.2: Manufacturer: Kingston[ 5731.195379] usb 2-1.2: SerialNumber: 001CC0EC32BCBBB04712022C[ 5731.196942] usb-storage 2-1.2:1.0: USB Mass Storage device detected[ 5731.197193] scsi host9: usb-storage 2-1.2:1.0[ 5732.268389] scsi 9:0:0:0: Direct-Access Kingston DataTraveler G3 PMAP PQ: 0 ANSI: 0 CCS[ 5732.268995] sd 9:0:0:0: Attached scsi generic sg2 type 0[ 5732.883939] sd 9:0:0:0: [sdb] 7595520 512-byte logical blocks: (3.88 GB/3.62 GiB)[ 5732.884565] sd 9:0:0:0: [sdb] Write Protect is off[ 5732.884568] sd 9:0:0:0: [sdb] Mode Sense: 23 00 00 00[ 5732.885178] sd 9:0:0:0: [sdb] No Caching mode page found[ 5732.885181] sd 9:0:0:0: [sdb] Assuming drive cache: write through[ 5732.903834] sdb: sdb1[ 5732.906812] sd 9:0:0:0: [sdb] Attached SCSI removable disk Edit So I've found the delay_use module parameter that by default is set to 1 second, which explains the delay I'm seeing. But I'm wondering if someone can provide context as to why that parameter is needed? A comment suggested that for older usb sticks, delay_use might need to be set to as much as 5 seconds. What is it inside the usb stick that takes so much time; firmware initialization; reads from the flash? I find it hard to believe that we need delays as long as 1 second or more when the latency for accessing flash is in the order of tens of microseconds. I realize that this might be slightly off-topic for this channel, if so, I'll go to electronics.stackexchange.com
You can change the timeout by writing to /sys/module/usb_storage/parameters/delay_use . For older usb disks, a settle delay of 5 seconds or even more may be needed (and 5 was the default until it was reduced to 1 second in 2010), presumably because the controller is starved of power while the disk motors are initializing. Or possibly because the internal SCSI firmware takes time to boot up before it's responsive (can you tell I'm just speculating here?). For modern solid-state storage, it's probably not needed at all, and many people set it to 0. Unfortunately, it's a global parameter that applies to all devices, so if you have any slow devices at all, you have to endure the delay for every mass-storage USB device you use. It would be nice if it could be set per-device by udev, but that's not the case.
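For example, to try it out immediately and then make it permanent (a sketch; 0 assumes all the sticks you care about are modern):
# echo 0 > /sys/module/usb_storage/parameters/delay_use
# echo 'options usb-storage delay_use=0' > /etc/modprobe.d/usb-storage.conf
If usb-storage is built into your kernel rather than loaded as a module, pass usb-storage.delay_use=0 on the kernel command line instead.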
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5694/" ] }
265,452
The issue: I changed my password earlier today, but I must've made a typo, because I couldn't log in afterwards. I booted into the Grub menu and started a passwordless root shell to reset my password. This was successful, as I could now enter the new new password and get past the login screen. However, as soon as I do that, I get an error that says: Your session only lasted less than 10 seconds. If you have not logged out yourself, this could mean there is some installation problem, or that you may be out of diskspace. Try logging in with one of the failsafe sessions to see if you can fix this problem.syndaemon: no process found/etc/mdm/Xsession: Beginning session setup...localuser:[username] being added to access control listCan't create dir /home/[username]/DesktopCan't create dir /home/[username]/DownloadsCan't create dir /home/[username]/TemplatesCan't create dir /home/[username]/PublicCan't create dir /home/[username]/DocumentsCan't create dir /home/[username]/MusicCan't create dir /home/[username]/PicturesCan't create dir /home/[username]/VideosScript for none started at run_imScript for auto started at run_imScript for default started at run_iminit: session.migration main process (2322)terminated with status 1init: logrotate main process (2304) killed by TERM signalinit: Disconnected from notified D-Bus bus I have the option to hit 'OK' which will take me back to the login screen. If I try to login again, I get the exact same message. Note: where it says [username] in the above text, the actual error displayed my actual username. I am, however, paranoid when it comes to my online identity, hence I censored it in the error printed above. I have tried: rebooting the computer, both using a hard boot and using the shut down button on the login screen booting in recovery mode and running 'fix broken packages' and 'check all files' (possibly 'all directories', I can't remember) I have googled and searched every knowledge base I know off, but haven't found a fix I asked at linuxquestions.org Tried deleting and recreating my account with the same username Also; I've just tried to use the command line to access my encrypted files, but failed miserably mint@mint ~ $ ecryptfs-mount-private ERROR: Encrypted private directory is not setup properly Also also mint@mint ~ $ ecryptfs-unwrap-passphrase /media/34e5c4fa-0621-46cb-83b0-763c2a0dc49c/home/.private/[username]/.ecryptfs/wrapped-passphrasePassphrase: Error: Unwrapping passphrase failed [-2]Info: Check the system log for more information from libecryptfs It's not related to Disk space (I'm trying to get into the largest drive, on the bottom): mint@mint ~ $ dfdf: ‘/root/.gvfs’: Permission deniedFilesystem 1K-blocks Used Available Use% Mounted on/cow 2032928 1676256 250076 88% /udev 1979616 4 1979612 1% /devtmpfs 404796 1552 403244 1% /run/dev/sdb1 3908100 3876388 31712 100% /cdrom/dev/loop0 1523456 1523456 0 100% /rofsnone 4 0 4 0% /sys/fs/cgrouptmpfs 2023964 16 2023948 1% /tmpnone 5120 0 5120 0% /run/locknone 2023964 84 2023880 1% /run/shmnone 102400 28 102372 1% /run/user/dev/mapper/mint--vg-root 956884652 103557812 804696876 12% /media/mint/34e5c4fa-0621-46cb-83b0-763c2a0dc49c Also tried the below. It mounts the files, but doesn't decrypt mint@mint ~ $ sudo ecryptfs-recover-privateINFO: Searching for encrypted private directories (this might take a while)...INFO: Found [/media/mint/34e5c4fa-0621-46cb-83b0-763c2a0dc49c/home/.ecryptfs/tijmen/.Private].Try to recover this directory? 
[Y/n]: yINFO: Found your wrapped-passphraseDo you know your LOGIN passphrase? [Y/n] yINFO: Enter your LOGIN passphrase...Passphrase: Error: Unwrapping passphrase and inserting into the user session keyring failed [-5]Info: Check the system log for more information from libecryptfsmint@mint ~ $ sudo ecryptfs-recover-privateINFO: Searching for encrypted private directories (this might take a while)...INFO: Found [/media/mint/34e5c4fa-0621-46cb-83b0-763c2a0dc49c/home/.ecryptfs/tijmen/.Private].Try to recover this directory? [Y/n]: yINFO: Found your wrapped-passphraseDo you know your LOGIN passphrase? [Y/n] nINFO: To recover this directory, you MUST have your original MOUNT passphrase.INFO: When you first setup your encrypted private directory, you were told to recordINFO: your MOUNT passphrase.INFO: It should be 32 characters long, consisting of [0-9] and [a-f].Enter your MOUNT passphrase: INFO: Success! Private data mounted at [/tmp/ecryptfs.cQtlJNMc].mint@mint ~ $ Other relevant info I have Linux Mint 17.2 running on a 1TB external HDD, as my internal HD died months ago. So far, this worked like a charm. I am now using a live USB drive, as I hoped to be able to retrieve some essential files (such as my KeePass database file), but the install on the external HDD is encrypted through the use of the 'encrypt partition' option during install. I have been using Linux Mint for about 6-8 months now, so I am somewhat proficient in the use of the terminal for day-to-day use, but I am fully ignorant on the underlying workings of Linux and the root command options at my disposal. This is the Linux distro I'm using on the live USB, which is the same one as I've installed on the external HDD mint@mint ~ $ cat /etc/*-releaseDISTRIB_ID=LinuxMintDISTRIB_RELEASE=17.2DISTRIB_CODENAME=rafaelaDISTRIB_DESCRIPTION="Linux Mint 17.2 Rafaela"NAME="Ubuntu"VERSION="14.04.2 LTS, Trusty Tahr"ID=ubuntuID_LIKE=debianPRETTY_NAME="Ubuntu 14.04.2 LTS"VERSION_ID="14.04"HOME_URL="http://www.ubuntu.com/"SUPPORT_URL="http://help.ubuntu.com/"BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"cat: /etc/upstream-release: Is a directory And this is the kernel Linux 3.16.0-38-generic x86_64 I am able to see all of my folders and files using the live USB, but as they are encrypted, I can't actually access them. ---- update after first answer ----GAD3R suggested I Boot using Linux-mint LiveCDMake sure that your target system's hard drive is mountedOpen a terminalInstall ecryptfs-utils documentationsudo apt-get install -y ecryptfs-utilsAnd runsudo ecryptfs-recover-privateFollow the prompts Unfortunately, that didn't work. mint@mint ~ $ sudo apt-get install -y ecryptfs-utilsReading package lists... DoneBuilding dependency tree Reading state information... Doneecryptfs-utils is already the newest version.0 upgraded, 0 newly installed, 0 to remove and 326 not upgraded.mint@mint ~ $ sudo ecryptfs-recover-privateINFO: Searching for encrypted private directories (this might take a while)...INFO: Found [/media/mint/34e5c4fa-0621-46cb-83b0-763c2a0dc49c/home/.ecryptfs/tijmen/.Private].Try to recover this directory? [Y/n]: yINFO: Found your wrapped-passphraseDo you know your LOGIN passphrase? [Y/n] yINFO: Enter your LOGIN passphrase...Passphrase: Error: Unwrapping passphrase and inserting into the user session keyring failed [-5]Info: Check the system log for more information from libecryptfsmint@mint ~ $
Changing the password with passwd from a root shell only updates /etc/shadow ; it does not re-wrap your eCryptfs wrapped-passphrase, which stays encrypted with whatever password was in effect when it was last wrapped. In your case that appears to be the mistyped password, which is why neither the old nor the new login password unwraps it ( Error: Unwrapping passphrase ... failed ), and why every session dies within seconds: the home directory is never decrypted, so nothing under /home/[username] can be created. The reliable recovery route is the one that already worked at the end of your transcript: run sudo ecryptfs-recover-private , answer n to the login-passphrase question, and supply the 32-character hex MOUNT passphrase you were told to record when the encrypted home was set up. Your data is then mounted under /tmp/ecryptfs.XXXXXXXX ; copy out anything critical (such as the KeePass database) right away. To make normal logins work again, re-wrap the mount passphrase with your new login password: back up the existing wrapped-passphrase file first, then run ecryptfs-wrap-passphrase against it, giving it the mount passphrase as the "passphrase to wrap" and your new login password as the "wrapping passphrase". After that, ecryptfs-mount-private and automatic decryption at login should succeed again.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158013/" ] }
265,462
I'm making a deb package to install a custom application. I changed all files/folders ownership to root in order to avoid the warnings I was getting during installation, and in Ubuntu all runs smoothly, as Ubuntu changes the ownership of the files/folders to the user installing the package. But when I'm installing on Debian, root remains the owner. The application uses a folder to write data, and here is the problem. Running as a standard user, the app does not have permission to write on the folder. Now, how should I deal with this problem? Should I make a post install script on the deb package, doing the chmod o+w ? Should I package the directory already with those permissions set? Or is there any way of setting the owner of the files to the user that installs the app automatically (like Ubuntu does)?
I'm not sure what the behaviour is in Ubuntu, but in general for a .deb package containing files or directories with non-standard permissions you need to ensure those permissions are set after dh_fixperms is run. If you're using a dh -style rules , you can do this as follows: override_dh_fixperms: dh_fixperms chmod 777 yourfolder or execute_after_dh_fixperms: chmod 777 yourfolder You can also do this in a postinst : if [ "$1" = "configure" ]; then chmod 777 yourfolderfi but the rules approach is simpler (at least, I prefer doing that rather than relying on maintainer scripts).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157873/" ] }
265,503
I'm running Arch Linux. When I try to save credentials using Vinagre (VNC client) it gives me an error: Error saving credentials on a locked keyring Cannot create item in a locked collection I found this guide on the Arch wiki , and followed it. In the troubleshooting section it has: Ensure that the seahorse package is installed, open it ("Passwords and Keys" in system settings) and select View > By Keyring If there is no keyring in the left column (it will be marked with a lock icon), go to File > New > Password Keyring and give it a name. You will be asked to enter a password. If you do not give the keyring a password it will be unlocked automatically, even when using autologin, but passwords will not be stored securely. Finally, right-click on the keyring you just created and select "Set as default". When I start up Seahorse it does have a Passwords section with a Login folder with a lock icon to the right of that. Swell, right? Well, nothing really works with that as far as I can tell (no feedback, but apparently I was able to delete it) When I try to create a new keyring it tells me: Couldn't add keyring No such secret collection at path: / I found this problem with exactly the same message, but ~/.local/share/keyrings has drwxr-xr-x permissions (and has my name and group). So how do I resolve this error so I can store keys in my keyring? Edit : Some further information - after deleting the useless keyring, Vinagre gives me this message instead: No such interface 'org.freedesktop.Secret.Collection' on object at path /org/freedesktop/secrets/collection/login
I could fix it on my machine by sourcing /etc/X11/xinit/xinitrc.d/50-systemd-user.sh from ~/.xinitrc. The solution was found on https://bugs.archlinux.org/task/46374 because journalctl --this-boot --no-pager | grep -i WARNING showed, that 'org.gnome.keyring.SystemPrompter' failed. Reference
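The stock xinitrc (as shipped with xorg-xinit on Arch, for instance) does this with a loop you can copy to the top of your ~/.xinitrc , so that everything in /etc/X11/xinit/xinitrc.d gets sourced rather than just the one script:
if [ -d /etc/X11/xinit/xinitrc.d ]; then
  for f in /etc/X11/xinit/xinitrc.d/?*.sh; do
    [ -x "$f" ] && . "$f"
  done
  unset f
fi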
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5788/" ] }
265,523
If a set of files (several GBs big each) and each changes slightly every day (at random places, not only information appended at the end), how can it be copied efficiently? I mean, in the sense that only changed parts are updated, and not the whole files. That would mean the difference between copying some Kb here and there or some GBs.
The rsync program does exactly that. From the man page: It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
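In its simplest form for the scenario in the question (note the trailing slash on the source, which copies the directory's contents rather than the directory itself):
rsync -av /path/to/bigfiles/ user@backuphost:/path/to/copy/
On later runs only the changed parts of each multi-GB file travel over the wire. Add --partial if transfers may be interrupted, and note that for purely local copies rsync defaults to --whole-file , so the delta algorithm only pays off across a network.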
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/265523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46600/" ] }
265,538
This has baffled me for a few weeks now. I have a Kyocera network printer set up in CUPS, and whenever I try to print to it I seem to end up with n² as many copies as I request. That is, I try to print 2 copies of a document and I get 4 I try to print 5 copies of a document and I get 25 I try to print 60 copies of a document unattended, it runs out of paper, and I wander around the building depositing the extra copies in many recycling bins so as not to implicate myself too directly as the culprit I cannot begin to imagine how to diagnose this, but besides being mildly amusing it does mean that to get my desired 60 copies of a document I have to go to some esoteric lengths (e.g. print 7 copies, print 3 copies, print 1 copy two times) which was amusing at first but has quickly gotten old. So I am posting here in the hopes that someone can reassure me that I am not crazy, and hope that maybe someone might have experienced this before and know of a way to fix it? I am printing a PDF from Document Viewer 3.18.2
FWIW, I had the very same issue with a Brother QL-1050 label printer, under Debian Sid. It was not an application bug as suggested in comments, but a CUPS/driver issue. You can confirm this by running lp or lpr and seeing if they are affected as well: lp -d YOURPRINTER -n 2 /some/file.pdflpr -P YOURPRINTER -# 2 /some/file.pdf I managed to solve the problem by editing /usr/lib/cups/filter/brother_lpdwrapper_ql1050 , and modifying the line CUPSOPTION=`echo "$5 Copies=$4" | sed -e … into CUPSOPTION=`echo "$5" | sed -e … ( Copies=1 also works). I guess the number of copies was fed twice somehow. There must be a similar file for your printer, and though I guess the name and definition of CUPSOPTION may vary, those options are probably defined there.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265538", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16036/" ] }
265,620
Why do open() and close() exist in the Unix filesystem design? Couldn't the OS just detect the first time read() or write() was called and do whatever open() would normally do?
Dennis Ritchie mentions in «The Evolution of the Unix Time-sharing System» that open and close along with read , write and creat were present in the system right from the start. I guess a system without open and close wouldn't be inconceivable, however I believe it would complicate the design.You generally want to make multiple read and write calls, not just one, and that was probably especially true on those old computers with very limited RAM that UNIX originated on. Having a handle that maintains your current file position simplifies this. If read or write were to return the handle, they'd have to return a pair -- a handle and their own return status. The handle part of the pair would be useless for all other calls, which would make that arrangement awkward. Leaving the state of the cursor to the kernel allows it to improve efficiency not only by buffering. There's also some cost associated with path lookup -- having a handle allows you to pay it only once. Furthermore, some files in the UNIX worldview don't even have a filesystem path (or didn't -- now they do with things like /proc/self/fd ).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/265620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158154/" ] }
265,633
I want to know if there is an easier way to swap the first two words of every line. Let's say my text file consists of these three lines: Mary Joe William Edward Shawn Liam Ultimately I want this: Joe Mary Edward William Liam Shawn I know this can be done by the sed command by doing this: sed -e "s/\([^ ]*\) *\([^ ]*\)/\2 \1 /g" file But that's too much to remember. Is there an easier way to do this? This is bash btw.
With two-word lines, this might be easier: awk '{print $2,$1}' file If lines may have more than two words (the extra words stay in place) or fewer (such lines are passed through unchanged): awk 'NF >= 2{t=$2;$2=$1;$1=t};{print}' file Note that this will collapse multiple spaces into one.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155767/" ] }
265,671
I have a few questions about cache memory . On my system "free -h" command gave me the below output, total used free shared buff/cache availableMem: 7.6G 2.1G 1.5G 46M 4.0G 4.8GSwap: 1.6G 28M 1.6G Why is "cache memory" required inside the main memory? As far as I know "cache memory" is different from the main memory (RAM). It is very costly and it is much faster. Please correct me if I am wrong. An hour ago this cache memory was 3 GB. Why has it now increased by 1 GB? Please note that I didn’t start/run any new processes. Up to what limit can I start new processes/applications? I mean as per the free -h command output, free memory = 1.5 GB, Cache memory = 4 GB and Swap memory = 1.6 GB Can I start a new application up to (1.5 + 1.6) GB or (1.5 + 1.6 + 4) GB ? Can I set/configure this cache memory value? If yes, then how?
I shall correct you! The expensive thing is CPU cache . Since this is disk cache , it's used by any file access. Bad news: this is complicated and nasty. Good news: you shouldn't need to worry about it too much nowadays. 8 GB of RAM is plenty for a general-purpose desktop system. 1. I shall correct you! The expensive thing is CPU cache . The CPU has a small bank of fast internal RAM. Data from main memory which is frequently accessed is copied to this cache, automatically by the CPU. As explained elsewhere, free shows disk cache . It does not show the CPU cache. The disk cache does the same thing, except for disk blocks. It's stored in main memory, and it's managed by the operating system. So we have three different tiers of memory! This is described as a "memory hierarchy". The main memory we use for the disk cache actually is faster and more expensive per byte - if we compare it to the disk. 2. Since this is disk cache , it's used by any file access. So it can be used without loading any new programs, just by using them. For example, when you visit websites they are copied to disk (yet another caching strategy!) When I said "frequently accessed" data is cached, I lied. It's simplest and most efficient to cache everything... The disk cache will grow until you have no free memory left... When you need more memory, some blocks will be evicted from the disk cache to make room. This eviction is controlled by a policy designed to maximize efficiency. E.g., "LRU": evict the least recently used block. 3. Bad news: this is complicated and nasty. Windows was developed with some concern for these issues; if you tried to open too many applications, it's supposed to stop you and complain, which lets you learn roughly how much you can keep open at once. On Linux, the expected behavior is that you start filling up swap and the system suddenly becomes too slow to recover. Or if you don't have swap, a similar thing could happen where the cache of program code and other such essential files are evicted. If you somehow manage to make enough progress despite this, to run out of memory for pages which aren't backed by the disk, the Out Of Memory killer will start murdering processes it doesn't like to reclaim memory. You can still learn roughly how much is too much. You just end up having to hard-reboot in order to recover :). You can configure it to prevent "overcommit", and keep swap disabled or very small. That's the closest you can get to Windows. However, a lot of Linux code is designed with the assumption of "overcommit". If you've got plenty of RAM it's not too much of a problem. And the disk cache will still act as a less dangerous sort of overcommit; you don't necessarily end up needing a lot of RAM that you never get any advantage from. In practice... some people prefer to run with no/very small swap, which is fine. But it's not really common to disable overcommit. So it's not a recommendation, and if you do enable it then you're going to be in a different world. E.g., if you ask about a problem and it's actually related to disabling overcommit, not many people will have the experience to recognise it. 4. Good news: you shouldn't need to worry about it too much nowadays. 8 GB of RAM is plenty for a general-purpose desktop system. 4 GB works equally well right now and is not expensive. It's just a constraint if you want to run more than one system, i.e., a virtual machine. I tend to run with 4 GB nowadays. I can't remember the last time I had a problem running out of memory. 
My experience with this is from older systems with much less memory. The sysctls mentioned by another answer do not limit disk cache. They limit how much disk cache you can have which is "dirty". This refers to cache blocks which a program has written data to, which have not yet been synced back to the disk. This type of caching is described as a "write-back cache". That said, to get a rough idea of a % utilization, I would Account for the total of "used" and "buff/cache" in your calculation for a comprehensive view. Measure on a fresh boot to exclude old cached files. Or a fresh login after running sync; echo 3 | sudo tee /proc/sys/vm/drop_caches . I expect the latter gives a slightly lower result. It's arguable which of these is more realistic. Obviously dropping caches is a more artificial test, but if the dropped files are actually needed for your workload they'll get read back in anyway. This is on the assumption that you're not ending up with massive data files in disk cache, when you wouldn't mind if they didn't fit and would be happy with mere disk performance for them. The sort of thing that could happen if you measured after playing through a video file. In this case it will be harder. If the full video file fitted in memory, you could just subtract its size from buff/cache. You can verify that X amount of memory is sufficient for your workload, e.g., by booting the kernel with the option mem=X . Like mem=256M if you want to test 32-bit software on the same amount of RAM as the original Raspberry Pi model A.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156168/" ] }
265,703
Let's say I have a command accepting a single argument which is a file path: mycommand myfile.txt Now I want to execute this command over multiple files in parallel, more specifically, file matching pattern myfile* . Is there an easy way to achieve this?
With GNU xargs and a shell with support for process substitution xargs -r -0 -P4 -n1 -a <(printf '%s\0' myfile*) mycommand Would run up to 4 mycommand s in parallel. If mycommand doesn't use its stdin, you can also do: printf '%s\0' myfile* | xargs -r -0 -P4 -n1 mycommand Which would also work with the xargs of modern BSDs. For a recursive search for myfile* files, replace the printf command with: find . -name 'myfile*' -type f -print0 ( -type f is for regular-files only. For a glob-equivalent, you need zsh and its printf '%s\0' myfile*(.) ).
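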
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11975/" ] }
265,704
I want to start and stop a systemd.service at specific times. Presumably I will use a .timer unit to start the job, but is there a built in way to stop the job after a specific duration, or at a specific time , or do I have to create a second .timer unit that execs the stop ? Thanks
You could use a couple of cron jobs: # ┌───────────── min (0 - 59) # │ ┌────────────── hour (0 - 23) # │ │ ┌─────────────── day of month (1 - 31) # │ │ │ ┌──────────────── month (1 - 12) # │ │ │ │ ┌───────────────── day of week (0 - 6) # │ │ │ │ │ # │ │ │ │ │ * * * * * systemctl start $SERVICE.service * * * * * systemctl stop $SERVICE.service (Replace the asterisks with the minute, hour, and so on at which each action should run; as written, both lines would fire every minute.) More info on cron: https://en.wikipedia.org/wiki/Cron , https://wiki.archlinux.org/index.php/Cron
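If you would rather stay within systemd, two more options. For stopping after a fixed duration, RuntimeMaxSec= in the service's [Service] section (systemd ≥ 229) terminates the unit once it has run that long. For stopping at a specific time, a second timer pointed at a oneshot "stopper" unit does the job; a sketch with made-up names:
# mything-stop.service
[Unit]
Description=Stop mything

[Service]
Type=oneshot
ExecStart=/bin/systemctl stop mything.service

# mything-stop.timer
[Timer]
OnCalendar=*-*-* 23:00:00

[Install]
WantedBy=timers.target
Enable it alongside your starting timer with systemctl enable --now mything-stop.timer .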
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15963/" ] }
265,713
The swappiness parameter controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. What is the default setting and how to configure that to improve overall performance ?
The Linux kernel provides a tunable setting that controls swappiness; the default value is 60: $ cat /proc/sys/vm/swappiness60 To change it persistently, open /etc/sysctl.conf as root, then change or add this line to the file: vm.swappiness = 10 To change the swappiness value temporarily, write to the proc file as root: # echo 50 > /proc/sys/vm/swappiness (a plain sudo echo 50 > ... would not work, because the redirection is performed by your unprivileged shell).
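Equivalently with the sysctl tool, which avoids the redirection pitfall:
$ sudo sysctl vm.swappiness=10    # change the running value now
$ sudo sysctl -p                  # reload /etc/sysctl.conf after editing it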
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265713", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153195/" ] }
265,719
I create bash scripts which get sourced from another "main" script to set up variables needed by the main script. These variables need to be able to contain any character and not have them interpreted by the shell. For example: a single quote: ' dollar sign: $ asterix: * pound sign: #, etc. So, my thinking was to use a single quote and escape any enclosed single quotes and # characters. But, am getting a unexpected EOF while looking for matching error with the two files below. Question: What is the best way to define a string which can contain any set of characters that would require the least amount of tweaking? There are thousands of such foo.sh files and as the string is being extracted form another source, I want to minimize the number of special characters that I need to escape. What other characters do I need to escape. The desired output form the following scripts below is \MyMacro{*,Baker's Dozen,$x^$,#} Platform: MacOS 10.9.5 Sub Shell: foo.sh set -fstring_list='*,Baker\'s Dozen,$x^$,\#'set +f Main Shell: main.sh source foo.shprintf "%s{%s}" "\MyMacro" "${string_list}"
The unexpected EOF comes from the single-quoted assignment in foo.sh : inside single quotes, backslash is not an escape character, so \' does not produce a literal quote. The string actually ends at Baker\ , and the following ' opens a new quoted region that is never closed. Inside single quotes nothing needs escaping at all ( $ , * , # and backslash are all literal); the only problem character is the single quote itself, and the standard idiom is to close the quoting, add an escaped quote, and reopen: string_list='*,Baker'\''s Dozen,$x^$,#' With that one change, main.sh prints \MyMacro{*,Baker's Dozen,$x^$,#} as desired. The \# in your version would also have stored a literal backslash, and the set -f / set +f pair is unneeded, since globbing never happens inside a variable assignment.
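If these foo.sh files are generated by another program anyway, a quoted here-document sidesteps the escaping problem entirely, since nothing inside is expanded when the delimiter is quoted (a sketch; the || true only matters if the script runs under errexit):
read -r -d '' string_list <<'EOF' || true
*,Baker's Dozen,$x^$,#
EOF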
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7723/" ] }
265,720
If I run: sftp -oServerAliveInterval=10 server-2 Connection is established. But after increasing (decreasing) the value from 10 to 1: sftp -oServerAliveInterval=1 server-2 I am unable to connect: Connecting to server-2...Connection closed by 10.0.1.10Couldn't read packet: Connection reset by peer Any ideas why? Added -vvv: debug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: id_rsa (0xxxxxxxxxxx)Connection to 10.0.1.10 timed out while waiting to readCouldn't read packet: Connection reset by peer
ServerAliveInterval does more than schedule keep-alives: the OpenSSH client also uses it as its timeout while waiting to read a packet from the server, and after ServerAliveCountMax unanswered intervals (default 3) it drops the connection, which is exactly the Connection to 10.0.1.10 timed out while waiting to read in your -vvv output. With the interval at 1 second you effectively allow the server about 3 seconds of silence in total, and your trace shows the timeout hitting right after SSH2_MSG_SERVICE_ACCEPT , i.e. while the server is still busy with authentication; slow reverse-DNS or PAM lookups on the server side, most likely, which commonly take longer than that. With 10 seconds the same pause goes unnoticed. Keep a more generous interval, or compensate with a larger count: sftp -oServerAliveInterval=1 -oServerAliveCountMax=30 server-2
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77420/" ] }
265,740
Maybe it is a trivial question, but in the man page I didn't find anything useful. I am using Ubuntu and bash . The normal output for sha512sum testfile is <hash_code> testfile How do I suppress the filename output? I would like to obtain just <hash_code>
There isn't a way to suppress that, but since the SHA is always a single word without spaces you can do: sha512sum testfile | cut -d " " -f 1 or e.g. < testfile sha512sum | sed 's/ *-$//' (when reading from stdin, sha512sum prints the hash followed by two spaces and a - , so this strips the whole tail; a plain s/ -// would leave a trailing space behind).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/265740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
265,755
I noticed that the folder referenced in the subject line is taking up 1.5 GB. Can I run the below to clear it without causing permanent damage to my system? rm -rf /var/cache/PackageKit/metadata/updates/packages/*
From the discussion in the bug linked in Daniel Bruno's answer .. you can get rid of these files using PackageKit console client pkcon $ sudo pkcon refresh force -c -1 It takes some time but is provided by PackageKit itself. (and you may set a cron job for it) from the man page of pkcon(1) refresh [force] Refresh the cached information about available updates. and -c, --cache-age AGE Set the maximum acceptable age for cached metadata, in seconds. Use -1 for 'never'. So this tells PackageKit to delete cached information (refresh cached information with maximum acceptable age of : never) References : https://bugs.freedesktop.org/show_bug.cgi?id=80053#c6 https://bugzilla.redhat.com/show_bug.cgi?id=1306992#c10
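For the cron idea, a line like this in root's crontab would keep the cache pruned weekly (a sketch; adjust the schedule to taste):
0 3 * * 0 /usr/bin/pkcon refresh force -c -1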
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/265755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158252/" ] }
265,760
I generally see the syntax for setting the terminal title as (something like): echo -e '\e]0;Some Title\a' But I noticed this answer used 2 instead of 0 , which prompted me to do a little more digging. According to this document you can actually set both the "icon name" and the "window title" with this syntax: · ESC]0;stringBEL -- Set icon name and window title to string· ESC]1;stringBEL -- Set icon name to string· ESC]2;stringBEL -- Set window title to string where ESC is the escape character (\033), and BEL is the bell character (\007).Printing one of these sequences within the xterm will cause the windowor icon title to be changed. But it doesn't go on to explain what exactly it means by "icon title" or "icon name". When I try it out I don't see any difference between 0 and 2 , and 1 doesn't appear to do anything. So what is the "icon title", and what is supposed to happen when 0 or 1 is called?
This is X11 code perhaps ignored or unimplemented by modern Window Managers. Luckily, I don't run a modern Window Manager, so with FVWM on OpenBSD, I can set the icon name to blah and then minimize that xterm, which produces an icon with that name.
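Using the sequences quoted in the question, setting these from inside the xterm looks like:
$ printf '\033]1;blah\007'          # icon name only
$ printf '\033]2;Fancy Title\007'   # window title only
On window managers that never render icon titles, 1 appears to do nothing and 0 behaves just like 2 , which matches what the question observed.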
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19157/" ] }
265,785
When I set the -s parameter, diff also prints the differences for files that differ; I only want it to report the files that are identical. diff -s $FIRST_FILE $SECOND_FILE
The Unix philosophy is to have one tool per job, and the shell to glue them together. So: one tool to compare, and one tool to get the desired output format. In this case, the output format is sufficiently simple that this part can be done directly with the shell. To compare two files, if you're only interested in whether they have the same content and not in listing out the differences, use cmp . if cmp -s -- "$FIRST_FILE" "$SECOND_FILE"; then printf '%s\n' "$FIRST_FILE = $SECOND_FILE"fi
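Extending that to many files, for example reporting every file in dir1 that has an identical counterpart in dir2 (a sketch):
for f in dir1/*; do
  g="dir2/${f##*/}"
  [ -f "$g" ] && cmp -s -- "$f" "$g" && printf '%s = %s\n' "$f" "$g"
done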
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158272/" ] }
265,790
I have multiple Linux hosts (virtualization), and I can log in as root on only one of them with ssh root@host thanks to an ssh key setup (id_rsa.pub and authorized_keys). I don't have this setup on the other Linux hosts in the same cluster. I have access to a salted hash of the root password. I tried multiple tools such as hashcat to decrypt the hash but none was successful. Is there a way to decrypt the hash so that I can recover the root password, or is there a way to get the same key configuration onto the other Linux hosts in the cluster?
You cannot decrypt it. A salted password hash is the output of a one-way function; no algorithm turns it back into the password. Tools like hashcat don't "decrypt" either: they guess candidate passwords, hash each one with the same salt and compare, so against a reasonably strong password they simply never finish in useful time, which matches your experience. The practical answer is the second half of your question: distribute your key instead of recovering the password. Get your public key appended to /root/.ssh/authorized_keys on each of the other hosts through whatever access you do have (the virtualization platform's console, the template or image the guests were cloned from, configuration management, or an administrator): cat ~/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys with /root/.ssh mode 700 and the file mode 600. If password authentication were enabled on the targets, ssh-copy-id root@host would do this in one step, but that of course requires the very password you don't have.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144939/" ] }
265,821
I'm trying to learn bash and I'm trying to list the files I have in a folder using a bash script from a.sh, like this: 1: a.sh2: b.sh3: c.sh I've looked at ls and find commands, but they don't seem to prefix numerically like I want. Please Help!
There are many ways of doing this. For example, if you are sure your file names don't contain newlines, you can do: $ ls | cat -n 1 a.sh 2 b.sh 3 c.sh 4 d.sh A safer way that can deal with file names containing newlines or any other strange characters: $ c=0; for file in *; do ((c++)); printf '%s : %s\n' "$c" "$file"; done1 : a.sh2 : b.sh3 : c.sh4 : d.sh To see why the latter two are better, create a file name that contains newlines: $ touch 'a long file name'$ touch 'another long filename, this one has'$'\n''a newline character!' Now, compare the output of the two approaches: $ ls | cat -n 1 a long file name 2 another long filename, this one has 3 a newline character! 4 a.sh 5 b.sh 6 c.sh 7 d.sh As you can see above, parsing ls (which is generally a bad idea) results in the file name with the newline being treated as two separate files. The correct output is: $ c=0; for file in *; do ((c++)); printf '%s : %s\n' "$c" "$file"; done1 : a long file name2 : another long filename, this one hasa newline character!3 : a.sh4 : b.sh5 : c.sh6 : d.sh As @Vikyboss points out in the comments, the shell solution above will set the variable $c which will persist after the loop exits. To avoid that, you could add unset c at the end, or use yet another approach. For example: $ perl -le 'for(0..$#ARGV){print $_+1 ." : $ARGV[$_]"}' *1 : a long file name2 : another long filename, this one hasa newline character!3 : a.sh4 : b.sh5 : c.sh6 : d.sh
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158293/" ] }
265,833
I am trying to back-reference the first captured group in the command below: $ ls -lR | egrep 'd(rw(x|s)|rw-|r-(x|s)|-w(x|s)|r--|-w-|--(x|s))(rw(x|s)|rw-|r-(x|s)|-w(x|s)|r--|-w-|--(x|s))---'drwxr-s--- 2 s9akhtar s9akhtar 4096 Feb 25 11:53 dir1dr-xrws--- 2 s9akhtar s9akhtar 4096 Feb 25 11:53 dir2dr-xrws--- 2 s9akhtar s9akhtar 4096 Feb 25 11:53 dir3drwxrws--- 2 s9akhtar s9akhtar 4096 Feb 25 11:53 dir4drwxrws--- 4 s9akhtar s9akhtar 4096 Feb 25 11:55 subdirdrwxrws--- 2 s9akhtar s9akhtar 4096 Feb 25 11:54 dir5drwxrws--- 2 s9akhtar s9akhtar 4096 Feb 25 11:54 dir6 So I want to do something like this: ls -lR | egrep "d(rw(x|s)|rw-|r-(x|s)|-w(x|s)|r--|-w-|--(x|s))\1---" but it comes up empty. Why is the back reference to the first captured group (the big parenthesised group) not working: (rw(x|s)|rw-|r-(x|s)|-w(x|s)|r--|-w-|--(x|s))
A back reference does not repeat the pattern of a group; it matches the exact text that the group captured. So \1 in your expression only matches lines whose group permission triplet is character-for-character identical to the user triplet, and none of your directories qualify (in drwxrws--- , for example, the user part is rwx but the group part is rws ). Regular expressions have no operator that "replays" a subexpression, so you must write the group out twice, or store it in a shell variable and expand it twice, as shown below. Also note that back references in ERE ( egrep / grep -E ) are a GNU extension rather than POSIX, so even where they behave as you expect they are not portable.
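The variable version, also shortened a little with bracket expressions (each $p expands to the same pattern text, which is what the \1 attempt was meant to achieve):
p='(rw[xs]|rw-|r-[xs]|-w[xs]|r--|-w-|--[xs])'
ls -lR | grep -E "d$p$p---"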
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156639/" ] }
265,845
XFA forms are features of a pdf file involving options to complete fields in certain documents - in many cases official documents. These options may open a calendar, for example, in order to select day, month and year, etc. Usually these forms ensure that a certain official format is used. I have seen that Okular displays a warning that XFA forms are not supported: More here . Selecting 'Show forms' in Okular those fields can be edited and changes can be saved, but comparing to what I see in Windows with Adobe Reader only some part of those are really accessed in this way: the calendar options are absent, and the separate fields of day/month/year are not present, which may raise questions on the correctness of the result. Adobe Reader 9 can still be installed in Ubuntu 14.04 but this seems like a very limited option. Is there a a native pdf reader that can use fully XFA forms? (If not, is Wine a solution?) The solution for Ubuntu 14.04 works in 16.04. too. The file I tested was here (official French government website).
Master PDF Editor for Linux has a free and a commercial version, and even the free version has many advanced features, among which "Dynamic XFA form support" . Playonlinux has an option to install Adobe Acrobat Reader DC . But oddly, only letting PoL download and install the program works , while when selecting the latest version ( AcroRdrDC1700920044_en_US ) of the exe file previously downloaded locally the installation fails with an error. I have noticed this on several occasions, and also that PoL installs a different older version: 2015.010.20056 . In Ubuntu-16.04-systems the method of installing Adobe Reader 9 for 14.04 ( link ) still works. As suggested in Chris' answer , the newer versions of Evince/GNOME Document Viewer, can better handle XFA files, and good enough for the file in question - tested version 3.24.0.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/265845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
265,847
Let's say a user wants to execute a script test.sh but ls -l test.sh gives -rwxrwxr-- 1 root root 96 Feb 25 21:44 test.sh Now if the user doesn't want to make a copy of test.sh (on which he could do chmod +x ), he can simply run sh test.sh to execute test.sh . Is there an analogous way to execute binary programs for which one doesn't have execute permission?
Basically this is the same thing as one of the very famous UNIX technical interview questions, known for ages: Assume someone with root access ran a command chmod -R 444 / and made the chmod binary non-executable. How do you recover from it ? There is a perl answer and there is this one, which basically is running a non-executable program, chmod in this case: /lib/ld-linux.so /bin/chmod +x /bin/chmod I think you can apply it to any other program that you know is executable. Otherwise be ready to embrace the disaster, which may ensue PS> /lib/ld-linux.so might differ in name. So if the direct match is not available, look around for similarly named so 's. For instance on my CentOS 6 server, it is /lib/ld-linux.so.2 which is a symlink pointing to /lib/ld-2.12.so . So, your mileage may vary.
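If you're unsure which loader a given binary expects, the binary itself records it:
$ readelf -l /bin/chmod | grep interpreter
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
Then invoke exactly that path in front of the non-executable program.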
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/265847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150422/" ] }
265,890
Commonly for arm systems, device trees supply hardware information to the kernel (Linux). These device trees exist as dts (device tree source) files that are compiled and loaded to the kernel. Problem is that I do not have access to such a dts file, not even to a dtb file. I have access to /sys and /proc on the machine and I wanted to ask if that would allow me to "guess the correct values" to be used in a dts? An answer could also address whether this depends on whether the device-tree interface was used in the first place (i.e. a dtb was created and provided to the kernel), as opposed to a more hackish "we simply diverge from vanilla and patch the kernel so as to solve the device information problem for our kernel only" solution.
/proc/device-tree or /sys/firmware/devicetree/base /proc/device-tree is a symlink to /sys/firmware/devicetree/base and the kernel documentation says userland should stick to /proc/device-tree : Userspace must not use the /sys/firmware/devicetree/base path directly, but instead should follow /proc/device-tree symlink. It is possible that the absolute path will change in the future, but the symlink is the stable ABI. You can then access dts properties from files: hexdump /sys/firmware/devicetree/base/apb-pclk/clock-frequency The output format for integers is binary, so hexdump is needed. dtc -I fs Get a full device tree from the filesystem:

sudo apt-get install device-tree-compiler
dtc -I fs -O dts /sys/firmware/devicetree/base

outputs the dts to stdout. See also: How to list the kernel Device Tree | Unix & Linux Stack Exchange dtc in Buildroot Buildroot has a BR2_PACKAGE_DTC=y config to put dtc inside the root filesystem. QEMU -machine dumpdtb If you are running Linux inside QEMU, QEMU automatically generates the DTBs if you don't give it explicitly with -dtb , and so it is also able to dump it directly with:

qemu-system-aarch64 -machine virt -cpu cortex-a57 -machine dumpdtb=dtb.dtb

as mentioned at: https://lists.gnu.org/archive/html/qemu-discuss/2017-02/msg00051.html Tested with this QEMU + Buildroot setup on the Linux kernel v4.19 arm64. Thanks to Harry Tsai for pointing out the kernel documentation that says that /proc/device-tree is preferred for userland .
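If you later obtain a compiled dtb (for instance from /boot or from the vendor), dtc can also decompile it back to source; a sketch, where the .dtb path is hypothetical:

dtc -I dtb -O dts -o extracted.dts /boot/dtbs/my-board.dtb

The result can then be diffed against the dts generated from /sys/firmware/devicetree/base to see what the bootloader changed.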
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/265890", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
265,898
I'm wondering if it is possible to include empty curly braces {} inside a sed replacement called from a find -exec . An example: find "$dir" -type f -name "*" -exec sed -i s/hello{}a/hello{}b/g '{}' + This brings up the error message of duplicate {} : find: Only one instance of {} is supported with -exec ... + Is there a way to keep {} in the sed command and to be seen as literals, not as replacements for the files that find finds?
In this case, you can work around find 's exec grammar by capturing a brace expression and using a back reference in the replacement text:

$ cat f1 f2
f1: hello{}a
f2: hello{}a
$ find . -type f -exec sed -i 's/hello\([{][}]\)a/hello\1b/g' '{}' +
$ cat f1 f2
f1: hello{}b
f2: hello{}b

Or, more simply (as noted in the comments): find "$dir" -type f -exec sed -i 's/\(hello[{]}\)a/\1b/g' {} + Note that the -i option for Sed is not portable and will not work everywhere. The given command will work on GNU Sed only. For details, see: How to achieve portability with sed -i (in-place editing)?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138885/" ] }
265,908
I know when I use $ , it means characters before $ must be shown at the end of the string. But, I don't know the difference between the tests given below, when the variable a is an integer: [[ a =~ -?[0-9]+ ]] [[ a =~ ^-?[0-9]+$ ]] Are they the same?
No, they are not the same. Without anchors, [[ $a =~ -?[0-9]+ ]] succeeds if the pattern matches anywhere inside the string: a value like abc-123def or 12.5 still passes, because it contains at least one run of digits somewhere. With the anchors, ^-?[0-9]+$ must match the entire string: ^ pins the match to the beginning and $ to the end, so only an optional minus sign followed by digits - and nothing else - is accepted. The anchored form is therefore the one to use to test whether a value is a well-formed integer. Also note that the left-hand side should be "$a" (or $a ); a bare a is the literal one-character string a, which matches neither pattern.
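A quick demonstration in bash:

s='abc-123def'
[[ $s =~ -?[0-9]+ ]]   && echo "unanchored: match"   # prints, -123 is found inside
[[ $s =~ ^-?[0-9]+$ ]] && echo "anchored: match"     # prints nothing

n='-123'
[[ $n =~ ^-?[0-9]+$ ]] && echo "anchored: match"     # prints, the whole string is an integer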
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152911/" ] }
265,951
On machine A (running Oracle Linux Server release 6.4), I am able to get the date of 1 month ago intelligently by using the following command: $(date -d"1 month ago" '+%Y0%m') But it is not working on machine B (AIX); is there an alternative way to achieve this? Both are in .sh files and run with: sh Test.sh Error shown on machine B:

date: illegal option -- d
Usage: date [-u] [+Field Descriptors]
It has nothing to do with the shell, but with the date command. The -d option is specific to the GNU implementation of the date command. On non-GNU systems, that won't work unless you install the GNU version of date as a separate package (that would probably be installed as gdate or as /opt/gnu/bin/date ...). Note that recent versions of ksh93 have a similar feature with their printf builtin command: printf '%(%Y%m)T\n' '1 month ago' (see also zsh for another shell with builtin date manipulation support ( strftime builtin in the zsh/datetime module)). Some other date implementations also have features to adjust dates. For instance, with BSD date , you could do: date -v -1m +%Y%m I'm not aware that AIX comes with a command that does date calculation and there is no command in the POSIX toolchest, so no standard/portable command for that. You could revert to perl or do the calculation by hand:

eval "$(date +'y=%Y m=%m')"
m=$((${m#0} - 1))
[ "$m" -gt 0 ] || m=12 y=$((y - 1)) # January case
printf '%d%02d\n' "$y" "$m"
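If perl is available on the AIX box (it usually is), a sketch of the perl route mentioned above; it clamps the day of the month to 1 so that e.g. March 31 minus one month cannot spill back into March:

perl -MPOSIX -le '@t = localtime; $t[3] = 1; $t[4]--; print strftime("%Y%m", @t)'

POSIX::strftime normalizes the field list (month -1 becomes December of the previous year) before formatting.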
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265951", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158235/" ] }
265,992
I have a binary that creates some files in /tmp/*some folder* and runs them. This same binary deletes these files right after running them. Is there any way to intercept these files? I can't make the folder read-only, because the binary needs write permissions. I just need a way to either copy the files when they are executed or stop the original binary from deleting them.
You can use the inotifywait command from inotify-tools in a script to create hard links of files created in /tmp/some_folder . For example, hard link all created files from /tmp/some_folder to /tmp/some_folder_bak :

#!/bin/sh
ORIG_DIR=/tmp/some_folder
CLONE_DIR=/tmp/some_folder_bak
mkdir -p $CLONE_DIR
inotifywait -mr --format='%w%f' -e create $ORIG_DIR | while read file; do
    echo $file
    DIR=`dirname "$file"`
    mkdir -p "${CLONE_DIR}/${DIR#$ORIG_DIR/}"
    cp -rl "$file" "${CLONE_DIR}/${file#$ORIG_DIR/}"
done

Since they are hard links, they should be updated when the program modifies them but not deleted when the program removes them. You can delete the hard linked clones normally. Note that this approach is nowhere near atomic so you rely on this script to create the hard links before the program can delete the newly created file. If you want to clone all changes to /tmp , you can use a more distributed version of the script:

#!/bin/sh
TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p $CLONE_DIR
wait_dir() {
    inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while read file; do
        echo $file
        DIR=`dirname "$file"`
        mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
        cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
    done
}
trap "trap - TERM && kill -- -$$" INT TERM EXIT
inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while read file; do
    if ! [ -d "$file" ]; then
        continue
    fi
    echo "setting up wait for $file"
    wait_dir "$file" &
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158439/" ] }
265,994
I have a file in the following format:

19-08-02 Name appel ok hope local merge (mk) juin nov sept oct
00:00:t1 T1 299 0 24 8 3 64
F2 119 0 11 8 3 62
I1 25 0 2 9 4 64
F3 105 0 10 7 3 61
Regulated F2 0 0 0
FR T1 104 0 10 7 3 61
00:00:t2 T1 649 0 24 8 3 64
F2 119 0 11 8 3 62
I1 225 0 2 9 4 64
F3 165 0 10 7 3 61
Regulated F2 5 0 0
FR T1 102 0 10 7 3 61
20-08-02 Name appel ok hope local merge (mk) juin nov sept oct
00:00:t5 T1 800 0 24 8 3 64
F2 111 0 11 8 3 62
I1 250 0 2 9 4 64
F3 105 0 10 7 3 61
Regulated F2 0 0 0
FR T1 100 0 10 7 3 61

and I want to extract some data and write it to another file in CSV format like this:

T1 F2 I1 F3 Regulated F2 FR T1
00:00:t1 299 119 25 105 0 104
00:00:t2 649 119 225 165 5 102
00:00:t5 800 111 250 105 0 100
.......

I just need to extract the values in the third field appel for every 00:00:XX block. I've tried to use awk but I didn't succeed in writing the right script, especially since the 5th category is composed of two words: Regulated F2 . I don't know how to extract it as a single word. Any help please!
awk handles this well if you key off the first field of each line: a line starting with a timestamp opens a new record (and carries the T1 value in $3 ), and each following line carries one category. The two-word categories ( Regulated F2 and FR T1 ) are not a problem, because you match only on the first word and take the value from the appropriate column ( $3 instead of $2 ). A sketch of such a script is shown below; it assumes every 00:00:XX block contains the same six categories, as in your sample.
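A minimal sketch (the filename print_appel.sh is made up):

#!/bin/sh
awk '
BEGIN { print "T1", "F2", "I1", "F3", "Regulated F2", "FR T1" }
/^[0-9][0-9]:[0-9][0-9]:/ {                 # a new time block starts
    if (t != "") print t, t1, f2, i1, f3, reg, fr
    t = $1; t1 = $3; next
}
$1 == "F2"        { f2  = $2; next }
$1 == "I1"        { i1  = $2; next }
$1 == "F3"        { f3  = $2; next }
$1 == "Regulated" { reg = $3; next }        # "Regulated F2" line, value in $3
$1 == "FR"        { fr  = $3; next }        # "FR T1" line, value in $3
END { if (t != "") print t, t1, f2, i1, f3, reg, fr }
' "$1"

Run it as sh print_appel.sh logfile > out.csv ; set OFS=";" (or "," ) in the BEGIN block if your CSV consumer wants explicit separators.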
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/265994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158442/" ] }
266,023
I want to search my directory for "foo" in the files, but i have these gigantic sql files. How do I exclude these file types or file sizes larger than 3MB using ack-grep? Also how would this be done with grep?
I don't know about ack-grep but you can use find to exclude files larger than 3MB. find . -size -3M -exec grep "foo" {} \;
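Two small refinements, if useful: with GNU find you can batch files and keep filenames in the output by adding /dev/null (so grep always sees more than one file):

find . -size -3M -type f -exec grep "foo" /dev/null {} +

And ack itself can skip the SQL files by extension, something like ack --ignore-file=ext:sql foo (the --ignore-file syntax is from ack 2.x; check your version's --help ).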
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266023", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33183/" ] }
266,037
$ ls sess.vim -lh
-rw-r--r-- 1 root root 11K Feb 26 18:52 sess.vim

I want this file to be readable by everyone and writable by no one (except by root). Thus I set its permissions to 644 and ownership to root:root .

$ echo "text" >> sess.vim
zsh: permission denied: sess.vim

Seems fine. After some changes in vim I do :w! (force write) and the file is saved successfully. Now:

$ ls sess.vim -lh
-rw-r--r-- 1 MY_USERNAME users 11K Feb 26 19:06 sess.vim

Wt.. Why? How?
Using :w! in vim is similar to the following:

echo 'test' > sess.vim.temp
mv sess.vim.temp sess.vim

The mv command only cares about the directory permissions; the permissions of the file are not relevant. This is because you are modifying the directory, not writing to the file. To accomplish your goal, you will also need to adjust the permissions of the directory the file resides in.
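As an aside, vim's behaviour here is controlled by the 'backupcopy' option. A hedged sketch: putting set backupcopy=yes in your ~/.vimrc tells vim to write the file in place instead of replacing it via a rename, which preserves the owner and permissions - and makes :w! fail on a file you genuinely cannot write, rather than silently re-creating it as your own.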
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136690/" ] }
266,071
I have a log file that displays data 3 lines at a time, like this:

1 data
2 data
3 data
1 data
2 data
3 data
1 data
2 data
3 data

I'd like to take each 3 lines and display them on 1 line, like this:

1 data 2 data 3 data
1 data 2 data 3 data
1 data 2 data 3 data

I'd like to be able to cat this file and then pipe it through a command(s) that will do this for me. I suspect sed or awk are a solution.
You might be able to use paste :

$ paste - - - <data.txt
1 data 2 data 3 data
1 data 2 data 3 data
1 data 2 data 3 data
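Equivalent one-liners if paste isn't to hand - both are common idioms rather than anything specific to this file:

awk '{ ORS = (NR % 3 ? " " : "\n") } 1' data.txt
xargs -n 3 < data.txt

(note xargs also strips quoting, so prefer the awk form if the data may contain quotes or backslashes).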
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158486/" ] }
266,100
For ex - if I enter ping one.com the process will keep running - if I want to stop that process, I can type Ctrl C which if I'm not mistaken, will kill the process completely. If instead, I stop it with Ctrl Z, isn't it true that the process can still be operating in the background at some level? How is one able to spot a condition where a process is running but can't be seen on the terminal screen? Thanks.
Use the jobs built-in to see running tasks for your current shell.

$ ping google.com >/dev/null 2>&1 &
[1] 32406
$ jobs
[1]+  Running                 ping google.com > /dev/null 2>&1 &
$ ping google.com
[...]
^Z
[2]+  Stopped                 ping google.com
$ jobs
[1]-  Running                 ping google.com > /dev/null 2>&1 &
[2]+  Stopped                 ping google.com

To kill all running jobs, you can leverage jobs -p which lists the pids of all jobs.

$ for job in $(jobs -p); do kill $job; wait $job; done

Further reading: http://www.tldp.org/LDP/abs/html/x9644.html
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157350/" ] }
266,119
Background I'm running Centos 7. Originally, it was running on a single disk that looked something like this:

1 200M EFI System (/boot/efi)
2 500M Microsoft basic (/boot)
3 465.1G Linux LVM
  LVM VG centos
  - LVM LV ext4 centos-root (/)
  - LVM LV swap centos-swap (swap)

This was just a temporary solution as it was originally supposed to be installed on a Linux software RAID1 array. I got around to migrating it today. This is what it currently looks like:

Both new disks have this partition layout:
1 200M EFI System (/boot/efi)
2 457.6G Linux RAID /dev/md0 RAID1 (for boot and LVM)
3 8G Linux RAID /dev/md1 RAID0 (so 16GB total, for swap)

/dev/md0 looks like this:
1 500M Linux filesystem (/boot)
2 457G Linux LVM (centos-root is migrated to this)

LVM now has only one LV, centos-root

/etc/mdadm.conf looks like this:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=main.centos.local:0 UUID=5b5057b4:4235ba4b:5342dfda:acf63302 devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=main.centos.local:1 UUID=f82a8c99:9b391d83:4efc9456:9e9bad98 devices=/dev/sda3,/dev/sdb3

/etc/fstab looks like this:

/dev/mapper/centos-root / xfs defaults 0 0
UUID=fcb5f82f-ce6b-460b-800f-329e010bc403 /boot xfs defaults 0 0
UUID=C532-14AE /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/md1 swap swap defaults 0 0

blkid outputs this (relevant entries only):

/dev/sdb1: SEC_TYPE="msdos" UUID="C532-14AE" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="ed301bbd-c15c-40af-ae75-bf238d0e6270"
/dev/sda1: SEC_TYPE="msdos" UUID="C532-14AE" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="f3a76412-41a0-4e04-9b04-ad1c159133cf"
/dev/md0p1: LABEL="boot" UUID="fcb5f82f-ce6b-460b-800f-329e010bc403" TYPE="xfs" PARTLABEL="primary" PARTUUID="df8d6481-c6ce-423a-b5d5-205d355e5653"
/dev/md0p2: UUID="7LfywM-oPHy-MTEt-swlI-EVbZ-opTo-m82E6R" TYPE="LVM2_member" PARTLABEL="primary" PARTUUID="19e7f9d5-a955-4036-8338-03a748faa1f6"
/dev/mapper/centos-root: UUID="deaa9788-b487-4991-adf7-2945788fb6cd" TYPE="xfs"

I have a script which automatically mounts the other EFI partition to /boot/efi_[device] , and when the kernel is updated, the grub.cfg gets copied to this partition to keep everything in sync. /dev/sda1 and /dev/sdb1 are kept in sync by the script (I've verified this), so it shouldn't be an issue that fstab mounts either one to /boot/efi (this also means that if one drive was removed due to failure, the system is still guaranteed to boot). I could have put swap in a LV to simplify things, but the RAID0 gets better performance (for what it's worth) and I get an extra 16GB of space. I migrated the LV from the old drive to the new PV using the following commands:

pvcreate /dev/md0p2
vgextend centos /dev/md0p2
pvmove /dev/sdg3
vgreduce centos /dev/sdg3

Then I regenerated the initramfs with dracut (after backing up the original), and finally regenerated grub.cfg. Afterwards, I mounted the new /boot and /boot/efi partitions and copied everything over. Problem After disconnecting the old drive and booting, dracut fails to find my RAID arrays, and of course the /boot partition and my LVG as well. It appears that it's simply not calling mdadm --assemble on /dev/md0 and /dev/md1 . I'm able to do just that from the dracut prompt, after which lvm_scan finds my LVG, I can link /dev/centos/root to /dev/root , and the system continues booting without any problems once exiting the prompt. Everything seems to be exactly where it should be.
There was a kernel update available, so I tried installing it (assuming I messed something up the first time around when regenerating the initramfs and grub.cfg files), but no dice. System still fails in the exact same way. This is true when I boot from either EFI partition manually (as it should be since the two are identical). Link to rdsosreport.txt on pastebin What am I missing here? How do I get dracut to assemble my arrays?
The dracut documentation implies that any md raid arrays should be automatically assembled, and that the rd.md.uuid parameter should only be used if you want only certain arrays assembled as part of the boot process. It seems that in reality, the arrays are not assembled automatically, and are in fact only assembled when the rd.md.uuid parameter is set (for each array that needs to be assembled). It could be that, since the rd.lvm.lv parameter was already set, it somehow interfered with md , but I don't have the time to test that. In short, adding rd.md.uuid parameters for both of my arrays to the GRUB_CMDLINE_LINUX variable in /etc/default/grub , and then regenerating the grub config, fixed the issue for me.
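Concretely (using the array UUIDs from the mdadm.conf quoted in the question - substitute your own), the edit in /etc/default/grub would look something like:

GRUB_CMDLINE_LINUX="... rd.md.uuid=5b5057b4:4235ba4b:5342dfda:acf63302 rd.md.uuid=f82a8c99:9b391d83:4efc9456:9e9bad98"

followed by regenerating the config, which on a CentOS 7 EFI system is typically:

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg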
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158537/" ] }
266,179
I wrote a small C library for Linux and FreeBSD, and I'm going to write the documentation for it. I tried to learn more about creating man pages but did not find instructions or descriptions of best practices for making man pages for libraries. In particular, I'm interested in which section to put the man pages of the functions in - 3? Are there good examples or manuals? Is creating man pages for each function from the library a bad idea?
Manual pages for a library would go in section 3. For good examples of manual pages, bear in mind that some are written using specific details of groff and/or use specific macros which are not really portable. There are always some pitfalls in portability of man-pages, since some systems may (or may not) use special features. For instance, in documenting dialog , I have had to keep in mind (and work around) differences in various systems for displaying examples (which are not justified). Start by reading the relevant sections of man man where it mentions the standard macros, and compare those descriptions for FreeBSD and Linux. Whether you choose to write one manual page for the library, or separate manual pages for the functions (or groups of functions) depends on how complicated the descriptions of the functions would be: ncurses has a few hundred functions across several dozen manual pages. dialog has several dozen functions in one manual page. Others will be sure to show many more examples. Further reading: man -- display online manual documentation pages (FreeBSD) man-pages - conventions for writing Linux man pages groff_mdoc -- reference for groff's mdoc implementation HowTo: Create a manpage from scratch. (FreeBSD) What Is A "Bikeshed"?
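For orientation, a minimal section 3 page using the classic man macros might start like this (all names here are placeholders for your own library):

.TH MYLIB_INIT 3 2016-02-27 "mylib 1.0" "Library Functions Manual"
.SH NAME
mylib_init \- initialise the mylib library
.SH SYNOPSIS
.B #include <mylib.h>
.PP
.BI "int mylib_init(int " flags );
.SH DESCRIPTION
...
.SH RETURN VALUE
...
.SH SEE ALSO
.BR mylib_free (3)

You can preview it without installing via man ./mylib_init.3 (or nroff -man mylib_init.3 | less ).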
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/266179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91357/" ] }
266,196
What method is best to use for mounting an NFS share from another machine? Mount using an /etc/fstab entry, or mount using autofs? What is the difference between them?
Autofs mounts filesystems on demand - i.e. only when you actually need them - whereas a plain NFS entry in /etc/fstab mounts the remote filesystem permanently, making the whole exported tree available all the time. There are a few advantages of autofs over static NFS mounts: Advantages of AutoFS 1 Shares are accessed automatically and transparently when a user tries to access any files or directories under the designated mount point of the remote filesystem to be mounted. 2 Booting time is significantly reduced because no mounting is done at boot time. 3 Network access and efficiency are improved by reducing the number of permanently active mount points. 4 Failed mount requests can be reduced by designating alternate servers as the source of a filesystem.
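A minimal autofs setup as a sketch (the server name, export and mount point are made up): in /etc/auto.master add

/mnt/nfs  /etc/auto.nfs  --timeout=60

then create /etc/auto.nfs containing

data  -fstype=nfs,rw,soft  server.example.com:/export/data

and restart the automounter ( systemctl restart autofs or service autofs restart ). The first access to /mnt/nfs/data triggers the mount; it is unmounted again after 60 idle seconds.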
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74105/" ] }
266,241
I want to append to a file that is already present on the server through ssh. I'm trying:

ssh [email protected] echo "hello gourav how are you">/g.txt

but no data appears in g.txt
Add single-quotes around the remote command: $ ssh [email protected] 'echo "hello gourav how are you" >> /g.txt' EDIT: yes, as @Andrew Miloradovsky noted, use >> rather than > for appending rather than writing anew.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74105/" ] }
266,256
In a system with Ubuntu 14.04 and bash , I have the PS1 variable ending with the following contents: \u@\h:\w\$ so that the prompt appears as user@machinename:/home/mydirectory$ Sometimes, however, the current directory has a long name, or it is inside directories with long names, so that the prompt looks like user@machinename:/home/mydirectory1/second_directory_with_a_too_long_name/my_actual_directory_with_another_long_name$ This will fill the line in the terminal and the cursor will go to another line, which is annoying. I would like instead to obtain something like user@machinename:/home/mydirectory1/...another_long_name$ Is there a way to define the PS1 variable to "wrap" and "compact" the directory name, to never exceed a certain number of characters, obtaining a shorter prompt?
First of all, you might simply want to change the \w with \W . That way, only the name of the current directory is printed and not its entire path:

terdon@oregano:/home/mydirectory1/second_directory_with_a_too_long_name/my_actual_directory_with_another_long_name $ PS1="\u@\h:\W \$ "
terdon@oregano:my_actual_directory_with_another_long_name $

That might still not be enough if the directory name itself is too long. In that case, you can use the PROMPT_COMMAND variable for this. This is a special bash variable whose value is executed as a command before each prompt is shown. So, if you set that to a function that sets your desired prompt based upon the length of your current directory's path, you can get the effect you're after. For example, add these lines to your ~/.bashrc :

get_PS1(){
    limit=${1:-20}
    if [[ "${#PWD}" -gt "$limit" ]]; then
        ## Take the first 5 characters of the path
        left="${PWD:0:5}"
        ## ${#PWD} is the length of $PWD. Get the last $limit
        ## characters of $PWD.
        right="${PWD:$((${#PWD}-$limit)):${#PWD}}"
        PS1="\[\033[01;33m\]\u@\h\[\033[01;34m\] ${left}...${right} \$\[\033[00m\] "
    else
        PS1="\[\033[01;33m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
    fi
}
PROMPT_COMMAND=get_PS1

The effect looks like this:

terdon@oregano ~ $ cd /home/mydirectory1/second_directory_with_a_too_long_name/my_actual_directory_with_another_long_name
terdon@oregano /home...th_another_long_name $
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/266256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
266,258
I'm trying to find all the directories in a given path, and create soft links inside those directories into directories with the same names at another location. Many of the directories have spaces in their names. I have cobbled the following code together, and it seems to work provided there are no spaces. find /some/path/* -maxdepth 0 -exec sh -c "ln -s /some/other/path/"'$(basename {})'" {}" \; How should I change this to handle spaces? I normally don't have them in my directory names, but these mirror directories on my Windows PC, where I do use spaces. Any help is greatly appreciated! EDIT In response to the points made by cuonglm and Gilles: The order of the ln -s command arguments is not mistaken, but what I wanted to do is not quite clear from my explanation. For every directory in /some/path/ , I want to create a symbolic link in that directory pointed at a directory with the same name in /some/other/path/ . So /some/other/path/ is the source , and /some/path/ is the destination . The reason I want to do this is because /some/path/ contains a subset of the directories in /some/other/path/ , and I want a link from the subset to the full set for every directory. There won't be too many directories in the path, but I agree that not guarding against it is a pointless flaw. The reason I didn't use -type d is that there will only be directories and not files in the given path, but I realise including it is better.
The root of the problem is that find 's output is being re-interpreted by the shell: embedding {} inside a shell-quoted string (as in sh -c "... {} ...") means names with spaces get word-split. Instead, pass the found name to sh as a positional parameter and quote it everywhere it is used: find /some/path -mindepth 1 -maxdepth 1 -type d -exec sh -c 'ln -s "/some/other/path/$(basename "$1")" "$1"' sh {} \; Since you only descend one level, a plain shell loop is even simpler and handles spaces for the same reason - every expansion is quoted. A sketch of that loop follows below.
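A minimal loop, assuming /some/other/path really does contain a directory of each name:

for dir in /some/path/*/; do
    name=$(basename "$dir")
    ln -s "/some/other/path/$name" "$dir"
done

The trailing slash in the glob restricts it to directories, and $dir / $name are quoted at every use, so names with spaces pass through intact.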
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/266258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158656/" ] }
266,303
Like below command, if true; then IFS=":" read a b c d e f <<< "$test" The book says when value assignment command ( IFS ":" ) is used before main command( read a b c d e f <<< "$value" ), its value is effective on the main command temporarily. So, the read command use delimiter : . But, like this command, if true; then HOME="hello" echo "$HOME" Echo message is not hello. What's the real meaning of above command?
This comes down to a question of how evaluation works. Both examples work in the same way; the problem happens because of how the shell (bash, here) expands variables. When you write this command:

HOME="foo" echo $HOME

The $HOME is expanded before the command is run . Therefore, it is expanded to the original value and not the new one you have set it to for the command. The HOME variable has indeed been changed in the environment that the echo command is running in; however, you are printing the $HOME from the parent. To illustrate, consider this:

$ HOME="foo" bash -c 'echo $HOME'
foo
$ echo $HOME
/home/terdon

As you can see above, the first command prints the temporarily changed value of HOME and the second prints the original, demonstrating that the variable was only changed temporarily. Because the bash -c ... command is enclosed in single quotes ( ' ' ) instead of double ones ( " " ), the variable is not expanded and is passed as-is to the new bash process. This new process then expands it and prints the new value it has been set to. You can see this happen if you use set -x :

$ set -x
$ HOME="hello" echo "$HOME"
+ HOME=hello
+ echo /home/terdon
/home/terdon

As you can see above, the variable $HOME is never passed to echo . It only sees its expanded value. Compare with:

$ HOME="hello" bash -c 'echo $HOME'
+ HOME=hello
+ bash -c 'echo $HOME'
hello

Here, because of the single quotes, the variable and not its value are passed to the new process.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/266303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152911/" ] }
266,424
Background I am about to migrate files from my old NAS to a new one, and want to verify the data integrity. The old NAS (Debian) is using the Linux Ext3 file system, whilst the new one (FreeNAS) is based on ZFS. To speed up the integrity validation I am trying to use the triage approach:

first validate all file sizes
secondly md5 hash the first 512 bytes of each file
lastly md5 hash entire file

The idea being that the first two steps would filter out obviously corrupted files, and be much quicker to detect than running md5 in bulk for TB of files. Question I have constructed a bash command for performing a md5 hash of a directory structure, and sorting the output based on file name to ensure a deterministic order on my Linux NAS.

# find somedir -type f -exec md5sum {} \; | sort -k 34;
12e761f96223145aa63f4f48f252d7fb /somedir/foo.txt
18409feb00b6519c891c751fe2541fdc /somedir/bar.txt

But how to modify above if I want to md5 only the first 512 bytes of each file?
You can use dd to pipe only the first 512 bytes to md5sum . However this will cause md5sum to be oblivious of the filename, so in addition replace - with the filename again. find . -type f -exec sh -c "dd if={} bs=512 count=1 2>/dev/null | md5sum | sed s\|-\|{}\|" \; | sort -k 34;
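A variant of the same idea that is a little safer with odd filenames (spaces etc.), passing each name into sh as a positional parameter; head -c is non-POSIX but present on both GNU and FreeBSD userlands:

find somedir -type f -exec sh -c 'head -c 512 "$1" | md5sum | sed "s|-|$1|"' sh {} \; | sort -k 2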
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158758/" ] }
266,506
I am trying to copy a file to 10 files. Say for example I have a E-mail message named test1.eml . I want 10 copies of the same file. When I searched in internet, I came across this stackoverflow thread https://stackoverflow.com/questions/9550540/linux-commands-to-copy-one-file-to-many-files and followed the eval command mentioned by one of the community member 'knittl'. eval 'cp test1.eml 'test{2..10}.eml';' The above mentioned command worked and it met my requirements. Are there any other alternative/more elegant commands to achieve this, since the person who mentioned about eval command told it's kind of a dirty hack.
I would do something like for i in {2..10}; do cp test1.eml test$i.eml; done Yet is more or less the same thing.
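An alternative that reads the source only once and needs no loop - tee duplicates its input to every file named (the brace expansion is done by the shell):

tee test{2..10}.eml < test1.eml > /dev/null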
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/266506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158818/" ] }
266,512
Due to bureaucracy, I am in a situation where I can get someone to run a bash script on a Linux server and give me outputs, but I cannot log in there, or run the script myself. I am fairly certain that the server in question is running Debian or Ubuntu. I want to find out which python and which g++ versions are installed (long story). So far my best idea is to get $PATH variable, split it by : , and then search all the paths for everything matching python , g++ respectively. Is there a saner way?
The following should work for g++, provided that you have no local g++ installations.

dpkg -l 'g++*'

On my system, this gives:

dpkg -l 'g++*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name               Version     Architecture  Description
+++-==================-===========-=============-======================================
ii  g++                4:4.9.2-2   amd64         GNU C++ compiler
ii  g++-4.6            4.6.3-14    amd64         GNU C++ compiler
un  g++-4.6-multilib   <none>      <none>        (no description available)
ii  g++-4.9            4.9.2-10    amd64         GNU C++ compiler
ii  g++-4.9-multilib   4.9.2-10    amd64         GNU C++ compiler (multilib files)
ii  g++-multilib       4:4.9.2-2   amd64         GNU C++ compiler (multilib files)

For Python, a similar approach will pick up too many false positives, because on Debian and its derivatives, all Python libraries start with python- . So, one would need a more refined glob pattern. Something like dpkg -l 'python?.?' should work.

dpkg -l 'python?.?'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name        Version    Architecture  Description
+++-===========-==========-=============-==============================================================
ii  python2.6   2.6.8-1.1  amd64         Interactive high-level object-oriented language (version 2.6)
ii  python2.7   2.7.9-2    amd64         Interactive high-level object-oriented language (version 2.7)
un  python3.1   <none>     <none>        (no description available)
ii  python3.4   3.4.2-1    amd64         Interactive high-level object-oriented language (version 3.4)
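Since someone can run an arbitrary script for you, asking the tools directly is an easy cross-check that also catches locally built compilers or interpreters outside dpkg:

g++ --version
for p in python python2 python3; do
    command -v "$p" >/dev/null && "$p" --version 2>&1   # python2 prints its version on stderr
done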
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266512", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158819/" ] }
266,533
Here is the script:

TYPE="${BLOCK_INSTANCE:-mem}"
awk -v type=$TYPE '
/^MemTotal:/ {
    mem_total=$2
}
/^MemFree:/ {
    mem_free=$2
}
/^Buffers:/ {
    mem_free+=$2
}
/^Cached:/ {
    mem_free+=$2
}
/^SwapTotal:/ {
    swap_total=$2
}
/^SwapFree:/ {
    swap_free=$2
}
END {
    if (type == "swap") {
        free=swap_free/1024/1024
        used=(swap_total-swap_free)/1024/1024
        total=swap_total/1024/1024
    } else {
        free=mem_free/1024/1024
        used=(mem_total-mem_free)/1024/1024
        total=mem_total/1024/1024
    }
    pct=used/total*100
    # full text
    printf("%.1fG/%.1fG (%.f%)\n", used, total, pct)
    # short text
    printf("%.f%\n", pct)
    # color
    if (pct > 90) {
        print("#FF0000\n")
    } else if (pct > 80) {
        print("#FFAE00\n")
    } else if (pct > 70) {
        print("#FFF600\n")
    }
}' /proc/meminfo

Here is the error when I try to run it:

$ ./memory
awk: run time error: not enough arguments passed to printf("%.1fG/%.1fG (%.f%)")
	FILENAME="/proc/meminfo" FNR=46 NR=46
1.1G/15.3G (7

It prints what I want (the memory usage) but also has an error. Can anyone help?
Awk's printf is treating your trailing % as the start of a fourth format specifier. If you want to print a literal % sign you need %% , for example $ awk 'BEGIN{printf("%.1fG/%.1fG (%.f%%)\n", 1.2, 3.4, 5.6)}'1.2G/3.4G (6%)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117923/" ] }
266,545
After a recent break in on a machine running Linux, I found an executable file in the home folder of a user with a weak password. I have cleaned up what appears to be all the damage, but am preparing a full wipe to be sure. What can malware run by a NON-sudo or unprivileged user do? Is it just looking for files marked with world writable permission to infect? What threatening things can a non-admin user do on most Linux systems? Can you provide some examples of real world problems this kind of security breach can cause?
Most normal users can send mail, execute system utilities, and create network sockets listening on higher ports. This means an attacker could send spam or phishing mails, exploit any system misconfiguration only visible from within the system (think private key files with permissive read permissions), and set up a service to distribute arbitrary content (e.g. a porn torrent). What exactly this means depends on your setup. E.g. the attacker could send mail looking like it came from your company and abuse your server's mail reputation; even more so if mail authentication features like DKIM have been set up. This works until your server's reputation is tainted and other mail servers start to blacklist the IP/domain. Either way, restoring from backup is the right choice.
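One concrete audit step along these lines - looking for the world-writable files such malware hunts for - is a find sweep (shown for the root filesystem only; adjust paths to taste):

find / -xdev -type f -perm -0002 -ls 2>/dev/null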
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/266545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94805/" ] }
266,565
http://linuxg.net/how-to-transform-a-process-into-a-daemon-in-linux-unix/ gives an example of daemonizing a process in bash: $ nohup firefox& &> /dev/null If I am corrrect, the command is the same as "nohup and background a process".But isn't a daemon more than a nohupped and background process? What steps are missing here to daemonize a process? For example, isn't changing the parent process necessary when daemonizing a process? If yes, how do you do that in bash? I am still trying to understand a related reply https://unix.stackexchange.com/a/177361/674 . What other steps and conditions? See my related question https://stackoverflow.com/q/35705451/156458
From the Wikipedia article on daemon : In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually either created by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix. Read the manpage of the daemon function. Running a background command from a shell that immediately exits results in the process's PPID becoming 1. Easy to test:

# bash -c 'nohup sleep 10000 &>/dev/null & jobs -p %1'
1936
# ps -p 1936
     PID    PPID    PGID     WINPID   TTY   UID    STIME COMMAND
    1936       1    9104       9552  cons0 1009 17:28:12 /usr/bin/sleep

As you can see, the process is owned by PID 1, but still associated with a TTY. If I log out from this login shell, then log in again, and do ps again, the TTY becomes ? . Read here why it's important to detach from TTY . Using setsid (part of util-linux ):

# bash -c 'cd /; setsid sleep 10000 </dev/null &>/dev/null & jobs -p %1'
9864
# ps -p 9864
     PID    PPID    PGID     WINPID  TTY   UID    STIME COMMAND
    9864       1    9864       6632  ?    1009 17:40:35 /usr/bin/sleep

I think you don't even have to redirect stdin, stdout and stderr.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/266565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
266,571
My goal is to remote to a server through ssh, start a screen, start a script, let the script run, and exit the ssh session while keeping the screen running its own python script. This is what I have:

ssh -t myuser@hostname screen python somepath.py -s 'potato'

The problem with this is, after I run it, I have to manually ctrl + a + d, and exit out of the ssh session myself. Is there a way to do it all in one go without needing human interaction? EDIT: I have tried the suggested method of using -dm This is what I'm testing to make it easier to see: ssh -t user@host screen "top" remotely I see this:

user 2557 0.0 0.2 27192 1468 ? Ss 13:35 0:00 SCREEN top
user 2562 0.0 0.1 11740 932 pts/0 S+ 13:35 0:00 grep --color=auto SCREEN

but if I do: ssh -t user@host screen -dm "top" I immediately get a Connection to host closed. And nothing in my grep:

ps aux | grep SCREEN
user 2614 0.0 0.1 11740 932 pts/0 S+ 13:36 0:00 grep --color=auto SCREEN
You can use -d -m to your screen session to do it like: ssh myuser@hostname screen -d -m "python somepath.py -s 'potato'" That will create a new screen session, run your command in it and automatically detach you from it. That option is documented as -d -m Start screen in detached mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts. on the GNU documentation page for screen
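To check on and re-enter the detached session later, the usual pair is:

ssh user@host screen -ls        # list sessions on the remote host
ssh -t user@host screen -r      # reattach (add the session name if there are several)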
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41392/" ] }
266,584
I would like to use a variable say: i=1 as a value to refer to positional variables passed to a script, e.g.:

x=101
y=201
z=301
foo(){
    echo "$1"
    echo "$2"
    echo "$3"
}
foo x y z

output:

101
201
301

Instead of referring to each parameter by index, how could I use i to increment through as an index variable? To clarify:

foo() {
    local i=1
    echo "$i"    # echo first parameter
    (( i+=1 ))
    echo "$i"    # echo second parameter
    # etc.
}

what is the syntax for the echo "$i" part? UPDATE after @Eric answer

~$ t=5
~$ foo() { i=1; echo "${!i}"; }
~$ foo t
t
~$

Update #2 So in short, the only way I can make my method work is by this:

foo() {
    # assuming 3 parameters
    i=0
    (( i+=1 ))
    var="${!i}"
    echo "${!var}"
    (( i+=1 ))
    var="${!i}"
    echo "${!var}"
    (( i+=1 ))
    var="${!i}"
    echo "${!var}"
}
This is similar to the SO question here , which is similar to @costas comment. You can use $# to get the number of arguments and then indirect references like ${!i} to access a variable by name. Here's an example:

f() {
    for((i=1; i<=$#; i++)); do
        printf "%d %s\n" "$i" "${!i}"
    done
}
f a b c

which prints:

1 a
2 b
3 c

Seeing now that you want to pass in the names of variables as the positional arguments, you can add an extra layer of indirect reference like so:

a=first
b=second
c=third
f() {
    for((i=1; i<=$#; i++)); do
        var="${!i}"
        printf "%d %s\n" "$i" "${!var}"
    done
}
f a b c

which prints

1 first
2 second
3 third

This lets us treat each argument as the name of a variable, which we store in var here. Then we access that variable indirectly in the printf . You only get one layer of indirection at a time, nesting doesn't work. So trying to do it in one fell swoop as ${!${!i}} doesn't work because the first { starts the expansion, with the rest being treated as the PARAMETER value to expand. The first character being ! it will treat the rest as the name of the PARAMETER containing the name of the parameter we want, but ${!i} is not a valid parameter name, so we get a bad substitution . So we just do it in 2 steps to avoid that problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158871/" ] }
266,591
I'm trying to use a url piped from tshark into a while loop.

while read line ; do
    echo "$line"
    ffmpeg -i "$line" -c copy "filename"
done < <(tshark -i tun0 -B 50 -P -V -q -l -Y 'http matches "(?<=\[Full request URI: )(http://mywebsite.com/file.*)(?=\])"' 2>&1 | grep --line-buffered -Po "(?<=\[Full request URI: )(http://mywebsite.com/file.*)(?=\])" | unbuffer -p uniq)

Inside the loop, I'm able to echo $line just fine; and it looks like this: http://mywebsite.com/file?eid=5345944&fmt=5&app_id=214748364&range=20-30&etsp=1456777145&hmac=1K9nwkA8TOgtOXAsakSfMMVWsuE But for some reason, I'm unable to use this same "line" variable to feed ffmpeg, inside the same while loop. Doing ffmpeg -i "$line" -c copy "filename" results in (all quotes are accurately copy pasted):

[http @ 0x1ab5ec0] HTTP error 400 Bad Request:
Server returned 400 Bad Request
5345932&fmt=5&app_id=214748364&range=20-30&etsp=1456779359&hmac=35B2lA6D0zfR2DmfdPS4ZcilYxg

On the other hand, copying the url (from the echo output), double quoting it and using the same ffmpeg command in a terminal works perfectly. Also, for some reason, the command is truncated when running the script with -xv, in such a way that it does not show the full "+ ffmpeg -i 'http://...." line as it should.
All the symptoms point to a stray carriage return ( \r ) at the end of $line : tshark is reporting an HTTP header line, and HTTP lines end in \r\n , so the captured URI keeps the \r . echo looks fine because the terminal just returns the cursor to column 0; ffmpeg, however, sends the \r as part of the URL and the server answers 400. It also explains the mangled error output and the truncated set -xv trace - the carriage return makes later text overwrite the start of the line. The fix is to strip the carriage returns before using the value, either in the pipeline or on each line; a sketch of both is below.
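Either delete them in the pipeline (appended to your existing tshark | grep | uniq chain):

... | tr -d '\r' | while read line; do ...

or strip the trailing one per iteration inside the loop, with a bash parameter expansion:

while read line; do
    line=${line%$'\r'}       # drop a trailing carriage return, if any
    ffmpeg -i "$line" -c copy "filename"
done < <( ... )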
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158869/" ] }
266,596
When I hit Ctrl + x , Ctrl + e in zsh , I can edit the current command line in $EDITOR or $VISUAL . However, I'd like to use nano , and to get syntax highlighting for shell syntax there, I have to pass -Y sh , as nano doesn't recognise shell syntax automatically when editing the command line ( zsh creates /tmp/random-name without a .sh extension to pass to nano ). I can execute

EDITOR='nano -Y sh'
VISUAL="$EDITOR"

and then press Ctrl + x , Ctrl + e to get the desired result. However, other programs use $EDITOR / $VISUAL , too. If I set $EDITOR / $VISUAL as above, and then do (for example) git commit , the commit message is highlighted as shell syntax, which I want to avoid. I also tried EDITOR='nano -Y sh' fc which did work, however that seems a little verbose to type out each time (I might put it in a function though). Also, fc prepopulates the command line with the last history command line, and to use it, I have to type out the command. That means I could not type out some long command in zsh and then decide to edit it in nano as I could with the keyboard shortcut. So, is there a way for me to tell zsh the editor/flags to use only for editing the command line when pressing Ctrl + x , Ctrl + e that other programs ignore? I would love some environment variable that I can set in ~/.zshrc and then forget about.
The universal way to solve every computer problem¹ is to add a level of indirection. Instead of calling edit-command-line , call a wrapper function.

nano-command-line () {
  local VISUAL='nano -Y sh'
  edit-command-line
}
zle -N nano-command-line
bindkey '^X^E' nano-command-line

¹ Hyperbole.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158883/" ] }
266,627
I'd like to assign the contents of variables of another bash script to variables in the calling script. Specifically, I source this file: https://projects.archlinux.org/svntogit/packages.git/plain/trunk/PKGBUILD?h=packages/firefox (and other alike files). The file contains variables named depends , makedepends , etc. So in my script I have multiple statements like these:

depends="$(source "/path/to/file" ; printf '%s' "${depends[@]}")"
makedepends="$(source "/path/to/file" ; printf '%s' "${makedepends[@]}")"
...

So basically, each statement starts its own subshell which sources the file and prints the contents of just ONE variable to a variable in the parent shell. Is there another way which involves starting just a SINGLE subshell , sourcing the file and getting the contents of specified variables of the file assigned to specified variables in the calling shell without polluting the environment of the calling shell? ------------------------------------------------------------------------ Since Mark Mann pointed out the dangers of source -ing foreign scripts, I ended up with another solution. Instead of using source to get variables of another script, one can use a multi line grep with a perl regex to grep all needed variables from the file (varName=(...),varName2="...",varname3='...',varName4=...) and eval the result:

$ grepvars='(license)|(depends)|(makedepends)|(url)|(pkgdesc)|(pkgver)'
$ eval $(grep -Pzo "^(${grepvars})=\([^\)\(\`]*\)|^(${grepvars})=\"[^\"\(\`]*\"|^(${grepvars})='\''[^'\'']*'\''|^(${grepvars})=[^\s;\(\`]*" /tmp/above_mentioned_file)
$ echo $url
https://www.mozilla.org/firefox/
$ echo ${depends[@]}
gtk3 gtk2 mozilla-common libxt startup-notification mime-types dbus-glib alsa-lib ffmpeg2.8 desktop-file-utils hicolor-icon-theme libvpx icu libevent nss hunspell sqlite ttf-font
Use eval . If you have your source (in /tmp/other.sh):

a=1
b=2
c=3

And you want only a portion, you can use eval to get just those items (here in /tmp/main.sh):

eval $(source /tmp/other.sh; echo a="$a"; echo b="$b";)
echo a is $a "(expect 1)"
echo b is $b "(expect 2)"
echo c is $c "(expect nothing)"

And running it:

$ bash /tmp/main.sh
a is 1 (expect 1)
b is 2 (expect 2)
c is (expect nothing)

WARNING : Performing an eval or source on an untrusted script is very dangerous. You're executing a shell script, and that script can perform anything you could do yourself. WARNING
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/266627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157149/" ] }
266,659
I have 119766 files in a folder. They are CSV files. I want to find out the total number of lines of all files. I'm trying to run the following command: cat * | wc -l But the following error occurs: -bash: /bin/cat: Argument list too long How can I do that? Is there any way around this? One thing I would like to add: the total number of lines will be very large.
If you want a line-count for each individual file:

find . -type f -exec wc -l {} + | awk '! /^[ 0-9]+[[:space:]]+total$/'

I've excluded the total lines because there will be several of them with this many files being processed. The find ... -exec ... + will try to fit as many filenames onto a single command line as possible, but that will be a LOT less than 119766 files....probably only several thousand (at most) per invocation of wc , and each one will result in its own independent 'total' line. If you want the total number of lines in all files combined, here's one way of doing it:

find . -type f -exec wc -l {} + | awk '/^[ 0-9]+[[:space:]]+total$/ {print $1}' | xargs | sed -e 's/ /+/g' | bc

This prints only the line counts on the total lines, pipes that into xargs to get the counts all on one line, then sed to transform the spaces into + signs, and then pipes the lot into bc to do the calculation. Example output:

$ cd /usr/share/doc
$ find . -type f -exec wc -l {} + | awk '/^[ 0-9]+[[:space:]]+total$/ {print $1}' | xargs | sed -e 's/ /+/g' | bc
53358931

Update 2022-05-05 It is better to run wc -l via sh . This avoids the risk of problems arising if any of the filenames are called total ....aside from the total line being the last line of wc 's output, there is no way to distinguish an actual total line from the output for a file called "total", so a simple awk script that matches "total" can't work reliably. To show counts for individual files, excluding totals:

find . -type f -exec sh -c 'wc -l "$@" | sed "\$d"' sh {} +

This runs wc -l on all filenames and deletes the last line (the "total" line) from each batch run by -exec . The $d in the sed script needs to be escaped because the script is in a double-quoted string instead of the more usual single-quoted string. Double-quotes were used because the entire sh -c is a single-quoted string. It's easier and more readable to just escape one $ symbol than to use '\'' to fake embedding a single-quote inside a single quote. To show only the totals:

find . -type f -exec sh -c 'wc -l "$@" | awk "END {print \$1}"' sh {} + | xargs | sed -e 's/ /+/g' | bc

Instead of using sed to delete the last line from each batch of files passed to wc via sh by find ... -exec , this uses awk to print only the last lines (the "total") from each batch. The output of find is then converted to a single line (xargs) with + characters between each number (sed to transform spaces to +), and then piped into bc to perform the calculation. Just like the $d in the sed script, the $1 in the awk script needs to be escaped because of double-quoting.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150872/" ] }
266,676
I have 3 configuration files on my Linux box which contain config information for a custom application. I want to change some values in the configuration files. The contents of the files and descriptions are given below: config1 file content: set VAR1=/app/client/10x_64/instance config2 file content: set VAR2=/app/client/11x/instance config3 file content: set VAR3=/app/client/11x_64/instance I want to change all values 10x_64, 11x, 11x_64 to 12x_64 in all the files. Currently I'm using three commands to change the content; the commands are given below:

sed -i 's/10x_64/12x_64/g' config1
sed -i 's/11x/12x_64/g' config2
sed -i 's/11x_64/12x_64/g' config3

I want a single generalized command to change the content of all 3 files.
If you want a single expression, you can do: sed -i 's#/client/[^/]*#/client/12x_64#g' config* I've used /client/[^/]* as the marker to find what we want to replace (ie whatever is after /client/ but before the next / ) , but we could have done client/[^/]*/instance instead if that avoids matching other items in the file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135239/" ] }
266,699
I need a large number of damaged .png files in order to test my project. To that end, I need to set all bytes from the 0x054-th to the 0xa00-th to 0. .png files contain chunks with checksums; I want to alter the image data chunk (IDAT) without updating the checksum. Furthermore, I want to damage a lot of bytes, so that a visible (black) area appears when displaying the image (provided the viewing program ignores the checksum mismatch). Here is what I have got so far:

#!/bin/sh
# This script reads all .png or .PNG files in the current folder,
# sets all bytes at offsets [0x054, 0xa00] to 0
# and overwrites the files back.
temp_file_ascii=outfile.txt
temp_file_bin=outfile.png
target_dir=.
start_bytes=0x054
stop_bytes=0xa00
len_bytes=$stop_bytes-$start_bytes
for file in $(find "$target_dir" -name "*.png") #TODO: -name "*.PNG")
do
    # Copy first part of the file unchanged.
    xxd -p -s $start_bytes "$file" > $temp_file_ascii
    # Create some zero bytes, followed by 'a',
    # because I don't know how to add just zeroes.
    echo "$len_bytes: 41" | xxd -r >> $temp_file_ascii
    # Copy the rest of the input file.
    # ??
    mv outfile.png "$file"
done

EDIT: the finished script, using the accepted answer:

#!/bin/sh
if [ "$#" != 3 ]
then
    echo "Usage: "
    echo "break_png.sh <target_dir> <start_offset> <num_zeroed_bytes>"
    exit
fi
for file in $(find "$1" -name "*.png")
do
    dd if=/dev/zero of=$file bs=1 seek=$(($2)) count=$(($3)) conv=notrunc
done
You can do something much simpler with dd :

dd if=/dev/zero \
   of="$your_target_file" \
   bs=1 \
   seek="$((start_offset))" \
   count="$((num_zeros))" \
   conv=notrunc

With $start_offset being the start of your range of bytes to zero out (zero-based, as in to erase from the n th byte, use n-1), and $num_zeros the length of that range. The $((...)) will take care of converting hexadecimal to decimal. (Other tests you could run would be to set if to /dev/urandom rather than /dev/zero , or overwrite the checksums with random data too.)
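For the offsets in the question this would be, on one file:

dd if=/dev/zero of=image.png bs=1 seek=$((0x054)) count=$((0xa00 - 0x054)) conv=notrunc

i.e. 2476 bytes zeroed starting at offset 84.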
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20506/" ] }
266,705
I want to measure the execution time of commands and get (to me) strange behavior of the time command. I call time like it is mentioned in the examples of the manpages with the -f or --format option and get a command not found error. It seems to think -f is the command to measure the execution time of...

user@laptop:~$ time -f '%e' sleep 1
-f: Befehl nicht gefunden.

("Befehl nicht gefunden" = "command not found".) Then I call the same line again, just that I specify the path to time . The result changes, and it works as intended (wtf?):

user@laptop:~$ /usr/bin/time -f '%e' sleep 1
1.00

Now I'm confused... even which can't help me:

user@pc:~$ which time
/usr/bin/time

And... if I don't use options it works fine, even without the full path:

user@laptop:~$ time sleep 1

real	0m1.003s
user	0m0.001s
sys	0m0.002s

How do I use this command correctly if I want to specify options? Using the full path seems to work, but looks more like an unreliable side effect of something buggy (or something I don't understand)? Edit: I'm using Ubuntu 14.04 LTS (trusty)
Bash has a reserved word called time (often loosely described as a built-in). So if you just type time, the shell's own version is used, and it has no options like -f. But man time (which talks about -f) gives the man page of the /usr/bin/time program. So if you want to use the options described in the man page, you should ensure you call the program, not the shell's version. One way of achieving this is using the full path. (IMHO, this is really ugly in bash.) For more info, see the links below the question.
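If you'd rather not hard-code the path, either of these should also work in bash, since the word time is only treated specially when it appears unquoted at the start of a pipeline — both force a normal PATH lookup instead:

\time -f '%e' sleep 1
command time -f '%e' sleep 1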
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48852/" ] }
266,728
On an Ubuntu 14.04 server I am experiencing massive hard disk activity which has no apparent justification: it comes in bursts, lasts a few minutes and then disappears. It consumes system resources and slows down the whole system. Is there a (command-line) tool which can be used to monitor the disk activity, listing the processes that are using the disk and the files involved? Something like htop for the CPU.
For checking I/O usage I usually use iotop. It's not installed by default on the distro, but you can easily get it with:

sudo apt-get install iotop

Then launch it with root privileges:

sudo iotop --only

The --only option will show only the processes currently doing I/O.
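Since the bursts are intermittent, it may also help to leave iotop running in batch mode and read the log after the next burst (flags: -b batch mode, -o only active processes, -t timestamps, -qqq suppress the repeated header lines):

sudo iotop -botqqq >> /tmp/iotop.log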
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/266728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
266,757
I have a directory that contains subfolders of various depth. I want to go through all of them, check if they contain a folder with a certain name, and if that directory exists run a script (let's call this script foo.sh to avoid confusion). foo.sh should run in the current folder if it finds the target folder. Example:

/A
    /subA-1
    /subA-2
        /target
    /subA-3
        /sub-subA-3
            /target

The command/script I'm looking for shall be run from /A, and will then go through all subfolders looking for a folder with the name target. Upon entering /subA-2 this condition is satisfied and foo.sh is then run in /subA-2. Same for /sub-subA-3, but not /subA-3. foo.sh does not need any input, it just has to be run in the folder containing /target.
It's as simple as this:

find A -type d -name target -execdir foo.sh \;

From the man page:

-execdir command ;
    Like -exec, but the specified command is run from the subdirectory containing the matched file.

Example: Create and print the directory structure from the question:

/tmp$ mkdir A; cd A
/tmp/A$ mkdir -p subA-1 subA-2/target subA-3/sub-subA-3/target
/tmp/A$ find .
.
./subA-2
./subA-2/target
./subA-3
./subA-3/sub-subA-3
./subA-3/sub-subA-3/target
./subA-1

Now run the command, substituting pwd for foo.sh to show what's going on:

/tmp/A$ find . -type d -name target -execdir pwd \;
/tmp/A/subA-2
/tmp/A/subA-3/sub-subA-3
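If your find lacks -execdir (it's a GNU/BSD extension, not POSIX), a rough equivalent is to spawn a shell that changes into the parent directory of each match; this sketch assumes foo.sh is on PATH or given as an absolute path, since the working directory changes:

find A -type d -name target -exec sh -c 'cd "${1%/*}" && foo.sh' sh {} \;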
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143905/" ] }
266,772
From a server 192.168.0.1, I'd like to reach a server 192.168.0.2 on port 80. 192.168.0.2 can reach 192.168.0.1, but 192.168.0.1 can't reach 192.168.0.2 (firewall). I have set up a reverse proxy by typing the following command on 192.168.0.2:

ssh -f -N -T -R0.0.0.0:80:localhost:80 192.168.0.1

Now 192.168.0.1 can reach 192.168.0.2 with the following command:

wget localhost:80

However, I'd like to be able to reach 192.168.0.2 by typing

wget 192.168.0.2:80

Is this possible, without messing with the DNS?
The listening socket created by the -R forward lives on 192.168.0.1, so from that machine the tunnel is only reachable through one of 192.168.0.1's own addresses — hence wget localhost:80 working. Typing wget 192.168.0.2:80 makes 192.168.0.1 try to connect to 192.168.0.2 directly, which is exactly what the firewall blocks, so the tunnel is never involved. Two points, then. First, for the 0.0.0.0 binding in -R0.0.0.0:80:... to actually take effect on non-loopback addresses, GatewayPorts yes (or GatewayPorts clientspecified) has to be set in /etc/ssh/sshd_config on 192.168.0.1. Second, to make the literal address 192.168.0.2:80 work from 192.168.0.1 without touching DNS, you can redirect locally generated traffic for that destination back to the local tunnel endpoint with an iptables NAT rule on 192.168.0.1, sketched below.
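A sketch of such a rule, untested against your setup and assuming the tunnel listens on port 80 of 192.168.0.1 as in your ssh command:

iptables -t nat -A OUTPUT -p tcp -d 192.168.0.2 --dport 80 -j REDIRECT --to-ports 80

REDIRECT in the nat table's OUTPUT chain rewrites locally generated packets to target the local machine, so a connection to 192.168.0.2:80 lands on the sshd listener and travels through the tunnel. The same command with -D instead of -A removes the rule again.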
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266772", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158999/" ] }
266,794
When I run the command type type, this gives me a result on standard output. So now I try this command:

type type > abc.txt

It redirects the standard output to abc.txt. So, if this command executes in the same process (bash itself), then file descriptor 1 points to abc.txt. After that, whenever I run a command, the results should go to the file abc.txt, because file descriptor 1 points to abc.txt. However, results always go to standard output. Does this mean that shell built-ins run in a forked process?
Output redirection

File descriptor 1 represents stdout, the standard output stream. When output redirection is used in type type > abc.txt, the shell opens the file abc.txt for writing and file descriptor 1 is modified so that it points to the open file instead of the terminal device. However, this redirection only applies to the current command being executed, so this does not imply that the command executes in a forked process (or subshell).

Persistent redirection

If you wanted the redirection to persist, you could use the exec shell builtin to modify the file descriptors, e.g., to redirect standard output for successive commands, run the following command:

exec >abc.txt

Be careful running this, as your shell session will be hard to use if all command output is being redirected to a file instead of your terminal device. You can restore the stdout file descriptor to the terminal output device by redirecting it to the same device pointed to by stderr (file descriptor 2):

exec >&2

Related resources

For more details, see:

the section on Redirections from the Bash manual
Input And Output from Greg’s Wiki
Redirection tutorial from Bash-hackers Wiki
the Redirection Wikipedia article
the file descriptor Wikipedia article
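If you experiment with exec redirection, a common pattern is to save the original stdout in a spare descriptor first, so you can restore it even when stderr isn't pointing at the terminal (a small bash sketch):

exec 3>&1        # duplicate the current stdout to fd 3
exec > abc.txt   # stdout now goes to the file
type type        # output lands in abc.txt
exec >&3 3>&-    # restore stdout and close fd 3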
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152911/" ] }
266,808
I am behind a corporate firewall, which brings lots of pain in the area of proxies. There are two main approaches I've found to work:

Use Cntlm, at the cost of not being able to connect (from the command line) to HTTPS and external SSH locations. (Cntlm allows you to hash your username and password using PassNTLMv2 (thus avoiding plain text) and set http://localhost:3128/ as your proxy, which then redirects to your "real" proxy. As I mentioned, I cannot connect to HTTPS and external SSH using this method.)

Put my username and password in plain text in the http_proxy variable, at the cost of having my username and password in plain text.

Clearly, if security were not a concern, I'd just go with option 2. I found somewhat of a solution by doing this in my .babunrc (I use Babun, it's basically Cygwin with a little extra; the same could go in a .bashrc or .zshrc though):

export http_proxy="http://`echo "Y21hbjpwYXNzd29yZA==" | base64 -d`@20.20.20.20:20/"

This way my password is at least encoded. If someone came to my computer and typed echo $http_proxy they'd see my password, but I don't think there's any way around that. Are there any alternative approaches to this? Or maybe a way to encrypt the string as opposed to encoding it? I wouldn't mind typing in some password when I open a prompt if there was no way around it.
Using base64 is useless, it's just a simple transformation. Using encryption with a key that's stored alongside the encrypted data is also useless, because it's still just a simple transformation. If you're worried about someone getting access to your configuration files, then you need to encrypt with a key that isn't in your configuration files, and that means you'll have to type a password¹ when you log in. Rather than make your own, use an existing encryption mechanism. On Linux, if you go with file encryption, encrypt your home directory with eCryptfs, or encrypt the whole disk with Linux's disk encryption layer (dm-crypt, cryptsetup command), or create a small per-file encrypted filesystem with encfs. In the latter case, have a script that mounts the encfs filesystem and then runs a script stored there. On Windows, put the file on a TrueCrypt / VeraCrypt volume. Alternatively, use a password manager (set a master password, of course). Gnome's password manager (gnome-keyring) can be queried from the command line with the secret-tool utility. Seahorse provides a convenient GUI for exploring and modifying the keyring and setting a master password.

secret-tool store --label='Corporate web proxy password' purpose http_proxy location work.example.com
export http_proxy="http://cman:$(secret-tool lookup purpose http_proxy location work.example.com)@192.0.2.3/"

This requires D-Bus, which is normally available by default under Linux (most modern desktop environments require it) but needs to be started manually under Cygwin (I don't know exactly how).

¹ or otherwise supply secret material, e.g. stored on a smartcard.
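Building on the secret-tool approach, you could wrap the lookup in a small function in your shell startup file so the proxy is only set when you ask for it (a sketch reusing the attributes stored above):

proxy_on() {
    local pw
    pw=$(secret-tool lookup purpose http_proxy location work.example.com) || return 1
    export http_proxy="http://cman:${pw}@192.0.2.3/"
    export https_proxy="$http_proxy"
}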
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121116/" ] }
266,817
I'm looking for a way to swap Esc and Caps Lock on the Linux virtual console. In X11, I can do this with setxkbmap -option caps:swapescape, but I don't know an equivalent for text mode. So, what can I do?
First you need to install console-data:

sudo apt-get install console-data

Now use showkey to find the keycodes of your Esc and Caps Lock keys:

sudo showkey

My keycode for Esc was 1 and for Caps Lock was 58. Now create a .keystrings file in your home directory:

vim ~/.keystrings

In that file, swap the mappings: the key that used to produce Escape (keycode 1) now produces Caps_Lock, and vice versa:

keycode 1 = Caps_Lock
keycode 58 = Escape

Save and close your .keystrings file, then load it in your TTY:

sudo loadkeys ~/.keystrings

The Caps Lock and Escape keys should now be swapped.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56505/" ] }
266,828
I have the following script. It prints all of the first characters from /etc/passwd to the console, but my expectation was to get only one character and for the program to then terminate.

#!/bin/sh

cat /etc/passwd |
while read -r line
do
    echo ${line} | grep -o . | while read char
    do
        echo ${char}
        exit 1
    done
done

Actual output:

r
b
d
a
(etc)

Expected output:

r
Each part of a pipeline runs in its own subshell, so both while loops in the script execute in subshells: the outer one because of cat /etc/passwd | while ..., and the inner one because of echo ... | grep ... | while .... The exit 1 therefore only terminates the innermost subshell — the one running the inner while loop — after which the outer loop simply carries on with the next line of /etc/passwd, printing one first character per line. That is why you see r, b, d, a, … instead of just r. To make exit end the whole script, restructure it so the loop containing the exit does not run in a subshell, e.g. by feeding the outer loop with a redirection instead of a pipe, as in the sketch below.
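A minimal reworking along those lines — a sketch that keeps the per-character loop but moves exit into the main shell, where it really does end the script:

#!/bin/sh
while read -r line
do
    for char in $(printf '%s\n' "$line" | grep -o .)
    do
        echo "$char"
        exit 1
    done
done < /etc/passwd

This prints r (the first character of the first line of /etc/passwd) and terminates.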
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155369/" ] }
266,866
When I want to stop tail -f I press CTRL+C and return to the prompt. But when I run it over an ssh connection, CTRL+C breaks the connection. (Meanings of the flags are described here.)

ssh -t svf "cd ~/w/logs; tail -f some_file.log; exec $SHELL -l"

This post does not describe how to prevent the remote shell from exiting for ssh -t remotehost command args ... How can I stop the command (in this example tail -f) with CTRL+C without breaking the ssh connection?
In

ssh -t svf "cd ~/w/logs; tail -f some_file.log; exec $SHELL -l"

-t means the local tty line discipline is set to raw (pass-through) mode and a pseudo-tty is created on the remote side. That pseudo-tty will get default settings, which on most systems means for instance that a ^C character will cause a SIGINT to be sent to the foreground process group of that pseudo-terminal. Here, when you ssh with a command, sshd runs login-shell-of-the-remote-user -c that-command. That's a non-interactive shell, so it doesn't perform job control. In particular, that shell, like every command it runs, will run in the same process group, which is the foreground process group of the terminal. The $SHELL -l shell though (note that $SHELL is expanded on the local machine, which is probably not what you want) will be an interactive shell (as it's called without -c and its stdin is a terminal device), so it will run commands (pipelines) in different process groups and make them the foreground process group of the terminal or not as necessary. Here, to ensure that only tail -f receives the SIGINT, or at least that only it and not the shell is killed, you've got several options depending on the login shell of the remote user. If the login shell of the remote user is bash or yash, you can do:

ssh -t svf 'set -m; cd ~/w/logs && tail -f some_file.log; exec "$SHELL" -l'

The -m option enables job control. That tells the shell to run pipelines in separate process groups (like for interactive shells) and (for bash and yash, generally not for all other shells) that the process group of pipelines run in the foreground is made the foreground process group of the terminal device (so that they get the SIGINT on ^C, or SIGTSTP on ^Z for instance), while the ones started in the background are not (so that they are, for instance, forbidden to read from the terminal — they receive SIGTTIN if they try). That way tail -f will run in its own process group, which will be the foreground process group. If you type CTRL+C while tail is running, only tail will receive the SIGINT. Another alternative, if you know the shell is Bourne-like, is to set a handler for SIGINT:

ssh -t svf 'trap : INT; cd ~/w/logs && tail -f some_file.log; exec "$SHELL" -l'

Here, we're configuring the shell so it executes the : (no-op) command upon receiving SIGINT. Child processes will inherit that handler, but upon executing a command, the handler will be reset to the default (of terminating the process). So now, upon CTRL-C while tail is running, both the shell and tail will receive the SIGINT, but only tail will die of it, and the shell will carry on executing "$SHELL" -l. If you know bash or yash is available on the remote host (and the remote shell is Bourne- or csh-like), you could also do:

ssh -t svf 'cd ~/w/logs && bash -c "set -m; tail -f some_file.log"; exec "$SHELL" -l'

We invoke the bash shell explicitly and turn on the -m option so that tail is started in its own foreground process group and only it receives the SIGINT, like in the first solution. With bash-4.3 or above, you can replace bash -c "set -m; ..." with bash -mc "...". That won't work with older versions, which ignore the -m (and -i) options passed to the bash interpreter when combined with -c.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129967/" ] }
266,871
I'm trying to install the VMware tools in a VMPlayer VM, but at a certain point of the installation I need to set the linux-headers path. So I go and try to install them with this command:

apt-get install gcc make linux-headers-$(uname -r)

Then I get the error:

Couldnt find any package by glob 'linux-headers-4.3.0-kali-amd64'

My sources.list file has these sources:

deb http://http.kali.org/kali kali-rolling main contrib non-free
deb http://http.kali.org/kali kali main contrib non-free
deb http://http.kali.org/kali sana main contrib non-free
deb http://http.kali.org/kali-security kali/updates main contrib non-free
deb http://http.kali.org/kali-security sana/updates main contrib non-free

I already did an apt-get update before trying to install the headers. What can I do to download them?
I would upgrade the kernel release itself instead of trying to install the Linux kernel headers for the old version (4.3.0) of the kernel. Perform the following steps after updating the Kali /etc/apt/sources.list file with the latest version of the Kali rolling repository:

sudo apt-get update          # pull the latest package lists from the Kali sources repo
sudo apt-get -y dist-upgrade # when installing this, you will see the latest kernel
                             # image in the list of packages to be installed, something
                             # like "linux-image-4.5.0-kali1-amd64"
reboot                       # MOST IMPORTANT STEP! make sure you reboot the machine via
                             # this cmd, OR shut down and restart forcefully, after
                             # completing the previous cmds
uname -r                     # check that the kernel release has updated
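Once the machine is back up on the new kernel, the header install from the question should succeed, because the package now exists for the running release:

sudo apt-get install gcc make linux-headers-$(uname -r)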
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159064/" ] }
266,887
For a really big file like 1GB, wc -l happens to be slow. Do we have a faster way of calculating the number of newlines for a particular file?
You can try to write it in C:

#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(){
    char buf[BUFSIZ];
    int nread;
    size_t nfound=0;
    while((nread=read(0, buf, BUFSIZ))>0){
        char const* p;
        for(p=buf; p=memchr(p,'\n',nread-(p-buf)); nfound++,p++)
            {;}
    }
    if(nread<0) { perror("Error"); return 1; }

    printf("%lu\n", nfound);
    return 0;
}

Save in e.g. wcl.c, compile e.g. with gcc wcl.c -O2 -o wcl and run with

<yourFile ./wcl

This finds newlines sprinkled in a 1GB file on my system in about 370ms (repeated runs). (Increasing buffer sizes slightly increases the time, which is to be expected -- BUFSIZ should be close to optimal.) This is very comparable to the ~380ms I'm getting from wc -l.

Mmapping gives me a better time of about 280ms, but it of course has the limitation of being limited to real files (no FIFOs, no terminal input, etc.):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(){
    struct stat sbuf;
    if(fstat(0, &sbuf)<0){ perror("Can't stat stdin"); return 1; }

    char* buf = mmap(NULL, sbuf.st_size, PROT_READ, MAP_PRIVATE, 0/*stdin*/, 0/*offset*/);
    if(buf == MAP_FAILED){ perror("Mmap error"); return 1; }

    size_t nread = sbuf.st_size, nfound=0;
    char const* p;
    for(p=buf; p=memchr(p,'\n',nread-(p-buf)); nfound++,p++) {;}

    printf("%lu\n", nfound);
    return 0;
}

I created my test file with:

$ dd if=/dev/zero of=file bs=1M count=1042

and added some test newlines with:

$ echo >> 1GB

and a hex editor.
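For a rough comparison on your own machine, you can time both over a few repeated runs so the file is served from the page cache (file is whatever test file you created above):

for i in 1 2 3; do time ./wcl < file; done
for i in 1 2 3; do time wc -l < file; done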
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/266887", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157620/" ] }