277,217
$ uname -a
Linux vm-** 2.6.32-573.8.1.el6.x86_64 #1 SMP Fri Sep 25 19:24:22 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

I downloaded dos2unix-7.3.3-win32.zip and unzipped it. Under the bin folder of the unzipped file, I got dos2unix.exe. How do I install dos2unix in Linux? I can't do yum install dos2unix, as I am not root and can't get root access.
Other answers show how to download and compile dos2unix, but if you're simply looking to convert files from DOS-style line endings (CR-LF) to Unix-style line endings, there are several other approaches which shouldn't involve installing anything.

If you have tr:

tr -d '\r' < input > output

If you have Perl:

perl -pi -e 's/\r\n/\n/g' input

(which converts the file in place, same as dos2unix). If you have sed:

sed -i 's/^M$//' input

where you'd press Ctrl V then Ctrl M to get ^M.
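As a quick sanity check (a minimal sketch using a throwaway sample file), od confirms that no \r bytes remain after the conversion:

$ printf 'one\r\ntwo\r\n' > input    # sample file with DOS line endings
$ tr -d '\r' < input > output
$ od -c output                       # only \n line endings remain
0000000   o   n   e  \n   t   w   o  \n
0000010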
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130895/" ] }
277,242
I have a collection of 5000+ files and I just want to create an output.txt file which contains the 27th line of all the files, along with their filenames. What I found on the internet is how to pick a specific line from a single file using awk or sed commands, such as:

$ sed -n 27p *.txt >> output.txt

For example, the files in my directory are:

log_1.txt
log_2.txt
log_3.txt
log_4.txt
...

I want the 27th line of each file, with its file name in front of or behind the printed line, in the new output.txt file.
awk 'FNR==27 {print FILENAME, $0}' *.txt > output.txt

FILENAME is the built-in awk variable for the current input file name, FNR refers to the line number within the current file, and $0 means the whole line.
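A quick illustration with two hypothetical three-line files (extracting line 2 instead of 27 for brevity):

$ printf 'a1\na2\na3\n' > log_1.txt
$ printf 'b1\nb2\nb3\n' > log_2.txt
$ awk 'FNR==2 {print FILENAME, $0}' log_1.txt log_2.txt
log_1.txt a2
log_2.txt b2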
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277242", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166277/" ] }
277,243
I would like to keep the lines with exactly 639 characters in my .txt file. What is the command to do so?
You can use grep:

grep -E '^.{639}$' your.txt

The ^ and $ match the beginning and end of the line. The .{639} matches any character exactly 639 times. As Stéphane commented, this can be shortened (by one character) to:

grep -Ex '.{639}' your.txt

with -x indicating: "Select only those matches that exactly match the whole line".
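The same pattern on a small scale (length 3 instead of 639) behaves like this:

$ printf 'ab\nabc\nabcd\n' | grep -Ex '.{3}'
abc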
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166313/" ] }
277,258
Created a file "employees":

1. Fred
2. Billy 1
3. Sally 1
4. Jim 2
5. Jane 2
6. Sue 3
7. Meg 3

Created a file "managers":

1. Fred
2. Bill
3. Sally

I want to print something like this:

Fred
Billy Fred
Sally Fred
Jim Billy
Jane Billy
Sue Sally
Meg Sally
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166301/" ] }
277,276
On the X window system (TWM) of my Ubuntu 12.04, xterm has a width of 484 pixels and a height of 316 pixels, and its geometry is 80x24, based on xwininfo. On the X window system (TWM) of my LFS 7.9, xterm has a width of 644 pixels and a height of 388 pixels, and its geometry is 80x24, based on xwininfo. How can I configure xterm on LFS 7.9 so that its width-by-height size matches Ubuntu 12.04? I like how the Ubuntu xterm looks.
An XTerm's size is determined by the number of characters it's displaying, the font it is using, and the size of the window manager decorations (title bar, outlines, etc.). You're probably using a different (larger) font on LFS. Ubuntu's xterm settings are probably in /etc/X11/app-defaults/{XTerm,XTerm-color} (at least that's where they are in Debian). You could copy them over, or at least the settings you want. [BTW: If you're not aware, XTerm has multiple fonts you can switch to via Control RightClick and Control Shift Keypad +/- (all bindings configurable).] You can also do that on a per-user basis in your ~/.Xresources file and with xrdb. If you want to know what all the settings in the XTerm app-defaults mean, the xterm manpage actually documents them thoroughly.
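As a minimal sketch of the per-user approach (the font name and sizes below are only examples, not Ubuntu's actual defaults), put something like this in ~/.Xresources:

! Illustrative XTerm resources -- adjust the font to taste
XTerm*faceName: DejaVu Sans Mono
XTerm*faceSize: 10
XTerm*geometry: 80x24

and load it with:

$ xrdb -merge ~/.Xresources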
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158683/" ] }
277,331
When a segmentation fault occurs in Linux, the error message Segmentation fault (core dumped) will be printed to the terminal (if any), and the program will be terminated. As a C/C++ dev, this happens to me quite often, and I usually ignore it and move on to gdb, recreating my previous action in order to trigger the invalid memory reference again. Instead, I thought I might be able to use this "core", as running gdb all the time is rather tedious, and I cannot always recreate the segmentation fault. My questions are three: Where is this elusive "core" dumped? What does it contain? What can I do with it?
If other people clean up... you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find:

core_pattern is used to specify a core dumpfile pattern name. If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.

(See "Core dumped, but core file is not in the current directory?" on Stack Overflow.) According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory.

But what's in there? What it contains is system specific, but according to the all-knowing encyclopedia:

[A core dump] consists of the recorded state of the working memory of a computer program at a specific time [...]. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information.

...so it basically contains everything that gdb needs (in addition to the executable that caused the fault) to analyze the fault.

Yeah, but I'd like me to be happy instead of gdb. You can both be happy, since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump. You should then be able to analyze the specific failure instead of trying and failing to reproduce bugs.
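Putting that together, a minimal workflow sketch (the program name and core file name are placeholders; the actual core file name depends on your core_pattern and ulimit settings):

$ ulimit -c unlimited                  # allow core files in this shell
$ cat /proc/sys/kernel/core_pattern    # see where/how cores get written
$ ./myprog                             # crashes: Segmentation fault (core dumped)
$ gdb ./myprog core                    # load the dump; 'bt' prints the backtrace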
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/277331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142558/" ] }
277,334
I use Gmail, and every so often I have to hack it back in that text/plain email should be rendered in a monospace font. This makes it much easier to skim system-generated reports, &c. The problems with this are that I have to re-hack Gmail every few years when they change their semantics, and I have more "developer" type colleagues who aren't going to hack their Gmail to improve the legibility of these emails. So, I'm wondering if anyone knows an easy command to take a text file and wrap it in HTML and enough MIME stuff to correctly encode the message as... ideally multipart/alternative, with the HTML being the text in a PRE tag. I mean, can I even feed MIME output to cron? I'd be content to pipe to an html-mime-email type command...
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/277334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5571/" ] }
277,365
I encountered this use case today. It seems simple at first glance, but fiddling around with sort, uniq, sed and awk revealed that it's nontrivial. How can I delete all pairs of duplicate lines? In other words, if there is an even number of duplicates of a given line, delete all of them; if there is an odd number of duplicate lines, delete all but one. (Sorted input can be assumed.) A clean elegant solution is preferable.

Example input:

a
a
a
b
b
c
c
c
c
d
d
d
d
d
e

Example output:

a
d
e
I worked out the sed answer not long after I posted this question; no one else has used sed so far, so here it is:

sed '$!N;/^\(.*\)\n\1$/d;P;D'

A little playing around with the more general problem (what about deleting lines in sets of three? Or four, or five?) provided the following extensible solution:

sed -e ':top' -e '$!{/\n/!{N;b top' -e '};};/^\(.*\)\n\1$/d;P;D' temp

Extended to remove triples of lines:

sed -e ':top' -e '$!{/\n.*\n/!{N;b top' -e '};};/^\(.*\)\n\1\n\1$/d;P;D' temp

Or to remove quads of lines:

sed -e ':top' -e '$!{/\n.*\n.*\n/!{N;b top' -e '};};/^\(.*\)\n\1\n\1\n\1$/d;P;D' temp

sed has an additional advantage over most other options, which is its ability to truly operate in a stream, with no more memory storage needed than the actual number of lines to be checked for duplicates. As cuonglm pointed out in the comments, setting the locale to C is necessary to avoid failures to properly remove lines containing multi-byte characters. So the commands above become:

LC_ALL=C sed '$!N;/^\(.*\)\n\1$/d;P;D' temp
LC_ALL=C sed -e ':top' -e '$!{/\n/!{N;b top' -e '};};/^\(.*\)\n\1$/d;P;D' temp
LC_ALL=C sed -e ':top' -e '$!{/\n.*\n/!{N;b top' -e '};};/^\(.*\)\n\1\n\1$/d;P;D' temp
# Etc.
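To see it in action on the example input (one letter per line, written to a scratch file named temp):

$ printf '%s\n' a a a b b c c c c d d d d d e > temp
$ LC_ALL=C sed '$!N;/^\(.*\)\n\1$/d;P;D' temp
a
d
e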
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
277,373
On an EFI system it is possible to run arbitrary EFI binaries. In particular, I can use the EFI shell (one EFI binary) to run GRUB (another EFI binary). Is it also possible to use GRUB to run, for example, an EFI shell? (In theory this should be no problem, but I did not find the correct command to start such a binary.)
Yes, and here's a short example taken from Rod Smith's great page on GRUB 2/EFI Boot Loading. To chainload another EFI boot loader, one uses GRUB 2's chainloader. The following GRUB 2 menuentry example will run an EFI bootloader:

menuentry "Windows 7" {
    insmod part_gpt
    insmod chain
    set root='(hd0,gpt1)'
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
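For the EFI shell specifically, the same pattern should work. A sketch, assuming you've copied a shell binary to /EFI/tools/shellx64.efi on the first GPT partition (both the path and the partition here are assumptions -- adjust them to where your ESP and shell binary actually live):

menuentry "EFI Shell" {
    insmod part_gpt
    insmod chain
    set root='(hd0,gpt1)'
    chainloader /EFI/tools/shellx64.efi
}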
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29241/" ] }
277,387
When using tab completion, I keep getting this error: bash: cannot create temp file for here-document: No space left on device. Any ideas? I have been doing some research, and many people talk about the /tmp directory, which might be overflowing. When I execute df -h I get:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.1G  8.7G     0 100% /
udev             10M     0   10M   0% /dev
tmpfs           618M  8.8M  609M   2% /run
tmpfs           1.6G     0  1.6G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           1.6G     0  1.6G   0% /sys/fs/cgroup
/dev/sda1       511M  132K  511M   1% /boot/efi
/dev/sda4       1.8T  623G  1.1T  37% /home
tmpfs           309M  4.0K  309M   1% /run/user/116
tmpfs           309M     0  309M   0% /run/user/1000

It looks like /dev/sda2 is about to explode. However, if I type:

$ du -sh /dev/sda2
0       /dev/sda2

it seems it's empty. I am new to Debian and I really don't know how to proceed. I usually access this computer via SSH. Besides this problem I have several others with this computer; they might be related. For instance, each time I want to log in as my user using the GUI (with root it works) I get: Xsession: warning: unable to write to /tmp: Xsession may exit with an error
Your root file system is full, and hence your temp dirs (/tmp, and /var/tmp for that matter) are also full. A lot of scripts and programs require some space for working files, even lock files. When /tmp is unwritable, bad things happen. You need to work out how you've filled the filesystem up. Typical places this happens are /var/log (check that you're cycling the log files) or /tmp itself. There are many, many other ways that a disk can fill up, however:

du -hs /tmp /var/log

You may wish to re-partition to give /tmp its own partition (that's the old-school way of doing it, but if you have plenty of disk it's fine), or map it into memory (which will make it very fast but start to cause swapping issues if you overdo the temporary files).
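To hunt down what is actually eating the root filesystem, a sketch using GNU du and sort (-x keeps du from descending into /home and other separately mounted filesystems):

# du -xh / 2>/dev/null | sort -rh | head -n 20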
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/277387", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166411/" ] }
277,522
I want to list all files in a directory that start with 'abc' and end with '.zip'. I'm trying to use ls. The directory has a lot of zip files starting with abc<date> and xvz<date>. I only want to get the list of the abc<date>.zip files.
ls doesn't do pattern matching on file names. It just lists the content of the directories and the files it is given as arguments. Your shell, on the other hand, has a feature called globbing or filename generation that expands a pattern into a list of files matching that pattern. Here that glob pattern would be abc*.zip (* being a wildcard that stands for any number of characters). You'd pass it to any command you like, such as printf for printing:

printf '%s\n' abc*.zip

You could also pass it to ls -l to display the attributes of those files:

ls -ld abc*.zip

(we need -d because if any of those files are of type directory, ls would list their content otherwise). Or to unzip to extract them, if only unzip could extract more than one file at a time. Unfortunately it doesn't, so you'd need either to use xargs -n1 or a for loop:

printf '%s\0' abc*.zip | xargs -r0n1 unzip

Or:

for file in abc*.zip; do unzip "$file"; done

But in fact, unzip being more like a port of an MS-DOS command, unzip itself would treat its argument as a glob. In other words, unzip 'abc*.zip' will not unzip the file called abc*.zip (a perfectly valid file name on Unix, not on Microsoft operating systems), but the files matching the abc*.zip pattern, so you'd actually want:

unzip 'abc*.zip'

(Actually our xargs and for approaches above would be wrong, because if there's a file called abc*.zip for instance, unzip would treat it as a pattern! See bsdtar for a more unixy way to extract zip archives.) For case-insensitive matching, you'd use [aA][bB][cC]*.[zZ][iI][pP] portably. Some shells have extended globbing operators for case-insensitive matching.

zsh:

setopt extendedglob
ls -ld (#i)abc*.zip

Or:

ls -ld ((#i)abc)*.zip

if you only want the abc part to be case insensitive.

ksh93:

ls -ld ~(i)abc*.zip

or:

ls -ld ~(i:abc)*.zip

With bash:

shopt -s nocaseglob
ls -ld abc*.zip

(no way to have only parts of the glob case sensitive there other than by using the portable syntax). With yash:

set +o case-glob
ls -ld abc*.zip

Same remark as for bash above.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163579/" ] }
277,538
I want to use nohup to periodically log the output of top (every 15s). For that, I log into the server with SSH, carry out the following command, and log out again:

nohup top -b -c -d15 &

The -c flag provides additional information about the command, but in the generated nohup.out file, this information is cut to 82 characters. So instead of output like this:

10484 daemon 20 0 68924 5556 2292 S 0.0 0.0 0:00.06 /opt/CollabNet_Subversion/bin/httpd -D csvn_installed -k start

...in nohup.out it looks like this:

10484 daemon 20 0 68924 5556 2292 S 0.0 0.0 0:00.06 /opt/CollabNet_Subv

How can I prevent nohup from trimming the results of top?
You need to specify the output width top should use, with the -w parameter (up to a maximum of 512 columns):

nohup top -b -c -d15 -w512 &

If your version of top doesn't support -w, the same effect can be achieved using the COLUMNS environment variable:

COLUMNS=512 nohup top -b -c -d15 &

As explained by schily, since top isn't outputting to a terminal in this case, it can't determine the terminal width to use and falls back to a safe default.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277538", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70381/" ] }
277,555
How do you use nmcli to delete a wifi connection by name? From what I've read, it only allows deletion via UUID:

nmcli connection delete <uuid>

The simplest way I've found to delete by name is to look up the UUID from the name and pass that in:

nmcli con delete `nmcli --fields NAME,UUID con list | grep -i mynetworkname | awk '{print $2}'`

Is there a simpler way?
To delete a wifi connection by name, type:

nmcli connection delete id <connection name>
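For example (the connection name below is hypothetical; note that newer NetworkManager releases list connections with nmcli connection show, while older ones use nmcli con list as in the question):

$ nmcli connection show                    # or: nmcli con list (older versions)
$ nmcli connection delete id "mynetworkname"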
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277555", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16477/" ] }
277,649
My Debian 8 system (Jessie) is primarily an xfce system, but I also use gnome and kde programs. I did not install any desktop environment, but I start my xsession with an .xsession file. Problem is, I cannot get the fonts (and other UI aspects) right.

Tools

First, I don't know what tool to use. There is kde's systemsettings and there is gnome-tweak-tool. I do have a gnome-settings-daemon running. Changes made in gnome-tweak-tool immediately manifest in thunderbird and gnome-tweak-tool itself, but affect e.g. krusader and k3b only after I restart them. But even then the fonts are too big; at least there is an effect. Systemsettings does not seem to have any effect at all on running applications, and when I restart a kde application, its fonts appear to be reset to some default setting. Gnome-tweak-tool offers only very few settings, while systemsettings allows setting the menu font, the toolbar font etc. separately. So I assume gnome must generate the fine-grained settings based on a few settings made by myself in a way which is "best for me". Xfce has its own xfce4-appearance-settings tool, but it does not seem to affect the fonts of applications and allows setting a single default font only. Oh, and then there is qtconfig, which seems to come in two versions, one for qt4 and one for qt5. I probably even missed a few. Currently, the fonts I see do not correspond to the font settings of any of those tools.

Questions

Anyway, I really would like to understand how this whole thing is supposed to work. The interactions between the various settings tools and the applications, and where they store their data, are a mystery to me. Is there an established standard for how these settings tools interact? Additionally, if someone can show me a way to dispose of all this magic, I'd be most grateful. In the old days, setting up xrdb took time, but at least it was a process I could understand, and it converged.

Specifically

What happens when I change the theme? Do the various settings tools overwrite each other's data? What do the (xfce/gnome) settings daemons do? What does dconf do?
This answer is not fully canonical--see Answerer's note at the end. In this answer, I intend to provide concise explanations based on actual experience in Debian-based distributions (Xubuntu and Debian Xfce). Anyway, I have confirmed that the fonts settings doesn't take effect immediately for Qt applications in GTK+ environment of Debian. Regarding tools First I don't know what tool to use. If you are an experienced user, don't use any third-party configuration tool. For convenience, users tend to use such tools like GNOME Tweak Tool. For best compatibility however, prefer to use graphical configuration tools provided by the desktop environment or respective toolkits. Else, use more advanced tools such as Dconf, Xfconf, or GSettings. Oh, and then there is qtconfig, which seems to come in two versions, one for qt4 and qt5. I probably even missed a few. Use qt4-qtconfig for Qt4, and similarly use qt5ct for Qt5. To check which version of Qt used by particular application, trace the package dependencies using APT. For example, I have VLC installed in Xubuntu. Since VLC is using Qt, I can query the APT cache and use 'qt' as the keyword to filter any Qt dependencies. $ apt-cache depends vlc | grep 'qt' Depends: libqtcore4 Depends: libqtgui4 In this case, the result hinted that VLC is using Qt4 toolkit. Another way is to check if any Qt packages have been installed. This can be done by running Dpkg command. I choose to filter the package that contains these keywords, 'libqt' and 'gui', since the earlier APT command above has hinted so. $ dpkg-query -W | grep 'libqt' | grep 'gui'libqtgui4:i386 4:4.8.5+git192-g085f851+dfsg-2ubuntu4.1 In this case, the result is showing Qt4 of version 4.8.x is installed. In case Qt5 package is installed, the result will contain libqt5gui5 instead of libqtgui4 . Naming is different for Qt5, which is why I use grep twice to filter the result. Currently, the fonts I see do not correspond to the font-settings of any of those tools. As hinted earlier by L. Levrel, the system setup is messy. Never mind for GNOME/KDE/Qt applications, but running gnome-settings-daemon in Xfce system? Xfce has its own settings daemon xfsettingsd that is provided by this package xfce4-settings in Debian. Font settings discrepancy In the following screenshots, I am running Xfce Appearance, Qt 4 Settings and VLC media player in Xfce desktop environment, using Xubuntu and Debian Xfce. Then, I changed the default font from "Droid Sans (Regular)" to "Droid Sans (Bold)" at same font size 10. For Xubuntu, the font settings were applied immediately for both GTK+ and Qt applications. The font specified in Xfce Appearance was sufficient to affect system wide, regardless of toolkit used. For Debian Xfce, the font settings were applied immediately for GTK+ applications only. I had to use Qt 4 Settings to change default font in Qt, from "Sans Serif (Normal)" to "Droid Sans Mono (Regular)". I also had to manually save from the menu bar before the changes were applied. Despite both Xubuntu and Debian Xfce were running xfsettingsd , Debian Xfce did not honour the font settings automatically for some reason. Possible clue: Debian Xfce may be missing certain packages that allow Qt applications to read system settings automatically. Regarding questions Is there an established standard how these settings tools interact? Likely no. This is why user should not use third-party tool to configure system settings. 
End users wouldn't know how these tools interact with each other, or these tools are unlikely designed to work in such mixed setup environment. Do the various setting tools overwrite each other's data? This depends on how the configuration tool works. If two different tools share same underlying configuration method i.e. GSettings, then both will supposedly read the same configuration file. In the first screenshot, notice that Qt 4 Settings is still showing "Droid Sans (Regular)" instead of "Droid Sans (Bold)". The system wide settings in Xubuntu has changed, but the Qt 4 Settings doesn't reflect the changes immediately, possibly due to different toolkit in use. When quit and launch again, Qt 4 Settings will now show the latest font settings "Droid Sans (Bold)". For the rest of questions, I'd suggest to look into the following linked pages for further reading. xfce:xfce4-settings:xfsettingsd on Xfce Docs. Projects/dconf on GNOME Wiki! DConf-0.24.0 / DConf-Editor-3.16.1 in Chapter 34. GNOME Libraries and Utilities, Beyond Linux® From Scratch - Version 7.8. What is dconf, what is its function, and how do I use it? on Ask Ubuntu. Shouldn't dconf-editor and gsettings access the same database? on Ask Ubuntu. HowDoI/GSettings on GNOME Wiki! Fonts on Debian Wiki. Fontconfig on Gentoo Wiki. Font configuration on ArchWiki. Disclaimer : This answer is mainly based on my experience in Xubuntu 14.04 and Debian 8 Xfce (both running Xfce 4.10). Despite running a similar desktop environment, the experience was slightly different for the font settings to take effect. Known limitations are as follows. Like my experience with Xfce, I would assume other systems that run a similar environment do not necessarily honour the font settings automatically. I did not investigate every possible configurations of mixed environment; some discussed points may not be fully applicable to systems that are configured manually. I did not test font settings for KDE applications; some KDE applications such as Dolphin file manager depends on both KDE and Qt packages, but Qt applications solely depends on Qt packages; this answer may not be relevant to KDE applications. TL;DR Don't mix tools created for a different environment; Don't use third-party configuration tools. Use graphical tool provided by desktop environment or respective toolkits. Answerer's note : I had written this answer to be sufficient, but not fully canonical at the time. Hence this answer is now a community wiki, so that anyone with minimum reputation can improve this post to keep up with latest changes in each toolkit, or even re-explain the font settings discrepancy using a more concise example (if done so, remove relevant text from "Disclaimer").
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277649", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37945/" ] }
277,661
A file is missing with echo but present with ls -a. Why?

bojan@hyperion:~$ touch .ignoramus
bojan@localhost:~$ ls -al | grep ignor
-rw-rw-r-- 1 bojan bojan 0 Apr 19 19:05 .ignoramus
bojan@localhost:~$ GLOBIGNORE=".ignoramus";
bojan@localhost:~$ echo .i*
.icons
bojan@localhost:~$ ls -al | grep ignor
-rw-rw-r-- 1 bojan bojan 0 Apr 19 19:05 .ignoramus
bojan@localhost:~$ echo $GLOBIGNORE
.ignoramus

The ls man page only says that -a shows hidden entries; it doesn't mention GLOBIGNORE:

-a, --all    do not ignore entries starting with .

ls (GNU coreutils) 8.23
GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu)
Setting GLOBIGNORE has no influence on ls, and the ls manual doesn't mention GLOBIGNORE, because ls doesn't care about GLOBIGNORE. It's a feature of bash only, which makes it omit some files in glob patterns. With echo .i*, bash is listing the files, so GLOBIGNORE kicks in. With ls -a, ls is listing the files, so GLOBIGNORE is irrelevant. GNU ls has a similar feature: you can pass a pattern to ignore as a command line option:

ls -a -I .ignoramus
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160979/" ] }
277,664
I want to build a RAID system on top of Ubuntu Trusty. When I do apt-get install mdadm, a screen pops up asking me to configure postfix. Because I will run this installation in an automation script, is there any way I can skip installing postfix, or at least set it to no-configuration from the command line?
The mdadm package recommends an MTA so as to send an email if a disk fails. It's a useful feature, so I recommend that you do ensure that email is working. Postfix is overkill for a system that does nothing but send emails to a relay. I recommend adding nullmailer to your list of packages, and configuring it appropriately. Since you're doing an automated installation, you should use the preseed feature.
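A sketch of the non-interactive route (the debconf template shown is the usual one for postfix, but verify the exact names on your release with debconf-get-selections):

# Preseed the postfix dialog, then install without prompts
echo "postfix postfix/main_mailer_type select No configuration" | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get install -y mdadm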
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165516/" ] }
277,697
I found this command used to find duplicated files, but it was quite long and made me confused. For example, if I remove -printf "%s\n", nothing comes out. Why is that? Besides, why have they used xargs -I{} -n1? Is there any easier way to find duplicated files?

[4a-o07-d1:root/798]# find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
0bee89b07a248e27c83fc3d5951213c1  ./test1.txt
0bee89b07a248e27c83fc3d5951213c1  ./test2.txt
You can make it shorter:

find . ! -empty -type f -exec md5sum {} + | sort | uniq -w32 -dD

Do md5sum of the found files via the -exec action of find, then sort and do uniq to get the files having the same md5sum, separated by newlines.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/277697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138782/" ] }
277,756
Imagine the output of a command like:

44444
55555
11111
22222
33333

How can I yank the first N lines (the first two in the example above) and append them at the end, but without using a temp file (so only using pipes)?

11111
22222
33333
44444
55555

Something along the lines of | sed -n '3,5p;1,2p' (which obviously doesn't work, as sed doesn't care about the order of the commands).
Just copy those lines to the hold buffer (then delete them) and, when on the last line, append the content of the hold buffer to pattern space:

some command | sed '1,NUMBER{ # in this range
H                             # append line to hold space and
1h                            # overwrite if it's the 1st line
d                             # then delete the line
}
$G'                           # on last line append hold buffer content

With gnu sed you could write it as:

some command | sed '1,NUMBER{H;1h;d;};$G'

Here's another way with ol' ed (it reads the output of some command into the text buffer and then moves lines 1,NUMBER after the last one):

ed -s <<IN
r ! some command
1,NUMBERm$
,p
q
IN

Note that - as pointed out - these will both fail if the output has fewer than NUMBER+1 lines. A more solid approach would be (gnu sed syntax):

some command | sed '1,NUMBER{H;1h;$!d;${g;q;};};$G'

This one only deletes lines in that range as long as they're not the last line ($!d) - else it overwrites pattern space with the hold buffer content (g) and then quits (after printing the current pattern space).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148823/" ] }
277,789
If I use PC-BSD with the default shell (Korn) then Ctrl+R doesn't work. Why won't it work? Ctrl-R was introduced to search your history in the late 1970s or early 80s, and my BSD still can't do it (while Ubuntu can). Ctrl-R originates with Emacs, doesn't it? When? 1975? 1983?
Ctrl+R works with ksh in emacs mode (ksh -o emacs, or set -o emacs within ksh), and it was most probably the first shell to support it. Only it's not as interactive as in zsh or bash or tcsh's i-search-back widget. In ksh (both ksh88 and ksh93), you type Ctrl+R text Return, and Ctrl+R Return to search again with the same text. In vi mode, you can use ? to search backward and n for the next search. That emacs incremental-search feature was added to:

bash/readline: at least since July 1989, as the feature was already mentioned on usenet at that time, but probably not from the start, as the version of readline shipped with zsh-1.0 didn't have it.

zsh: since 2.0 in 1991, after the line editor was rewritten and no longer used readline.

tcsh: in V6.00.03, 10/21/91, but not bound by default (tcsh had another search mechanism on Meta-P for a while before that, though).

ksh: ksh was most probably the first Unix shell to have an emacs editing mode, written in 1982 by Mike Veach (as well as the vi mode by Pat Sullivan, reusing code that those two had already applied independently to the Bourne shell) at AT&T. ksh was first introduced outside AT&T at the 1983 USENIX conference where those features were described, but was not commercially available until some time after that (1, 2). It's hard to tell if ^R was already there at the time (in any case, it was already there in 1986 and 1985 (see the usr/man/man1/ksh.1 ksh85 man page in that Unix V8 tarball at the Unix Heritage Society)), but it's hard to imagine it wasn't, as it's an essential feature, especially for a shell, and I'd expect vi mode's ? would also have been there at the time.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
277,793
At my company, we download a local development database snapshot as a db.dump.tar.gz file. The compression makes sense, but the tarball only contains a single file (db.dump). Is there any point to archiving a single file, or is .tar.gz just such a common idiom? Why not just .gz?
Advantages of using .tar.gz instead of .gz are that:

tar stores more meta-data (UNIX permissions etc.) than gzip;

the setup can more easily be expanded to store multiple files;

.tar.gz files are very common; only-gzipped files may puzzle some users (cf. MelBurslan's comment).

The overhead of using tar is also very small. If not really needed, I still do not recommend tarring a single file. There are many useful tools which can access compressed single files directly (such as zcat, zgrep etc. -- also existing for bzip2 and xz).
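For instance, with the dump file from the question, the gzip-only route keeps the single file directly inspectable:

$ gzip db.dump                   # produces db.dump.gz
$ zcat db.dump.gz | head         # peek at the contents without extracting
$ zgrep 'CREATE TABLE' db.dump.gz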
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/277793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149505/" ] }
277,803
I'm setting up a NAS server using an Odroid XU4 and two 2TB HDDs. Which setup would be more secure (lower risk of losing data / easier recovery): set up a RAID1 with mdadm, or have two separate devices and sync them using rsync periodically? I know that with option 2, if one drive crashed I'd lose the data created/modified since the last sync, but when using RAID it would be a bit more "difficult" to get the data from the still-working drive.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/277803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166699/" ] }
277,820
You know, I was just there, doing my things, when suddenly a terrible broadcast message appeared!

fiatjaf@mises ~> sl
fiatjaf@mises ~> ls dotfiles/
urxvt  vim/vimrc
fiatjaf@mises ~> cowsay good morning
 ______________
< good morning >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
fiatjaf@mises ~>
fiatjaf@mises ~>
Broadcast message from root@mises (/dev/pts/3) at 11:12 ...

The system is going down for maintenance NOW!

How can I trigger a message like this from my own programs?
man wall will give you what you need. You execute wall with either a filename, or you pipe content to it. For example, either:

wall file.name

to broadcast the content of the file file.name, or:

echo "Dive\!" | wall

to send the message Dive! Update: As Stephen points out in this answer, later versions of wall can send messages by simply typing:

wall message text here

and in fact, there are additional restrictions on non-root users sending the contents of files by specifying only the file name.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36721/" ] }
277,845
Running example C code is a painful exercise unless it comes with a makefile. I often find myself with a C file containing code that supposedly does something very cool, but for which a first basic attempt at compilation (gcc main.c) fails with:

main.c:(.text+0x1f): undefined reference to `XListInputDevices'
clang-3.7: error: linker command failed with exit code 1 (use -v to see invocation)

or similar. I know this means I'm missing the right linker flags, like -lX11, -lXext or -lpthread. But which ones? The way I currently deal with this is to find the library header that a function was included from, use Github's search to find some other program that imports that same header, open its makefile, find the linker flags, copy them onto my compilation command, and keep deleting flags until I find a minimal set that still compiles. This is inefficient, boring, and makes me feel like there must be a better way.
The question is how to determine what linker flag to use from inspection of the source file. The example below will work for Debian. The header files are the relevant items to note here. So, suppose one has a C source file containing the header #include <X11/extensions/XInput.h>. We can do a search for XInput.h using, say, apt-file. If you know this header file is contained in an installed package, dpkg -S or dlocate will also work. E.g.:

apt-file search XInput.h
libxi-dev: /usr/include/X11/extensions/XInput.h

That tells you that this header file belongs to the development package for libXi (for C libraries, the development packages (normally of the form libname-dev or libname-devel) contain the header files), and therefore you should use the -lXi linker flag. Similar methods should work for any distribution with a package management system.
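Many development packages also ship pkg-config metadata, which can spare you the guesswork. Assuming libxi-dev and libx11-dev are installed, a sketch:

$ pkg-config --libs xi
-lXi
$ gcc main.c $(pkg-config --cflags --libs x11 xi) -o main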
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16404/" ] }
277,883
A mount point /mnt/sub is shadowed by another mount point /mnt. Is it always possible to access the mounted filesystem? Root access is a given. The system is a reasonably recent Linux. Example scenario: accessing the branches of an overlay root. The basic sequence of operations is:

mount device1 /mnt/sub
mount device2 /mnt

After this, /mnt/sub is a file on device2 (if it exists). The question is how to access files on device1. Some devices can be mounted twice, so mount device1 /elsewhere would work. But this doesn't work for all devices, in particular not for FUSE filesystems. This differs from the already covered case where a subdirectory is shadowed by a mount point, but the mount point of the subdirectory is itself visible, and a bind mount can create an unobscured view. In the example above, mount --bind / /elsewhere lets us see the /mnt/sub directory from the root filesystem on /elsewhere/mnt/sub, but this question is about accessing the filesystem on device1.
# unshare --mount   # this opens a sub-shell
# cd /
# umount /mnt
...do what thou wilt...
# exit              # close the sub-shell
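In other words: unshare --mount gives the sub-shell a private copy of the mount table, so unmounting /mnt there exposes the filesystem on /mnt/sub again without disturbing what every other process on the system sees. Exiting the sub-shell discards the private namespace.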
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
277,892
I recently needed a single blank PDF page (8.5" x 11" size) and realized that I didn't know how to make one from the command line. Issuing touch blank.pdf produces an empty PDF file. Is there a command line tool that produces an empty PDF page?
convert, the ImageMagick utility used in Ketan's answer, also allows you to write something like:

convert xc:none -page Letter a.pdf

or:

convert xc:none -page A4 a.pdf

or (for horizontal A4 paper):

convert xc:none -page 842x595 a.pdf

etc., without creating an empty text file. @chbrown noticed that this creates a smaller pdf file. "xc:" means "X Constant Image" but could really be thought of as "x canvas". It's a way to specify a single block of a color, in this case none. More info at http://imagemagick.org/Usage/canvas/#solid which is the "de facto" manual for ImageMagick. [supplemented with information from pipe] (Things like pdf:a can be used to explicitly declare the format of a file. label:'some text', gradient:, rose: and logo: seem to be other examples of special file formats.) Anko suggested posting this modification as a separate answer, so I am doing it.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/277892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92703/" ] }
277,909
I recently updated my Arch Linux server, and during that process tmux got updated. I was using tmux while the upgrade was going on and used it afterwards, but all during the same SSH session. Now, however, whenever I try to issue any tmux command I get this error:

tmux: need UTF-8 locale (LC_CTYPE) but have ANSI_X3.4-1968

Here's the output from locale -a on the server:

$ locale -a
C
POSIX

and on my machine (Ubuntu 15.10):

$ locale -a
C
C.UTF-8
en_AG
en_AG.utf8
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US.utf8
en_ZA.utf8
en_ZM
en_ZM.utf8
en_ZW.utf8
POSIX

What's going on, and how do I fix it?
The exact same thing happened to me. Building on what Thomas said above, I was able to fix it by uncommenting en_US.UTF-8 UTF-8 in my /etc/locale.gen file (previously none of the lines had been uncommented), then running locale-gen.
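A sketch of those steps on Arch (run as root; the sed pattern assumes the stock commented-out line, so double-check /etc/locale.gen first):

# sed -i 's/^#en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
# locale-gen
# echo 'LANG=en_US.UTF-8' > /etc/locale.conf    # optional: set the default system locale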
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/277909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63480/" ] }
277,919
When I am logged into my home Linux machines via SSH, I get the following debug output a few times every minute:

debug2: channel 0: request window-change confirm 0

When I am editing files in nano, the debug message is displayed over top of the text. To remove the debug message, I have to close nano, execute clear, and then re-open the file. I get this whether I am using Secure Shell (Google Chrome extension), PuTTY, or Terminator (from a local machine), though it's far worse on the former two. Is there some way to suppress these messages? Will executing sshd -q do it, or is the debug output level specified during the compilation process?
sshd is the daemon. You'd want to use the -q flag with the client (ssh). When connecting to your home machines, include the -q flag in the ssh command (i.e. ssh -q user@host). Alternatively, if that doesn't work, you could try redirecting stderr to /dev/null by connecting to your home machines like ssh user@host 2> /dev/null.
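To make this permanent, the client-side LogLevel option in ~/.ssh/config achieves the same effect as -q (the host alias and hostname below are placeholders):

Host myhome
    HostName home.example.org
    LogLevel QUIET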
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123270/" ] }
277,921
I need to install a LAMP stack on a RHEL 6.7 server. Do I have to be root to install this or not?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/277921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166770/" ] }
277,944
Where is the environment variable $SHELL first set on a UNIX system? How can I find and print all of these kinds of default settings of my terminal?
Traditionally, by login(1):

ENVIRONMENT
login sets the following environment variables:
HOME    The user's home directory, as specified by the password database.
SHELL   The user's shell, as specified by the password database.

Though these days it might be a window manager or terminal program making those settings, depending on the flavor of unix and how far it has departed from tradition. env will show what's currently set in the environment, which a shell or something else may have altered from the default. However, "terminal settings" are not typically environment variables, and shells like bash or zsh have a set command, and other places they hide settings...
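You can see the password-database values that login copies into $SHELL and $HOME with getent (the output line here is only illustrative):

$ getent passwd "$USER"
user:x:1000:1000:User Name:/home/user:/bin/bash
$ echo "$SHELL"
/bin/bash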
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166707/" ] }
277,981
I know that GNU Parallel buffers stdout/stderr because it doesn't want jobs' output to be mangled, but if I run my jobs with parallel do_something ::: task_1 task_2 task_3, is there any way for task_1's output to be displayed immediately, then after task_1 finishes, task_2's up to its current output, etc.? If Parallel cannot solve this problem, is there any other similar program that could?
From version 20160422 you can do: parallel -k --lb do_something ::: task_1 task_2 task_3
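Here -k keeps the jobs' output in argument order, and --lb (--line-buffer) passes each job's output on line by line as it is produced rather than waiting for the job to finish -- together giving exactly the streaming behaviour described in the question.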
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/277981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166817/" ] }
278,011
Tar encodes my user name into the tarball. Can I force it to make a fully anonymous tarball? --owner root replaces only some instances of my user name. Adding USER=root:

USER=root tar c --owner root data

has no effect. In short, I wish for:

echo hello world > data; tar c --owner root data | grep "$USER"

to not match.
What I was missing was --group=root in addition to --owner=root:

tar -c --{owner,group}=root

(possibly with an optional --numeric-owner) fully anonymizes the archive.
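Repeating the question's own test with both flags (no output means no match):

$ echo hello world > data
$ tar c --owner=root --group=root data | grep "$USER"    # prints nothing now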
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
278,037
I need to access a remote Windows machine which I don't have any privileges on. The machine runs Windows Server 2012 R2, and the sysadmin tells me that the RDP protocol there is the latest one. I tried to connect to the machine with xfreerdp and I get:

xfreerdp -u <username> <ip.address.of.machine>
connected to <ip.address.of.machine>
Password:
SSL_read: Failure in SSL library (protocol error?)
Authentication failure, check credentials.
If credentials are valid, the NTLMSSP implementation may be to blame.

The username and password are valid. I should mention that I was asked to write the username as <uni-users>\<myusername>. I read in several places that I need to disable the NLA security feature, which I don't think the sysadmin will do. Is there a proper way to overcome this?

Edit: I can connect to the remote machine from a Windows machine; here is the .rdp definition file:

screen mode id:i:2
use multimon:i:0
desktopwidth:i:1920
desktopheight:i:1080
session bpp:i:32
winposstr:s:0,1,400,195,1200,795
compression:i:1
keyboardhook:i:2
audiocapturemode:i:0
videoplaybackmode:i:1
connection type:i:7
networkautodetect:i:1
bandwidthautodetect:i:1
displayconnectionbar:i:1
enableworkspacereconnect:i:0
disable wallpaper:i:0
allow font smoothing:i:0
allow desktop composition:i:0
disable full window drag:i:1
disable menu anims:i:1
disable themes:i:0
disable cursor setting:i:0
bitmapcachepersistenable:i:1
full address:s:<ip.address.of.machine>
audiomode:i:0
redirectprinters:i:1
redirectcomports:i:0
redirectsmartcards:i:1
redirectclipboard:i:1
redirectposdevices:i:0
autoreconnection enabled:i:1
authentication level:i:2
prompt for credentials:i:0
negotiate security layer:i:1
remoteapplicationmode:i:0
alternate shell:s:
shell working directory:s:
gatewayhostname:s:
gatewayusagemethod:i:4
gatewaycredentialssource:i:4
gatewayprofileusagemethod:i:0
promptcredentialonce:i:0
gatewaybrokeringtype:i:0
use redirection server name:i:0
rdgiskdcproxy:i:0
kdcproxyname:s:
drivestoredirect:s:
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7140/" ] }
278,105
I would like to know what the default stdin and stdout of a child process are (if there are such defaults). Are the stdin and stdout of a child process the same as those of its parent process? Is this something inherited from the parent or something set by the parent? Or are the stdin and stdout of the child process plugged into the stdin and stdout of the parent process? In my understanding, parent and child are not piped but are at first clones, so I think the stdin and stdout of the child are exactly the same as the parent's, but I am not sure. For example, in a terminal running bash as a login shell, if I type sh it will create a child shell process that will have the keyboard as stdin and the terminal screen as stdout, so the same as its parent. I want to understand how the child's stdin and stdout were defined and what they are. I want to know, for example in this case, whether the child's stdin and stdout are like the parent's because the parent's stdin is piped into the child's stdin, with the parent just "redirecting" its input to the child, or whether the child gets its input directly from the keyboard. In this same case, if parent and child have the same stdin, does that mean that the parent processes the same commands that are typed to the child? How come we only see the stdin/stdout of the child in the terminal and not its parent's?
Look at the man page for fork(2). A child process inherits the parent's file descriptors, including standard input and standard output. For a single command, the shell simply lets the child process inherit those descriptors and write its output to the terminal. For a pipeline, it forks each process, sets up a pipe between the output of one and the input of the next, and then executes (exec(2)) each child's executable.
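On Linux you can watch the inheritance directly via /proc (the pts number below is illustrative): both the interactive shell and a child it spawns point at the same terminal device.

$ ls -l /proc/$$/fd/0                # the shell's stdin
lrwx------ ... 0 -> /dev/pts/2
$ bash -c 'ls -l /proc/$$/fd/0'      # a child shell's stdin: the same terminal
lrwx------ ... 0 -> /dev/pts/2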
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166707/" ] }
278,161
I love the default fancy output of ccze, but I can't seem to get it to scroll properly. Executing tail -f something.log | ccze from an X terminal works, but I can't scroll back once the screen has been filled (Shift+PgUp doesn't do anything). How can I get it to work as expected?
ccze uses the curses output mode by default. (n)curses is a screen-drawing library typically used by fullscreen applications. It switches to the terminal emulator's so-called "alternate screen", which does not have a scrollback buffer, and the contents of the other, "normal screen" are restored upon exit. Instead of this, you should use its ansi output format, which is enabled by any of the -A, --raw-ansi, -m ansi or --mode=ansi command line options.
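So, for the command in the question:

tail -f something.log | ccze -A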
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78415/" ] }
278,177
I have a dual-boot system with CentOS 7 and Win10. My install was totally vanilla (CentOS then Win10) and went fine. Everything is great except that grub does not appear to save my "last" choice from the boot load menu. I dug through all the grub configuration files (e.g. /boot/efi/EFI/centos/grub.cfg) and all the code seems there for recording the last choice. My /etc/default/grub shows:

GRUB_TIMEOUT=5
...
GRUB_DEFAULT=saved
...
GRUB_SAVEDEFAULT=true

Is there anything obvious I am missing or need to do to enable this? My /etc/efi/EFI/centos/grubenv never appears to record the latest selection. It always has:

saved_entry=CentOS Linux (3.10.0-327.el7.x86_64) 7 (Core)
##########[...snip...padding to 1k]

I can't see this file from a Windows boot, but I did test via the "rescue CentOS entry". I manually set the value in grubenv to Windows Boot Manager (on /dev/sda2) (the Windows entry) and this works out okay. However, booting back into CentOS fails to change it. It just seems I am missing something to enable this "save the last choice" behavior. Any ideas?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278177", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141092/" ] }
278,181
How can I create a bootable USB stick from an ISO image? I thought dd should do the work, but so far I have been unsuccessful. This is what I've tried:

umount /dev/sdx
deleted every partition on sdx with Gparted
dd if=/path/to/iso/some_file.iso of=/dev/sdx bs=1024K

The file is a bootable BIOS update utility, but since my laptop does not have a CD/DVD drive, I want to deploy this image on a USB stick. However, when I have a look at sdx in Gparted, it tells me that its size is 0 and no partitions have been created, although dd claims it has written 26MB to /dev/sdx. I also tried to create a FAT32 partition (full size) with Gparted and then let dd copy onto this partition: dd if=/path/to/iso/some_file.iso of=/dev/sdx1. That did not work either. The USB stick is OK; I can write and exchange data between my laptop and computer with it. (Actually, it is the same USB stick that I used to install Manjaro on my laptop before.) What am I doing wrong?
Using gparted, remove the existing partitions from your USB stick and fix the msdos partition table (by going to the Device menu and selecting "Create Partition Table"). Then create a new FAT32 partition by right-clicking on the unallocated space and selecting New, making a primary FAT32 partition. Next, create your bootable USB:

dd if=/path_to_iso_without_space.iso of=/dev/sdx
sync

You can add the bs=4M option to make it faster:

dd bs=4M if=/path_to_iso.iso of=/dev/sdx

Example: if your device is sdb1, you should type sdb:

dd if=/path_to_iso_without_space.iso of=/dev/sdb
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95677/" ] }
278,184
I'm trying to copy a file from my home directory to another user's directory. I have sudo access for the other user with no password. I tried doing something like:

sudo secondUser cp /home/firstUser/file /home/secondUser

Is there a way to do that? Also, I have no root access and I don't know the password of secondUser.
EDIT: There is a way to do this quickly and easily with no additional setup:

cat ~firstUser/file | sudo -u secondUser tee ~secondUser/file >/dev/null

This will transfer all the file's content exactly. If you care about having the file permissions and/or timestamps match the original, you will need to fix those separately using chmod and touch.

ORIGINAL ANSWER: The problem here is that:

secondUser doesn't have access to your home directory.
You don't have access to secondUser's home directory.

As such, regardless of whether you run the cp command as yourself or as secondUser, it can't perform the file copy. Given that you said you don't have root access, the obvious answer is to copy the file through an intermediate world-readable location such as /tmp and change the permissions of the file to world-readable. If the data in the file is sensitive you may not want to do this, however, because anyone on the server will be able to read the file while it is in transit. If it's not sensitive data, just do:

cp file /tmp/
chmod a+r /tmp/file
sudo -u secondUser cp /tmp/file ~secondUser
rm /tmp/file

A better option, if you can arrange it, is to make a group that contains only you and secondUser, and chgrp the copy of the file in /tmp to be owned by that group. This way you don't need to make the file world-readable, but only readable by the group (with chmod g+r /tmp/file). However, groupadd itself requires root access, so this is unlikely to be easy to arrange. Depending on the circumstances (if you are frequently trying to share/collaborate with secondUser it might be applicable), you could consider asking your administrator to set up this group for future use.
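If you do want the mode and timestamp carried over after the tee copy, one hedged way to fix them up (the mode 644 is just an example; this assumes GNU date and touch) is:

sudo -u secondUser chmod 644 ~secondUser/file                                # match the original's mode
sudo -u secondUser touch -d "$(date -r ~firstUser/file)" ~secondUser/file    # copy the mtime over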
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166968/" ] }
278,194
gcc version 5.3.0 20151204 (Ubuntu 5.3.0-3ubuntu1~14.04) I have a problem with g++. When I search for g++ I find nothing! So I tried to install it; it seems like g++ is already installed and it's the newest one!

    arubu@CQ56-LinuxMachine:~$ which g++
    arubu@CQ56-LinuxMachine:~$ sudo apt-get install g++
    [sudo] password for arubu:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    g++ is already the newest version.
    g++ set to manually installed.
    The following packages were automatically installed and are no longer required:
      libgranite-common libgranite1 libkeybinder-3.0-0
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.
    arubu@CQ56-LinuxMachine:~$ g++ -v
    The program 'g++' is currently not installed. You can install it by typing:
    sudo apt-get install g++
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148747/" ] }
278,206
Can I display GIT in prompt when current directory has/contains a .git folder? Is there a way to do this? My current prompt is defined like so: export PS1="[\u@\h] \w $ " So, my prompt looks like this: [user@computer] ~/workspace $ And I want it to dynamically look like this: [user@computer] ~/workspace GIT $
The most standard way is to use __git_ps1 directly from git. In Ubuntu, it is available in this path:

    source /usr/lib/git-core/git-sh-prompt
    #
    # source /etc/bash_completion.d/git-prompt
    #
    #PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w $(__git_ps1 "(%s)")\$ '

You can notice the added part $(__git_ps1 "(%s)"), which will notify you about the current state of the repo -- current branch, ongoing rebases, merges and so on. The file in Ubuntu is provided by the git package:

    $ dpkg-query -S /usr/lib/git-core/git-sh-prompt
    git: /usr/lib/git-core/git-sh-prompt

For Fedora by git-core (with a bit different path):

    $ rpm -qf /usr/share/git-core/contrib/completion/git-prompt.sh
    git-core-2.5.5-1.fc23.x86_64

Your prompt will change from

    [user@computer] ~/workspace $

to

    [user@computer] ~/workspace (master)$
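As a hedged aside, git-prompt.sh also honours a few optional environment variables documented in its header comments; for example GIT_PS1_SHOWDIRTYSTATE adds * and + markers for unstaged and staged changes:

    export GIT_PS1_SHOWDIRTYSTATE=1
    # the prompt then shows e.g.:  [user@computer] ~/workspace (master *)$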
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7586/" ] }
278,215
I want to test different variants of TCP in Linux Ubuntu. I have Ubuntu 14.04 LTS with Kernel version 3.14. When I check the available congestion control algorithm using the following command sysctl net.ipv4.tcp_available_congestion_control I get only: cubic and reno. However, I want to test other variants like Hybla, HighSpeed. If I run the menuconfig I can select the variants which I want and compile the Kernel. But in my case, I already have the kernel compiled so is it possible to have some Linux package which contains TCP variants as loadable kernel modules?
Have a look here to see what modules you have installed:

    ls -la /lib/modules/$(uname -r)/kernel/net/ipv4

You should get a list of modules; I got this:

    tcp_bic.ko
    tcp_diag.ko
    tcp_highspeed.ko
    tcp_htcp.ko
    tcp_hybla.ko
    tcp_illinois.ko
    tcp_lp.ko
    tcp_scalable.ko
    tcp_vegas.ko
    tcp_veno.ko
    tcp_westwood.ko

You can see what your kernel has configured by grepping your config file for TCP_CONG, i.e.

    grep TCP_CONG /boot/config-$(uname -r)
    CONFIG_TCP_CONG_ADVANCED=y
    CONFIG_TCP_CONG_BIC=m
    CONFIG_TCP_CONG_CUBIC=y
    CONFIG_TCP_CONG_WESTWOOD=m
    CONFIG_TCP_CONG_HTCP=m
    CONFIG_TCP_CONG_HSTCP=m
    CONFIG_TCP_CONG_HYBLA=m
    CONFIG_TCP_CONG_VEGAS=m
    CONFIG_TCP_CONG_SCALABLE=m
    CONFIG_TCP_CONG_LP=m
    CONFIG_TCP_CONG_VENO=m
    CONFIG_TCP_CONG_YEAH=m
    CONFIG_TCP_CONG_ILLINOIS=m
    CONFIG_DEFAULT_TCP_CONG="cubic"

To try one of these you need to install it using modprobe -a tcp_westwood or whatever you want. You can then test it using this:

    echo "westwood" > /proc/sys/net/ipv4/tcp_congestion_control
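One hedged note on persistence: the echo into /proc lasts only until reboot, whereas sysctl can make the choice stick (assuming your distribution applies /etc/sysctl.conf at boot):

    # set it for the running kernel
    sysctl -w net.ipv4.tcp_congestion_control=westwood
    # keep it across reboots
    echo 'net.ipv4.tcp_congestion_control = westwood' >> /etc/sysctl.conf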
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/248653/" ] }
278,284
In my SSH config file, I have to enable RequestTTY force for some important reasons. Currently my config file looks like this:

    Host x.x.x.x
        HostName yyyy
        StrictHostKeyChecking no
        RequestTTY force
        IdentityFile ~/path/id_rsa

But now when I execute an scp command, it completes but creates an empty file on the destination path, and below is the log that it generates:

    debug2: channel 0: read<=0 rfd 4 len 0
    debug2: channel 0: read failed
    debug2: channel 0: close_read
    debug2: channel 0: input open -> drain
    debug2: channel 0: ibuf empty
    debug2: channel 0: send eof
    debug2: channel 0: input drain -> closed
    debug2: channel 0: write failed
    debug2: channel 0: close_write
    debug2: channel 0: send eow
    debug2: channel 0: output open -> closed

But if I comment out the RequestTTY force option in the config file, it executes properly and also copies the file properly. Why does this behaviour occur? Can anyone provide a workaround so that I won't have to disable the RequestTTY force option and the file will still be copied properly?
There are many possible solutions for that:

You can configure sudo not to require a tty: the requiretty flag in /etc/sudoers (e.g. Defaults !requiretty).

You can force tty allocation on the command line in the specific cases where you need it:

    ssh -tt host command

You can tell scp not to allocate a TTY with the -T or -o RequestTTY=no command-line option:

    scp -T file host:path/

or

    scp -o RequestTTY=no file host:path/

The reasons why it happens are already explained: you spoil the binary protocol with TTY control characters and vice versa.
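If you need RequestTTY force for interactive logins but not for file copies, a hedged sketch of another option is to keep two Host aliases in ~/.ssh/config for the same machine (the alias names here are made up):

    # interactive use:  ssh myhost-tty
    Host myhost-tty
        HostName x.x.x.x
        RequestTTY force
        IdentityFile ~/path/id_rsa

    # file copies:  scp file myhost:path/
    Host myhost
        HostName x.x.x.x
        RequestTTY no
        IdentityFile ~/path/id_rsa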
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164706/" ] }
278,292
I have clang installed from packages on both Ubuntu 14.07, Centos 7 and Fedoara 22. I would like to use clang-tidy but can neither find a package nor how to install it without installing clang from source. That's something I would rather not do. What am I missing? I might be dense, if so please mock me.
You can use your package manager to find out which package provides clang-tidy. For example on Fedora/CentOS:

    dnf whatprovides '*/clang*tidy*'

On Debian/Ubuntu you can use an analogous apt-file search command. However, on Fedora 23 clang-tidy just isn't packaged. No match is found. There is even an open bug report: Missing clang-query and clang-tidy. For Ubuntu/Debian, the LLVM project maintains an llvm apt repository. This should be the easiest way to get the latest clang-tidy. After configuring that repository and doing an apt-file update, an apt-file search should return the package that provides clang-tidy. An alternative to building from source is to use the upstream llvm pre-built binaries - they are available for Fedora, CentOS etc. For example the one for Fedora 23 does contain clang-tidy:

    clang+llvm-3.8.0-x86_64-fedora23/bin/clang-tidy
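For instance, once the LLVM repository is configured on Debian/Ubuntu, the lookup sketched above would be:

    apt-file update
    apt-file search clang-tidy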
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10526/" ] }
278,300
I'm trying to write a script that reduces each number in line by "1", but I'm getting all "0"s instead: awk '{a=gensub(/([0-9]+)/,"\\1","g",$0); if(a~/[0-9]+/) {gsub(/[0-9]+/,a-1,$0);} print $0}' For example, the string: 1,2,3,4-7 should result in: 0,1,2,3-6 instead I'm getting: 0,0,0,0-0
awk substitution capabilities are quite limited. gawk has gensub() that can at least include parts of the matched portion in the replacement, but no operation can be done on those. It's possible with awk , but you need to take a different approach: awk '{ text = $0 $0 = "" while (match(text, /[0-9]+/)) { $0 = $0 substr(text, 1, RSTART-1) \ (substr(text, RSTART, RLENGTH) - 1) text = substr(text, RSTART+RLENGTH) } $0 = $0 text print}' Or with GNU awk as a variation on @jofel's approach: gawk -v 'RS=[0-9]+' '{printf "%s", $0 (RT==""?"":RT-1)}' or gawk -v 'RS=[^0-9]+' '{printf "%s",($0==""?"":$0 - 1)RT}' However, here it's a lot easier with perl : perl -pe 's/\d+/$&-1/ge' perl can use capture groups (as $1 , $2 ... and $& for the whole matched portion) and with the e flag can run arbitrary perl expressions with those.
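Applied to the sample string from the question, as a quick sanity check (not part of the original answer):

    $ echo '1,2,3,4-7' | perl -pe 's/\d+/$&-1/ge'
    0,1,2,3-6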
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167033/" ] }
278,351
I have a variable for color. I use it to set the color of strings, by evaluating it inside the string. However, I need to include a space after the name (so that the name doesn't contain part of the text). This sometimes looks bad. How can I avoid using (printing) this space? Example (let's say that Red=1 and NC=2):

    echo -e "$Red Note: blabla$NC"

Output: 1 Note: blabla2. Expected output: 1Note: blabla2.
Just enclose variable in braces: echo -e "${Red}Note: blabla${NC}". See more detail about Parameter Expansion . See also great answer Why printf is better than echo? if you care about portability.
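A tiny illustration of why the braces matter, with hypothetical values just for show:

    Red=1; NC=2
    echo -e "${Red}Note: blabla${NC}"   # prints: 1Note: blabla2
    echo -e "$RedNote: blabla$NC"       # expands the unset $RedNote, prints: : blabla2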
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129998/" ] }
278,357
How can I set variables to be used in scripts as well? They don't have to be global / system-wide; just the current session is sufficient. But for some reason they seem gone even if I run a script right from my current terminal. Example:

    foo=bar
    echo "$foo"

output: bar. However if test.sh contains echo "$foo" and I do:

    foo=bar
    ./test.sh

The output is empty. How do I set a scope for temporary variables within a terminal session that remains valid when executing scripts from that same session?
export foo=bar marks foo as exported, so it is passed in the environment of every command started from that shell (including your script). You can also use foo=bar ./test.sh as a single command to set foo to bar for that one invocation of test.sh only.
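A quick demonstration of both forms, assuming test.sh contains nothing but echo "$foo":

    $ foo=bar
    $ ./test.sh          # prints an empty line: foo is not exported
    $ export foo
    $ ./test.sh          # prints: bar
    $ unset foo
    $ foo=bar ./test.sh  # prints: bar, but only for this one invocation
    $ ./test.sh          # empty again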
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110067/" ] }
278,385
I have unfortunately formatted drive /dev/sda2, so the /root, /home and swap LVM volumes no longer exist. Because of this my server is unable to work properly. It shows only:

    dracut#> Dracut Error:
    [ OK ] Reached target Paths.
    [ OK ] Reached target Basic System.
    dracut-initqueue[372]: Warning: Could not boot.
    [ OK ] Started Show Plymouth Boot Screen.
    [ OK ] Reached target Paths.
    [ OK ] Reached target Basic System.
    dracut-initqueue[372]: Warning: Could not boot.
    dracut-initqueue[372]: Warning: /dev/centos/root does not exist.
    dracut-initqueue[372]: Warning: /dev/centos/swap does not exist.
    dracut-initqueue[372]: Warning: /dev/mapper/centos-root does not exist.
    Starting Dracut Emergency Shell...
    Warning: /dev/centos/root does not exist
    Warning: /dev/centos/swap does not exist
    Warning: /dev/mapper/centos-root does not exist
    Generating "/run/initramfs/rdsosreport.txt"
    Entering emergency mode. Exit the shell to continue.
    Type "journalctl" to view system logs.
    You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
    after mounting them and attach it to a bug report.
In the dracut emergency shell:

Dracut offers a shell for interactive debugging in the event dracut fails to locate your root filesystem. To enable the shell, add the boot parameter rd.shell to your bootloader configuration file (e.g. /etc/grub.conf) and remove the boot arguments rhgb and quiet.

- rhgb = Red Hat graphical boot - a GUI-mode boot screen with most of the information hidden, while the user sees a rotating activity icon spinning and brief information as to what the computer is doing.
- quiet = hides the majority of boot messages before rhgb starts. These are supposed to make the common user more comfortable; people get alarmed about seeing the kernel and initializing messages, so they are hidden for their comfort.
- rd.shell = presents a shell should dracut be unable to locate your root device.

A sample /etc/grub.conf bootloader configuration file is listed below.

    default=0
    timeout=5
    serial --unit=0 --speed=9600
    terminal --timeout=5 serial console
    title Fedora (2.6.29.5-191.fc11.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.29.5-191.fc11.x86_64 ro root=/dev/mapper/vg_uc1-lv_root console=tty0 rd.shell
        initrd /dracut-2.6.29.5-191.fc11.x86_64.img

If system boot fails, you will be dropped into a shell as seen in the example below.

    No root device found
    Dropping to debug shell.
    sh: can't access tty; job control turned off

Use this shell prompt to gather the information requested above (see the section called "All bug reports").

Accessing the root volume from the dracut shell: from the dracut debug shell, you can manually perform the task of locating and preparing your root volume for boot. The required steps will depend on how your root volume is configured. Common scenarios include:

- A block device (e.g. /dev/sda7)
- A LVM logical volume (e.g. /dev/VolGroup00/LogVol00)
- An encrypted device (e.g. /dev/mapper/luks-4d5972ea-901c-4584-bd75-1da802417d83)
- A network attached device (e.g. netroot=iscsi:@192.168.0.4::3260::iqn.2009-02.org.fedoraproject:for.all)

The exact method for locating and preparing will vary. However, to continue with a successful boot, the objective is to locate your root volume and create a symlink /dev/root which points to the file system. For example, the following demonstrates accessing and booting a root volume that is an encrypted LVM logical volume.

Inspect your partitions using parted. You recall that your root volume was a LVM logical volume, so scan and activate any logical volumes:

    lvm vgscan
    lvm vgchange -ay

You should now see the logical volumes using the command blkid:

    blkid
    /dev/sda1: UUID="3de247f3-5de4-4a44-afc5-1fe179750cf7" TYPE="ext4"
    /dev/sda2: UUID="Ek4dQw-cOtq-5MJu-OGRF-xz5k-O2l8-wdDj0I" TYPE="LVM2_member"
    /dev/mapper/linux-root: UUID="def0269e-424b-4752-acf3-1077bf96ad2c" TYPE="crypto_LUKS"
    /dev/mapper/linux-home: UUID="c69127c1-f153-4ea2-b58e-4cbfa9257c5e" TYPE="ext3"
    /dev/mapper/linux-swap: UUID="47b4d329-975c-4c08-b218-f9c9bf3635f1" TYPE="swap"

With the root volume available, you may continue booting the system by exiting the dracut shell:

    exit
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167113/" ] }
278,400
I have a PID of certain process listening some port(s) on my OS X and I need to know which port(s) is listened by this process. How can I do it? I know I can use lsof to know which process is listening some port, but I need to perform an inverse operation. Thank you. UPD OS X uses BSD utils, so I have BSD netstat not Linux netstat . Linux netstat has -p option to show PIDs, BSD netstat uses -p to specify port and has no option to show PID.
I've found a solution on my own by deep reading man lsof. (Yes, RT*M still helps.) Thanks @Gilles for aiming. Here is the solution:

    lsof -aPi -p 555

(555 is the PID). Explanation:

- -p to specify the PID number;
- -i to display only network devices;
- -a to AND the two conditions above (otherwise they will be ORed);
- -P to display port numbers (instead of port names by default).

Additionally, one can use lsof -aPi4 -p 555 or lsof -aPi6 -p 555 for IPv4-only or IPv6-only addresses accordingly. If the output will be parsed by another program, the -Fn option may be helpful. With this option lsof will produce "output for other programs" instead of nicely formatted output. lsof -aPi4 -Fn -p 555 will output something like this:

    p555
    nlocalhost:4321

PS All of it I've tested on my OS X El Capitan, but as I can see it should work on Linux too.
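As a hedged follow-up, the -Fn output is easy to post-process; a small sketch that pulls just the port numbers of listening sockets out of it:

    lsof -aPi -Fn -p 555 | sed -n 's/.*:\([0-9][0-9]*\)$/\1/p'

With -P in effect each n-line ends in :port, so the sed keeps only that trailing number.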
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/278400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117458/" ] }
278,404
I want to find adjacent matching lines, e.g., if the pattern matches are

    $ grep -n pattern file1 file2 file3
    file1:10: ...
    file2:100: ...
    file2:1000: ...
    file2:1001: ...
    file3:1: ...
    file3:123: ...

I want to find the middle two matches:

    file2:1000: ...
    file2:1001: ...

but not the first two and the last two.
I'll use the same test file as thrig: $ cat fileapat 1pat 2bpat 3 Here is an awk solution: $ awk '/pat/ && last {print last; print} {last=""} /pat/{last=$0}' filepat 1pat 2 How it works awk implicitly loops over every line in the file. This program uses one variable, last , which contains the last line if it matched regex pat . Otherwise, it contains the empty string. /pat/ && last {print last; print} If pat matches this line and the previous line, last , was also a match, then print both lines. {last=""} Replace last with an empty string /pat/ {last=$0} If this line matches pat , then set last to this line. This way it will be available when we process the next line. Alternative for treating >2 consecutive matches as one group Let's consider this extended test file: $ cat file2apat 1pat 2bpat 3cpat 4pat 5pat 6d Unlike the solution above, this code treats the three consecutive matching lines as one group to be printed: $ awk '/pat/{f++; if (f==2) print last; if (f>=2) print; last=$0; next} {f=0}' file2pat 1pat 2pat 4pat 5pat 6 This code uses two variables. As before, last is the previous line. In addition, f counts the number of consecutive matches. So, we print matching lines when f is 2 or larger. Adding grep-like features To emulate the grep output shown in the question, this version prints the filename and line number before each matching line: $ awk 'FNR==1{f=0} /pat/{f++; if (f==2) printf "%s:%s:%s\n",FILENAME,FNR-1,last; if (f>=2) printf "%s:%s:%s\n",FILENAME,FNR,$0; last=$0; next} {f=0}' file file2file:2:pat 1file:3:pat 2file2:2:pat 1file2:3:pat 2file2:7:pat 4file2:8:pat 5file2:9:pat 6 Awk's FILENAME variables provides the file's name and awk's FNR provides the line number within the file. At the beginning of each file, FNR==1 , we reset f to zero. This prevents the last line of one file from being considered consecutive with the first line of the next file. For those who like their code spread over multiple lines, the above looks like: awk ' FNR==1{f=0} /pat/ {f++ if (f==2) printf "%s:%s:%s\n",FILENAME,FNR-1,last if (f>=2) printf "%s:%s:%s\n",FILENAME,FNR,$0 last=$0 next } {f=0} ' file file2
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31443/" ] }
278,408
So, I have a script that uses -S mount nfs -o proto=tcp,port=2049 … etc. to mount a location from another Linux computer. What does -S mean? It seems to work just fine with or without it (it doesn't work if I do such as gksu -- -S mount … etc. to launch it without a terminal emulator). I'm curious if I actually need -S for some reason, or if I can drop it to make gksu -- work, without consequences. Here's the script I wrote, for reference, with the IP address and paths changed to protect the paranoid:

    #!/bin/bash
    if mountpoint -q /home/myLaptop/myDesktop
    then
        notify-send -t 3000 "Warning" "It is already mounted."
    else
        gksu -- -S mount -t nfs -o proto=tcp,port=2049 192.168.0.x:/home/myLaptop /home/myLaptop/myDesktop
        if mountpoint -q /home/myLaptop/myDesktop
        then
            notify-send -t 3000 "Alert" "Mounted."
        else
            notify-send -t 3000 "Alert" "Mount failed."
        fi
    fi
-- means “end of options”: subsequent arguments are not considered to be options, even if they start with a dash. This is a quasi-universal convention. So gksu -- -S mount … means to run the command -S.

    $ gksu -- -S whoami
    sh: 0: Illegal option -S

It seems that you meant to pass the -S option to gksu, to tell it to use sudo rather than su. It needs to come before --.

    $ gksu -S -- whoami
    root

You do need -- because otherwise gksu would think that the -o option is intended for itself rather than for mount.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82698/" ] }
278,427
Many answers and comments on this site mention that one should generally avoid using eval in shell scripts. Often these comments are addressed to beginners. I've seen mention of security issues and robustness, but very few examples. I already understand what the eval command is , and how to use it . However, the answers given there only mention briefly the dangers of using eval . Remarkably, every answer notes that eval can be a major security hole, but most only mention this fact as a concluding sentence with no elaboration. This seems worthy of further consideration and is really its own question. Why is using eval a bad idea? When would it be appropriate to use? Are there any guidelines by which it can be used with complete and total safety, or is it always and inevitably a security hole? EDIT: This article is the sort of canonical reference (answer) I was actually looking for when I posted this question. Anyone abusing eval can be pointed to this reference.
The question will attract opinions... eval is a part of the standard shell: The eval utility shall construct a command by concatenating arguments together, separating each with a <space> character. The constructed command shall be read and executed by the shell. The likely reasons for not liking eval : scripters may not get quoting right, causing unintended behavior (think of the possibilities with unmatched quotes coming from a variable in the evaluation). the result of the evaluation may not be obvious (partly from quoting problems, but partly because eval allows you to set variables based on other variables' names). On the other hand, if the data going into eval is checked, e.g., the results from testing filenames and ensuring there are no quotes to complicate things, then it is a useful tool. The alternatives usually suggested are less portable, e.g., specific to some particular shell implementation. Further reading (but bear in mind that bash offers implementation-specific/nonstandard alternatives): Why should eval be avoided in Bash, and what should I use instead? What is the “eval” command in bash? Unix/Linux Bash: Critical security hole uncovered Writing Better Shell Scripts - Part 3 (see example with eval) Shell Script Security (see example with eval)
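To make the quoting danger concrete, a hedged toy example (the hostile input is made up, and deliberately non-destructive):

    userinput='x; echo INJECTED >&2'   # pretend this came from an untrusted source
    eval "filename=$userinput"         # runs TWO commands: the assignment, then the echo

The eval line prints INJECTED: whatever follows the ; in the data is executed as shell code, which is exactly the class of hole the warnings refer to.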
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
278,443
Just hit this problem, and learned a lot from the chosen answer: Create random data with dd and get "partial read warning". Is the data after the warning now really random? Unfortunately the suggested solution head -c is not portable. For folks who insist that dd is the answer, please carefully read the linked answer which explains in great detail why dd can not be the answer. Also, please observe this:

    $ dd bs=1000000 count=10 if=/dev/random of=random
    dd: warning: partial read (89 bytes); suggest iflag=fullblock
    0+10 records in
    0+10 records out
    143 bytes (143 B) copied, 99.3918 s, 0.0 kB/s
    $ ls -l random ; du -kP random
    -rw-rw-r-- 1 me me 143 Apr 22 19:19 random
    4 random
    $ pwd
    /tmp
Unfortunately, to manipulate the content of a binary file, dd is pretty much the only tool in POSIX. Although most modern implementations of text processing tools ( cat , sed , awk , …) can manipulate binary files, this is not required by POSIX: some older implementations do choke on null bytes, input not terminated by a newline, or invalid byte sequences in the ambient character encoding. It is possible, but difficult, to use dd safely. The reason I spend a lot of energy steering people away from it is that there's a lot of advice out there that promotes dd in situations where it is neither useful nor safe. The problem with dd is its notion of blocks: it assumes that a call to read returns one block; if read returns less data, you get a partial block, which throws things like skip and count off. Here's an example that illustrates the problem, where dd is reading from a pipe that delivers data relatively slowly: yes hello | while read line; do echo $line; done | dd ibs=4 count=1000 | wc -c On a bog-standard Linux (Debian jessie, Linux kernel 3.16, dd from GNU coreutils 8.23), I get a highly variable number of bytes, ranging from about 3000 to almost 4000. Change the input block size to a divisor of 6, and the output is consistently 4000 bytes as one would naively expect — the input to dd arrives in bursts of 6 bytes, and as long as a block doesn't span multiple bursts, dd gets to read a complete block. This suggests a solution: use an input block size of 1 . No matter how the input is produced, there's no way for dd to read a partial block if the input block size is 1. (This is not completely obvious: dd could read a block of size 0 if it's interrupted by a signal — but if it's interrupted by a signal, the read system call returns -1. A read returning 0 is only possible if the file is opened in non-blocking mode, and in that case a read had better not be considered to have been performed at all. In blocking mode, read only returns 0 at the end of the file.) dd ibs=1 count="$number_of_bytes" The problem with this approach is that it can be slow (but not shockingly slow: only about 4 times slower than head -c in my quick benchmark). POSIX defines other tools that read binary data and convert it to a text format: uuencode (outputs in historical uuencode format or in Base64), od (outputs an octal or hexadecimal dump). Neither is well-suited to the task at hand. uuencode can be undone by uudecode , but counting bytes in the output is awkward because the number of bytes per line of output is not standardized. It's possible to get well-defined output from od , but unfortunately there's no POSIX tool to go the other way round (it can be done but only through slow loops in sh or awk, which defeats the purpose here).
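A hedged way to convince yourself, reusing the slow-pipe experiment from above:

    yes hello | while read line; do echo $line; done |
      dd ibs=1 count=4000 2>/dev/null | wc -c

With ibs=1 this consistently prints 4000, no matter how raggedly the pipe delivers its data.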
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94174/" ] }
278,448
From the Bash Manual:

    command [-pVv] command [arguments ...]
        Runs command with arguments ignoring any shell function named command. Only shell builtin
        commands or commands found by searching the PATH are executed. If there is a shell function
        named ls, running 'command ls' within the function will execute the external command ls
        instead of calling the function recursively. The -p option means to use a default value for
        PATH that is guaranteed to find all of the standard utilities. The return status in this
        case is 127 if command cannot be found or an error occurred, and the exit status of command
        otherwise.

Does the manual explain why it fails when command is an assignment or begins with an assignment (for environment variables)?

    $ command aaa=1
    aaa=1: command not found
    $ command aaa=1 echo hello
    aaa=1: command not found
You are confusing what POSIX calls a "simple command", which is a non-empty sequence of optional assignments, optional redirections and optional words (including a command name and its optional arguments), with "command" as it is used in the Bash manual's command synopsis, which here is just a command name. Should you really want to use assignments here, you can simply run:

    aaa=1 command echo hello

If there is no command at all but just an assignment, there is not much point using the command command, given that there is no builtin or command to search in the PATH in the first place. Should you really want to set a variable with command, you might use

    command typeset aaa=1

or

    command declare aaa=1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
278,474
I've heard many times now that it is, and I'm mostly using ss now. But I sometimes get frustrated with differences between the two, and I would love some insight. Also, I can't be the only one who thinks of Hitler when using ss . Not a great name.
Found an article on deprecation from 2011 . It seems like the whole net-tools package was not maintained for a while and so it was deprecated. In Debian 9 it is not even installed by default. From the project page it seems like there were no updates at least since 2011. But you can easily install netstat (and e.g. ifconfig ) and keep using them. I would probably only use them for listing stuff though. Installing on Debian 9: apt-get install net-tools PS: For more information you might want to see another Q&A about ifconfig deprecation ( ifconfig is part of the same package): https://serverfault.com/questions/458628/should-i-quit-using-ifconfig
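If you do switch, a few hedged one-to-one replacements (flag spellings per the iproute2 tools):

    netstat -tulpn   →   ss -tulpn
    netstat -rn      →   ip route
    ifconfig -a      →   ip addr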
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107073/" ] }
278,495
I am trying to apply one awk script to all the files in one folder. I am using:

    for %%i in (*.txt) do (awk -f NormalizeFiles.awk "%%i" > new\"%%i")

but then I get the message:

    Syntactic error near the unexpected element '('

I don't know what the problem is.
It appears you are trying to run input files through awk and save the results under a different file name. Unless you need very special file naming, this should serve:

    for i in *.txt; do
        awk -f NormalizeFiles.awk "$i" > "$i.out"
    done

Although your question is tagged bash, your syntax, as someone pointed out, is not bash, e.g. the misuse of parentheses. The %% syntax is not used correctly, and it doesn't seem germane to your goal; however, you should check out http://www.tldp.org/LDP/abs/abs-guide.pdf if you want to know more about it.
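If you specifically want the outputs collected in a subdirectory, as the new\ destination in your attempt suggests, a hedged sketch:

    mkdir -p new
    for i in *.txt; do
        awk -f NormalizeFiles.awk "$i" > "new/$i"
    done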
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167168/" ] }
278,496
I am writing a script which accepts two arguments:

    #! /bin/bash
    eval for i in {$1..$2}; do echo $i; done

I run it like:

    $ ./myscript 0002 0010
    syntax error near unexpected token `do'

Why the error? I thought it might be because the looping should be grouped. But by replacing eval for i in {$1..$2}; do echo $i; done with eval { for i in {$1..$2}; do echo $i; done; }, the error remains. Note: I hope to perform parameter expansion before brace expansion by using eval. The desired output of my example is 0002 0003 0004 0005 0006 0007 0008 0009 0010. (See Perform parameter expansion before brace expansion?)
That's because the shell evaluated ; , so eval didn't see it. You have to escape any shell special character to delay its evaluation and pass is literally to eval : eval for i in \{"$1".."$2"\}\; do echo \"\$i\"\; done
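If the end goal is just the zero-padded sequence, an eval-free alternative sketch (assuming non-negative decimal arguments; the 10# prefix stops leading zeros from being read as octal):

    w=${#2}                                # pad to the width of the second argument
    for ((i=10#$1; i<=10#$2; i++)); do
        printf "%0${w}d\n" "$i"
    done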
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
278,502
I want to access the array index variable while looping through an array in my bash shell script. myscript.sh:

    #!/bin/bash
    AR=('foo' 'bar' 'baz' 'bat')
    for i in ${AR[*]}; do
        echo $i
    done

The result of the above script is:

    foo
    bar
    baz
    bat

The result I seek is:

    0
    1
    2
    3

How do I alter my script to achieve this?
You can do this using List of array keys . From the bash man page: ${!name[@]} ${!name[*]} List of array keys . If name is an array variable, expands to the list of array indices (keys) assigned in name. If name is not an array, expands to 0 if name is set and null otherwise. When @ is used and the expansion appears within double quotes, each key expands to a separate word. For your example: #!/bin/bashAR=('foo' 'bar' 'baz' 'bat')for i in "${!AR[@]}"; do printf '${AR[%s]}=%s\n' "$i" "${AR[i]}"done This results in: ${AR[0]}=foo${AR[1]}=bar${AR[2]}=baz${AR[3]}=bat Note that this also work for non-successive indexes: #!/bin/bashAR=([3]='foo' [5]='bar' [25]='baz' [7]='bat')for i in "${!AR[@]}"; do printf '${AR[%s]}=%s\n' "$i" "${AR[i]}"done This results in: ${AR[3]}=foo${AR[5]}=bar${AR[7]}=bat${AR[25]}=baz
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167174/" ] }
278,522
I have just installed Debian 8.4 (Jessie, MATE desktop). For some reason the following command is not recognized:

    mkfs.ext4 -L hdd_misha /dev/sdb1

The error I get:

    bash: mkfs.ext4: command not found

I have googled and I actually can't seem to find Debian-specific instructions on how to create an ext4 filesystem. Any help much appreciated!
Do you have /sbin in your path? Most likely you are trying to run mkfs.ext4 as a normal user. Unless you've added it yourself (e.g. in ~/.bashrc or /etc/profile etc), root has /sbin and /usr/sbin in $PATH , but normal users don't by default. Try running it from a root shell (e.g. after sudo -i ) or as: sudo mkfs.ext4 -L hdd_misha /dev/sdb1 BTW, normal users usually don't have the necessary permissions to use mkfs to format a partition (although they can format a disk-image file that they own - e.g. for use with FUSE or in a VM with, say, VirtualBox). Formatting a partition requires root privs unless someone has seriously messed up the block device permissions in /dev .
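If you'd rather have those directories on your own PATH so the tools are found by name (you still need sudo to run them against real partitions), a hedged one-liner for ~/.bashrc:

    export PATH="$PATH:/sbin:/usr/sbin"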
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133840/" ] }
278,539
Are globbing and shell expansion the same thing? I'm learning C by writing a custom shell and I'm also learning POSIX. Now I wonder: is it POSIX compliance that cd - takes you back and that ~ means the home directory? Not all shells can do that, and I don't know if it is required for minimal POSIX compliance of the cd command. The implementation I use is

    int do_cd(int argc, const char **argv)
    {
        const char *path;

        if (argc > 1) {
            path = argv[1];
        } else {
            path = getenv("HOME");
            if (path == NULL) {
                fprintf(stderr, "No HOME environment variable\n");
                return 1;
            }
        }
        if (chdir(path) < 0) {
            perror(path);
            return 1;
        }
        return 0;
    }
Tilde expansion is part of shell command processing, not part of cd . cd sees the already-expanded path as its argument. POSIX requires cd - to be equivalent to cd "$OLDPWD" && pwd . OLDPWD must be set by cd if PWD exists at the time of running the command.
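A quick shell-level illustration of the contract your C implementation would need to honour (update OLDPWD and PWD around the chdir() call):

    $ cd /tmp
    $ cd /etc
    $ echo "$OLDPWD"
    /tmp
    $ cd -            # behaves like: cd "$OLDPWD" && pwd
    /tmp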
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
278,544
How do I pass an array as a variable from a first bash shell script to a second script?

first.sh

    #!/bin/bash
    AR=('foo' 'bar' 'baz' 'bat')
    sh second.sh "$AR"        # foo
    sh second.sh "${AR[@]}"   # foo

second.sh

    #!/bin/bash
    ARR=$1
    echo ${ARR[@]}

In both cases, the result is foo. But the result I want is foo bar baz bat. What am I doing wrong and how do I fix it?
AFAIK, you can't. You have to serialize it and deserialize it, e.g. via the argument array:

first

    #!/bin/bash
    ar=('foo' 'bar' 'baz' 'bat')
    second "${ar[@]}"
    # this is equivalent to:  second foo bar baz bat

second

    #!/bin/bash
    arr=( "$@" )
    printf ' ->%s\n' "${arr[@]}"

    <<PRINTS
     ->foo
     ->bar
     ->baz
     ->bat
    PRINTS

A little bit of advice:

- reserve all caps for exported variables unless you have a very good specific reason for not doing so
- "${someArray[@]}" should always be double quoted; this formula behaves exactly like 'array item 0' 'array item 1' 'array item 2' 'etc.' (assuming someArray=( 'array item 0' 'array item 1' 'array item 2' 'etc.' ))
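Another hedged approach, useful if you need to preserve sparse indices: serialize with declare -p and rebuild with eval (only do this when you trust the caller, since eval runs whatever it is given):

    # first
    ar=('foo' 'bar' 'baz' 'bat')
    second "$(declare -p ar)"

    # second
    eval "$1"                      # recreates the array under its original name, ar
    printf ' ->%s\n' "${ar[@]}"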
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167174/" ] }
278,561
So I use pelican for writing my blog and I upload the whole thing using rsync. OK. But I use also Let's Encrypt and therefor need the repository .well-known preserved at the root of my website. So is there a way I can say "rsync ... --do-not-delete .well-known ..." Currently, those rep' are permission protected, but rsync doesn't like it. Here is the current rsync command (installed by pelican itself, I did not write it) : rsync -e "ssh -p $(SSH_PORT)" -P -rvzc --delete $(OUTPUTDIR)/ $(SSH_USER)@$(SSH_HOST):$(SSH_TARGET_DIR) --cvs-exclude BTW : if you have also some suggestion to improve rsync efficiency, I take it (yes, it's off topic).
From man rsync:

    --delete
        This tells rsync to delete extraneous files from the receiving side (ones that aren't on
        the sending side), but only for the directories that are being synchronized. You must have
        asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for
        the directory's contents (e.g. "dir/*") since the wildcard is expanded by the shell and
        rsync thus gets a request to transfer individual files, not the files' parent directory.
        Files that are excluded from the transfer are also excluded from being deleted unless you
        use the --delete-excluded option or mark the rules as only matching on the sending side
        (see the include/exclude modifiers in the FILTER RULES section).

So I think it should be

    rsync -e "ssh -p $(SSH_PORT)" -P -rvzc --delete \
        $(OUTPUTDIR)/ \
        $(SSH_USER)@$(SSH_HOST):$(SSH_TARGET_DIR) \
        --cvs-exclude --exclude=/.well-known

(assuming .well-known is at the root of $(SSH_TARGET_DIR)/ )
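A hedged variation: if you ever keep a local .well-known that should still be uploaded, rsync's filter rules have a protect (P) rule that only shields the receiver side from deletion, unlike --exclude:

    rsync ... --delete --filter='P /.well-known' ...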
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/278561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165068/" ] }
278,564
It was recently pointed out to me that an alternative to cron exists, namely systemd timers. However, I know nothing about systemd or systemd timers. I have only used cron. There is a little discussion in the Arch Wiki. However, I'm looking for a detailed comparison between cron and systemd timers, focusing on pros and cons. I use Debian, but I would like a general comparison for all systems for which these two alternatives are available. This set may include only Linux distributions. Here is what I know. Cron is very old, going back to the late 1970s. The original author of cron is Ken Thompson, the creator of Unix. Vixie cron, of which the crons in modern Linux distributions are direct descendants, dates from 1987. Systemd is much newer, and somewhat controversial. Wikipedia tells me its initial release was 30 March 2010. So, my current list of advantages of cron over systemd timers is:

- Cron is guaranteed to be in any Unix-like system, in the sense of being an installable supported piece of software. That is not going to change. In contrast, systemd may or may not remain in Linux distributions in the future. It is mainly an init system, and may be replaced by a different init system.
- Cron is simple to use. Definitely simpler than systemd timers.

The corresponding list of advantages of systemd timers over cron is:

- Systemd timers may be more flexible and capable. But I'd like examples of that.

So, to summarise, here are some things it would be good to see in an answer:

- A detailed comparison of cron vs systemd timers, including pros and cons of using each.
- Examples of things one can do that the other cannot.
- At least one side-by-side comparison of a cron script vs a systemd timers script.
Here are some points about those two:

- Checking what your cron job really does can be kind of a mess, but all systemd timer events are carefully logged in the systemd journal, like the other systemd units, based on the event, which makes things much easier.
- systemd timers are systemd services with all their capabilities for resource management, IO and CPU scheduling, and so on. There is a list:
  - system call filters
  - user/group ids
  - membership controls
  - nice value
  - OOM score
  - IO scheduling class and priority
  - CPU scheduling policy
  - CPU affinity
  - umask
  - timer slacks
  - secure bits
  - network access
  and more.
- With the dependencies option, just like other systemd services there can be dependencies on activation time.
- Units can be activated in different ways; also a combination of them can be configured. Services can be started and triggered by different events like user, boot, hardware state changes, or for example 5 minutes after some hardware is plugged in.
- Much easier configuration: some files and straightforward tags to do a variety of customizations based on your needs with systemd timers.
- Easily enable/disable the whole thing with:

      systemctl enable/disable

  and kill all the job's children with:

      systemctl start/stop

- systemd timers can be scheduled with calendars and monotonic times, which can be really useful in case of different timezones.
- systemd time events (calendar) are more accurate than cron (seemingly 1 s precision).
- systemd time events are more meaningful, both for recurring ones and for those that should occur once. Here is an example from the documentation:

      Sat,Thu,Mon-Wed,Sat-Sun  → Mon-Thu,Sat,Sun *-*-* 00:00:00
      Mon,Sun 12-*-* 2,1:23    → Mon,Sun 2012-*-* 01,02:23:00
      Wed *-1                  → Wed *-*-01 00:00:00
      Wed-Wed,Wed *-1          → Wed *-*-01 00:00:00
      Wed, 17:48               → Wed *-*-* 17:48:00

- From the CPU usage point of view, a systemd timer wakes the CPU on the elapsed time, but cron does that more often.
- Timer events can be scheduled based on finish times of executions; some delays can be set between executions.
- The communication with other programs is also notable: sometimes other programs need to know the timers and the state of their tasks.
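Since the question asks for a side-by-side example, a minimal hedged sketch of the same nightly job in both systems (unit and script names are made up):

    # cron (crontab -e):
    0 4 * * * /usr/local/bin/backup.sh

    # /etc/systemd/system/backup.service
    [Unit]
    Description=nightly backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # /etc/systemd/system/backup.timer
    [Unit]
    Description=run backup.service daily at 04:00

    [Timer]
    OnCalendar=*-*-* 04:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

    # then:
    systemctl enable backup.timer
    systemctl start backup.timer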
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/278564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
278,569
I'm on Debian jessie the stable release. I noticed that version of wpa_supplicant is vulnerable to DoS attacks according to CVE-2015-8041 : Multiple integer overflows in the NDEF record parser in hostapd before 2.5 and wpa_supplicant before 2.5 allow remote attackers to cause a denial of service (process crash or infinite loop) via a large payload length field value in an (1) WPS or (2) P2P NFC NDEF record, which triggers an out-of-bounds read. On the stable release the available version is wpa_supplicant 2.3 , with a regular sources.list it's not possible to upgrade the current version to wpa_supplicant 2.5 , Why does Debian stable keep some obsolete (vulnerable) packages?
Debian has security tracker which shows status of the CVE's in all supported releases. Here is your: https://security-tracker.debian.org/tracker/CVE-2015-8041 You can check it is fixed in version 2.3-1+deb8u3 . The fix was probably backported to the older version, which prevents breaking other things with rebase to new version in stable release (point of stable release).
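A hedged way to verify on your own machine that you already have the patched build:

    apt-cache policy wpasupplicant
    # or
    dpkg -s wpasupplicant | grep '^Version'
    # the installed version should be >= 2.3-1+deb8u3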
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153195/" ] }
278,639
I am ambitiously trying to translate a C++ code into bash for a myriad of reasons. This code reads and manipulates a file type specific to my sub-field that is written and structured completely in binary. My first binary-related task is to copy the first 988 bytes of the header, exactly as-is, and put them into an output file that I can continue writing to as I generate the rest of the information. I am pretty sure that my current solution isn't working, and realistically I haven't figured out a good way to determine this. So even if it is actually written correctly, I need to know how I would test this to be sure! This is what I'm doing right now:

    hdr_988=`head -c 988 ${inputFile}`
    echo -n "${hdr_988}" > ${output_hdr}

    headInput=`head -c 988 ${inputTrack} | hexdump`
    headOutput=`head -c 988 ${output_hdr} | hexdump`
    if [ "${headInput}" != "${headOutput}" ]; then
        echo "output header was not written properly. exiting. please troubleshoot."
        exit 1
    fi

If I use hexdump/xxd to check out this part of the file, although I can't exactly read most of it, something seems wrong. And the code I have written in for comparison only tells me if two strings are identical, not if they are copied the way I want them to be. Is there a better way to do this in bash? Can I simply copy/read binary bytes in native binary, to copy to a file verbatim? (And ideally to store as variables as well.)
Dealing with binary data at a low level in shell scripts is generally a bad idea. bash variables can't contain the byte 0. zsh is the only shell that can store that byte in its variables. In any case, command arguments and environment variables cannot contain those bytes as they are NUL delimited strings passed to the execve system call. Also note that:

    var=`cmd`

or its modern form:

    var=$(cmd)

strips all the trailing newline characters from the output of cmd. So, if that binary output ends in 0xa bytes, it will be mangled when stored in $var. Here, you'd need to store the data encoded, for instance with xxd -p:

    hdr_988=$(head -c 988 < "$inputFile" | xxd -p)
    printf '%s\n' "$hdr_988" | xxd -p -r > "$output_hdr"

You could define helper functions like:

    encode() {
      eval "$1"='$(
        shift
        "$@" | xxd -p -c 0x7fffffff
        exit "${PIPESTATUS[0]}")'
    }
    decode() {
      printf %s "$1" | xxd -p -r
    }

    encode var cat /bin/ls &&
      decode "$var" | cmp - /bin/ls && echo OK

xxd -p output is not space efficient as it encodes 1 byte in 2 bytes, but it makes it easier to do manipulations with it (concatenating, extracting parts). base64 is one that encodes 3 bytes in 4, but is not as easy to work with. The ksh93 shell has a builtin encoding format (uses base64) which you can use with its read and printf/print utilities:

    typeset -b var # marked as "binary"/"base64-encoded"
    IFS= read -rn 988 var < input
    printf %B var > output

Now, if there's no transit via shell or env variables, or command arguments, you should be OK as long as the utilities you use can handle any byte value. But note that for text utilities, most non-GNU implementations can't handle NUL bytes, and you'll want to fix the locale to C to avoid problems with multi-byte characters. The last character not being a newline character can also cause problems, as well as very long lines (sequences of bytes in between two 0xa bytes that are longer than LINE_MAX).

head -c, where it's available, should be OK here, as it's meant to work with bytes and has no reason to treat the data as text. So

    head -c 988 < input > output

should be OK. In practice at least the GNU, FreeBSD and ksh93 builtin implementations are OK. POSIX doesn't specify the -c option, but says head should support lines of any length (not limited to LINE_MAX).

With zsh:

    IFS= read -rk988 -u0 var < input &&
      print -rn -- $var > output

Or:

    var=$(head -c 988 < input && echo .) && var=${var%.}
    print -rn -- $var > output

Even in zsh, if $var contains NUL bytes, you can pass it as an argument to zsh builtins (like print above) or functions, but not as an argument to executables, as arguments passed to executables are NUL delimited strings; that's a kernel limitation, independent of the shell.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167260/" ] }
278,694
Sometimes others in my household briefly use my computer, when they do I sometimes don't want them to see my file history. I know how to prevent Bash from writing entries to the ~/.bash_history file temporarily. How can I wipe the history that is shown in the menu, for instance with files viewed in eog ? Can you quickly clear the recent history from the shell. That is without using the cumbersome and clearly visible path by going to the menu, clicking Recent Files, scrolling past all the names that I want to remove and then click Clear List?
The history is in ~/.local/share/recently-used.xbel but it is not sufficient to remove that file. If you do, the Recent Files entry keeps on showing the files you accessed, and if you open a file with an application that creates a Recent Files entry, that and the old list of files will be written to that file again. What you want to do is make an alias or script that removes the file and then touches it to be empty:

    rm ~/.local/share/recently-used.xbel
    touch ~/.local/share/recently-used.xbel

or do:

    echo -n '' > ~/.local/share/recently-used.xbel

If you want to be less conspicuous you can also edit the file and delete any entries you don't want in the history. It is an XML file with <bookmark> entries that have an href attribute containing the URI of the content. You can e.g. use xmlstarlet as follows to remove any entries you clicked on your desktop for which the filename starts with food:

    xmlstarlet ed -L -d "/xbel/bookmark[starts-with(@href, 'file:///home/$USER/Desktop/food')]" ~/.local/share/recently-used.xbel
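A hedged extra, if you would rather stop the history from being recorded at all: replacing the file with a symlink to /dev/null keeps it permanently empty, though whether every application tolerates this may vary:

    rm ~/.local/share/recently-used.xbel
    ln -s /dev/null ~/.local/share/recently-used.xbel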
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167310/" ] }
278,707
In Bash, two integers can be compared using a conditional expression:

    arg1 OP arg2

where OP is one of -eq, -ne, -lt, -le, -gt, or -ge. These arithmetic binary operators return true if arg1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to arg2, respectively. arg1 and arg2 may be positive or negative integers.

Or an arithmetic expression:

    <= >= < >    comparison
    == !=        equality and inequality

Why do we have two different ways of comparing two integers? When should we use which? For example, [[ 3 -lt 2 ]] uses a conditional expression, and (( 3 < 2 )) uses an arithmetic expression. Both return 0 when the comparison is true. When comparing two integers, can these two methods always be used interchangeably? If yes, why does Bash have two methods rather than one?
Yes, we have two different ways of comparing two integers. It seems that these facts are not widely accepted in this forum:

Inside the idiom [ ] the operators for arithmetic comparison are -eq, -ne, -lt, -le, -gt and -ge, as they also are inside a test command and inside [[ ]]. Yes, inside these idioms = , < , etc. are string operators.

Inside the idiom (( )) the operators for arithmetic comparison are ==, !=, <, <=, >, and >=. No, this is not an "arithmetic expansion" (which starts with a $) like $(( )). It is defined as a "compound command" in man bash. Yes, it follows the same rules (internally) as arithmetic expansion but has no output, only an exit value. It could be used like this:

    if (( 2 > 1 )); then ...

Why do we have two different ways of comparing two integers? I guess that the latter, (( )), was developed as a simpler way to perform arithmetic tests. It is almost the same as $(( )) but just has no output. Why two? Well, the same as why we have two printf (external and builtin) or four test (external test, builtin test, [ and [[). That's the way shells grow, improving some area one year, improving some other the next.

When to use which? That's a very tough question because there should be no effective difference. Of course there are some differences in the way [ ] works and the way (( )) works internally, but: which is better to compare two integers? Any one!

When comparing two integers, can these two methods always be used interchangeably? For two numbers I am compelled to say yes. But for variables, expansions, and mathematical operations there may be key differences that should favor one or the other. I can not say that absolutely both are equal. For one, (( )) can perform several math operations in sequence:

    if (( a=1, b=2, c=a+b*b )); then echo "$c"; fi

If both are helpful, why not?
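One concrete difference worth keeping in mind, easy to reproduce:

    [[ 10 < 9 ]]   && echo yes   # prints yes: < inside [[ ]] is a STRING comparison
    (( 10 < 9 ))   && echo yes   # prints nothing: numeric comparison
    [[ 10 -lt 9 ]] && echo yes   # prints nothing: -lt is numeric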
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/278707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
278,809
I was looking down at my keyboard and typed my password in because I thought I had already typed my login name. I pressed Enter , then when it asked for the password I pressed Ctrl + c . Should I take some precautionary measure to make sure the password isn't stored in plain text somewhere or should I change the password? Also this was on a tty on ubuntu server 16.04 LTS.
The concern is whether your password is recorded in the authentication log. If you're logging in on a text console under Linux, and you pressed Ctrl + C at the password prompt, then no log entry is generated. At least, this is true for Ubuntu 14.04 or Debian jessie with SysVinit, and probably for other Linux distributions; I haven't checked whether this is still the case on a system with Systemd. Pressing Ctrl + C kills the login process before it generates any log entry. So you're safe . On the other hand, if you actually made a login attempt, which happens if you pressed Enter or Ctrl + D at the password prompt, then the username you entered appears in plain text in the authentication logs. All login failures are logged; the log entry contains the account name, but never includes anything about the password (just the fact that the password was incorrect). You can check by reviewing the authentication logs. On Ubuntu 14.04 or Debian jessie with SysVinit, the authentication logs are in /var/log/auth.log . If this is a machine under your exclusive control, and it doesn't log remotely, and the log file hasn't been backed up yet, and you're willing and able to edit the log file without breaking anything, then edit the log file to remove the password. If your password is recorded in the system logs, you should consider it compromised and you need to change it. Logs might leak for all kinds of reasons: backups, requests for assistance… Even if you're the only user on this machine, don't risk it. Note: I haven't checked whether Ubuntu 16.04 works differently. This answer may not be generalizable to all Unix variants and is certainly not generalizable to all login methods. For example OpenSSH does log the username even if you press Ctrl + C at the password prompt (before it shows the password prompt, in fact).
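A hedged sketch of how to do that review on the Ubuntu/Debian setups mentioned:

    sudo tail -n 50 /var/log/auth.log
    # on a systemd machine, console login records can also be filtered with:
    journalctl _COMM=login --since today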
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/278809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103531/" ] }
278,860
I am trying to understand what relationship a xxx.iso file has to the other aspects of a block device, e.g. partitions and a file system. It's common for people to describe accessing or making a .iso usable as "mounting the ISO". So to put the question another way: if I, or some piece of software, wanted to "mount" a xxx.iso file onto a USB device, is it necessary to have a pre-existing partition complete with a filesystem (e.g. FATxx or extN), or is the .iso file - once in the "mounted" state - a lower-level construct that performs the same/similar role a file system (or even a partition) does?
An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents.
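For completeness, a hedged example of actually mounting one on Linux (paths are placeholders):

    sudo mkdir -p /mnt/iso
    sudo mount -o loop,ro /path/to/xxx.iso /mnt/iso
    ls /mnt/iso          # browse the filesystem the ISO contains
    sudo umount /mnt/iso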
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
278,864
I have a statically linked busybox and want to be able to write busybox telnet foo . How do I specify the address of "foo"? Do I really need /etc/nsswitch.conf and the corresponding dynamic libraries, or does busybox contain some own simple mechanism to consult /etc/hosts ?
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29241/" ] }
278,873
I need to configure xfreerdp to use 15 bpp every time it is launched, without needing to run
# xfreerdp --sec rdp -a 15 --no-bmp-cache srvaddr
each time. Opening the config.txt of xfreerdp shows me the IP of the server, and if I add /bpp:15 or -a 15 , the program won't launch. What is the correct syntax for this config file?
An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167440/" ] }
278,877
I installed Raspbian to a 16 GB card and expanded the filesystem. When I made a dd backup of the card, the .img file output was ~16 GB. Most of it is unused space in the ext4 partition; I'm only using about 2.5 GB in that partition. (There are two partitions: the first is FAT for boot and the second is ext4 for rootfs.) I'd like to shrink the backup.img file, which resides on an Ubuntu 16.04 Server installation (no GUI), so that I can restore the image to a card of smaller size (say 8 GB, for example). So far, I have mounted the ext4 partition to /dev/loop0 by using the offset value provided to me by fdisk -l backup.img . Then I used e2fsck -f /dev/loop0 and then resize2fs -M /dev/loop0 , which appeared to shrink the ext4 fs... am I on the right track? I feel like parted might be next, but I have no experience with it. How do I accomplish this using only CLI tools? Update: here is the output from running fdisk -l backup.img :
Disk backup.img: 14.9 GiB, 15931539456 bytes, 31116288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x79d38e92

Device      Boot  Start      End  Sectors  Size Id Type
backup.img1 *      8192   124927   116736   57M  e W95 FAT16 (LBA)
backup.img2      124928 31116287 30991360 14.8G 83 Linux
An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46524/" ] }
278,897
On a Dell Inspiron 7559 there is an electric buzzing from beneath the keyboard that changes frequency when I touch the touchpad. However, if the CPU is busy I hear no buzzing, as the frequency gets out of my hearing range. I was hoping to run a script to keep the CPU busy enough to keep the laptop silent, something like this:
#!/bin/sh
recursiveDelay(){
    sleep 0.001;
    recursiveDelay;
}
recursiveDelay
But it gradually pushes the CPU usage higher and higher. How could I keep it at a certain percentage?
An ISO file isn't a file system. It contains a file system. From a usage point of view, it functions the same way as a hard disk or USB device or DVD - you need to have a mount point, i.e. a place in your file system where you can mount it in order to get at the contents.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/278897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94213/" ] }
278,904
I want to synchronize processes based on lock files (or socket files). These files should only be removable by the user that created them. There are plenty of choices:
/dev/shm
/var/lock
/run/lock
/run/user/<UID>
/tmp
What's the best location for this purpose? And what are the above locations meant to be used for?
/dev/shm : an implementation of the traditional shared-memory concept. It is an efficient means of passing data between programs: one program creates a memory region which other processes (if permitted) can access, which speeds things up. /run/lock (formerly /var/lock ) contains lock files, i.e. files indicating that a shared device or other system resource is in use and containing the identity of the process (PID) using it; this allows other processes to properly coordinate access to the shared device. /tmp : the location for temporary files as defined in the Filesystem Hierarchy Standard, which is followed by almost all Unix and Linux distributions. Since RAM is significantly faster than disk storage, you can use /dev/shm instead of /tmp for a performance boost if your process is I/O intensive and makes extensive use of temporary files. /run/user/$uid : created by pam_systemd and used for storing files used by running processes for that user. Coming to your question, you can definitely use the /run/lock directory to store your lock file.
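As a minimal sketch of such a lock using util-linux flock (assuming your user may create files under /run/lock; the lock-file name is hypothetical):
#!/bin/sh
exec 9> /run/lock/myjob.lock || exit 1
flock -w 5 -x 9 || { echo "another instance is running" >&2; exit 1; }
# ... critical section ...
# the lock is released when fd 9 is closed on exit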
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157149/" ] }
278,923
I'm not exactly sure how to formulate this, however: when I scroll using the mouse wheel or a two-finger touchpad gesture, the page in Chrome or Firefox continues scrolling for a bit after I've taken my fingers off the touchpad or mouse. I don't want this feature on my system, and I'm not sure what it is called. It sometimes leads to unwanted behaviour, e.g. if I use a Ctrl hotkey shortly after scrolling, the page zooms, even though I'm not scrolling at the time. Two questions: what is this thing called, and how do I disable it entirely without disabling wheel/touchpad scrolling?
The feature is called "coasting". To disable it you can use:
xinput --set-prop --type=float "<your device>" "Synaptics Coasting Speed" 0 0
To list devices you can use:
xinput list
An alternative (for touchpads) is the synclient options (there are 3 of them):
CornerCoasting = 0
CoastingSpeed = 0
CoastingFriction = 0
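These settings only last for the current X session; one hedged way to make them stick is to re-apply them at session start, e.g. from ~/.xsessionrc on Debian-style systems:
synclient CoastingSpeed=0 CoastingFriction=0 CornerCoasting=0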
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93959/" ] }
278,926
I am not able to log in to phpMyAdmin. It shows the following error:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
I tried to log in by running mysql -u root -p , but it also shows the same error.
Feature is called "Coasting speed". To disable it you can use: xinput --set-prop --type=float "<your device>" "Synaptics Coasting Speed" 0 0 to list devices you can use: xinput list alternative variant (for touchpads) is synclient options (there are 3 of them): CornerCoasting = 0CoastingSpeed = 0CoastingFriction = 0
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/278926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164051/" ] }
278,939
I'm trying to execute a command and would like to put the date and time in the output file name. Here is a sample command I'd like to run:
md5sum /etc/mtab > 2016_4_25_10_30_AM.log
The date-time format can be anything sensible with underscores. Even UTC, if the AM and PM can't be used.
If you want to use the current datetime as a filename, you can use date and command substitution . $ md5sum /etc/mtab > "$(date +"%Y_%m_%d_%I_%M_%p").log" This results in the file 2016_04_25_10_30_AM.log (although, with the current datetime) being created with the md5 hash of /etc/mtab as its contents. Please note that filenames containing 12-hour format timestamps will probably not sort by name the way you want them to sort. You can avoid this issue by using 24-hour format timestamps instead. If you don't have a requirement to use that specific date format, you might consider using an ISO 8601 compliant datetime format. Some examples of how to generate valid ISO 8601 datetime representations include: $ date +"%FT%T" 2016-04-25T10:30:00 $ date +"%FT%H%M%S" 2016-04-25T103000 $ date +"%FT%H%M" 2016-04-25T1030 $ date +"%Y%m%dT%H%M" 20160425T1030 If you want "safer" filenames (e.g., for compatibility with Windows), you can omit the colons from the time portion. Please keep in mind that the above examples all assume local system time. If you need a time representation that is consistent across time zones, you should specify a time zone offset or UTC. You can get an ISO 8601 compliant time zone offset by using "%z" in the format portion of your date call like this: $ date +"%FT%H%M%z" 2016-04-25T1030-0400 You can get UTC time in your date call by specifying the -u flag and adding "Z" to the end of the datetime string to indicate that the time is UTC like this: $ date -u +"%FT%H%MZ" 2016-04-25T1430Z
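Tying it back to the question, a minimal sketch of the whole command with a UTC timestamp might be:
out="$(date -u +"%Y%m%dT%H%M%SZ").log"    # e.g. 20160425T143000Z.log
md5sum /etc/mtab > "$out"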
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/278939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159435/" ] }
278,946
My wpa_supplicant.conf looks like this:
network={
    ssid="Some name"
    scan_ssid=1
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="my-user-id"
    password="(clear text password here)"
    ca_cert="/usr/share/ca-certificates/mozilla/GeoTrust_Global_CA.crt"
    phase2="auth=MSCHAPV2"
}
With this specific combination of WPA-EAP and MSCHAP-v2, is there a way to not include my password in clear text in this configuration file? The ChangeLog seems to claim that this has been feasible (since 2005!):
* added support for storing EAP user password as NtPasswordHash instead of plaintext password when using MSCHAP or MSCHAPv2 for authentication (hash:<16-octet hex value>); added nt_password_hash tool for hashing password to generate NtPasswordHash
Some notes: Using a different password is not an option, as I have no control over this network (this is a corporate network, and a single username/password is used to access all services, including connecting to the Wifi). A word about duplicates:
40: use-wpa-supplicant-without-plain-text-passwords is about pre-shared keys.
74500: wpa-supplicant-store-password-as-hash-wpa-eap-with-phase2-auth-pap uses PAP as phase-2 authentication (not MSCHAP-v2).
85757: store-password-as-hash-in-wpa-supplicant-conf is very similar to this question, but was (incorrectly) closed as a duplicate of 74500 ; unfortunately, the answers given to the purported duplicate are specific to PAP, and do not apply to the MSCHAP-v2 case. 85757 itself has an answer claiming that it's essentially impossible regardless of the protocol, but the justification is invalid [1].
[1] That answer claims that using a hashed password means that the hash becomes the password. This is technically true, but at least the hash is a wifi-only password, which is significant progress over leaking a shared password granting access to multiple services.
You can generate the NtPasswordHash (aka NTLM password hash) yourself as follows:
echo -n plaintext_password_here | iconv -t utf16le | openssl md4
Prefix it with "hash:" in the wpa_supplicant.conf file, i.e.
password=hash:6602f435f01b9173889a8d3b9bdcfd0b
On macOS the iconv code is UTF-16LE:
echo -n plaintext_password_here | iconv -t UTF-16LE | openssl md4
Note that you don't gain much security. If an attacker finds the file with the hash, then they can trivially join the network (the same way your computer does), so having hashed the password doesn't help at all. If the password is used anywhere else, then the attacker would have to use brute force to find the original password (i.e. try the most likely passwords and calculate their hash until they find a match). Since you can calculate about 1 billion hashes per second on an ordinary PC, that's not a big hurdle, and attackers can easily use precomputed tables since the hash is unsalted. NT is really horrible as a password hashing algorithm.
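Putting it together, a small sketch that emits the complete config line (on OpenSSL 3.x, MD4 lives in the legacy provider, so you may need openssl md4 -provider legacy):
pw='plaintext_password_here'
printf 'password=hash:%s\n' "$(printf %s "$pw" | iconv -t utf16le | openssl md4 | awk '{print $NF}')"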
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/278946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55157/" ] }
279,011
On my 3 TB external hard drive, I have a 2.7 TB directory containing relatively small files. I would like to compress this 2.7 TB directory and remove it to keep only the compressed version. The issue is that I do not have enough storage to first zip and then rm the non-zipped directory. Is there a way around this problem or do I necessarily have to acquire more storage for the manipulation?
You can try the --remove-files argument to tar. Say you want to compress everything in directory FOO; you would:
tar -czf FooCompressed.tar.gz --remove-files FOO
Arguments explained:
c: create TAR
z: compress using GZIP; you can switch to -j for BZIP2 or -J for LZMA (xz)
f: output to file instead of STDOUT
remove-files: self-explanatory
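With GNU tar, --remove-files deletes each file once it has been added, so disk usage should stay roughly bounded by the growing archive; it may still be worth rehearsing on throwaway data first, e.g.:
mkdir -p /tmp/tartest/FOO && printf 'hello\n' > /tmp/tartest/FOO/a.txt
cd /tmp/tartest
tar -czf FooCompressed.tar.gz --remove-files FOO
tar -tzf FooCompressed.tar.gz    # list the archive to verify the contents made it in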
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114428/" ] }
279,017
I'm trying to version control IntelliJ IDEA configuration files. Here's a small sample:
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ChangeListManager">
    <ignored path="tilde.iws" />
    <ignored path=".idea/workspace.xml" />
    <ignored path=".idea/dataSources.local.xml" />
    <option name="EXCLUDED_CONVERTED_TO_IGNORED" value="true" />
    <option name="TRACKING_ENABLED" value="true" />
    <option name="SHOW_DIALOG" value="false" />
    <option name="HIGHLIGHT_CONFLICTS" value="true" />
    <option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
    <option name="LAST_RESOLUTION" value="IGNORE" />
  </component>
  <component name="ToolWindowManager">
    <frame x="1201" y="380" width="958" height="1179" extended-state="0" />
    <editor active="false" />
    <layout>
      <window_info id="TODO" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" show_stripe_button="true" weight="0.33" sideWeight="0.5" order="6" side_tool="false" content_ui="tabs" />
      <window_info id="Palette&#9;" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" show_stripe_button="true" weight="0.33" sideWeight="0.5" order="2" side_tool="false" content_ui="tabs" />
    </layout>
  </component>
</project>
Some elements, such as /project/component[@name='ToolWindowManager']/layout/window_info , seem to be saved in arbitrary sequence every time the IDE saves the configuration. All elements of the same type seem to always have the same attributes in the same sequence. Considering that the sequence of elements is irrelevant for the functioning of the IDE, it would be useful if elements were sorted by element name and then attribute values, with attributes and whitespace left in place. Based on another answer I've gotten to this:
<stylesheet version="1.0" xmlns="http://www.w3.org/1999/XSL/Transform">
  <output method="xml" indent="yes" encoding="UTF-8"/>
  <strip-space elements="*"/>
  <template match="processing-instruction()|@*">
    <copy>
      <apply-templates select="node()|@*"/>
    </copy>
  </template>
  <template match="*">
    <copy>
      <apply-templates select="@*"/>
      <apply-templates>
        <sort select="name()"/>
        <sort select="@*[1]"/>
        <sort select="@*[2]"/>
        <sort select="@*[3]"/>
        <sort select="@*[4]"/>
        <sort select="@*[5]"/>
        <sort select="@*[6]"/>
      </apply-templates>
    </copy>
  </template>
</stylesheet>
It's almost there, but with a few issues:
It doesn't sort by every attribute value (and @* doesn't work)
It removes space before the end of empty elements ( <foo /> becomes <foo/> ).
It adds a newline at EOF (which IMO isn't a bug, but makes the resulting file less similar to the original).
You can try the --remove-files argument to tar. Say you want to compress everything on directory FOO, you would: tar -czf FooCompressed.tar.gz --remove-files FOO Arguments explained: c: create TAR z: compress using GZIP, you can switch to -j for BZIP2 or -J for LZMA(xz) f: output to file instead of of STDOUT remove-files: self explanatory
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
279,024
I have seen wrapper script examples which, in a nutshell, look like the following:
#!/bin/bash
myprog=sleep
echo "This is the wrapper script, it will exec "$myprog""
exec "$myprog" "$@"
As seen above, they use exec to replace the newly created shell almost immediately with $myprog . One could achieve the same without exec :
#!/bin/bash
myprog=sleep
echo "This is the wrapper script, it will exec "$myprog""
"$myprog" "$@"
In this last example, a new bash instance is started and then $myprog is started as a child process of the bash instance. What are the benefits of the first approach?
Using exec makes the wrapper more transparent, i.e. it makes it less likely that the user or application that calls the script needs to be aware that it's a relay that in turns launches the “real” program. In particular, if the caller wants to kill the program, they'll just kill the process they just launched. If the wrapper script runs a child process, the caller would need to know that they should find out the child of the wrapper and kill that instead. The wrapper script could set a trap to relay some signals, but that wouldn't work with SIGSTOP or SIGKILL which can't be caught. Calling exec also saves a bit of memory (and other resources such as PIDs etc.) since it there's no need to keep an extra shell around with nothing left to do. If there are multiple wrappers, the problems add up (difficulty in finding the right process to kill, memory overhead, etc.). Some shells (e.g. the Korn shell) automatically detect when a command is the last one and there's no active trap and put an implicit exec , but not all do (e.g. not bash).
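For illustration, a minimal transparent wrapper along those lines (the program and variable names are hypothetical):
#!/bin/sh
# adjust the environment, then replace this shell with the real program
export MYPROG_CONFIG=/etc/myprog.conf
exec /usr/bin/myprog "$@"    # same PID as the wrapper; signals go straight to myprog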
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
279,096
What is the reasonable scalability limit of sort -u ? (in the dimensions of "line length", "amount of lines", "total file size") What is the Unix alternative for files exceeding this in the dimension of "amount of lines"? Of course I can easily implement one, but I wondered if there is something that can be done with a few standard Linux commands.
The sort that you find on Linux comes from the coreutils package and implements an External R-Way merge . It splits up the data into chunks that it can handle in memory, stores them on disc and then merges them. The chunks are done in parallel, if the machine has the processors for that. So if there was to be a limit, it is the free disc space that sort can use to store the temporary files it has to merge, combined with the result.
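With GNU sort those resources can be steered explicitly; a hedged example with placeholder paths and sizes:
sort -u -S 50% --parallel=4 -T /mnt/scratch huge.txt > unique.txt
# -S: memory to use before spilling to disc, --parallel: worker threads, -T: where temporary chunks are written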
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/279096", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
279,107
I tried to persist the environment variables for Oracle on RedHat using /etc/environment . It cleared my PATH variable; no command was recognized afterwards. Why does this happen, when just executing the same commands in the shell works fine?! The contents of my /etc/environment :
ORACLE_HOME=/usr/lib/oracle/12.1/client64
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
/etc/environment is a configuration file for pam_env , not a file read by a shell. The syntax is somewhat similar, but it is not the same. In particular, you can't refer to existing variables: you've set your search path to contain $ORACLE_HOME/bin and $PATH , i.e. directories with a dollar sign in their name. To set variables for all users, you can edit /etc/security/pam_env.conf , which has a different, richer syntax, but still not as rich as what you can do in a shell.
ORACLE_HOME DEFAULT=/usr/lib/oracle/12.1/client64
PATH OVERRIDE=/usr/local/bin:/usr/bin:/bin:${ORACLE_HOME}/bin
LD_LIBRARY_PATH DEFAULT=$ORACLE_HOME/lib
Note that you can refer to other variables, but you can't refer to a variable's previous value. If you want a more flexible approach, add the variable definitions to /etc/profile instead. There you can use all shell constructs. The downside is that this is only read in login sessions, not e.g. by cron. You can easily benefit from them by adding . /etc/profile; at the beginning of your cron jobs however.
export ORACLE_HOME=/usr/lib/oracle/12.1/client64
PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
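As a hedged illustration of that last point, a crontab entry could look like this (the job path is hypothetical):
0 2 * * * . /etc/profile; /usr/local/bin/oracle-nightly-job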
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50532/" ] }
279,125
I want to allow one user to run grep (through sudo), and to grep for one specific string in one specific file. I don't want to allow this user to be able to run grep on all files. The user doesn't have read access to the file, hence the requirement to use sudo. I am trying to write a nagios check which greps the log file of another service on the machine. However it doesn't work; sudo keeps asking for a password. My command is:
sudo grep "string I want (" /var/log/thefilename.log
(yes, there is a raw ( and some spaces in the string I want to grep)
/etc/sudoers.d/user-can-grep has this content:
user ALL=(root) NOPASSWD: /bin/grep "string I want (" /var/log/thefilename.log
This sudoers file is owned by root:root and has permissions -r--r----- . The server is Ubuntu trusty 14.04.3 LTS. How can I make this work?
Write a script (writeable only by root)
In that script, execute the grep you need
In the sudoers config, allow only access to that script
Configure whatever tool or advise whichever user to just run the script via sudo
Much easier to debug, easier to lock down specific access to a specific file, and much harder to exploit.
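A minimal sketch of what that can look like; the script path is a placeholder, while the pattern and log file come from the question:
#!/bin/sh
# /usr/local/bin/check-the-log -- owned by root, not writable by anyone else
# -F makes the "(" and the spaces entirely literal
exec /bin/grep -F 'string I want (' /var/log/thefilename.log
The sudoers entry then only needs to allow that one script:
user ALL=(root) NOPASSWD: /usr/local/bin/check-the-log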
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4691/" ] }
279,141
I am using a freshly installed Ubuntu-GNOME 16.04, and I want to set Caps Lock to change the keyboard layout (a single key, not a key combination). I had this on Linux Mint and I grew used to it. I looked into the settings manager, but it doesn't accept Caps Lock as a valid input. I also looked into gnome-tweak-tool, but there I can't find keyboard layout switching at all. Is this possible? How?
You could set the corresponding xkb option via dconf-editor . Navigate to org > gnome > desktop > input-sources and add grp:caps_toggle to your xkb-options . Note each option is enclosed in single quotes; options are separated by comma+space. On older GNOME 3 releases you could also do that via System Settings > Keyboard (or gnome-control-center keyboard in a terminal) > Typing, setting Modifiers-only switch to next source to Caps Lock. This was removed from recent releases (see sanmai's answer for alternatives to dconf-editor).
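The same setting should also be reachable from a terminal via gsettings; note the set command replaces the whole option list, so check the current value first:
gsettings get org.gnome.desktop.input-sources xkb-options
gsettings set org.gnome.desktop.input-sources xkb-options "['grp:caps_toggle']"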
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7140/" ] }
279,180
When I was trying to monitor /var/log/secure or /var/log/messages using the watch command, the output showed /var/log/messages: Permission denied . Is it possible to monitor /var/log/messages and /var/log/secure using the watch command?
Yes it is, but note that regular users don't have permission to read /var/log/messages and /var/log/secure . sudo watch tail /var/log/messages worked fine here when I tried.
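watch accepts a refresh interval and a full command line, so you can tune it, e.g. refresh every 2 seconds and show the last 20 lines:
sudo watch -n 2 'tail -n 20 /var/log/messages'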
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155498/" ] }
279,276
I have a huge text file ~ 33Gb and due to its size, I wanted to just read the first few lines of the file to understand how the file is organized.I have tried head , but it took like forever to even finish the run. Is it because in UNIX, head needs to run through the WHOLE file first before it can do anything? If so, is there a faster way to display part of such a file?
This doesn't really answer your question; I suspect the reason head is slow is as given in Julie Pelletier's answer: the file doesn't contain any (or many) line feeds, so head needs to read a lot of it to find lines to show. head certainly doesn't need to read the whole file before doing anything, and it stops reading as soon as it has the requested number of lines. To avoid slowdowns related to line feeds, or if you don't care about seeing a specific number of lines, a quick way of looking at the beginning of a file is to use dd ; for example, to see the first 100 bytes of hugefile :
dd if=hugefile bs=100 count=1
Another option, given in Why does GNU head/tail read the whole file? , is to use the -c option to head :
head -c 100 hugefile
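If you want to test the "no line feeds" hypothesis directly, one way is to look at the raw bytes at the start of the file and see whether any \n shows up:
head -c 4096 hugefile | od -c | head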
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167694/" ] }
279,278
I have a machine that used to dual-boot Ubuntu (16.04 currently) and Windows 7, with Ubuntu's GRUB as boot loader. Now I just added Arch Linux as a third OS, following the official installation instructions. I did not install GRUB from Arch because I wanted to use the one controlled by Ubuntu. The instructions contained a command, mkinitcpio -p linux , which I ran as described; it probably generated some boot files. Now when I try to boot Ubuntu from GRUB through its default entry, I get this unpleasant error (sorry for the screen photo). As the output of uname -a shows, it's trying to boot the Arch kernel, but /dev/sda6 is the Ubuntu root partition. I have to navigate to Advanced options for Ubuntu and select one of the "Ubuntu, with Linux 4.4.0-*" entries to be able to load Ubuntu; I couldn't find an entry that would correctly load Arch, though. Running sudo update-grub from Ubuntu (" update-grub is a stub for running grub-mkconfig -o /boot/grub/grub.cfg to generate a grub2 config file.") does not change anything. The grub-customizer tool was also useless in fixing this so far. What causes this confusion of GRUB, and how do I fix it so that each Linux version boots with the correct kernel and from the correct partition? It looks like I stupidly installed Arch with Ubuntu's /boot mounted, so it probably placed its boot files in there. I'm fine with erasing all the Arch-related stuff to get Ubuntu's boot loader straight again and do a clean install of Arch later. Updates (thanks to @terdon for his support in the Ask Ubuntu chat): Here is my /boot/grub/grub.cfg . All Linux entries seem to point at my /dev/sda6 partition, which is Ubuntu's root:
$ grep ' linux /' /boot/grub/grub.cfg
    linux /vmlinuz-linux root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro
    linux /vmlinuz-linux root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro
    linux /vmlinuz-linux root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro
    linux /vmlinuz-linux root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro init=/sbin/upstart
    linux /vmlinuz-linux root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro recovery nomodeset
    linux /vmlinuz-4.4.0-21-generic root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro
    linux /vmlinuz-4.4.0-21-generic root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro init=/sbin/upstart
    linux /vmlinuz-4.4.0-21-generic root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro recovery nomodeset
    linux /vmlinuz-4.2.0-35-generic root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro
    linux /vmlinuz-4.2.0-35-generic root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro init=/sbin/upstart
    linux /vmlinuz-4.2.0-35-generic root=UUID=eee18451-b607-4875-8a88-c9cb6c6544c8 ro recovery nomodeset
I tried to update the GRUB config from Ubuntu:
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
Generating grub configuration file ...
dpkg: warning: version 'linux' has bad syntax: version number does not start with a digit
Found linux image: /boot/vmlinuz-linux
Found initrd image: /boot/initramfs-linux.img
Found linux image: /boot/vmlinuz-4.4.0-21-generic
Found initrd image: /boot/initrd.img-4.4.0-21-generic
Found linux image: /boot/vmlinuz-4.2.0-35-generic
Found initrd image: /boot/initrd.img-4.2.0-35-generic
Found memtest86+ image: /memtest86+.elf
Found memtest86+ image: /memtest86+.bin
Found Windows 7 (loader) on /dev/sda1
Found Arch on /dev/sda8
done
I tried to reinstall GRUB to the MBR from Ubuntu:
$ sudo grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
$ sudo grub-install --recheck /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
Those are the installed Ubuntu kernel packages, by the way; I did try to dpkg-reconfigure all of them, but without any effect on the issue:
$ dpkg -l linux-image* | grep ^ii
ii linux-image-4.2.0-35-generic       4.2.0-35.40 amd64 Linux kernel image for version 4.2.0 on 64 bit x86 SMP
ii linux-image-4.4.0-21-generic       4.4.0-21.37 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP
ii linux-image-extra-4.2.0-35-generic 4.2.0-35.40 amd64 Linux kernel extra modules for version 4.2.0 on 64 bit x86 SMP
ii linux-image-extra-4.4.0-21-generic 4.4.0-21.37 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP
I also tried to regenerate the Ubuntu initramfs:
$ sudo update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-4.4.0-21-generic
update-initramfs: Generating /boot/initrd.img-4.2.0-35-generic
My partition layout (checked from the Ubuntu system; the labels should explain themselves):
$ lsblk -f /dev/sda
NAME    FSTYPE LABEL       UUID                                 MOUNTPOINT
sda
├─sda1  ntfs   win7-boot   90DCF3A5DCF3842E                     /win/boot
├─sda2  ntfs   windows7    482C7A572C7A3FCC                     /win/c
├─sda3  ext4   grub-boot   6dbb8633-dadd-4b5e-8d85-b0895fde9dfb /boot
├─sda5  ext4   images      81dc42c4-a161-4ccd-b704-6e5c09298943 /images
├─sda6  ext4   ubuntu-1604 eee18451-b607-4875-8a88-c9cb6c6544c8 /
├─sda7  ext4   ubuntu-home 485b3ef1-7216-4053-b25c-f656d529e8e6 /home
├─sda8  ext4   arch-root   8d281a0c-969c-44cf-ba6a-1d3c7b4be7ec
├─sda9  ext4   arch-home   32522902-a53d-44c8-90f2-6bbf14c40f1f
└─sda10 swap   linux-swap  8b05bd9b-bc42-46f6-8c18-50711a3c48b9 [SWAP]
My GRUB menu structure: [screenshots of the "Advanced options for Ubuntu" and "Advanced options for Arch" submenus]
My /boot directory:
$ ls -la /boot
total 118480
drwxr-xr-x  4 root root     4096 Apr 24 20:50 .
drwxr-xr-x 28 root root     4096 Apr 24 19:44 ..
-rw-r--r--  1 root root  1313029 Mär 16 01:45 abi-4.2.0-35-generic
-rw-r--r--  1 root root  1239577 Apr 19 00:21 abi-4.4.0-21-generic
-rw-r--r--  1 root root   184888 Mär 16 01:45 config-4.2.0-35-generic
-rw-r--r--  1 root root   189412 Apr 19 00:21 config-4.4.0-21-generic
drwxr-xr-x  6 root root     4096 Apr 26 19:58 grub
-rw-r--r--  1 root root 18598360 Apr 24 20:59 initramfs-linux-fallback.img
-rw-r--r--  1 root root  3516429 Apr 24 20:59 initramfs-linux.img
-rw-r--r--  1 root root 33642388 Apr 24 18:31 initrd.img-4.2.0-35-generic
-rw-r--r--  1 root root 36143341 Apr 24 19:51 initrd.img-4.4.0-21-generic
drwx------  2 root root    16384 Okt 28 17:43 lost+found
-rw-r--r--  1 root root   182704 Jan 28 13:44 memtest86+.bin
-rw-r--r--  1 root root   184380 Jan 28 13:44 memtest86+.elf
-rw-r--r--  1 root root   184840 Jan 28 13:44 memtest86+_multiboot.bin
-rw-------  1 root root  3745312 Mär 16 01:45 System.map-4.2.0-35-generic
-rw-------  1 root root  3853719 Apr 19 00:21 System.map-4.4.0-21-generic
-rw-------  1 root root  6829104 Mär 16 01:45 vmlinuz-4.2.0-35-generic
-rw-------  1 root root  7013968 Apr 19 00:21 vmlinuz-4.4.0-21-generic
-rw-r--r--  1 root root  4435552 Apr 14 19:20 vmlinuz-linux
The 4.4.0 and 4.2.0 kernels should be Ubuntu; Arch should have a 4.5.0 kernel. But how do I find out which of the files without a kernel version in their name belongs to what? My Ubuntu root directory (directories excluded):
$ ls -la / | grep ^[^d]
total 124
lrwxrwxrwx 1 root root 32 Apr 24 19:44 initrd.img -> boot/initrd.img-4.4.0-21-generic
lrwxrwxrwx 1 root root 32 Apr  5 17:45 initrd.img.old -> boot/initrd.img-4.2.0-35-generic
lrwxrwxrwx 1 root root 29 Apr 24 19:44 vmlinuz -> boot/vmlinuz-4.4.0-21-generic
lrwxrwxrwx 1 root root 29 Apr  5 17:45 vmlinuz.old -> boot/vmlinuz-4.2.0-35-generic
My Arch root directory does not contain any files or links.
I finally solved it by nuking the Arch partition and its boot files in my Ubuntu's /boot directory from orbit. Ubuntu is fine again now; all remaining GRUB entries are working again. Here's a list of what I did:
Delete Arch's initramfs files: sudo rm /boot/initramfs-linux*
Delete Arch's vmlinuz file: sudo rm /boot/vmlinuz-linux
Format the Arch partition ( /dev/sda8 ) using GParted
Update GRUB's configuration: sudo update-grub
Reboot and enjoy!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103151/" ] }
279,368
Here's a sample test.txt :
abcdX1yad45das
abcdX2fad45das
abcdX3had45das
abcdX4wad45das
abcdX5mad45das
Sample desired output:
X1yad
X2fad
X3had
X4wad
X5mad
I could get it working in vim with:
:% s/\v.*(X\d)(.*)45.*/\1\2/
and it worked in perl as well with:
open(my $file, "<", "test.txt");
while(<$file>){
    s/.*(X\d)(.*)45.*/$1$2/;
    print $_;
}
My eventual regular expression needs two groupings; they weren't required for this example output. I am not able to get it to work with sed:
$ sed -r 's/.*(X\d)(.*)45.*/\1\2/' test.txt
abcdX1yad45das
abcdX2fad45das
abcdX3had45das
abcdX4wad45das
abcdX5mad45das
Just to check that sed is working:
sed -r 's/(a)/#\1#/' test.txt
#a#bcdX1yad45das
#a#bcdX2fad45das
#a#bcdX3had45das
#a#bcdX4wad45das
#a#bcdX5mad45das
What am I doing wrong in sed?
sed does not understand \d . You can use [0-9] or, more generally, [[:digit:]] in its place:
$ sed -r 's/.*(X[[:digit:]])(.*)45.*/\1\2/' test.txt
X1yad
X2fad
X3had
X4wad
X5mad
Note that [[:digit:]] is unicode-safe but [0-9] is not.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109046/" ] }
279,397
I know this question isn't very new, but I haven't been able to fix my problem myself. ldd generates the following output:
u123@PC-Ubuntu:~$ ldd /home/u123/Programme/TestPr/Debug/TestPr
    linux-vdso.so.1 => (0x00007ffcb6d99000)
    libcsfml-window.so.2.2 => not found
    libcsfml-graphics.so.2.2 => not found
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fcebb2ed000)
    /lib64/ld-linux-x86-64.so.2 (0x0000560c48984000)
What is the correct way to tell ld the correct path?
If your libraries are not on a standard path, then you either need to add them to a standard path or add the non-standard path to LD_LIBRARY_PATH :
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<Your_non-Standard_path>
Once you have done one of the above, you need to update the dynamic linker's run-time bindings by executing the command below:
sudo ldconfig
UPDATE: You can make the change permanent either by writing the above export line into one of your startup files (e.g. ~/.bashrc), or, if the underlying library does not conflict with any other library, by putting it into one of the standard library paths (e.g. /lib, /usr/lib).
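A third common option, sketched here with a placeholder directory, is a drop-in file for the dynamic linker configuration; this makes the path system-wide and survives reboots:
echo '/opt/csfml/lib' | sudo tee /etc/ld.so.conf.d/csfml.conf
sudo ldconfig
ldd /home/u123/Programme/TestPr/Debug/TestPr | grep csfml    # should now resolve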
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167795/" ] }
279,401
I am running the following git clone command through sudo and bash and I want to redirect STDOUT to a log file:
% sudo -u test_user bash -c "git clone https://github.com/scrooloose/nerdtree.git /home/test_user/.vim/bundle/nerdtree >> /var/log/build_scripts.log"
What is happening is that STDOUT continues to be sent to the terminal, i.e.:
Cloning into 'nerdtree'...
remote: Counting objects: 3689, done.
[...]
Checking connectivity... done.
I'm guessing the problem has something to do with the fact that sudo is forking a new process and then bash is forking another, as demonstrated here:
% sudo -u test_user bash -c "{ git clone https://github.com/scrooloose/nerdtree.git /home/test_user/.vim/bundle/nerdtree >> /var/log/build_scripts.log; ps f -g$$; }"
  PID TTY      STAT   TIME COMMAND
 6556 pts/25   Ss     0:02 /usr/bin/zsh
 3005 pts/25   S+     0:00  \_ sudo -u test_user bash -c { git clone https://github.com/scrooloo
 3006 pts/25   S+     0:00      \_ bash -c { git clone https://github.com/scrooloose/nerdtree.
 3009 pts/25   R+     0:00  \_ ps f -g6556
I've tried:
running this in a script and using exec >> /var/log/build_script.log before the command
wrapping the command in a function, then calling and redirecting the function's output
But I think these redirections are only applying to the parent, and the child processes are defaulting to sending STDOUT to the /dev/tty/25 of their parent, causing output to continue to the terminal. How can I redirect the STDOUT of this command?
The messages you mention are not printed to standard output but to standard error. So, to capture them, you either need to redirect standard error instead of standard output:
sudo -u user bash -c "git clone https://github.com/foo.git ~/foo 2>> log"
Or both STDERR and STDOUT:
sudo -u user bash -c "git clone https://github.com/foo.git ~/foo >> log 2>&1"
With bash , you can also use &>> for this:
sudo -u user bash -c "git clone https://github.com/foo.git ~/foo &>> log"
The csh , tcsh , zsh equivalent being >>& ( (t)csh don't support 2>&1 so it's the only way there):
sudo -u user csh -c "git clone https://github.com/foo.git ~/foo >>& log"
In fish :
sudo -u user fish -c "git clone https://github.com/foo.git ~/foo >> log ^&1"
For more on the different types of redirection operators, see What are the shell's control and redirection operators? Now, in the specific case of git , there's another issue. Like a few other programs, git can detect that its output is being redirected and stops printing progress reports if so. This is probably because the reports are intended to be seen live and include \r , which can be a problem when saved in a file. To get around this, use:
--progress
Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal.
And:
sudo -u user bash -c "git clone --progress https://github.com/foo.git ~/foo >> log 2>&1"
If you want to both see the output as it comes and save it to a file, use tee :
sudo -u user bash -c "git clone --progress https://github.com/foo.git ~/foo 2>&1 | tee -a log"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
279,487
In bash, I notice that if a command using redirection would fail, any programs which run prior to that are not run. For example, this program opens the file "a" and writes 50 bytes to file "a". However, running this command with redirection to a file with insufficient permissions (~root/log) yields no change in the file size of "a":
$ ./write_file.py >> ~root/log
-bash: /var/root/log: Permission denied
cdal at Mac in ~/experimental/unix_write
$ ls -lt
total 16
-rw-rw-r--  1 cdal  staff  0 Apr 27 08:54 a    <-- SHOULD BE 50 BYTES
One would think the program would run, capture any output (but also write to the file "a"), and then fail to write any output to ~root/log. Instead the program is never run. Why is this, and how does bash choose the order of the "checks" it performs prior to executing a program? Are other checks performed as well? P.S. I'm trying to determine whether a program run under cron actually ran when redirected to a "permission denied" file.
It's not really a question of ordering checks, simply the order in which the shell sets things up. Redirections are set up before the command is run; so in your example, the shell tries to open ~root/log for appending before trying to do anything involving ./write_file.py . Since the log file can't be opened, the redirection fails and the shell stops processing the command line at that point. One way to demonstrate this is to take a non-executable file and attempt to run it:
$ touch demo
$ ./demo
zsh: permission denied: ./demo
$ ./demo > ~root/log
zsh: permission denied: /root/log
This shows that the shell doesn't even look at ./demo when the redirection can't be set up.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167848/" ] }
279,545
When I do this:
sudo wpa_supplicant -D nl80211,wext -i wlp4s0 -c <(wpa_passphrase "some ssid" "password")
I get:
Successfully initialized wpa_supplicant
Failed to open config file '/dev/fd/63', error: No such file or directory
Failed to read or parse configuration '/dev/fd/63'
Any ideas?
Quoting the ArchLinux wiki : Note: Because of the process substitution, you cannot run this command with sudo - you will need a root shell. You should be able to use su -c under sudo like so:
$ sudo su -c 'wpa_supplicant -D nl80211,wext -i wlp4s0 -c \
    <(wpa_passphrase "some ssid" "password")'
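If a root shell is awkward in your setup, a hedged alternative is to avoid process substitution altogether and write the generated configuration to a temporary file first; note the file briefly contains the passphrase, hence the restrictive umask:
umask 077
wpa_passphrase "some ssid" "password" > /tmp/wpa.conf
sudo wpa_supplicant -D nl80211,wext -i wlp4s0 -c /tmp/wpa.conf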
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94302/" ] }
279,569
I need to delete a "~" folder in my home directory. I realize now that rm -R ~ is a bad choice. Can I safely use rm -R "~" ?
In theory yes. In practice usually also yes. If you're calling a shell script or alias that does something weird, then maybe no. You could use echo to see what a particular command would be expanded to by the shell:
$ echo rm -R ~
rm -R /home/frostschutz
$ echo rm -R "~"
rm -R ~
Note that echo removes the "" so you should not copy-paste what it prints. It just shows that if you give "~" , the command literally sees ~ and not the expanded /home/frostschutz path. If you have any doubt about any command, how about starting out with something that is less lethal if it should go wrong? In your case you could start out with renaming instead of deleting it outright.
$ mv "~" delete-me
$ ls delete-me
# if everything is in order
$ rm -R delete-me
For confusing file names that normally shouldn't even exist (such as ~ and other names starting with ~ or - , or containing newlines, etc.), it's better to be safe than sorry. Also consider using tab completion (type ls ~<TAB><TAB><TAB> ); most shells try their best to take care of you, and this also helps avoid mistyping regular filenames.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162496/" ] }
279,652
I want to read the whole file and then have it keep waiting for input, just like tail -f but with the complete file displayed. The length of this file will always change, because it is a .log file. How can I do that if I don't know the length of the file?
tail lets you add -n to specify the number of lines to display from the end, which can be used in conjunction with -f . If the argument for -n starts with + that is the count of lines from the beginning ( 0 and 1 displaying the whole file, 2 indicating skip the first line, as indicated by @Ben). So just do: tail -f -n +0 filename If your log files get rotated, you can add --retry (or combine -f and --retry into -F as @Hagen suggested) Also note that in a graphical terminal, you can use the mouse and PageUp / PageDown to scroll back into the history (assuming your buffer is large enough), this information stays there even if you use Ctrl + C to exit tail . If you use less this is far less convenient and AFAIK you have to use the keyboard for scrolling and I don't know of a means to keep less from deinitialising termcap if you forget to start it with -X .
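If you also want to scroll around while following, less can do both; the file name here is just an example:
less +F /var/log/thefilename.log
Press Ctrl+C to stop following and scroll, F to resume following, and q to quit.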
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167951/" ] }
279,660
select * from emp;
select * from dept;
selection
end;
I want to obtain this output:
select * from emp;
select * from dept;
I tried by using:
awk '/select/{a=1} a; /;/{a=0}' XXARXADLMT.txt
Output:
select * from emp;
select * from dept;
selection
end;
tail lets you add -n to specify the number of lines to display from the end, which can be used in conjunction with -f . If the argument for -n starts with + that is the count of lines from the beginning ( 0 and 1 displaying the whole file, 2 indicating skip the first line, as indicated by @Ben). So just do: tail -f -n +0 filename If your log files get rotated, you can add --retry (or combine -f and --retry into -F as @Hagen suggested) Also note that in a graphical terminal, you can use the mouse and PageUp / PageDown to scroll back into the history (assuming your buffer is large enough), this information stays there even if you use Ctrl + C to exit tail . If you use less this is far less convenient and AFAIK you have to use the keyboard for scrolling and I don't know of a means to keep less from deinitialising termcap if you forget to start it with -X .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167819/" ] }
279,727
Assume I have:
1 - funct1
2 - funct 2
3 - funct 3
4 - line 4
How can I copy lines 1 and 3 (not a range of lines) and paste them, for example, at line 8? If I do this with the | arg, like :1y|3y , I would yank the lines into several registers, right? But how can I put from several registers at once?
You can append to a register instead of erasing it by using the upper-case letter instead of the lower-case one. For example:
:1y a    # copy line 1 into register a (erases it beforehand)
:3y A    # copy line 3 into register a (after its current content)
8G       # go to line 8
"ap      # print register a
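The final two steps can also be written as a single ex command, which pastes register a below line 8:
:8put a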
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/279727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162458/" ] }
279,729
I can do an ls -li to see a file's inode number, but how can I list the information inside a particular inode by using that inode number?
If you have an ext2/3/4 filesystem you can use debugfs for a low-level look at an inode. For example, to play without being root:
$ truncate -s 1M myfile
$ mkfs.ext2 -F myfile
$ debugfs -w myfile
debugfs:  stat <2>
Inode: 2   Type: directory    Mode: 0755   Flags: 0x0
Generation: 0    Version: 0x00000000
User: 0   Group: 0   Size: 1024
File ACL: 0    Directory ACL: 0
Links: 3   Blockcount: 2
Fragment:  Address: 0    Number: 0    Size: 0
ctime: 0x5722081d -- Thu Apr 28 14:54:53 2016
atime: 0x5722081d -- Thu Apr 28 14:54:53 2016
mtime: 0x5722081d -- Thu Apr 28 14:54:53 2016
BLOCKS:
(0):24
TOTAL: 1
The command stat takes an inode number inside <> .
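Going the other direction, from an inode number back to a path on a mounted filesystem, is possible too; for example (staying on one filesystem with -xdev):
sudo find /mountpoint -xdev -inum 2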
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148778/" ] }
279,760
I am using the for loop below, but I get the wrong output when running the script:
for i in `awk -F"|" '{print $1}' $INPUTFILE`, j in `awk -F"|" '{print $2}' $INPUTFILE`
do
echo $i:$j
done
Please help me use multiple variables in a single for loop in a shell script.
Maybe you actually want:
while IFS='|' read -r i j rest <&3; do
  {
    printf '%s\n' "something with $i and $j"
  } 3<&-
done 3< "$INPUTFILE"
But using a shell loop to process text is often the wrong way to go . Here, it sounds like you just need:
awk -F '|' '{print $1 ":" $2}' < "$INPUTFILE"
Now as an answer to the question in the title, for a shell with for loops taking more than one variable, you've got zsh (you seem to already be using zsh syntax by not quoting your variables or not disabling globbing when splitting command substitution):
$ for i j in {1..6}; do echo $i:$j; done
1:2
3:4
5:6
Or the shorter form:
for i j ({1..6}) echo $i:$j
The equivalent with POSIX shells:
set -- 1 2 3 4 5 6
## or:
# IFS='
# '                  # split on newline
# set -o noglob      # disable globbing
# set -- $(awk ...)  # split the output of awk or other command
while [ "$#" -gt 0 ]; do
  printf '%s\n' "$1:$2"
  shift 2
done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/279760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125503/" ] }
279,813
Is there a way to apply the dos2unix command so that it runs against all of the files in a folder and its subfolders? man dos2unix doesn't show any -r or similar option that would make this straightforward.
find /path -type f -print0 | xargs -0 dos2unix --
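If you prefer to let find do the batching itself, an equivalent without xargs is:
find /path -type f -exec dos2unix -- {} +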
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/279813", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168067/" ] }
279,825
I cannot install pepperflashplugin-nonfree on my Ubuntu:
$ sudo apt install pepperflashplugin-nonfree
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  ttf-dejavu ttf-xfree86-nonfree
The following NEW packages will be installed:
  pepperflashplugin-nonfree
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/11,1 kB of archives.
After this operation, 70,7 kB of additional disk space will be used.
Selecting previously unselected package pepperflashplugin-nonfree.
(Reading database ... 603638 files and directories currently installed.)
Preparing to unpack .../pepperflashplugin-nonfree_1.7ubuntu1_amd64.deb ...
Unpacking pepperflashplugin-nonfree (1.7ubuntu1) ...
Setting up pepperflashplugin-nonfree (1.7ubuntu1) ...
ERROR: failed to retrieve status information from google : W: There is no public key available for the following key IDs:
1397BC53640DB551
More information might be available at: http://wiki.debian.org/PepperFlashPlayer
I have added the missing key:
$ sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 1397BC53640DB551
gpg: requesting key 640DB551 from hkp server keyserver.ubuntu.com
gpg: key D38B4796: public key "Google Inc. (Linux Packages Signing Authority) <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
And I still get the same message:
$ sudo dpkg-reconfigure pepperflashplugin-nonfree
ERROR: failed to retrieve status information from google : W: There is no public key available for the following key IDs:
1397BC53640DB551
More information might be available at: http://wiki.debian.org/PepperFlashPlayer
Can you help?
pepperflashplugin-nonfree has its own key stash in /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt . Until the package is updated with the new key, you can add the key locally by executing:
gpg --keyserver pgp.mit.edu --recv-keys 1397BC53640DB551
gpg --export --armor 1397BC53640DB551 | sudo sh -c 'cat >> /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt'
It is important that the new key is appended to the file (">>"); the old key is still needed. After this you can install the pepperflash plugin with:
sudo update-pepperflashplugin-nonfree --install
The file will be overwritten when the package is updated, so you might have to do this again after an update if the maintainer hasn't added the new key (in this case you will get the same error message again when the new version is being installed).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135038/" ] }
279,826
So yesterday I had a GRUB issue that required me to comment out the other two drives in my fstab that get mounted:
1. an SSD that houses my qcow2 VM
2. a 1 TB Seagate
So all I had listed was / and swap. After I got booted into the system, I noticed the SSD, which was /dev/sdd, had changed to /dev/sdc. OK, I thought, whatever; I'll recreate the VM to look at sdc now. Fast forward to 20 minutes ago: my VM stopped booting, and upon further inspection sdc was now back to sdd. What gives? What could cause this, or where can I look to determine the cause?
pepperflashplugin-nonfree has its own key stash in /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt . Until the package is updated with the new key, you can add the key locally by executing gpg --keyserver pgp.mit.edu --recv-keys 1397BC53640DB551gpg --export --armor 1397BC53640DB551 | sudo sh -c 'cat >> /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt' It is important that the new key is appended to the file (">>"), the old key is still needed. After this you can install the pepperflashplugin with sudo update-pepperflashplugin-nonfree --install The file will be overwritten when the package is updated, so you might have to do this again after an update if the maintainer hasn't added the new key (in this case you will get the same error message again when the new version is being installed).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/279826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163143/" ] }