Columns: source_id (int64, 1-74.7M), question (string, 0-40.2k chars), response (string, 0-111k chars), metadata (dict)
284,901
I have to create an archive with the command gzip (not tar - it's necessary) and the archive should contain files from another directory - for example, /etc. I tried to use the command gzip myetc.gz /etc but it didn't work.
gzip works with files, not directories. Therefore to create an archive of a directory you need to use tar (which will create a single tarball out of multiple files): tar cvf myetc.tar /etc or, for a gzipped archive: tar cvzf myetc.tar.gz /etc
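A quick way to sanity-check the result (a hedged sketch; GNU tar strips the leading / from member names, so extracting in a scratch directory recreates etc/ there rather than touching the real /etc):

tar tzf myetc.tar.gz                          # list the archive contents
mkdir /tmp/restore && tar xzf myetc.tar.gz -C /tmp/restore   # extract into a scratch directory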
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/284901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171656/" ] }
284,977
I'm using Exim as an MTA to send emails. Is it possible to get notified if an email goes into the recipient's spam folder?
No, you will get a "delivered" notice or an "error" notice. Once the mail is accepted by the remote end, you don't get to know where it goes after that, at least not on the MTA side of things. One of the errors may be "rejected because of spam" or "rejected because of SPF" or the like, but if your email is accepted, even into the spam folder, you will not get a notice. If the email is rejected by their server, then your recipient will not get the email, even in their spam folder. You may get an error of "Deferred" - that may be because you are suspected of sending spam. The deferred status will tell Exim to try again later, and you may be able to get more information from that message. However, deferred is common, and normal, and not really an issue; using it for spam warnings is very specific to the receiving end, and likely would not tell Exim to do anything but try again later. Some services have "tricks" to see if a mail is marked as spam: combinations of links, images, and maybe even javascript that can tell, in some cases, if you're ending up in the spam folder. But these don't work 100% of the time, and are more on the client (gmail, outlook, etc.) side than the MTA side.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/284977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166738/" ] }
285,005
I have a script running on Fedora 22 where, as part of troubleshooting, I restart the network service by calling service network restart. I want to check if that command is taking too long to execute. If it takes too long, I want to output the message "Error restarting network service"; otherwise I want to continue with the script.
You can use the timeout command to run your command or script with a given time limit. Something similar to this:

timeout 10m command

This waits for the command to finish within 10 minutes; otherwise it kills the command and exits with status 124. Then you can check the exit status of timeout and print the appropriate message based on it. See the timeout manpage for more. If you don't want to kill the long-running command, do something like this:

(( sleep $TIMEOUT; echo "command took so long!" ) & exec $COMMAND )

On timeout this will print the message, but the command continues to execute.
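Applied to the question, a minimal sketch (the 60-second limit is an assumed value):

if timeout 60 service network restart; then
    echo "Network service restarted"   # continue with the rest of the script
else
    echo "Error restarting network service"
fi

Since timeout passes through the command's own exit status and returns 124 when the limit is hit, the else branch also catches an ordinary restart failure; test for $? -eq 124 explicitly if you need to distinguish the two cases.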
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171008/" ] }
285,080
Actually, I installed PostgreSQL 9.4 (with the postGIS extension), and when installed, everything used to work fine. As said in many tutorials, I've set the /data folder, checked configuration files, and so on. I worked on other projects, so I did not use psql for a while, but when the installation was done it worked correctly: I made a test database, initialized the postgres user, etc. Now I try to start psql (with the "default" postgres user) and cannot connect! Starting/stopping/restarting the service does not change anything... Result of the psql command (as the postgres user):

psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

When I check the service status, I get this:

postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled)
Active: active (exited) since tue 2016-05-24 09:24:13 CEST; 3s ago
Process: 5658 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 5658 (code=exited, status=0/SUCCESS)

Starting/stopping/restarting the service with sudo service postgresql start (or restart or stop) does not change the actual system behaviour. The log says:

DETAIL: Permissions should be u=rwx (0700).
FATAL: data directory "/var/lib/postgresql/9.4/main" has group or world access
On a Debian system, postgresql files and directories should be owned by user postgres, in group postgres, with permissions of either 0700 (directories) or 0600 (files). If they're not, you can repair permissions & ownership with:

sudo chown -R postgres:postgres /var/lib/postgresql/9.4/
sudo chmod -R u=rwX,go= /var/lib/postgresql/9.4/

Note the capital X in the chmod command. Unlike a lowercase x, that will set the execute bit only on directories and on files that are already executable (there shouldn't be any of those in the pg directories). Then restart the postgresql service.
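To confirm the repair before restarting (a hedged check; stat -c is a GNU coreutils option):

sudo stat -c '%a %U:%G %n' /var/lib/postgresql/9.4/main
# expect: 700 postgres:postgres /var/lib/postgresql/9.4/main
sudo service postgresql restart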
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170781/" ] }
285,147
I would be happy if the following command produced no output if tail was not running: ps --no-headers $(pidof tail) Instead I get:

  964 pts/2    00:00:01 bash
 4393 pts/2    00:00:00 ps
If your version of ps supports the -C option:

ps --no-headers -C tail

If not, you can run ps only if pidof succeeds:

pid=$(pidof tail) && ps --no-headers ${pid}

or (for Zsh):

pid=$(pidof tail) && ps --no-headers $=pid

(thanks to Gilles!)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
285,148
Note: I am doing this to learn more about Linux. I understand there are other tools that can be used with simpler interfaces. Note: I am using Fedora 23. I have it installed physically on an SSD and my goal is to move it into a virtual machine. Before using tar, I had successfully used rsync: I would rsync my entire file system directly onto a partition formatted with ext4. I would then set up a virtual machine, boot with a liveCD, mount the partition, and rsync the files back. After that, I update my fstab, re-install grub, generate the grub configuration file, and finally generate a new initramfs using dracut. I would like to use tar now so I can store backups of my system in a single, compressed archive file. I use this command to create the tar:

tar -czpf /path/to/backup.tar.gz /path/to/fs/

Note that the path to the FS I am archiving contains the same files that I rsynced to a temp directory. It does not include folders such as proc, dev, sys, etc. I then boot up the VM using a liveCD, mount the partitions, and use this command to extract the tar as the root user:

tar -xzpf /path/to/backup/file -C /path/to/new/partition

When I update the same files as I did with rsync, I reboot and get to the login screen. However, logging in as root (using tty) or as my user does not work. It simply sends me back to the login screen -- I am stuck in a login loop. I found out that when tar extracts files owned by amoghrabi, it will replace the owner with liveuser. This is because tar uses the file system's passwd file to match users; if it cannot find the user name, it will match by UID and GID. As a possible solution, I have tried updating the live system's passwd and group files with the ones from my original filesystem. This does not work. I have read about using the flag --same-owner, but this does not work in my case because the passwd file of the liveCD filesystem does not contain the original users of the system I am restoring. What are my options here to successfully restore my system as I did with rsync? EDIT: To avoid confusion, I am booting from a liveCD and mounting /dev/sda1, the partition I want to restore my backup to. I am un-taring the backup to this partition and then chrooting (while still in the liveCD environment) to modify /etc/fstab, reinstall grub, and re-generate the initramfs file. After the process of un-taring my backup to /dev/sda1 is complete, it appears that amoghrabi, my user in the backup, no longer owns its files (e.g. /home/amoghrabi) and, instead, the files are owned by liveuser. I believe this is because the /etc/passwd file on the liveCD has a user with UID 1000, which is the same UID as amoghrabi. What can I do in this case? I have already tried restoring my /etc/passwd and /etc/group files to the liveCD environment before un-taring my backup. This fixed the issue so that amoghrabi's home directory is owned by amoghrabi, but I still get stuck in a login loop, which I believe is caused by something not having the correct permissions after the un-taring process. My end goal here is to be able to boot into /dev/sda1 and have a working copy of my system, with the use of tar to store my backups.
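One hedged direction (a sketch of the standard knob for exactly this remapping problem, not a guaranteed fix for the login loop): GNU tar's --numeric-owner flag tells tar to ignore the passwd/group files entirely and use the numeric IDs stored in the archive, on both creation and extraction:

tar --numeric-owner -czpf /path/to/backup.tar.gz /path/to/fs/
tar --numeric-owner -xzpf /path/to/backup.tar.gz -C /path/to/new/partition

Since the extraction runs as root, ownership is restored, and UID 1000 stays UID 1000 no matter what the liveCD's passwd file calls that user.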
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171971/" ] }
285,160
Using sed, how do I search for a line ending with foo and then edit the next line if it starts with #bar? Or put another way, I want to remove a comment # from the next line if it starts with #bar and the previous line ends in foo. For example:

This is a line ending in foo
#bar is commented out
There are many lines ending in foo
#bar commented out again

I tried: sed -i 's/^foo\n#bar/foo\nbar/' infile
Use the N;P;D cycle and attempt to substitute each time: sed '$!N;s/\(foo\n\)#\(bar\)/\1\2/;P;D' infile This removes the leading # from #bar only if it follows a line ending in foo; otherwise it just prints the pattern space unmodified. Apparently, you want to uncomment US mirrors in /etc/pacman.d/mirrorlist, which is a whole different thing: sed -e '/United States/,/^$/{//!s/^#//' -e '}' /etc/pacman.d/mirrorlist This will uncomment all mirrors in the US section of /etc/pacman.d/mirrorlist.
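Run against the question's sample input, the first command produces (a worked check of the N;P;D cycle):

$ sed '$!N;s/\(foo\n\)#\(bar\)/\1\2/;P;D' infile
This is a line ending in foo
bar is commented out
There are many lines ending in foo
bar commented out again

P prints only up to the first embedded newline and D restarts the cycle with the remainder of the pattern space, so every line is printed exactly once even though two lines are examined at a time.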
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171849/" ] }
285,169
I'm on OSX. I want to run a python script against all pngs in a particular directory. This is what I've tried: find docs/ -name "*png" | xargs --replace=F python myscript.py "F" But I see this:

xargs: illegal option -- -
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr] [-L number] [-n number [-x]] [-P maxprocs] [-s size] [utility [argument ...]]

What am I doing wrong?
xargs on Mac OS X doesn't support the --replace option; you can use -I instead: find docs/ -name "*png" | xargs -I F python myscript.py "F" The strange error message is produced because this version of xargs interprets characters after a single - as options, so with --replace it's looking for an option named - , which doesn't exist.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118973/" ] }
285,173
I would like to extract from a file the date with format DD.MM.YYYY; the date is always in the first place. Here is an example of the entries:

15.04.2016 13:13:30,228 INFO [wComService] [mukumukuko@system/3] Call created with id: VoiceConnector$mukumukuko@system$D1:1:0:CB:SESSION$D1:1:0:DB:mukumukuko@system$D1:1:0:HB:_TARGET^M
15.04.2016 13:14:10,886 INFO [wComService] Call 5303 from device +41999999999^M
15.04.2016 13:14:20,967 INFO [AddressTranslatorService][mukumukuko@system/3] </convertLocalToGNF>^M
15.04.2016 13:14:20,992 INFO [wComService] [mukumukuko@system/3] Call created with id: VoiceConnector$mukumukuko@system$D1:1:0:MB:SESSION$D1:1:0:NB:mukumukuko@system$D1:1:0:RB:_TARGET^M
15.04.2016 13:15:18,760 INFO [OSMCService] SessionManager Thread - Heartbeat (1 clients connected)^M

This file contains the activity log of 1 week, so the file also contains dates such as 16.04.2016, 17.04.2016, 18.04.2016. The file can also contain these outputs from Java exceptions:

at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanEndElement(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)

I have tried the following: cat fac.log | sed 's/^.*\([0-9]\{2\}.[0-9]\{2\}.[0-9]\{4\}\).*$/\1/' > datesF1 but in datesF1 I get the desired dates mixed with these Java exception messages. What I would like is to generate a file which only lists unique dates, without repeating them; for example datesF1 must be:

15.04.2016
16.04.2016
17.04.2016
18.04.2016

Do you know if that is possible, or if it is better to use the grep command?
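A hedged sketch of one way to do this (anchoring the match to the start of the line keeps the Java stack-trace lines out, and sort -u removes the duplicates):

grep -Eo '^[0-9]{2}\.[0-9]{2}\.[0-9]{4}' fac.log | sort -u > datesF1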
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163270/" ] }
285,195
While using Tmux, I tried to maximise the current pane using <prefix> z, and had a bit of a keyboard fail: I accidentally held down the prefix keys while pressing z, instead of letting go of the prefix combo. The terminal then cleared completely (maximised, clearing all my panes), and while I was able to type into it, I wasn't able to escape or use any shortcut keys to get out of this. Does anyone know what this mode is and how to get out of it? To recreate, hold down your prefix keys and press 'z' (while the prefix keys are still held down).
Ok, so it appears this was an issue with the Gnome Dropdown Terminal extension. Under Settings > Terminal, I had a custom command set up as "tmux". Removing this solved the issue. When opening a dropdown terminal now, I just run "tmux" manually, and I am able to escape the weird suspended mode using "fg" as recommended. Alternatively, this behaviour can be prevented, while keeping tmux as the custom command run on terminal start, by adding unbind C-z to your ~/.tmux.conf. This disables the <prefix>C-z binding so it is not possible to accidentally suspend Tmux.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70247/" ] }
285,208
How do I disable a keybinding if I don't know what it is or what it's triggering? I have my zsh key mode set to vi-mode, through bindkey -v . To do a history search, I press Esc to get to "command mode", and then / to start the search. However, if I press them too fast, it does something else, but I don't know what! I assume Esc-/ is some keybinding, but I don't know what it is. How do I find this and turn it off?
After some searching, I've found the answer. To discover what escape sequence the key combination is triggering, follow this excellent answer:

echo "Ctrl-V Esc /" (i.e. type Ctrl-V, then Esc, then / between the quotes)

Which displays, for me, as: echo "^[/". Ctrl-V forces the following key to display as an escape sequence instead of being interpreted. So now we know we're trying to find what is bound to "^[/". To list all zsh key bindings, simply execute bindkey with no args:

$ bindkey
"^A"-"^C" self-insert
"^D" list-choices
"^E"-"^F" self-insert
"^G" list-expand
"^H" backward-delete-char
...
"^Y"-"^Z" self-insert
"^[" vi-cmd-mode
"^[," _history-complete-newer
"^[/" _history-complete-older ### <--- Here it is.
"^[M" vi-up-line-or-history
"^[OA" vi-up-line-or-history
...
"^\\\\"-"~" self-insert
"^?" backward-delete-char
"\M-^@"-"\M-^?" self-insert

So, having decided that I don't care about _history-complete-older, I'm just going to remove it. I added this to my .zshrc:

# Unbind the escape-/ binding because it gets triggered when I try to do a history search with "/".
bindkey -r "^[/"

If, instead, you just want to rebind it to some other key, you might use:

bindkey -r "^[/"
bindkey "<some-other-key-combo>" _history-complete-older
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43398/" ] }
285,237
I'd like to compress some files for http distribution, but found that .tar.gz keeps the user name and user ID and there doesn't seem to be any way to not do that? (There is a --numeric-owner option for tar which seems to ignore the user name, but still keeps the user ID.) Doesn't that mean that .tar.gz is a poor choice for file distribution as my system probably is the only one with my user ID and my user name? Is .7z a better format for file distribution, or do you have any other recommendation?
Generally .tar.gz is a usable file distribution format. GNU tar allows you not to preserve the owner and permissions:

$ tar -c -f archive.tar --owner=0 --group=0 --no-same-owner --no-same-permissions .

https://www.gnu.org/software/tar/manual/html_section/tar_33.html#SEC69

If your version of tar does not support the GNU options, you can copy your source files to another directory tree and update group and ownership there, prior to creating your tar.gz file for distribution. --owner=0 and --group=0 take effect only when the archive is created and have no effect on extraction; --no-same-owner and --no-same-permissions take effect only on extraction and have no effect when creating. Put together, they make tar behave as if it does not remember which user compressed or decompressed the files. When the files are stored with user and group 0 at creation time, an unprivileged extraction (e.g. via a GUI) assigns them to the user who extracts them, so this is a valid way to forget the original owner at compression time.
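To check what actually got stored (a hedged sketch):

$ tar -tvf archive.tar | head

The owner/group column of each listed entry should show root/root (or 0/0, depending on the tar version) rather than your own account.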
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/285237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73741/" ] }
285,362
Suppose I have the Yes/No construction in a bash script:

read -r -p "Yes or no?" response
if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]
then
    do ...
else
    exit 0
fi

I want this construction to be executed until I press "no". I.e., if I press "Yes", then after the 'do ...' operation finishes, I want to be asked 'Yes or no?' again.
You need to keep asking for a response until it isn't one you want:

while true; do
    read -r -p "Yes or no? " response
    if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]
    then
        echo "You chose yes"
    else
        exit 0
    fi
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128780/" ] }
285,374
Why does apt-get not use 100% of either cpu, disk, or network -- or even close to it? Even on a slow system (Raspberry Pi 2+) I'm getting at most 30% CPU load. I'm just thinking that either it's being artificially throttled, or it should max out something while it's working ... or it should be able to do its thing faster than it does. Edit: I'm just measuring roughly via cpu/disk/net monitors in my panel, and the System Monitor app of Ubuntu MATE. Please explain why I'm wrong. :-) Update: I understand that apt-get needs to fetch its updates (and may be limited by upstream/provider bandwidth). But once it's "unpacking" and so on, the CPU usage should at least go up (if not max out). On my fairly decent home workstation, which uses an SSD for its main drive, and a ramdisk for /tmp, this is not the case. Or maybe I need to take a closer look.
Apps will only max out the CPU if the app is CPU-bound. An app is CPU-bound if it can quickly get all of its data and what it waits on is the processor to process the data. apt-get, on the other hand, is IO-bound. That means it can process its data rather quickly, but loading the data (from disk or from the network) takes time, during which the processor can either do other stuff or sit idle if no other processes need it. Typically, all IO requests (disk, network) are slow, and whenever an application thread makes one, the kernel will remove it from the processor until the data gets loaded into the kernel (such IO requests are called blocking requests).
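You can observe this yourself (a hedged illustration; the numbers below are made up, the pattern is the point):

$ time sudo apt-get update
real    0m42.3s
user    0m2.1s
sys     0m0.8s

When real (wall-clock) time dwarfs user+sys (actual CPU time), the process spent most of its life waiting on IO rather than computing.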
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/285374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78648/" ] }
285,382
I would like to get a quick overview over the different file types in a directory (including all its subdirectories) using the file tool, e.g. telling me which file type is the most common one there. It should be implemented as a practical shell script in common shell languages or scripting tools like bash or awk. Possible nice-to-haves:

good performance
dealing with any file name or type
POSIX compatibility

(last two points are practically mutually exclusive)
Use sort | uniq -c to count identical lines: find "$path" -type f -exec file -b {} + | sort | uniq -c | sort -nr
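Typical output looks something like this (counts and types here are illustrative, not real results):

     42 ASCII text
     17 PNG image data, 600 x 400, 8-bit/color RGBA, non-interlaced
      9 ELF 64-bit LSB executable, x86-64

The leading sort groups identical file(1) descriptions so uniq -c can count them, and the final sort -nr puts the most common type first.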
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117599/" ] }
285,386
I would like to divide the first column based on the delimiter, here -. Based on its last value, the last column (here column 2) should be populated: if the value is 01 or 99, replace the column with 2 or 1 respectively.

#input
PE01-02-01 -9
PE01-02-99 -9
PE01-03-01 -9
PE01-03-99 -9
PE01-05-01 -9
PE01-05-99 -9

#output
PE01-02-01 2
PE01-02-99 1
PE01-03-01 2
PE01-03-99 1
PE01-05-01 2
PE01-05-99 1

Could you please provide a suggestion on how to achieve this? I was trying to break the first column into an array, access the last element and then update the second column.
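A hedged awk sketch of the approach the asker describes (split the first field into an array and test its last element; this assumes the last piece is always 01 or 99, as in the sample):

awk '{ n = split($1, a, "-"); $2 = (a[n] == "01") ? 2 : 1 } 1' input

split() returns the number of elements, so a[n] is the last dash-separated piece; the trailing 1 prints every (possibly modified) line.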
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115822/" ] }
285,461
I thought that I knew how to set up permissions in Linux. I apparently don't. I have a user called "web3". This user was automatically created by ISPConfig (a server management application like cPanel). I also have an application that I installed on the server called "Drush". I installed Drush while logged in as root. This application is located at /root/.composer/vendor/drush/drush/drush. This file and its containing folder have the following permissions:

-rwxr-xr-x 1 root root
drwxr-xr-x 9 root root

Since the file allows read and execute permissions to everyone, how come every time I log in as the "web3" user and try to run the aforementioned application I get the following error message?

/root/.composer/vendor/drush/drush/drush: Permission denied

I have faced this problem before, but I resorted to giving full sudo root permissions to the user I was having problems with. On a local development environment, this is not a big deal. I am managing my own dedicated server now, and this sledgehammer solution will not do. What am I doing wrong? I'd appreciate any help!
/root/ is root's home directory. The permissions on /root/ are hopefully 700, preventing anyone but root from traversing the entire directory tree below it. You're being prevented from running the binary as a non-root user by permissions further up the directory tree. Installing anything into /root/ is unusual; you would normally install executable code to be used by multiple users into /opt/ or another directory. So those are the two main things that are 'wrong'. You need to find a better location to install the code, and to ensure the full path is accessible to the users you want to use it. Lastly, as others have pointed out, while you often need to be root to complete an install, the resulting files should only be owned by root if absolutely necessary. In many cases, specific users are created (such as the www-data user, or an oracle user), which limits exposure if the code is compromised. I don't know your application, but it might be worth either installing it as the web3 user or installing it as root, but changing the permissions later to a non-privileged user created specifically for the task. You should resist the urge to open up the permissions on /root/ to fix the issue, and sudo is a sticking plaster over the problem. The problem is that you should not install executable code into root's home directory.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111183/" ] }
285,472
I can list file names, but a lot of unwanted stuff comes along too:

> echo "ls *.txt" | sftp user@host:/dir
Connected to 1.1.1.1.
Changing to: /dir
sftp> ls *.txt
1.txt
2.txt
3.txt

Is there a proven/reliable way to list files and only files? I'd like to avoid using head-like filters if possible.
Use the -q option to tell sftp to be quiet, thereby suppressing most of the output you don't care about:

echo "ls *.txt" | sftp -q user@host:/path

You will still see the lines for the interactive prompt to which you are echoing, e.g. sftp> ls *.txt, but those can be filtered out with grep -v:

echo "ls *.txt" | sftp -q user@host:/path | grep -v '^sftp>'

As an aside, it's probably better practice to use a batch file, and pass it with the -b parameter to sftp, rather than echoing a pipeline into it. If all you really want to do is get a list of files, this might actually be better served with ssh than with sftp (which, after all, is the secure file transfer program):

ssh user@host ls -1 /path
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58073/" ] }
285,525
I have a file called hostlist.txt that contains text like this:

host1.mydomain.com
host2.mydomain.com
anotherhost
www.mydomain.com
login.mydomain.com
somehost
host3.mydomain.com

I have the following small script:

#!/usr/local/bin/bash
while read host; do
    dig +search @ns1.mydomain.com $host ALL \
        | sed -n '/;; ANSWER SECTION:/{n;p;}';
done <hostlist.txt \
    | gawk '{print $1","$NF}' >fqdn-ip.csv

Which outputs to fqdn-ip.csv:

host1.mydomain.com.,10.0.0.1
host2.mydomain.com.,10.0.0.2
anotherhost.internal.mydomain.com.,10.0.0.11
www.mydomain.com.,10.0.0.10
login.mydomain.com.,10.0.0.12
somehost.internal.mydomain.com.,10.0.0.13
host3.mydomain.com.,10.0.0.3

My question is how do I remove the . just before the comma without invoking sed or gawk again? Is there a step I can perform in the existing sed or gawk calls that will strip the dot? hostlist.txt will contain 1000s of hosts, so I want my script to be fast and efficient.
The sed command, the awk command, and the removal of the trailing period can all be combined into a single awk command:

while read -r host; do dig +search "$host" ALL; done <hostlist.txt | awk 'f{sub(/.$/,"",$1); print $1", "$NF; f=0} /ANSWER SECTION/{f=1}'

Or, as spread out over multiple lines:

while read -r host
do
    dig +search "$host" ALL
done <hostlist.txt | awk 'f{sub(/.$/,"",$1); print $1", "$NF; f=0} /ANSWER SECTION/{f=1}'

Because the awk command follows the done statement, only one awk process is invoked. Although efficiency may not matter here, this is more efficient than creating a new sed or awk process with each loop.

Example

With this test file:

$ cat hostlist.txt
www.google.com
fd-fp3.wg1.b.yahoo.com

The command produces:

$ while read -r host; do dig +search "$host" ALL; done <hostlist.txt | awk 'f{sub(/.$/,"",$1); print $1", "$NF; f=0} /ANSWER SECTION/{f=1}'
www.google.com, 216.58.193.196
fd-fp3.wg1.b.yahoo.com, 206.190.36.45

How it works

awk implicitly reads its input one record (line) at a time. This awk script uses a single variable, f, which signals whether the previous line was an answer section header or not.

f{sub(/.$/,"",$1); print $1", "$NF; f=0}

If the previous line was an answer section header, then f will be true and the commands in curly braces are executed. The first removes the trailing period from the first field. The second prints the first field, followed by ", ", followed by the last field. The third statement resets f to zero (false). In other words, f here functions as a logical condition: the commands in curly braces are executed if f is nonzero (which, in awk, means 'true').

/ANSWER SECTION/{f=1}

If the current line contains the string ANSWER SECTION, then the variable f is set to 1 (true). Here, /ANSWER SECTION/ serves as a logical condition. It evaluates to true if the current line matches the regular expression ANSWER SECTION. If it does, then the command in curly braces is executed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162650/" ] }
285,541
I have destroyed my Mint Linux installation. I just wanted access to my remote storefront. What happened was I was having trouble with the ICEauthority file in my home directory. Following different directions on the internet, I came to the conclusion that I could recursively set the home directory to chmod 755 to allow that file to work... eventually I ran into problems with the system loading. Eventually, by setting the home directory to executable permission for root, I was able to get read/write access... but then I reset my machine. Oh why, oh why did I reset my machine!!! Now the system throws me the same error with ICEauthority, but it never gets me into the OS because the disk is encrypted. Nothing I've tried seems to work and I don't have the original mounting seed. I've also tried sudo ecryptfs-recover-private but my system then just says No such file or directory:

frankenmint@honeybadger /home $ sudo ecryptfs-recover-private
INFO: Searching for encrypted private directories (this might take a while)...
INFO: Found [/home/.ecryptfs/frankenmint/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] y
INFO: Enter your LOGIN passphrase...
Passphrase:
Inserted auth tok with sig [979c6cdf80d2e44d] into the user session keyring
mount: No such file or directory
ERROR: Failed to mount private data at [/tmp/ecryptfs.Hy3BV96c].

I'm really worried because I had important files on there that were stored on a virtual machine... If I could just get to those files then I would have no qualms nuking the setup and starting over.
I found that running sudo bash and then running ecryptfs-recover-private as root (rather than via sudo) worked. Not sure why it should be any different.

Edit: TL;DR:

# ecryptfs-unwrap-passphrase /mnt/crypt/.ecryptfs/user/.ecryptfs/wrapped-passphrase - | ecryptfs-add-passphrase --fnek -
< Type your login password here >
Inserted auth tok with sig [aaaaaaaaaaaaaaaa] into the user session keyring
Inserted auth tok with sig [bbbbbbbbbbbbbbbb] into the user session keyring

You will not see a prompt and must type your login password, blind, into the above command. Replace the aaaaaaaaaaaaaaaa and bbbbbbbbbbbbbbbb below with the hex signatures between brackets from the output above, in order:

# mount -i -t ecryptfs -o ecryptfs_sig=aaaaaaaaaaaaaaaa,ecryptfs_fnek_sig=bbbbbbbbbbbbbbbb,ecryptfs_cipher=aes,ecryptfs_key_bytes=16 /mnt/crypt/.ecryptfs/user/.Private /mnt/plain

Preliminaries

It turns out just running as root did not work reliably for me; sometimes it did, sometimes it didn't. Basically, ecryptfs seems buggy and quite user-unfriendly, often confusing login passwords and mount passphrases. After going down a deep, dark rabbit hole, I have some tips that should help. These notes are for Ubuntu 17.10, ecryptfs-utils 111-0, and you should become root before starting. I assume you want to mount your home directory from /mnt/crypt (which should already be mounted) to /mnt/plain, and you should replace user with the username.

Start Easy

The first thing to try is:

# ecryptfs-recover-private /mnt/crypt/.ecryptfs/user/.Private

If this works, well, you're lucky. If not, it may give an error message from mount about no such file or directory. This is extremely misleading: what it really means is your mount passphrase is wrong or missing.

Get The Signatures

Here is the important part: we need to verify ecryptfs is really trying the right mount passphrase(s). The passphrases must be loaded into the Linux kernel before ecryptfs can mount your filesystem. ecryptfs asks the kernel for them by their signature. The signature is a 16-byte hex value (and is not cryptographically sensitive). You can find the passphrase signatures ecryptfs is expecting:

# cat /mnt/crypt/.ecryptfs/user/.ecryptfs/Private.sig
aaaaaaaaaaaaaaaa
bbbbbbbbbbbbbbbb

Remember these. The goal is to get passphrases with these signatures loaded into the kernel and then tell ecryptfs to use them. The first signature (aaaaaaaaaaaaaaaa) is for the data, and the second (bbbbbbbbbbbbbbbb) is the FileName Encryption Key (FNEK).

Get the mount passphrase

This command will ask you for your login password (with a misleading prompt), and output your mount passphrase:

# ecryptfs-unwrap-passphrase /mnt/crypt/.ecryptfs/user/.ecryptfs/wrapped-passphrase

Copy this, but be careful!! It is extremely cryptographically sensitive - the keys to the kingdom.

Try an interactive mount

The next thing to try is:

# mount -t ecryptfs /mnt/crypt/.ecryptfs/user/.Private /mnt/plain

The crucial thing here is that mount needs your (super-sensitive) mount passphrase that we just copied (not your login password). This will ask you some questions, and you can accept the defaults except say yes to Enable filename encryption. It may give you a warning and ask to cache the signatures; you can say yes to both, but do double-check that you've got the right mount passphrase.
You will see the options that mount has decided to try for you:

Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=bbbbbbbbbbbbbbbb
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=aaaaaaaaaaaaaaaa
Mounted eCryptfs

If the signatures are wrong (don't match what you got from Private.sig), the mount won't work... but it will very unhelpfully report that it did. You will have to do an ls /mnt/plain and cat a file to make sure. At this point you can also look in /var/log/syslog and verify that ecryptfs is looking for the same signatures we are. There are clearly two serious issues with ecryptfs here, and we have to work around them.

Load the keys into the kernel

If the interactive mount didn't help, we have to load the keys into the kernel ourselves and manually specify them in the mount options.

# ecryptfs-add-passphrase --fnek

And paste in your (super-sensitive) mount passphrase copied from above. This should output:

Inserted auth tok with sig [aaaaaaaaaaaaaaaa] into the user session keyring
Inserted auth tok with sig [bbbbbbbbbbbbbbbb] into the user session keyring

Mount manually

Now the passphrases are loaded into the kernel, and we just need to tell mount to use them:

# umount /mnt/plain
# mount -i -t ecryptfs -o ecryptfs_sig=aaaaaaaaaaaaaaaa,ecryptfs_fnek_sig=bbbbbbbbbbbbbbbb,ecryptfs_cipher=aes,ecryptfs_key_bytes=16 /mnt/crypt/.ecryptfs/user/.Private /mnt/plain

You'll notice the options are similar to what the interactive mount printed out, except we're manually telling ecryptfs what's up. Hopefully this works. If not, you can check that the keys are loaded into the kernel with the correct signatures using keyctl list @u, which should print out at least the two signatures you're expecting.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172119/" ] }
285,543
I am currently writing a webcrawl bot. It generates a list of URLs, and I need it to remove duplicates and sort the lines alphabetically. My code looks like this:

#! /bin/bash
URL="google.com"
while [ 1 ]; do
    wget --output-document=dl.html $URL
    links=($(grep -Po '(?<=href=")[^"]*' dl.html))
    printf "%s\n" ${links[@]} >> results.db
    sort results.db | uniq -u
    URL=$(shuf -n 1 results.db)
    echo $URL
done

Specifically the line: sort results.db | uniq -u
POSIX says of uniq -u:

Suppress the writing of lines that are repeated in the input.

which means that any line which is repeated (even the original line) will be filtered out. What you meant was probably (done with POSIX also): sort -u results.db For sort -u, POSIX says:

Unique: suppress all but one in each set of lines having equal keys. If used with the -c option, check that there are no lines with duplicate keys, in addition to checking that the input file is sorted.

In either case, the following line URL=$(shuf -n 1 results.db) probably assumes that the purpose of sort/uniq is to update results.db (it won't). You would have to modify the script a little more for that:

sort -u results.db >results.db2 && mv results.db2 results.db

or (as suggested by @drewbenn) combine it with the previous line. However, since that appends to the file (combining the commands as shown in his answer won't eliminate the duplicates between the latest printf and the file's contents), the separate sort/mv commands look closer to the original script. If you want to ensure that $URL is not empty, that's actually another question, and is done by the [ test, e.g., [ -n "$URL" ] && wget --output-document=dl.html $URL though simply exiting from the loop would be simpler: [ -z "$URL" ] && break
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171926/" ] }
285,575
ls -a (I consider -a an option)
sudo -u username (-u = option, username = arg)
chmod 664 my-dir (664 = option, my-dir = arg)

I can't think of an example that might say "this is a flag" except perhaps when looking at a dir listing:

-r--------. 1 david david 3344 May 19 17:48 611056.pdf

This has the "read flag" set for the owner, but that's all. What's to stop me from calling that a "read option"? I write and edit technical documentation, primarily in DocBook XML, and I'm looking for an explanation of the difference that is as consistent and accurate as possible. However, I'm already seeing a pattern forming:

flags tend to be Booleans, e.g., setenforce 0
options help define how a command should behave. Some may be optional.
arguments tell commands what object to operate on.

I could see myself combining flags and options (some options may have a dozen possible values, but Booleans only have two). Arguments appear sufficiently different to maintain them as such.
A flag is a type of option: an option of boolean type, which is always false by default (e.g. --verbose, --quiet, --all, --long, etc.). An option tells the function how to act (e.g. -a, -l, --verbose, --output, -name, -c, etc.), whilst an argument tells the function what to act on/from (e.g. *, file1, hostname, database).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
285,644
mkdir backupcache
cp -rp .cache backupcache
# or
cp -rp \.cache backupcache

does not work; nothing gets copied and the directory backupcache remains empty.
Don't specify the files or directory. Let's say you created the new folder (or are going to create one) and want to copy the files to it after the folder is created:

mkdir /test/folder
cp -rp /path/to/copy/. /test/folder

This will copy all files/folders recursively from /path/to/copy into the already existing folder created on the first line. Another approach is tar. For example:

$ cd foo
$ tar cf - . | tar -C /path/to/bar -x

Using rsync:

rsync -av src dest
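A note on the rsync form (a hedged aside): a trailing slash on the source changes the meaning. rsync -av .cache/ backupcache/ copies the contents of .cache into backupcache, while rsync -av .cache backupcache creates backupcache/.cache. For the question's goal, the first form matches cp -rp .cache/. backupcache.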
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8032/" ] }
285,687
Currently, when I want to change owner/group recursively, I do this:

find . -type f -exec chown <owner>.<group> {} \;
find . -type d -exec chown <owner>.<group> {} \;

But that can take several minutes for each command. I heard that there was a way to do this so that it changes all the files at once (much faster), instead of one at a time, but I can't seem to find the info. Can that be done?
Use chown 's recursive option: chown -R owner:group * .[^.]* Specifying both * and .[^.]* will match all the files and directories that find would. The recommended separator nowadays is : instead of . . (As pointed out by justins , using .* is unsafe since it can be expanded to include . and .. , resulting in chown changing the ownership of the parent directory and all its subdirectories.) If you want to change the current directory's ownership too, this can be simplified to chown -R owner:group .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/285687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172230/" ] }
285,690
When running a SLURM job using sbatch, slurm produces a standard output file which looks like slurm-102432.out (slurm-jobid.out). I would like to customise this to (yyyymmddhhmmss-jobid-jobname.txt). How do I go about doing this? Or more generally, how do I include computed variables in the sbatch argument -o? I have tried the following in my script.sh:

#SBATCH -p core
#SBATCH -n 6
#SBATCH -t 1:00:00
#SBATCH -J indexing
#SBATCH -o "/home/user/slurm/$(date +%Y%m%d%H%M%S)-$(SLURM_JOB_ID)-indexing.txt"

but that did not work. The location of the file was correct in the new directory, but the filename was just the literal string $(date +%Y%m%d%H%M%S)-$(SLURM_JOB_ID)-indexing.txt. So, I am looking for a way to save the standard output file in the directory /home/user/slurm/ with a filename like: 20160526093322-10453-indexing.txt
Here is my takeaway from previous answers:

%j gives the job id
%x gives the job name

I don't know how to get the date in the desired format; the job id serves as a unique identifier across runs, and the file's modified date captures the date for later analysis. My SBATCH magic looks like this:

#SBATCH --output=R-%x.%j.out
#SBATCH --error=R-%x.%j.err

I prefer adding R- as a prefix; that way I can easily move or remove all R-* files.
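If the submission timestamp is close enough to what you want, one hedged workaround is to pass -o on the command line instead of inside the script; the shell expands $(date ...) at submission time and Slurm's own %j pattern fills in the job id:

sbatch -o "/home/user/slurm/$(date +%Y%m%d%H%M%S)-%j-indexing.txt" script.sh

This sketch assumes your Slurm version supports the %j filename pattern; #SBATCH lines are never run through the shell, which is why the $() forms in the question came out as literal text.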
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/285690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165231/" ] }
285,691
At work we use sparse files as part of our Oracle VM environment for the guest disk images. After some questions from a colleague (which have since been answered) I am left with more questions about sparse files, and perhaps more widely about inode structure - reading the man pages of stat(2) and statfs(2) (on FreeBSD) I get the impression that I'd understand more readily if I knew more C, but alas my knowledge of C is minimal at best... I understand that some of this is dependent on file system type. I'm mostly interested in UFS on FreeBSD/Solaris and ext4 - ZFS would be a plus but I'm not going to hold out hope :) I am using Solaris 10, FreeBSD 10.3, and CentOS 6.7 regularly. The commands here are being run on a CentOS 6.7 VM, but have been cross-referenced with FreeBSD. If possible, I'm interested in gaining an understanding from a POSIX viewpoint, favouring FreeBSD over Linux if that isn't possible. Consider the following set of commands:

printf "BIL" > /tmp/BIL
dd of=/tmp/sparse bs=1 count=0 seek=10
dd if=/tmp/BIL of=/tmp/sparse bs=1 count=3 seek=10
dd if=/tmp/BIL of=/tmp/sparse bs=1 count=3 seek=17
dd of=/tmp/sparse bs=1 count=0 seek=30
dd if=/tmp/BIL of=/tmp/sparse bs=1 count=3 seek=30

The file /tmp/BIL should have the contents (in hex) of 4942 004c, so when I hexdump the file /tmp/sparse I should see a smattering of this combination throughout:

%>hexdump sparse
0000000 0000 4942 004c 0000 0000 4942 004c 0000
0000010 4200 4c49 0000 0000 0000 0000 0000 4942
0000020 004c
0000021
%>cat sparse
BILBILBILBIL%

1. Why does the second occurrence of "BIL" appear out of order? i.e. 4200 4c49 rather than 4942 004c? This was written by the third dd command.
2. How do cat and other tools know to print in the correct order?

Using ls we can see the space allegedly used and the blocks allocated:

%>ls -ls /tmp/sparse
8.0K -rw-r--r--. 1 bil bil 33 May 26 14:17 /tmp/sparse

We can see that the alleged size is 33 bytes, but the allocated size is 8 kilobytes (the file system block size is 4K).

3. How do programs like ls discern between the "alleged" size and the allocated size? I wondered if the "alleged" figure is stored in the inode while the allocated size is calculated by walking the direct and indirect blocks - though this cannot be correct, since calculation via walking would take time and tools such as ls return quickly, even for very large files.
4. What tools can I use to interrogate inode information? I know of stat, but it doesn't seem to print out the values of all of the fields in an inode...
5. Is there a tool where I can walk the direct and indirect blocks? It would be interesting to see each address on disk, and the contents, to gain a bit more understanding of how data is stored.

If I run the following command after the others above, the file /tmp/sparse is truncated:

%>dd of=/tmp/sparse bs=1 count=0 seek=5
%>hexdump sparse
0000000 0000 4942 004c
0000005

6. Why does dd truncate my file, and can dd or another tool write into the middle of a file?

Lastly, sparse files seem like a Good Idea for preallocating space, but there doesn't appear to be any file system or operating system level assurance that a command won't truncate or arbitrarily grow the file.

7. Are there mechanisms to prevent sparse files being shrunk/grown? And if not, why are sparse files useful?

While each question above could possibly be a separate SO question, I cannot dissect them, as they are all related to the underlying understanding.
Some quick answers: first, you didn't create a sparse file. Try these extra commands:

dd if=/tmp/BIL of=/tmp/sparse seek=1000
ls -ls /tmp/sparse

You will see the size is 512003 bytes, but it only takes 8 blocks. The null bytes have to occupy a whole block, and be on a block boundary, for them to be possibly sparse in the filesystem.

Why does the second occurrence of "BIL" appear out of order? Because you are on a little-endian system and you are writing output in shorts. Use bytes, like cat does.

How do cat and other tools know to print in the correct order? They work on bytes.

How do programs like ls discern between the "alleged" size and the allocated size? ls and so on use the stat(2) system call, which returns 2 values:

st_size;               /* total size, in bytes */
blkcnt_t st_blocks;    /* number of 512B blocks allocated */

What tools can I use to interrogate inode information? stat is good.

Is there a tool where I can walk the direct and indirect blocks? On ext2/3/4 you can use hdparm --fibmap with the filename:

$ sudo hdparm --fibmap ~/sparse
 filesystem blocksize 4096, begins at LBA 25167872; assuming 512 byte sectors.
 byte_offset  begin_LBA  end_LBA  sectors
      512000  226080744  226080751  8

You can also use debugfs:

$ sudo debugfs /dev/sda3
debugfs: stat <1040667>
Inode: 1040667   Type: regular   Mode: 0644   Flags: 0x0
Generation: 1161905167   Version: 0x00000000
User: 127   Group: 500   Size: 335360
File ACL: 0   Directory ACL: 0
Links: 1   Blockcount: 664
Fragment: Address: 0   Number: 0   Size: 0
ctime: 0x4dd61e6c -- Fri May 20 09:55:24 2011
atime: 0x4dd61e29 -- Fri May 20 09:54:17 2011
mtime: 0x4dd61e6c -- Fri May 20 09:55:24 2011
Size of extra inode fields: 4
BLOCKS:
(0-11):4182714-4182725, (IND):4182726, (12-81):4182727-4182796
TOTAL: 83

Why does dd truncate my file, and can dd or another tool write into the middle of a file? Yes, dd can write into the middle. Add conv=notrunc.

Are there mechanisms to prevent sparse files being shrunk/grown? And if not, why are sparse files useful? No. Because they take less space. The sparse aspect of a file should be totally transparent to a program, which sometimes means the sparseness may be lost when the program updates a file. Some copying utilities have options to preserve sparseness, e.g. tar --sparse, rsync --sparse. Note, you can explicitly convert the suitably aligned zero blocks in a file to sparseness by using cp --sparse=always, and the reverse, converting sparse space into real zeros, with cp --sparse=never.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67170/" ] }
285,708
I want to add a few directories to the skeleton directory. When I add new user I want to add my own directories to the new home directories.
As thrig pointed out, all that's needed is to create the directory structure that you want under /etc/skel. Quoting from the useradd man page:

-k, --skel SKEL_DIR
    The skeleton directory, which contains files and directories to be copied in the user's home directory, when the home directory is created by useradd. This option is only valid if the -m (or --create-home) option is specified. If this option is not set, the skeleton directory is defined by the SKEL variable in /etc/default/useradd or, by default, /etc/skel.

... and the default SKEL variable in /etc/default/useradd is /etc/skel.
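A minimal sketch (the directory names and the user are illustrative):

sudo mkdir -p /etc/skel/projects /etc/skel/bin
sudo useradd -m alice        # hypothetical new user
ls ~alice                    # shows projects/ and bin/

Every user created with -m from now on starts with copies of whatever lives under /etc/skel; existing users are not affected.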
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285708", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172245/" ] }
285,735
I am running Ubuntu Xenial 16.04. We are using OpenVPN to connect to a virtual private cloud. That cloud has its own DNS server (as does our local router - home or office). When I connect to the VPN, all of the IPs in that network are available, but I can't reach any by host name. The reason is simple: the resolv.conf file still shows my local office nameserver. If I manually overwrite resolv.conf to have the correct name server, all is good. So, how can I get it to automatically reconfigure resolv.conf upon connecting to the VPN? Can I hook into a system event and execute a script?
The OpenVPN package has a script for this in /etc/openvpn/update-resolv-conf. You need to configure it with:

script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

This will fetch the DNS server addresses from the dhcp-option DNS options passed by the OpenVPN peer/server and configure resolvconf accordingly. It handles dhcp-option DOMAIN as well. It is not perfect, however, because this will prepend those name servers to the list of existing name servers instead of overwriting the list. If you are using openresolv, the -x option can be used to overwrite the DNS configuration instead of prepending to it. If you're using systemd-resolved, you can use /etc/openvpn/update-systemd-resolved, which hooks into systemd-resolved instead of resolvconf:

script-security 2
up /etc/openvpn/update-systemd-resolved
down /etc/openvpn/update-systemd-resolved
down-pre

On Debian, this script is in the openvpn-systemd-resolved package.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90040/" ] }
285,774
My text file has no delimiter to specify a separator, just spaces. How do I cut out column 2 to an output file?

39 207 City and County of San Francisc REJECTED MAT = 0
78 412 Cases and materials on corporat REJECTED MAT = 0
82 431 The preparation of contracts an REJECTED MAT = 0

So the output I need is:

207
412
431
It is easiest with awk, which treats multiple consecutive spaces as a single one, so awk '{print $2}' file prints:

207
412
431

But obviously there are many, many other tools which will do the job, even some which were not designed for such tasks, like (GNU) grep: grep -Po '^[^ ]+[ ]+\K[^ ]+' file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166592/" ] }
285,777
I'm looking to create a terminal-based environment to adapt my Bash script into. I want it to look like this:
dialog --backtitle "Package configuration" \
       --title "Configuration sun-java-jre" \
       --yesno "\nBla bla bla...\n\nDo you accept?" 10 30

The user response is stored in the exit code, so it can be printed as usual: echo $? (note that 0 means "yes" and 1 means "no" in the shell world). Concerning other questions from the comment section: to put the output of some command into the dialog box, just use the command substitution mechanism $(), e.g.:

dialog --backtitle "$(echo abc)" --title "$(cat file)" ...

To give the user multiple choices, you can use the --menu option instead of --yesno. To store the output of the user's choice into a variable, one needs to use the --stdout option or change the output descriptor, either via --output-fd or manually, e.g.:

output=$(dialog --backtitle "Package configuration" \
       --title "Configuration sun-java-jre" \
       --menu "$(parted -l)" 15 40 4 1 "sda1" 2 "sda2" 3 "sda3" \
       3>&1 1>&2 2>&3 3>&-)
echo "$output"

This trick is needed because dialog by default outputs to stderr, not stdout. And as always, man dialog is your friend.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/285777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171758/" ] }
285,784
From this blog: Intermediate CAs are certificates signed by a root CA that can sign arbitrary certificates for any website. They are just as powerful as root CAs, but there's no full list of the ones your system trusts, because root CAs can make new ones at will, and your system will trust them at first sight. There are THOUSANDS logged in CT. This month an interesting one popped up, generated apparently in September 2015: "Blue Coat Public Services Intermediate CA", signed by Symantec. (No certificates signed by this CA have reached the CT logs or Censys so far.) I thought it would be a good occasion to write up how to explicitly untrust an intermediate CA that would otherwise be trusted in OS X. It won't stop the root CA from handing a new intermediate to the same organization, but better than nothing. When I tried the steps from the blog on Ubuntu, I downloaded this certificate: https://crt.sh/?id=19538258. When I open the .crt it imports into the Gnome Keyring, but I couldn't find a way to "untrust" the certificate after importing it.
Just to make things difficult, Linux has more than one library for working with certificates. If you're using Mozilla's NSS, you can Actively Distrust (their terminology) a certificate using certutil's -t trustargs option:

$ certutil -d <path to directory containing database> -M -t p -n "Blue Coat Public Services Intermediate CA"

For Firefox, <path to directory containing database> is usually ~/.mozilla/firefox/<???>.profile where <???> are some random-looking characters. (certutil is, e.g., in Ubuntu's libnss3-tools package.) The breakdown is as follows:

-M to modify the database
-t p to set the trust to Prohibited
-n to carry out the operation on the named certificate

Even within NSS, not all applications share the same database, so you may have to repeat this process. For example, to do the same for Chrome, change the -d <path> to -d sql:.pki/nssdb/:

$ certutil -d sql:.pki/nssdb/ -M -t p -n "Blue Coat Public Services Intermediate CA"

However, not all applications use NSS, so this isn't a complete solution. For example, I don't believe it's possible to do this with the OpenSSL library. As a consequence, any application that uses OpenSSL to provide its certificate chain building (TLS, IPSec, etc.) would trust a chain with a Blue Coat certificate, and there is nothing that you can do about it short of removing the Root CA that signed it from your trust anchor store (which would be silly considering it's a Symantec Root CA; you'd end up distrusting half the Internet), whereas applications that rely on NSS can be configured more granularly to distrust any chain that has the Blue Coat certificate within it. For example, I believe OpenVPN uses OpenSSL as its library for certificates; therefore big brother could be listening to your OpenVPN traffic without your knowledge if you are connecting to a commercial VPN provider which uses OpenVPN. If you are really concerned about that, then check who your commercial VPN provider's Root CA is - if it's Symantec/Verisign, then maybe it's time to ditch them for someone else? Note that SSH doesn't use X509 certificates, therefore you can connect and tunnel using SSH without worrying about Blue Coat MITM attacks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86768/" ] }
285,886
I have a json file from where I want to get the value of two items which are in an array using grep . Let's say I'm trying to get data from values foo and bar , the problem is that bar also matches megabar so I have to grep it with option -w . The problem comes when trying to find them at the same time, as I have to grep for one value OR the other. From this question, I managed to get a bit closer to what I'm looking for but my result was something like: grep -E '(foo|-w bar)' file "foo": "59766", "foo": "59770", "foo": "59774", "foo": "59778", "foo": "59782", "foo": "59786", Any idea on how to do it?
The word boundary has a similar effect to -w , but can be used as part of the expression.

‘\b’ Match the empty string at the edge of a word. [...]
‘\<’ Match the empty string at the beginning of word.
‘\>’ Match the empty string at the end of word.

To match bar only when it's the whole word, but foo anywhere (including inside longer words):

grep -E 'foo|\<bar\>'
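A quick demonstration against some made-up sample lines; megabar is skipped, while bar and anything containing foo match:

$ printf 'foo 1\nmegabar 2\nbar 3\nfoobar 4\n' | grep -E 'foo|\<bar\>'
foo 1
bar 3
foobar 4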
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/285886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165802/" ] }
285,916
Obviously I know about pwd and readlink , but is there a command to find out the real absolute path to the current directory (ie, resolving links and dots)?
pwd -P (in any POSIX shell) is the command you're looking for. -P is for physical (as opposed to logical ( -L , the default) where pwd mostly dumps the content of $PWD (which the shell maintains based on the arguments you give to cd or pushd )).

$ ln -s . /tmp/here
$ cd /tmp/here/here
$ cd ../here/here
$ pwd
/tmp/here/here/here
$ pwd -P
/tmp
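On GNU systems, readlink gives the same resolved path without relying on the shell's notion of $PWD; a sketch, continuing from the directory above:

$ readlink -f .
/tmp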
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45354/" ] }
285,924
Suppose I want to compare the gcc version to see whether the system has the minimum version installed or not. To check the gcc version, I executed the following:

gcc --version | head -n1 | cut -d" " -f4

The output was

4.8.5

So, I wrote a simple if statement to check this version against some other value:

if [ "$(gcc --version | head -n1 | cut -d" " -f4)" -lt 5.0.0 ]; then
    echo "Less than 5.0.0"
else
    echo "Greater than 5.0.0"
fi

But it throws an error:

[: integer expression expected: 4.8.5

I understood my mistake: I was comparing strings, and -lt requires integers. So, is there any other way to compare the versions?
I don't know if it is beautiful, but it is working for every version format I know.

#!/bin/bash
currentver="$(gcc -dumpversion)"
requiredver="5.0.0"
if [ "$(printf '%s\n' "$requiredver" "$currentver" | sort -V | head -n1)" = "$requiredver" ]; then
    echo "Greater than or equal to ${requiredver}"
else
    echo "Less than ${requiredver}"
fi

( Note: better version by the user 'wildcard': https://unix.stackexchange.com/users/135943/wildcard , removed additional condition)
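The same sort -V comparison wrapped in a reusable helper (the function name is my own):

# succeeds when $1 >= $2 in version order
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "$(gcc -dumpversion)" "5.0.0"; then
    echo "gcc is new enough"
fi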
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/285924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171377/" ] }
285,962
Program snippet:

BASH_MIN_REQ="2.05"
BINUTILS_MIN_REQ="2.12"
BISON_MIN_REQ="1.875"
BASH_CURR=$(bash --version | head -n1 | cut -d"(" -f1 | cut -d" " -f4)
BINUTILS_CURR=$(ld --version | head -n1 | cut -d" " -f7)
BISON_CURR=$(bison --version | head -n1 | cut -d" " -f4)
list=(BASH BINUTILS BISON)
for progs in ${list[@]}; do
    echo "$progs: ${${progs}_MIN_REQ}:${${progs}_CURR}"
done

Expected Output:

BASH: 2.05:4.3.11
BINUTILS: 2.12:2.24
BISON: 1.875:3.0.2

Notice the variables initialized with the values. I want to substitute ${progs}_MIN_REQ with $BASH_MIN_REQ and then again with the value it was initialized with, that is 2.05. And do this inside the for loop so that it will be easier for me to write the code, as I'll have to write only 1 echo statement instead of 3.

Actual output:

bad substitution

I know what I have written in echo is wrong. But is there a way to double substitute the variables? Otherwise I'll have to write lots of echo statements.
You can do this with indirection

for progs in ${list[@]}; do
    a="${progs}_MIN_REQ"
    b="${progs}_CURR"
    echo "$progs: ${!a}:${!b}"
done
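On bash 4.3 or newer, namerefs are an alternative to ${!var} indirection; a sketch under that assumption:

for progs in "${list[@]}"; do
    declare -n min="${progs}_MIN_REQ" curr="${progs}_CURR"
    echo "$progs: ${min}:${curr}"
    unset -n min curr   # re-point cleanly on the next iteration
done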
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/285962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171377/" ] }
286,070
If I'm executing a long process, is there any way I can execute some time-based commands? For example, I'm running a really long process which runs for roughly 10 minutes. After 5 minutes, I would like to run a separate command. For illustration, the separate command could be: echo 5 minutes complete (Note: I don't want progress toward completion of the command, but simply commands executed after specified intervals.) Is it possible?
Just run:

long-command & sleep 300; do-this-after-five-minutes

The do-this-after-five-minutes will get run after five minutes. The long-command will be running in the background.
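If you also want the script to block until the long command itself finishes, a variant sketch (the command names are placeholders):

long-command &
pid=$!
sleep 300 && echo "5 minutes complete" &   # fires even if long-command ends early
wait "$pid"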
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/286070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156333/" ] }
286,079
I have two SSDs, one for my Windows 7, and one for testing and tweaking (like installing Linux or other operating systems...). I've already installed Windows 7 on the first SSD, and I tried to install the Debian distribution on the second SSD; the installation was successful. But the problem is that I installed something called a boot loader onto the first SSD (where Windows 7 is installed), so if I boot from the second SSD, nothing happens, and if I boot from the first SSD, the Debian boot loader runs automatically. What fundamentally annoys me is that there is no option for Windows, just for Debian. (I thought I could use both choices if I installed the boot loader on either SSD.) If I open the SSD which Windows is installed on, I can see these files and folders:

[Folder] Boot
[Folder] Documents and Settings
[Folder] Intel
[Folder] Perflogs
[Folder] ProgramData
[Folder] Program files
[Folder] Program files (x86)
[Folder] Recovery
[Folder] $Recycle.Bin
[Folder] System Volume Information
[Folder] Users
[Folder] Windows
[File] bootmgr
[File] BOOTSECT.BAK
[File] hiberfil.sys
[File] pagefile.sys

I think all those folders & files are related to the Windows OS, but actually I don't know. (I just googled it.) Can I change the directory of the boot loader or delete it? What can I do?
It depends on which boot loader was installed. If it's a standard Debian install it should be GRUB2. Boot the computer with all disks containing bootable installations attached and powered. You need to open the Root Terminal application to get a terminal as root, then enter these commands:

apt-get update
apt-get install os-prober

If the os-prober package is already installed, apt will let you know, without making any changes to the system. Then edit /etc/default/grub and make sure you have a line like

GRUB_DISABLE_OS_PROBER=false

You can edit the file using a GUI text editor like Gedit, or a CLI editor such as Vim or Nano.

Using Gedit:

gksu gedit /etc/default/grub

You need to close gedit to be able to use the terminal again.

Using Nano:

nano /etc/default/grub

I don't recommend using vim if you're a beginner; it takes some time to get used to its operation modes and interface. Once you're done with editing the file, if necessary, enter this command:

update-grub

Note: You can skip the file editing process on your first try, but if that doesn't work you'll need to do it, then retry the update-grub command.
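To confirm that GRUB actually picked up the Windows installation, a quick check against the generated config (the path assumes the standard Debian layout):

grep -i 'menuentry .*Windows' /boot/grub/grub.cfg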
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172510/" ] }
286,103
I know that I can change the progress of a whiptail --gauge using something like:

{
    for ((i = 0 ; i <= 100 ; i+=20)); do
        sleep 1
        echo $i
    done
} | whiptail --gauge "Please wait while installing" 6 60 0

But I am wondering whether it is possible to edit/modify the text of the whiptail box (so change the Please wait while installing text to something else). My current solution is to bring up a new whiptail box, but there is a noticeable flicker between the old one closing and the new one opening. If you can't update the text of a whiptail box, is it possible to reduce/remove this flicker instead?
Try this:

#!/bin/bash

msgs=( "Downloading" "Verifying" "Unpacking" "Almost Done" "Done" )

for i in {1..5}; do
    sleep 1
    # an update block of the form "XXX\n<percent>\n<new text>\nXXX"
    # replaces both the gauge percentage and the box text
    echo XXX
    echo $(( i * 20 ))
    echo "${msgs[i-1]}"
    echo XXX
done | whiptail --gauge "Please wait while installing" 6 60 0
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94077/" ] }
286,119
How can I delete all fail2ban bans in Ubuntu? I tried everything, but I don't get it. I just want to delete all bans - but I don't know any IP addresses.
Updated answer

As of version 0.10.0 fail2ban-client features the unban command that can be used in two ways:

unban --all          unbans all IP addresses (in all jails and database)
unban <IP> ... <IP>  unbans <IP> (in all jails and database)

Moreover, the restart <JAIL> , reload <JAIL> and reload commands now also have the --unban option.

Old Answer

fail2ban uses iptables to block traffic. If you would want to see the IP addresses that are currently blocked, type iptables -L -n and look for the various chains named fail2ban-something , where something points to the fail2ban jail (for instance, Chain f2b-sshd refers to the jail sshd ). If you only want to remove the block for a single IP address <IP> for a given jail <JAIL> , fail2ban offers its own client:

fail2ban-client set <JAIL> unbanip <IP>

Alternatively you can use line numbers. First, list the iptables rules with line numbers:

iptables -L -n --line-numbers

Next you can use iptables -D fail2ban-somejail <linenumber> to remove a single line from the table. As far as I know there is no option to select a range of line numbers, so I guess you would have to wrap this command in a for loop:

for lin in {200..1}; do
    iptables -D fail2ban-somejail $lin
done

Here I made the number 200 up. Check your own output of the command with --line-numbers and note that the last line (with RETURN ) should stay. See @roaima's comment below for the reasoning behind counting down.
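To discover the jail names to plug into these commands, fail2ban-client can list them:

fail2ban-client status         # lists all jails
fail2ban-client status sshd    # shows the banned IPs for one jail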
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/286119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172531/" ] }
286,147
I recently got a new cable modem and my internet connection is no longer working. I suspect I need to renew my IP address. How can I renew my address?

The command ifconfig returns:

bash: ifconfig: command not found

The command dhclient returns:

bash: dhclient: command not found

I am using Debian 7 (Wheezy).

------- ANSWER

It appears the problem was that I was trying to mess with the network using my user account, which I did not realize was not a good idea. I must have been logged in as root when I did it before. The easy solution is:

sudo /sbin/dhclient eth0

This command reset my connection and my Debian system has network connectivity again. The key thing, as the answer below noted, is that user accounts do not normally have /sbin in the path, so you have to give the explicit path to dhclient if you are using it via a user account.
If it's installed, dhclient would be in /sbin , which normally is not in your user path. If you do sudo su - then your path would have that directory:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

However, ifconfig also lives in that directory, so it seems likely you do not have that installed. The package (if you have a CD for installing...) is isc-dhcp-client
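A quick way to check from an unprivileged shell whether this is a PATH problem rather than a missing package:

ls -l /sbin/dhclient           # present, but simply not in $PATH?
echo "$PATH"                   # user PATHs usually lack /sbin
sudo /sbin/dhclient -v eth0    # run it via an explicit path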
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
286,164
So I had been considering that a partition is like a separate section of space. Recently, I decided to experiment with partitions and found a flaw in my understanding. Some examples refer to a case where one should make 3 partitions:

/ = root, 32GiB
/boot = boot, 1GiB
/home = home, 100% = 200GiB

Now it gets somewhat confusing to me - since I suppose that / is the main container and other containers are children of it, why can child containers like /home (200GiB) actually exceed the limits of /, which has only 32GiB?
You are confusing filesystem (organization) semantics with partition (storage) semantics. The Linux filesystem hierarchy is like a single giant tree with a stem (/), branches (/boot, /home, /bin, /usr, /var) and sub-branches (/usr/bin, /var/log ...). This metaphor is the equivalent of parents, children and grandchildren. All these symbols/names in the filesystem represent points on the tree where storage space, like a partition, USB stick, external drive etc., can be hung ("mounted"). If you hang/mount some storage space only onto the stem of the tree (/), then all the branches and sub-branches of the stem (/boot, /home, /usr/bin) have to be contained within that storage space. However if, after mounting the first storage space onto the stem (/), you then proceed to mount some additional storage space (e.g. another partition) onto one of the branches (e.g. /home), then this second mounted storage is added to the total storage under the filesystem, but can only be accessed through its mountpoint (e.g. /home) on the filesystem. This second storage mounted on /home is in ADDITION to that mounted on the (/). All other branches of the / (like /boot, /usr, /var etc.) will still have to be contained within the first mounted storage! So / , /boot, /home, and others are simply access points on the filesystem. When you mount some storage onto any of those points (e.g. /), all the children and grandchildren of that point are automatically contained within this storage space UNTIL you mount some additional storage on one of its children or grandchildren.
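In /etc/fstab terms, the three-partition layout from the question could look like this (device names are assumed); df -h / /boot /home would then show each mount point backed by its own device and size:

/dev/sda1  /boot  ext4  defaults  0 2
/dev/sda2  /      ext4  defaults  0 1
/dev/sda3  /home  ext4  defaults  0 2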
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100597/" ] }
286,169
I have a FreeBSD 10.3 box with Avahi 0.6.31 which is visible to the other machines on my network, but which is itself unable to resolve any names in the .local domain. That is to say, all the other machines show up in avahi-browse and avahi-resolve-host-name , but getent hosts <hostname> returns nothing. I have two other boxen on the same network: one Ubuntu 14.04 with Avahi 0.6.31, and one OSX 10.4 with mDNSResponder, both of which can resolve the FreeBSD box. Both Avahi machines have identical avahi-daemon.conf files, and each machine's nsswitch.conf contains the line hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 What have I missed?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140996/" ] }
286,171
I've been following a tutorial to try and learn docker; however, for some reason I can't seem to access any of the ports it's meant to be opening. For some reason it just times out. Where can I look to get more details on why I'm not able to access this, or otherwise what can I do to access the docker container? Steps done so far:

$ docker run hello-world # works fine
$ docker run -d -P --name static-site prakhar1989/static-site # works and returns a docker container id
$ docker port static-site
443/tcp -> 0.0.0.0:32768
80/tcp -> 0.0.0.0:32769

Then I access one of the mapped ports through http://localhost:32768/ but get nothing. After reinstalling it doesn't even time out anymore and just says the site can't be reached. Also, I tried accessing the container directly on those ports but without any success.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53489/" ] }
286,209
In shell script we can substitute expr $a + $b with $(($a+$b)). But why not just with (($a+$b)), since every resource says that (()) is for integer computation? So do we use $(()) when there are variables instead of literal integer values? And what should we use instead of $(()) when variables can hold float values?
For arithmetic, expr is archaic. Don't use it.*

$((...)) and ((...)) are very similar. Both do only integer calculations. The difference is that $((...)) returns the result of the calculation and ((...)) does not. Thus $((...)) is useful in echo statements:

$ a=2; b=3; echo $((a*b))
6

((...)) is useful when you want to assign a variable or set an exit code:

$ a=3; b=3; ((a==b)) && echo yes
yes

If you want floating point calculations, use bc or awk :

$ echo '4.7/3.14' | bc -l
1.49681528662420382165
$ awk 'BEGIN{print 4.7/3.14}'
1.49682

*As an aside, expr remains useful for string handling when globs are not good enough and a POSIX method is needed to handle regular expressions.
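For the footnote's point about string handling, a small sketch of expr's POSIX regex matching (patterns are anchored basic regular expressions):

$ expr "abc123" : '[a-z]*'        # length of the leading match
3
$ expr "abc123" : '\([a-z]*\)'    # the matched text itself
abc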
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/286209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172600/" ] }
286,221
I need to use the index i in a loop such as for i in {0..10..5} in a floating-point calculation. What is the simplest way to convert i to floating point? Variations on i=$(bc -l <<< "scale=7;$i") do not work.
Use bc or awk or a shell with floating point support like ksh93 , zsh or yash instead of bash .

bc

$ bc -l << \EOF
heredoc> for (i = 0; i <= 10; i += 2.5) i / 2
heredoc> EOF
0
1.25000000000000000000
2.50000000000000000000
3.75000000000000000000
5.00000000000000000000

awk

$ awk 'BEGIN {for (i = 0; i <= 10; i += 2.5) print i / 2}'
0
1.25
2.5
3.75
5

zsh

$ float i; for ((i = 0; i <= 10; i += 2.5)) print $((i/2))
0.
1.25
2.5
3.75
5.

ksh93

$ for ((i = 0; i <= 10; i += 2.5)); do print "$((i/2))"; done
0
1.25
2.5
3.75
5

(beware the decimal separator is locale dependent ( . or , )).

yash

i=0
while [ "$((i <= 10))" -ne 0 ]; do
    echo "$((i / 2))"
    i=$((i + 2.5))
done

(inside $((...)) you use . as the decimal separator, but on I/O, the locale's one ( . or , ) is used instead).

Edit: now if you just want to loop over some integers and have those integers represented with a decimal point in them, then just append a .0 :

for i in {0..10..5}.0

Or for the solutions above use printf "%.7f\n" instead of print / echo to output the numbers with 7 digits after the decimal separator.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133069/" ] }
286,300
I issued many commands yesterday on my CentOS 7 machine. But when I wanted to retrieve these commands today, I found there was no record of them. When I opened the file .bash_history , I still could not find the commands I issued yesterday, but I found many old commands from a few days ago. Why were the recent commands not stored? How can I increase the history capacity?
The most likely cause for history items not to show up is not setting HISTFILE to nothing or HISTSIZE to zero; it is logging into the same machine twice and exiting the second bash instance (in which you did little or nothing) after the one where you did a lot. By default Bash doesn't merge histories and the second Bash-exit overwrites the .bash_history that was so nicely updated by the first Bash-exit. To prevent this from happening you can append to the history file instead of overwriting it, using the histappend shell option:

    If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is overwritten.

More details in this answer including how to use HISTSIZE , HISTFILESIZE and HISTCONTROL to control size, duplicates etc.
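A sketch of the relevant ~/.bashrc settings (the sizes are illustrative):

shopt -s histappend       # append instead of overwrite on exit
HISTSIZE=10000            # lines kept in memory
HISTFILESIZE=20000        # lines kept in ~/.bash_history
HISTCONTROL=ignoredups    # skip consecutive duplicates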
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166355/" ] }
286,351
What is a fast command line way to switch between multiple directories for system administration? I mean, I can use pushd . and popd to toggle, but what if I want to store multiples and cycle through them, rather than permanently popping them off the bottom of the stack?
Use pushd and then the special names for the directories in your directory stack: ~1 , ~2 , etc. Example:

tmp $ dirs -v
 0  /tmp
 1  /tmp/scripts
 2  /tmp/photos
 3  /tmp/music
 4  /tmp/pictures
tmp $ cd ~3
music $ dirs -v
 0  /tmp/music
 1  /tmp/scripts
 2  /tmp/photos
 3  /tmp/music
 4  /tmp/pictures
music $ cd ~2
photos $ cd ~4
pictures $ cd ~3
music $ cd ~1
scripts $

The most effective way to use pushd in this way is to load up your directory list, then add one more directory to be your current directory, and then you can jump between the static numbers without affecting the position of the directories in your stack. It's also worth noting that cd - will take you to the last directory you were in. So will cd ~- . The advantage of ~- over just - is that - is specific to cd , whereas ~- is expanded by your shell the same way that ~1 , ~2 , etc. are. This comes in handy when copying a file between very long directory paths; e.g.:

cd /very/long/path/to/some/directory/
cd /another/long/path/to/where/the/source/file/is/
cp myfile ~-

The above is equivalent to:

cp /another/long/path/to/where/the/source/file/is/myfile /very/long/path/to/some/directory/
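Loading the stack in the first place, per the "load up your directory list" advice; a sketch that reproduces the listing above (the paths are hypothetical):

cd /tmp/pictures
pushd /tmp/music
pushd /tmp/photos
pushd /tmp/scripts
pushd /tmp        # current directory on top; dirs -v now matches the example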
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/286351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22780/" ] }
286,405
I know I can use -A , -B and -C to show surrounding lines, but all of them also show the matching line. What I'm trying to do here: given this example file:

foo
bar

I'd be doing something like grep <option> "bar" file and my output should be

foo

Side note: I know the way of doing it with another grep or using sed, but I would like to do it just by using grep one time.
It's quite a job for sed :

$ printf 'foo\nbar\n' | sed -n '$!N;/\nbar$/P;D'
foo

Roughly: $!N pulls the following line into the pattern space, /\nbar$/P prints the first line of the pair whenever the second one is bar, and D drops the first line and restarts the cycle.
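The same command with some surrounding noise (the sample data is mine) shows that only the line immediately before the match is printed:

$ printf 'a\nfoo\nbar\nb\n' | sed -n '$!N;/\nbar$/P;D'
foo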
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165802/" ] }
286,432
So far I know that cat /sys/block/<devicename>/queue/rotational will tell me whether my drive is a SSD or HDD. Is there something similar to find out if it uses SMR (Shingled Magnetic Recording)?
The only way to be sure is to benchmark the drive with enough random write access. "Drive managed" SMR is supposed to be totally transparent to the host computer, and sometimes manufacturers fail to mention SMR. However, the "transparent" is only about logical behavior, not performance (or latency). I'd suggest the following. Run fio as follows (first cd to a directory on the disk to test, because the following will create the benchmark test file in the current working directory):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randwrite --size=500g --io_size=1500g --blocksize=10m --ioengine=libaio --iodepth=1 --direct=1 --numjobs=1 --runtime=3600 --group_reporting

This example will create a 500 GB file, select random locations within it and write 10 MB of random data. It then repeats the test for up to 1 hour or until 1.5 TB has been written. Watch the ETA lines that look something like this:

Jobs: 1 (f=1): [w(1)] [20.0% done] [0KB/40960KB/0KB /s] [0/4/0 iops] [eta 00m:28s]

The above command will emit a new ETA line every 5 seconds. Look at the speed and IOPS between the slashes (40960KB and 4) above. For an SMR drive you should get good values first (100MB+/s and 10+ IOPS), but as the test goes on and the internal cache of the SMR drive fills up (usually around 20 GB), the performance will go all over the place. Sometimes it will be near the starting speed, sometimes there will be long periods of time around 0 MB/s and 0-1 IOPS. There should never be errors, though. Be warned that SMR drives will get slower when used or benchmarked! Some drives support the TRIM command, and fstrim may help to get the drive speed back to original. Even without TRIM, a well done "drive managed" SMR will acquire its original speed once left alone powered for long enough. You might need to disable spinning down the disk (power management) if it goes to sleep before the internal cache has been fully flushed to the SMR area. For the Seagate SMR drives I've used, a full flush of the cache seems to take around half an hour. During this time the drive sounds like it's writing data even though nothing is written to the drive by the computer. Unfortunately, I don't know any way to query the status of the flushing from the drive. Either listen to drive sounds or simply wait for long enough. Example: a Seagate SMR drive seems to be happy to go to sleep in the middle of an SMR flush (probably to pretend to be fully transparent to the user) and continue flushing once woken up again in the future. This is transparent in the sense that no data corruption will happen even if power is cut. However, this results in very poor performance if you write big bursts every now and then and immediately put the drive to sleep afterwards (e.g. using such a drive as a backup drive that is immediately unmounted and disconnected after the backup is logically complete). The next time you're taking another backup, the new backup is going to compete with the SMR flush of the previous backup and the performance will be very low. If you keep such a drive always powered and write only up to 10 GB in one go, it will be as fast or faster than a regular PMR HDD drive.
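If the drive (and kernel) support TRIM, a sketch of trimming a mounted filesystem to help it recover (the mount point is hypothetical):

sudo fstrim -v /mnt/smr-disk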
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172721/" ] }
286,544
File1:

Sergio
Lionel
Luis
Andreas
Gerard

I want my stdout to have just

Sergio
Gerard

How can I do this using the tail command and piping?
This should do the trick: $ head -n 1 File1; tail -n 1 File1
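If a single command is preferred, sed can do the same in one pass; a sketch:

sed -n '1p;$p' File1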
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172639/" ] }
286,601
I have two SQL files, one is old.sql and the other one is new.sql. Suppose old.sql contains a table with three fields, Emp_Id, Name and Address, and data stored in old.sql as follows:

Insert into table1 values (101 ,"a", "xyz");
Insert into table1 values (102 ,"b", "pqr");

Then I have changed "a"'s address "xyz" to "xyz123" and saved that data in the new.sql file. Now the new.sql file contains data as follows:

Insert into table1 values (101 ,"a", "xyz123");
Insert into table1 values (102 ,"b", "pqr");

When I use the diff command like this:

diff old.sql new.sql

it gives differences line-wise, but I want only the updated data, like xyz123.
Short answers from here :

git diff --word-diff=color --word-diff-regex=. file1 file2

And here :

diff -u file1 file2 | perl /usr/share/doc/git/contrib/diff-highlight/diff-highlight
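If GNU wdiff is installed, it gives word-level differences directly; a sketch:

wdiff old.sql new.sql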
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168519/" ] }
286,686
I've got a directory tree created by rsnapshot , which contains multiple snapshots of the same directory structure with all identical files replaced by hardlinks. I would like to delete all those hardlink duplicates and keep only a single copy of every file (so I can later move all files into a sorted archive without having to touch identical files twice). Is there a tool that does that? So far I've only found tools that find duplicates and create hardlinks to replace them… I guess I could list all files and their inode numbers and implement the deduplicating and deleting myself, but I don't want to reinvent the wheel here.
In the end it wasn't too hard to do this manually, based on Stéphane's and xenoid's hints and some prior experience with find . I had to adapt a few commands to work with FreeBSD's non-GNU tools — GNU find has the -printf option that could have replaced the -exec stat , but FreeBSD's find doesn't have that.

# create a list of "<inode number> <tab> <full file path>"
find rsnapshots -type f -links +1 -exec stat -f '%i%t%R' {} + > inodes.txt

# sort the list by inode number (to have consecutive blocks of duplicate files)
sort -n inodes.txt > inodes.sorted.txt

# remove the first file from each block (we want to keep one link per inode)
awk -F'\t' 'BEGIN {lastinode = 0} {inode = 0+$1; if (inode == lastinode) {print $2}; lastinode = inode}' inodes.sorted.txt > inodes.to-delete.txt

# delete duplicates (watch out for special characters in the filename, and possibly adjust the read command and double quotes accordingly)
cat inodes.to-delete.txt | while read line; do rm -f "$line"; done
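Per the warning in the last comment, a slightly safer form of the delete loop (still not proof against newlines in filenames):

while IFS= read -r line; do
    rm -f "$line"
done < inodes.to-delete.txt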
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28235/" ] }
286,691
I open GUI remote programs by SSHing with the -X (or -Y ) flag, e.g., $ ssh -Y [email protected] Recently, I found there is a much more efficient way to do this with web browsing only: $ ssh -DNNNN [email protected] where NNNN is a four digit port number. Then I configure my local browser to connect through a proxy via port NNNN. This is much more efficient than the first SSH method because all the GUI information does not need to be transported through the tunnel, only the web data I am requesting. My question is: is there a more efficient way to SSH with X-forwarding in general ? Maybe some scheme that utilizes local libraries or rendering or something to aid in operating a GUI program hosted remotely?
You are mixing up terms. The first thing is X11 forwarding, and it is inefficient by definition; you can't do much about that (it was not designed for high-latency connections, and dates back decades). In comparison to the other method it is inefficient because it transfers the whole GUI (of the browser?) over the network. The other is a SOCKS proxy (a completely different thing) and it transfers only the network data you are interested in (encapsulated in the SOCKS protocol and SSH), which is obviously more efficient. Your question is asked in a way that makes it impossible to answer. What are you trying to achieve? Run GUI programs? Proxy network connections? Something totally different?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99989/" ] }
286,718
I have a script which analyzes the output of a computation software. Sometimes the output comes with some extra data that are irrelevant to my script. This data can be huge and makes running my simple script seriously slow. My script is in awk/bash. I was wondering if it's possible to tell awk to completely ignore the lines after a specific pattern. For example:

GOOD STUFF
----------------
IRRELEVENT DATA
----------------
IGNORE ALL THESE
----------------
END OF IT
----------------
GOOD STUFF

I was also wondering if I tell awk to look for lines starting with a specific pattern, would it ignore whatever comes after and speed up the script?
To ignore some lines on a line-by-line basis, add /unwanted pattern/ {next} or ! /wanted pattern/ {next} at the beginning of the script. Alternatively, filter with grep: grep -v 'unwanted pattern' | awk … or grep 'wanted pattern' | awk … . This may be faster if grep eliminates a lot of lines, because grep is typically faster than awk for the same task (grep is more specialized so it can be optimized for its task; awk is a full programming language, it can do a lot more but it's less efficient). If you want to ignore a block of consecutive lines, awk has a convenient facility for that: add /^IRRELEVENT DATA/,/^END/ {next} at the top of the script to ignore all lines starting with IRRELEVENT DATA ( sic ) and the following lines until the first line that starts with END . You can't do that with grep; you can do it with sed ( sed '/^IRRELEVENT DATA/,/^END/d' | awk … ) but it's less likely to be a performance gain than grep.
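A self-contained check of the range skip against the sample file (the file name is mine):

$ awk '/^IRRELEVENT DATA/,/^END/ {next} {print}' input.txt
GOOD STUFF
----------------
----------------
GOOD STUFF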
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120160/" ] }
286,721
Similar to a previous question about finding network device names , I would like to get a (reliable) list of device names but just for Wi-Fi devices. So that it looks like the following depending on your naming structure: wlan0wlan1 or wlp5s0wlp5s1
With nmcli you could list all devices and their type, e.g.

nmcli --get-values GENERAL.DEVICE,GENERAL.TYPE device show
eno1
ethernet
wlp1s0
wifi
wlp1s1
wifi
p2p-dev-wlp1s0
wifi-p2p
lo
loopback

Per the manual, when using -g, --get-values , the "output is terse. This mode is designed and suitable for computer (script) processing". So you can pipe that output to other tools and get the wifi device names, e.g.

nmcli ... | sed '/^wifi/!{h;d;};x'

or

nmcli ... | awk '/^wifi/{print dev; next};{dev=$0};'

On linux you also have iw (show/manipulate wireless devices and their configuration) and when used with the dev command:

Commands:
     dev
          List all network interfaces for wireless hardware.

that is

iw dev

you'll get something like:

phy#0
	Interface wlan0
		ifindex 3
		wdev 0x1
		addr 00:12:32:e4:18:24
		type managed
phy#1
	Interface wlan1
		ifindex 4
		wdev 0x2
		addr 00:12:22:c6:b2:0a
		type managed

To extract only interface names you could process the output, e.g.

iw dev | awk '$1=="Interface"{print $2}'

just keep in mind the help page clearly states:

Do NOT screenscrape this tool, we don't consider its output stable.
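On Linux, sysfs offers another check that needs no extra tools: a wireless interface has a wireless subdirectory under /sys/class/net. A sketch:

for dev in /sys/class/net/*; do
    [ -d "$dev/wireless" ] && basename "$dev"
done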
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172925/" ] }
286,734
I was writing a bash script and suddenly this behaviour started:

[[ 1 < 2 ]]; echo $?    # outputs 0
[[ 2 < 13 ]]; echo $?   # outputs 1

but -lt works soundly:

[[ 1 -lt 2 ]]; echo $?    # outputs 0
[[ 2 -lt 13 ]]; echo $?   # outputs 0

did I accidentally overwrite < somehow? here is a script I wrote to test this behaviour:

#!/bin/bash
for a in {1..5}
do
    for b in {1..20}
    do
        [[ $a < $b ]] && echo $a $b
    done
    echo
done

here is the output:

1 2
1 3
1 4
1 5
1 6
1 7
1 8
1 9
1 10
1 11
1 12
1 13
1 14
1 15
1 16
1 17
1 18
1 19
1 20

2 3
2 4
2 5
2 6
2 7
2 8
2 9
2 20

3 4
3 5
3 6
3 7
3 8
3 9

4 5
4 6
4 7
4 8
4 9

5 6
5 7
5 8
5 9

changing < to -lt in the script gives normal output ( 5 10 shows up for example). Rebooting did not change anything. My bash version is GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu). I am on Ubuntu 15.10. I don't know what other information is relevant here.
From the bash man page:

    When used with [[, the < and > operators sort lexicographically using the current locale.

From the output, it appears to be working as designed.
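To compare numerically instead, use the -lt family or an arithmetic context; a sketch:

[[ 2 -lt 13 ]] && echo yes    # numeric test operator
(( 2 < 13 ))   && echo yes    # inside (( )), < is numeric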
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/286734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172938/" ] }
286,744
Is it possible to find out how much memory I am using on a multiuser linux machine? I want to know whether I am using a lot of memory and possibly inconveniencing others, so I can shut down my processes if necessary. I've seen in another question that sa -m might do it, but I apparently don't have access to that command on this server. Edit: I don't have sudo access, so I can't install stuff. The server is CentOS.
You can use ps together with awk to find the physical memory usage by a user:

ps -U root --no-headers -o rss | awk '{ sum+=$1} END {print int(sum/1024) "MB"}'

Here it prints the memory used by root to the output.
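To total your own processes, substitute your username (or $USER); note that RSS counts shared pages in every process that maps them, so the sum overestimates somewhat:

ps -U "$USER" --no-headers -o rss | awk '{ sum+=$1} END {print int(sum/1024) "MB"}'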
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126433/" ] }
286,758
I have a script running in background like this:

nohup /tmp/a.sh &

If the script is running for more than 5 mins, I want to kill it.

-bash-3.2$ nohup /tmp/a.sh &
[1] 2518
-bash-3.2$ nohup: appending output to `nohup.out'
-bash-3.2$ ps -ef | grep /tmp/a.sh
ordev     2518 17827  0 15:24 pts/3    00:00:00 /bin/sh /tmp/a.sh
ordev     2525 17827  0 15:24 pts/3    00:00:00 grep /tmp/a.sh
-bash-3.2$
-bash-3.2$ killall /tmp/a.sh    # killall not working like this
/tmp/a.sh: no process killed

If I use killall like below, it tries to kill all sessions running /bin/sh :

-bash-3.2$ killall sh /tmp/a.sh
sh(17822): Operation not permitted    # this pid associated with another process under root user
/tmp/a.sh: no process killed
[1]+  Terminated              nohup /tmp/a.sh

Other than pkill -f , are there any alternatives that kill only the required script name?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170161/" ] }
286,769
My config thus far is:

foo.path

[Path]
PathExists=/tmp/foo.path

[Install]
WantedBy=multi-user.target

foo.service

[Unit]
Description=Matt Test
BindsTo=foo.path

[Service]
ExecStart=/bin/sh /home/mpekar/bin/foo.sh
PIDFile=/run/foo.pid

This works fine when starting up, but foo.service won't be killed when /tmp/foo.path is removed. Is there some way to make systemd do this or is it just not the appropriate tool for the job?
I would try this. Create an additional path unit using PathChanged:

foo-stop.path

[Path]
PathChanged=/tmp/foo.path

[Install]
WantedBy=multi-user.target

Then create foo-stop.service. Have its "ExecStart" script check to see if /tmp/foo.path was deleted (since PathChanged could fire on other changes as well). If the path has been removed, have the script call /bin/systemctl stop foo .
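A minimal sketch of what that pair could look like (file locations and names are assumptions):

# foo-stop.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/foo-stop-check.sh

# /usr/local/bin/foo-stop-check.sh
#!/bin/sh
# stop foo only if the watched path is really gone
[ -e /tmp/foo.path ] || systemctl stop foo.service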
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125142/" ] }
286,773
I need to sum numbers located in a file like this one:

        column1  column2  column3
row1    a(1,1)   a(1,2)   a(1,3)
row2    a(2,1)   a(2,2)   a(2,3)
row3    a(3,1)   a(3,2)   a(3,3)
row4    a(4,1)   a(4,2)   a(4,3)
row5    a(5,1)   a(5,2)   a(5,3)
row6    a(6,1)   a(6,2)   a(6,3)
row7    a(7,1)   a(7,2)   a(7,3)
row8    a(8,1)   a(8,2)   a(8,3)
row9    a(9,1)   a(9,2)   a(9,3)
row10   a(10,1)  a(10,2)  a(10,3)
row11   a(11,1)  a(11,2)  a(11,3)
row12   a(12,1)  a(12,2)  a(12,3)

        column4  column5  column6
row1    b(1,1)   b(1,2)   b(1,3)
row2    b(2,1)   b(2,2)   b(2,3)
row3    b(3,1)   b(3,2)   b(3,3)
row4    b(4,1)   b(4,2)   b(4,3)
row5    b(5,1)   b(5,2)   b(5,3)
row6    b(6,1)   b(6,2)   b(6,3)
row7    b(7,1)   b(7,2)   b(7,3)
row8    b(8,1)   b(8,2)   b(8,3)
row9    b(9,1)   b(9,2)   b(9,3)
row10   b(10,1)  b(10,2)  b(10,3)
row11   b(11,1)  b(11,2)  b(11,3)
row12   b(12,1)  b(12,2)  b(12,3)

The output should be like this:

column1 a(1,1)+a(2,1)+a(5,1)+a(6,1)+a(7,1)+a(8,1)+a(11,1) a(3,1)+a(4,1)+a(9,1)+a(10,1)+a(12,1)
column2 a(1,2)+a(2,2)+a(5,2)+a(6,2)+a(7,2)+a(8,2)+a(11,2) a(3,2)+a(4,2)+a(9,2)+a(10,2)+a(12,2)
column3 a(1,3)+a(2,3)+a(5,3)+a(6,3)+a(7,3)+a(8,3)+a(11,3) a(3,3)+a(4,3)+a(9,3)+a(10,3)+a(12,3)
column4 b(1,1)+b(2,1)+b(5,1)+b(6,1)+b(7,1)+b(8,1)+b(11,1) b(3,1)+b(4,1)+b(9,1)+b(10,1)+b(12,1)
column5 b(1,2)+b(2,2)+b(5,2)+b(6,2)+b(7,2)+b(8,2)+b(11,2) b(3,2)+b(4,2)+b(9,2)+b(10,2)+b(12,2)
column6 b(1,3)+b(2,3)+b(5,3)+b(6,3)+b(7,3)+b(8,3)+b(11,3) b(3,3)+b(4,3)+b(9,3)+b(10,3)+b(12,3)

I have a way to do something similar, but it is only useful for 4 rows. I need to modify this script:

sed 's/row[1-9]//;/^$/d' file |   # removes the row labels
pr -2t -w 1000 |
awk 'NR==1{$1=$1; print; next} !(NR%2){split($0,a); next} {for(i=1;i<=NF;i++) $i+=a[i]}1' |
tr ' ' '\n' |
pr -3t

Note: to compute the sum, use

$ tr -d 'ab(,)' < file > filenums

I think that it is necessary to make a modification in the awk section, but I do not know how to do that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286773", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152587/" ] }
286,839
I have a text file containing one file name in each line:

111_c4l5r120.png
123_c4l4r60.png
135_c4l4r180.png
147_c4l3r60.png
15_c4l1r120.png
...

I want to convert it into this shape:

111_c4l5r120.png 111
123_c4l4r60.png 123
135_c4l4r180.png 135
147_c4l3r60.png 147
15_c4l1r120.png 15
...

using this code:

#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "$line" >> output.txt
    echo "$line" | cut -d'_' -f 1 >> output.txt
done < "$1"

but the result puts each value on its own line:

111_c4l5r120.png
111
123_c4l4r60.png
123
...

How should I change my script to get the desired output?
Unless you have a specific need to use the shell for this, terdon's answer provides better alternatives. Since you're using bash (as indicated in the script's shebang), you can use the -n option to echo:

echo -n "${line} " >> output.txt
echo "$line" | cut -d'_' -f 1 >> output.txt

Or you can use shell features to process the line without using cut :

echo "${line} ${line%%_*}" >> output.txt

(replacing both echo lines). Alternatively, printf would do the trick too, works in any POSIX shell , and is generally better (see Why is printf better than echo? for details):

printf "%s " "${line}" >> output.txt
echo "$line" | cut -d'_' -f 1 >> output.txt

or

printf "%s %s\n" "${line}" "${line%%_*}" >> output.txt

(Strictly speaking, in plain /bin/sh , echo -n isn't portable . Since you're explicitly using bash it's OK here.)
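The whole loop can also collapse into a single awk call; a sketch:

awk -F'_' '{print $0, $1}' "$1" > output.txt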
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/286839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171112/" ] }
286,934
I have been struggling to install VirtualBox Guest Additions in a Debian Virtual Machine (Debian 7, Debian 8 and Debian 9).
Follow these steps to install the Guest Additions on your Debian virtual machine:

1. Log in as root.

2. Update your APT database with apt-get update.

3. Install the latest security updates with apt-get upgrade. This step WILL UPGRADE all your packages, so be wise about it; try the following steps first, and they might be enough to work. If not, then upgrade and retry.

4. Install the required packages with apt-get install build-essential module-assistant. These two packages (build-essential and module-assistant) are both required for being able to recompile the kernel modules when installing the VirtualBox Linux Additions package, so this command will get the headers and packages (compilers and libraries) required to work. Notice that after installing your VirtualBox Linux Additions package you will leave behind some packages as well as Linux headers, which you might or might not delete afterwards; in my case they didn't hurt, but for the sake of system tidiness you might want to pick up after playing ;)

5. Configure your system for building kernel modules by running in a terminal: m-a prepare.

6. In the VirtualBox menu, and with the VM running(!), click on Install Guest Additions… from the Devices menu. VirtualBox should mount the ISO, but if for any reason it doesn't, just run in a terminal: mount /media/cdrom.

7. Finally, in a terminal run:

sh /media/cdrom/VBoxLinuxAdditions.run

follow the instructions on screen, and REBOOT.

Hope this helps.
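After the reboot, a quick sanity check that the guest modules actually loaded:

lsmod | grep -i vbox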
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/286934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83670/" ] }
286,960
I have the following problem: I'm using a USA keyboard with that layout, but I'm a Spanish speaker, so I need to configure the keyboard to somehow print the characters I need ( é , á , í , ó , ú , ñ , Ñ ). The solution I thought of was to make use of i3wm keybindings, by using for example alt+shift+a to print the á character. Does anybody know how I can do this? Or is there a better / faster / different solution? Thanks a lot!
I would suggest using the international ( intl ) variant of the us layout instead of some complex workaround. You can temporarily (until logout) set it with

setxkbmap us -variant intl

To set it permanently on Debian, you have to modify /etc/default/keyboard and set the variables XKBLAYOUT and XKBVARIANT accordingly:

XKBLAYOUT="us"
XKBVARIANT="intl"

You can also run dpkg-reconfigure keyboard-configuration; service keyboard-setup restart . See the Debian Wiki for more information. The intl variant is mostly identical to the standard us layout, with some key differences:

The right Alt key is now Alt Gr , which acts much like the Shift key in that it generates different characters when pressed in conjunction with other keys ( ISO_Level3_Shift ). It can also be combined with Shift to write yet another set of characters. Alt Gr + a prints "á", Alt Gr + Shift + a prints "Á". Similarly for e , i , o and u . Alt Gr + n prints "ñ", Alt Gr + Shift + n prints "Ñ". All other alpha-numeric and symbol keys also print additional characters when combined with Alt Gr and Alt Gr + Shift .

Dead keys : these keys do not generate a character on their own; instead output depends on the next key that is pressed:

circumflex (" ^ ", Shift + 6 ): Pressing Shift + 6 followed by a will print "â". Other combinations like Shift + 6 followed by d will print nothing. To print "^" either press Shift + 6 twice or Shift + 6 followed by Space .

grave aka backtick (" ` ", ` ): ` followed by e will get you "è"

tilde (" ~ ", Shift + ` ): Shift + ` followed by Shift + n will print "Ñ".

Some changed keys:

apostrophe aka quote (" ' ", ' ) is replaced by (dead) acute (" ´ "): Pressing ' followed by i will print "í". To print an " ' ", you need to press Alt Gr + ' ( Alt Gr is the right Alt key)

double quote (" " ", Shift + ' ) is replaced by (dead) diaeresis : Shift + ' followed by o prints "ö". To print " " ", you need to press Alt Gr + Shift + ' .

less (" < ", < ) and greater (" > ", Shift + < ) now print " \ " and " | " instead. You can still type " < " by pressing Shift + , and " > " by pressing Shift + . .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173118/" ] }
286,971
I have two bash scripts that try to check hosts that are up:

Script 1:

#!/bin/bash
for ip in {1..254}; do
    ping -c 1 192.168.1.$ip | grep "bytes from" | cut -d" " -f 4 | cut -d ":" -f 1 &
done

Script 2:

#!/bin/bash
for ip in {1..254}; do
    host=192.168.1.$ip
    (
        ping -c 1 $host > /dev/null
        if [ "$?" = 0 ]; then
            echo $host
        fi
    ) &
done

As I am checking a large range, I would like to process each ping command in parallel. However, my second script seems to not retry failed fork attempts due to resource limits. This results in the second script having inconsistent results while my first script gives constant results despite both failing to fork at times. Can someone explain this to me? Also is there anyway to retry failed forks?
There is already an answer which gives an improved code snippet for the task the original poster's question was related to, though it might not yet have responded more directly to the question itself. The question is about the differences between A) backgrounding a "command" directly, vs B) putting a subshell into the background (i.e. with a similar task). Let's check those differences by running 2 tests.

# A) Backgrounding a command directly
sleep 2 &

ps outputs

[1] 4228
  PID TTY          TIME CMD
 4216 pts/8    00:00:00 sh
 4228 pts/8    00:00:00 sleep

while

# B) backgrounding a subshell (with a similar task)
( sleep 2; ) &

ps outputs something like:

[1] 3252
  PID TTY          TIME CMD
 3216 pts/8    00:00:00 sh
 3252 pts/8    00:00:00 sh
 3253 pts/8    00:00:00 ps
 3254 pts/8    00:00:00 sleep

Test results: In this test (which ran only a sleep 2) the subshell version indeed differs, as it uses 2 child processes (i.e. two fork()/exec operations and PIDs) and hence more than the direct backgrounding of the command. In script 1 of the question, however, the command was not a single sleep 2s but instead a pipe of 4 commands, which we can test as an additional case:

# C) Backgrounding a pipe with 4 commands
sleep 2s | sleep 2s | sleep 2s | sleep 2s &

ps yields this:

[2] 3265
  PID TTY          TIME CMD
 3216 pts/8    00:00:00 bash
 3262 pts/8    00:00:00 sleep
 3263 pts/8    00:00:00 sleep
 3264 pts/8    00:00:00 sleep
 3265 pts/8    00:00:00 sleep
 3266 pts/8    00:00:00 ps

and shows that indeed script 1 would put a much higher strain on PIDs and fork()s. As a rough estimate, script 1 would have used about 254 * 4 ~= 1000 PIDs and hence even more than script 2 with 254 * 2 ~= 500 PIDs. Any problem occurring because of PID resource depletion seems unlikely, though, since on most Linux boxes

$ cat /proc/sys/kernel/pid_max
32768

gives you 32x the PIDs needed even for script 1, and the processes/programs involved (i.e. sed, ping, etc.) also seem unlikely to cause the inconsistent results. As mentioned by user @derobert, the real issue behind the scripts failing was the missing wait command: after backgrounding the commands in the loop, the script (and hence the shell) ended, which caused all the child processes to be terminated.
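The fix suggested at the end, sketched: a wait keeps the parent shell alive until all background pings have returned:

#!/bin/bash
for ip in {1..254}; do
    ping -c 1 "192.168.1.$ip" > /dev/null && echo "192.168.1.$ip" &
done
wait   # don't exit (and kill the children) before they finish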
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84969/" ] }
286,972
I have a text file like this:

2015-11-24 12:59:37.112 128.206.6.136 source
2014-11-24 12:59:36.920 8.8.8.8 source
2014-11-24 14:59:38.112 23.234.22.106 destination
2014-11-24 13:59:37.113 23.234.22.106 source
2014-11-24 12:59:29.047 74.125.198.141 source
2014-12-25 12:59:36.920 74.125.198.148 destination

If a particular IP address is tagged as source as well as destination, then I want to tag that IP as both. In this case, IP 23.234.22.106 is source as well as destination. So, I want to tag it as both. My desired output should be like this:

2015-11-24 12:59:37.112 128.206.6.136 source
2014-11-24 12:59:36.920 8.8.8.8 source
2014-11-24 14:59:38.112 23.234.22.106 both
2014-11-24 12:59:29.047 74.125.198.141 source
2014-12-25 12:59:36.920 74.125.198.148 destination

This is what I have tried:

cat input.txt | awk '{print $3}' | sort | uniq | while read line
do
    grep $line input.txt | sort -r -k1 | head -1
done

But, I don't understand how to tag a particular IP as both if it is source as well as destination. In this case, 23.234.22.106. How can I do it using awk? Any help with this would be appreciated. Thank you
Try with sed

sed '
    N                   # add next line
    s/\([0-9.]\+\)\s\S\+\n.*\s\1\s\S\+$/\1 both/
    P                   # print first line from two
    D                   # remove first line, return to start
' input.txt

[0-9.]\+  group of numbers and dots
\s        space or tab
\S\+      group of non-space symbols
\n        new line
.*        any symbols
\1        back reference for group in parenthesis \(...\)
$         pattern end

(modified: remove t command (tnx 2 jthill ) and add \space before group to check full address)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138913/" ] }
286,979
I don't understand the problem about which the standard shell in Debian (dash) complains:

test@debian:~$ sh
$ man ls
ctrl + Z
[1] + Stopped    man ls
$ jobs
[1] + Stopped    man ls
$ fg %man
sh: 3: fg: %man: ambiguous

Shouldn't fg %string simply bring the job whose command begins with string to the foreground? Why is %man ambiguous?
This looks like a bug; the loop which handles strings in this context doesn't have a valid exit condition:

	while (1) {
		if (!jp)
			goto err;
		if (match(jp->ps[0].cmd, p)) {
			if (found)
				goto err;
			found = jp;
			err_msg = "%s: ambiguous";
		}
		jp = jp->prev_job;
	}

If a job matches the string, found is set, and err_msg is pre-loaded; then it goes round the loop again, after setting jp to the previous job. When it reaches the end, the first condition matches, so control goes to err, which prints the error:

err:
	sh_error(err_msg, name);

I guess there should be a goto gotit somewhere... The following patch fixes this (I've sent it to the upstream maintainer):

diff --git a/src/jobs.c b/src/jobs.c
index c2c2332..37f3b41 100644
--- a/src/jobs.c
+++ b/src/jobs.c
@@ -715,8 +715,14 @@ check:
 	found = 0;
 	while (1) {
-		if (!jp)
-			goto err;
+		if (!jp) {
+			if (found) {
+				jp = found;
+				goto gotit;
+			} else {
+				goto err;
+			}
+		}
 		if (match(jp->ps[0].cmd, p)) {
 			if (found)
 				goto err;
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/286979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150422/" ] }
287,008
I want to execute the stat command in my Unix shell /usr/bin/ksh.

Input:

/bin/date +%Y%m%d%H%M%S -d "$(/usr/bin/stat -c %x find.txt)"

And the output:

/usr/bin/ksh: stat: not found

My system: SunOS 5.10 Generic_150400-23 sun4v sparc sun4v
The stat command is not standard. There's one on Linux, a more restricted one on embedded Linux, one with completely different options on FreeBSD and OSX, and none on most other Unix variants such as Solaris, AIX, and HP-UX. Your syntax looks like it's intended for Linux's stat . You're apparently running a system without stat . You probably don't have date -d either then. The only portable way to list a file's access time is with ls :

ls -log -u find.txt

This gives less precise output than what you need, in a cumbersome format. If you can install GNU coreutils , do so and use its stat and date commands. Many modern Unix variants have an easy way to install GNU utilities. Alternatively, use Perl, which is very often installed on Unix systems. Call stat to read the file's timestamp and localtime to break the timestamp into date and time parts:

perl -e '@stat = stat($ARGV[0]);        # field 8 is atime, matching stat -c %x
         @time = localtime($stat[8]);
         printf "%04d%02d%02d%02d%02d%02d\n",
                $time[5]+1900, $time[4]+1, @time[3,2,1,0]' find.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172372/" ] }
287,024
Pseudocode:

    ln -s $HOME/file $HOME/Documents/ $HOME/Desktop/

where I want to create a symlink from the source to two destinations. Probably, moreutils and pee. How can you create many symlinks from one source?
You can't do this with a single invocation of ln, but you could loop through all the necessary destinations:

    $ for i in "$HOME/Documents/" "$HOME/Desktop/"; do ln -s "$HOME/file" "$i"; done
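Equivalently, without an explicit loop; this simply feeds each destination directory to ln in turn:

    printf '%s\n' "$HOME/Documents" "$HOME/Desktop" | xargs -I{} ln -s "$HOME/file" {}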
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/287024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
287,044
The following Bash loop stops if I interrupt it with ^C : (while true; do sleep 5; done) The following, however, does not: (while true; do aplay test.wav; done) The difference, from what I can tell, is that aplay catches and handles SIGINT , whereas sleep does not. Why is this? Does POSIX specify this behavior somewhere (I notice that Dash does the same thing), and if so, why? I cannot quite understand why this behavior is desirable. In fact, how can Bash even tell the difference? I must admit I'm not aware of any mechanism (other than ptrace or /proc/ hacking, at least) by which one process can tell how another one handles signals. Also, is there a way to counteract it? I notice that not even a trap 'exit 0' SIGINT before the loop helps. ( EDIT : Running the loops in a subshell is important, because otherwise the parent shell does not receive the SIGINT .)
The problem is well explained here as WCE (wait and cooperative exit), and I encourage you to read it. Basically, your signal is received by all foreground processes, i.e. the shell and the program (sleep or aplay). The program exits with return code 130 if it does not handle the signal. The shell waits for the child to end, sees this and the fact that it got the signal too, and so exits. When the program catches the signal, it often just exits with code 1 (as aplay does). When the shell waits for the child, it sees that it did not end due to the signal, so it has to assume the signal was a normal aspect of the program's working, and the shell carries on as normal. For your example, the best way to handle aplay is to check its return code for non-zero and stop:

    (while aplay test.wav; do :; done)

The above-mentioned article goes on to explain that a well-behaved program that wants to trap SIGINT to do some cleanup should then disable its handler and re-kill itself in order to get the correct exit flags set.
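For completeness, that "well-behaved program" pattern looks like this in a shell script: run your cleanup, restore the default handler, then re-send the signal to yourself so the parent sees a death-by-signal rather than a plain exit (the rm here is a placeholder for whatever cleanup you need):

    trap 'rm -f "$tmpfile"; trap - INT; kill -INT "$$"' INT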
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107929/" ] }
287,077
Is there a technical reason why? Is this an artifact from the early days of Linux or Unix, and if so is there a reason why it persists?
Some commands (e.g. chown) can accept either a username or a numeric user ID, so allowing all-numeric usernames would break that. A rule to allow names that start with a number and contain some alphabetic characters was probably considered not worth the effort; instead there is just a requirement to start with an alphabetic character.

Edit: It appears from the other responses that some distros have subverted this limitation; in this case, according to the GNU Core Utils documentation, POSIX requires that these commands first attempt to resolve the specified string as a name, and only once that fails, then try to interpret it as an ID:

    $ useradd 1000
    # on most systems this will fail with:
    # useradd: invalid user name '1000'
    $ mkdir /home/1000
    $ chown -R 1000 /home/1000   # This will first try to map to username "1000",
                                 # but this may easily be misinterpreted.

Adding a user named '0' would just be asking for trouble (UID 0 == root user). However, note that group/user ID arguments can be preceded by a '+' to force their interpretation as an integer.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/287077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131594/" ] }
287,087
A problem with .tar.gz archives is that, when I try to just list an archive's content, the computer actually decompresses it, which would take a very long time if the file is large. Other file formats like .7z , .rar , .zip don't have this problem. Listing their contents takes just an instant. In my naive opinion, this is a huge drawback of the .tar.gz archive format. So I actually have 2 questions: why do people use .tar.gz so much, despite this drawback? what choices (I mean other software or tools) do I have if I want the "instant content listing" capability?
It's important to understand there's a trade-off here. tar means tape archiver . On a tape, you do mostly sequential reading and writing. Tapes are rarely used nowadays, but tar is still used for its ability to read and write its data as a stream. You can do: tar cf - files | gzip | ssh host 'cd dest && gunzip | tar xf -' You can't do that with zip or the like. You can't even list the content of a zip archive without storing it locally in a seekable file first. Things like: curl -s https://github.com/dwp-forge/columns/archive/v.2016-02-27.zip | unzip -l /dev/stdin won't work. To achieve that quick reading of the content, zip or the like need to build an index. That index can be stored at the beginning of the file (in which case it can only be written to regular files, not streams), or at the end, which means the archiver needs to remember all the archive members before printing it in the end and means a truncated archive may not be recoverable. That also means archive members need to be compressed individually which means a much lower compression ratio especially if there's a lot of small files. Another drawback with formats like zip is that the archiving is linked to the compressing, you can't choose the compression algorithm. See how tar archives used to be compressed with compress ( tar.Z ), then with gzip , then bzip2 , then xz as new more performant compression algorithms were devised. Same goes for encryption. Who would trust zip 's encryption nowadays? Now, the problem with tar.gz archives is not that much that you need to uncompress them. Uncompressing is often faster than reading off a disk (you'll probably find that listing the content of a large tgz archive is quicker that listing the same one uncompressed when not cached in memory), but that you need to read the whole archive. Not being able to read the index quickly is not really a problem. If you do foresee needing to read the table content of an archive often, you can just store that list in a separate file. For instance, at creation time, you can do: tar cvvf - dir 2> file.tar.xz.list | xz > file.tar.xz A bigger problem IMO is the fact that because of the sequential aspect of the archive, you can't extract individual files without reading the whole beginning section of the archive that leads to it. IOW, you can't do random reads within the archive. Now, for seekable files, it doesn't have to be that way. If you compress your tar archive with gzip , that compresses it as a whole, the compression algorithm uses data seen at the beginning to compress, so you have to start from the beginning to uncompress. But the xz format can be configured to compress data in separate individual chunks (large enough so as the compression to be efficient), that means that as long as you keep an index at the end of those compressed chunks, for seekable files, you access the uncompressed data randomly (in chunks at least). pixz (parallel xz ) uses that capability when compressing tar archives to also add an index of the start of each member of the archive at the end of the xz file. So, for seekable files, not only can you get a list of the content of the tar archive instantly (without metadata though) if they have been compressed with pixz : pixz -l file.tar.xz But you can also extract individual elements without having to read the whole archive: pixz -x archive/member.txt < file.tar.xz | tar xpf - Now, as to why things like 7z or zip are rarely used on Unix is mostly because they can't archive Unix files. 
They've been designed for other operating systems. You can't do a faithful backup of data using those. They can't store metadata like owner (id and name), permission, they can't store symlinks, devices, fifos..., they can't store information about hard links, and other metadata information like extended attributes or ACLs. Some of them can't even store members with arbitrary names (some will choke on backslash or newline or colon, or non-ascii filenames) (some tar formats also have limitations though). Never uncompress a tgz/tar.xz file to disk! In case it is not obvious, one doesn't use a tgz or tar.bz2 , tar.xz ... archive as: unxz file.tar.xztar tvf file.tarxz file.tar If you've got an uncompressed .tar file lying about on your file system, it's that you've done something wrong. The whole point of those xz / bzip2 / gzip being stream compressors is that they can be used on the fly, in pipelines as in unxz < file.tar.xz | tar tvf - Though modern tar implementations know how to invoke unxz / gunzip / bzip2 by themselves, so: tar tvf file.tar.xz would generally also work (and again uncompress the data on the fly and not store the uncompressed version of the archive on disk). Example Here's a Linux kernel source tree compressed with various formats. $ ls --block-size=1 -sS1666210304 linux-4.6.tar173592576 linux-4.6.zip 97038336 linux-4.6.7z 89468928 linux-4.6.tar.xz First, as noted above, the 7z and zip ones are slightly different because they can't store the few symlinks in there and are missing most of the metadata. Now a few timings to list the content after having flushed the system caches: $ echo 3 | sudo tee /proc/sys/vm/drop_caches3$ time tar tvf linux-4.6.tar > /dev/nulltar tvf linux-4.6.tar > /dev/null 0.56s user 0.47s system 13% cpu 7.428 total$ time tar tvf linux-4.6.tar.xz > /dev/nulltar tvf linux-4.6.tar.xz > /dev/null 8.10s user 0.52s system 118% cpu 7.297 total$ time unzip -v linux-4.6.zip > /dev/nullunzip -v linux-4.6.zip > /dev/null 0.16s user 0.08s system 86% cpu 0.282 total$ time 7z l linux-4.6.7z > /dev/null7z l linux-4.6.7z > /dev/null 0.51s user 0.15s system 89% cpu 0.739 total You'll notice listing the tar.xz file is quicker than the .tar one even on this 7 years old PC as reading those extra megabytes from the disk takes longer than reading and decompressing the smaller file. Then OK, listing the archives with 7z or zip is quicker but that's a non-problem as as I said, it's easily worked around by storing the file list alongside the archive: $ tar tvf linux-4.6.tar.xz | xz > linux-4.6.tar.xz.list.xz$ ls --block-size=1 -sS1 linux-4.6.tar.xz.list.xz434176 linux-4.6.tar.xz.list.xz$ time xzcat linux-4.6.tar.xz.list.xz > /dev/nullxzcat linux-4.6.tar.xz.list.xz > /dev/null 0.05s user 0.00s system 99% cpu 0.051 total Even faster than 7z or zip even after dropping caches. You'll also notice that the cumulative size of the archive and its index is still smaller than the zip or 7z archives. 
Or use the pixz indexed format: $ xzcat linux-4.6.tar.xz | pixz -9 > linux-4.6.tar.pixz$ ls --block-size=1 -sS1 linux-4.6.tar.pixz89841664 linux-4.6.tar.pixz$ echo 3 | sudo tee /proc/sys/vm/drop_caches3$ time pixz -l linux-4.6.tar.pixz > /dev/nullpixz -l linux-4.6.tar.pixz > /dev/null 0.04s user 0.01s system 57% cpu 0.087 total Now, to extract individual elements of the archive, the worst case scenario for a tar archive is when accessing the last element: $ xzcat linux-4.6.tar.xz.list.xz|tail -1-rw-rw-r-- root/root 5976 2016-05-15 23:43 linux-4.6/virt/lib/irqbypass.c$ time tar xOf linux-4.6.tar.xz linux-4.6/virt/lib/irqbypass.c | wc 257 638 5976tar xOf linux-4.6.tar.xz linux-4.6/virt/lib/irqbypass.c 7.27s user 1.13s system 115% cpu 7.279 totalwc 0.00s user 0.00s system 0% cpu 7.279 total That's pretty bad as it needs to read (and uncompress) the whole archive. Compare with: $ time unzip -p linux-4.6.zip linux-4.6/virt/lib/irqbypass.c | wc 257 638 5976unzip -p linux-4.6.zip linux-4.6/virt/lib/irqbypass.c 0.02s user 0.01s system 19% cpu 0.119 totalwc 0.00s user 0.00s system 1% cpu 0.119 total My version of 7z seems not to be able to do random access, so it seems to be even worse than tar.xz : $ time 7z e -so linux-4.6.7z linux-4.6/virt/lib/irqbypass.c 2> /dev/null | wc 257 638 59767z e -so linux-4.6.7z linux-4.6/virt/lib/irqbypass.c 2> /dev/null 7.28s user 0.12s system 89% cpu 8.300 totalwc 0.00s user 0.00s system 0% cpu 8.299 total Now since we have our pixz generated one from earlier: $ time pixz < linux-4.6.tar.pixz -x linux-4.6/virt/lib/irqbypass.c | tar xOf - | wc 257 638 5976pixz -x linux-4.6/virt/lib/irqbypass.c < linux-4.6.tar.pixz 1.37s user 0.06s system 84% cpu 1.687 totaltar xOf - 0.00s user 0.01s system 0% cpu 1.693 totalwc 0.00s user 0.00s system 0% cpu 1.688 total It's faster but still relatively slow because the archive contains few large blocks: $ pixz -tl linux-4.6.tar.pixz 17648865 / 134217728 15407945 / 134217728 18275381 / 134217728 19674475 / 134217728 18493914 / 129333248 336945 / 2958887 So pixz still needs to read and uncompress a (up to a) ~19MB large chunk of data. We can make random access faster by making archives will smaller blocks (and sacrifice a bit of disk space): $ pixz -f0.25 -9 < linux-4.6.tar > linux-4.6.tar.pixz2$ ls --block-size=1 -sS1 linux-4.6.tar.pixz293745152 linux-4.6.tar.pixz2$ time pixz < linux-4.6.tar.pixz2 -x linux-4.6/virt/lib/irqbypass.c | tar xOf - | wc 257 638 5976pixz -x linux-4.6/virt/lib/irqbypass.c < linux-4.6.tar.pixz2 0.17s user 0.02s system 98% cpu 0.189 totaltar xOf - 0.00s user 0.00s system 1% cpu 0.188 totalwc 0.00s user 0.00s system 0% cpu 0.187 total
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/287087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27695/" ] }
287,102
I would like to modify the configuration file as shown below. How can I do this in a shell script?

before:

    Section "InputClass"
        Identifier "calibration"
        MatchProduct "Touch"
        Option "Calibration" "166 3939 186 3814"
        Option "SwapAxes" "1"
        Option "InvertX" "on"
        Option "InvertY" "on"
    EndSection

after:

    Section "InputClass"
        Identifier "calibration"
        MatchProduct "Touch"
        Option "Calibration" "166 3939 186 3814"
        Option "SwapAxes" "1"
        Option "InvertX" "off"
        Option "InvertY" "on"
    EndSection
You can do the substitution in place with sed:

    sed -i '/InvertX/s/"on"/"off"/' file_name
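If the file could contain more than one section and you only want to touch the calibration one, you can scope the substitution with a range address (section and identifier names as in the question):

    sed -i '/Identifier "calibration"/,/EndSection/{/InvertX/s/"on"/"off"/}' file_name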
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86643/" ] }
287,108
I need to find the directories that were updated yesterday. I tried using the find command, but it lists all the files that were updated in those directories; I need only the directory names.
You can use -type d in the find string: find /path/to/target -type d -mtime 1
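Note that -mtime 1 means "modified between 24 and 48 hours ago", counted from now. If "yesterday" should mean the previous calendar day, GNU find can anchor the calculation to the start of today:

    find /path/to/target -type d -daystart -mtime 1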
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173191/" ] }
287,159
I try to run Android from SD-card. This card is prepared. There are partitions: boot(FAT32) , rootfs(ext4) , system(ext4) , cache(ext4) and usedata(ext4) . Boot partitions has files to run u-boot: MLO , u-boot.bin and uImage . To run it I use commands mmcinit 0fatload mmc 0 0x80000000 uImagesetenv bootargs 'console=ttyO2,115200n8 mem=456M@0x80000000 mem=512M@0xA0000000 init=/init vram=10M omapfb.vram=0:4M androidboot.console=ttyO2 root=/dev/mmcblk1p2 rw rootwait rootfstype=ext4'bootm 0x80000000 Than I see how Linux starts. But after few seconds on step of loading rootfs I see an error message [ 4.015655] EXT4-fs (mmcblk1p2): couldn't mount RDWR because of unsupported optional features (400)[ 4.036499] sd 0:0:0:0: [sda] Attached SCSI removable disk[ 4.079986] List of all partitions:[ 4.083801] b300 31162368 mmcblk0 driver: mmcblk[ 4.089660] b301 128 mmcblk0p1 f9f21f00-a8d4-5f0e-9746-594869aec34e[ 4.097839] b302 256 mmcblk0p2 f9f21f01-a8d4-5f0e-9746-594869aec34e[ 4.106018] b303 128 mmcblk0p3 f9f21f02-a8d4-5f0e-9746-594869aec34e[ 4.114288] b304 16384 mmcblk0p4 f9f21f03-a8d4-5f0e-9746-594869aec34e[ 4.122436] b305 16 mmcblk0p5 f9f21f04-a8d4-5f0e-9746-594869aec34e[ 4.130676] b306 8192 mmcblk0p6 f9f21f05-a8d4-5f0e-9746-594869aec34e[ 4.138916] b307 8192 mmcblk0p7 f9f21f06-a8d4-5f0e-9746-594869aec34e[ 4.147094] 103:00000 524288 mmcblk0p8 f9f21f07-a8d4-5f0e-9746-594869aec34e[ 4.155334] 103:00001 262144 mmcblk0p9 f9f21f08-a8d4-5f0e-9746-594869aec34e[ 4.163574] 103:00002 30342128 mmcblk0p10 f9f21f09-a8d4-5f0e-9746-594869aec34e[ 4.171874] b310 2048 mmcblk0boot1 (driver?)[ 4.177734] b308 2048 mmcblk0boot0 (driver?)[ 4.183593] b318 15179776 mmcblk1 driver: mmcblk[ 4.189453] b319 102400 mmcblk1p1 00000000-0000-0000-0000-000000000000[ 4.197692] b31a 10240 mmcblk1p2 00000000-0000-0000-0000-000000000000[ 4.205932] b31b 1 mmcblk1p3 00000000-0000-0000-0000-000000000000[ 4.214141] b31d 262144 mmcblk1p5 00000000-0000-0000-0000-000000000000[ 4.222351] b31e 13228032 mmcblk1p6 00000000-0000-0000-0000-000000000000[ 4.230682] b31f 1572864 mmcblk1p7 00000000-0000-0000-0000-000000000000[ 4.238891] No filesystem could mount root, tried: ext4[ 4.244812] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,26)[ 4.254089] CPU1: stopping I don't know why it happens. How can I solve this problem?
The error "EXT4-fs : couldn't mount RDWR because of unsupported optional features (400)" is due to different versions between the partition formatter (mkfs.ext4) and the mounter. You have two options: a) Either you have to upgrade the mounter program using a newer distro inside the SD-card. b) or you have to backup the files, reformat the SD-card with the same distro (the same ext4 versions) you are doing the mounting, and after the reformat copy the files again to the SD-card. In the second option, care must be taken with the original ext4 options the formatter has put, trying to consider the same options at reformat. Note also that a reformat of partitions doesn't need a repartition of the whole device, so the boot MBR would not be altered.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169897/" ] }
287,230
I am trying to write a script to check whether a file was modified. If it was, it should echo "Error!"; if not, the script keeps running. My script:

    #!/bin/bash
    date=$(stat -c %y)$1
    while true
    do
        date2=$(stat -c %y$1)
        if (date2 != date)
            echo "error!"
    done

Are there any errors?
You can use inotifywait. From its documentation:

    inotifywait - wait for changes to files using inotify

    inotifywait efficiently waits for changes to files using Linux's
    inotify(7) interface. It is suitable for waiting for changes to
    files from shell scripts. It can either exit once an event occurs,
    or continually execute and output events as they occur.

Use this command:

    $ inotifywait -m -e modify /tmp/testfile

When I write to the test file, e.g. echo "bh" > /tmp/testfile, inotifywait notifies me with this message:

    $ inotifywait -m -e modify /tmp/testfile
    Setting up watches.
    Watches established.
    testfile MODIFY
    testfile MODIFY

You can also redirect its output to a while statement:

    while read j
    do
        echo "file changed"
        break
    done < <(inotifywait -q -e modify /tmp/testfile)
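As for the posted script itself, it has several bugs: $1 ends up outside the command substitution, the second stat call is missing the space and quotes before $1, the variables are not expanded in the comparison, and (date2 != date) is not a test but a subshell. A corrected polling version could look like this:

    #!/bin/bash
    orig=$(stat -c %y "$1")
    while true
    do
        sleep 1
        if [ "$(stat -c %y "$1")" != "$orig" ]; then
            echo "Error!"
            exit 1
        fi
    done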
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287230", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171656/" ] }
287,239
For example, can I see all the previous parameters I passed to the command ssh (i.e. all the server addresses I passed as a parameter when calling ssh)? This is going to be something similar to the 'history' command, but not exactly. It would be amazing if I could tab through this command history but that is probably asking for too much. For example, if I typed in ssh, and then tab would allow me to cycle through previously executed commands.
Not exactly what you are asking for, but you can search through the history. Eg, ^R followed by ssh and then continue cycling through commands with ^R . (That's reverse search. Forward search is by default at ^S but that unfortunately collides with XOFF (undone with ^Q ) for the typical terminal, so you probably want to remap that for it to be useful.)
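If you want forward search as well, free ^S from terminal flow control first (otherwise it just freezes output):

    stty -ixon

after which ^S searches forward through history the same way ^R searches backward.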
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60734/" ] }
287,243
I use CentOS 7 . I want to create VPN server on my machine. I am following perfect VPN tutorial until I get to this part, editing /etc/openvpn/server.conf : local x.x.x.xport 1194proto udpdev tuntun-mtu 1500tun-mtu-extra 32mssfix 1450ca /etc/openvpn/easy-rsa/2.0/keys/ca.crtcert /etc/openvpn/easy-rsa/2.0/keys/server.crtkey /etc/openvpn/easy-rsa/2.0/keys/server.keydh /etc/openvpn/easy-rsa/2.0/keys/dh2048.pemplugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so /etc/pam.d/loginclient-cert-not-requiredusername-as-common-nameserver ? ?push "redirect-gateway def1"push "dhcp-option DNS ?"push "dhcp-option DNS ?"keepalive 5 30comp-lzopersist-keypersist-tunstatus server-tcp.logverb 3 There are a few parts I don't understand. Please explain for me how it works. There are a lot of tutorials on the internet, but they just type their own network so I don't understand anything. This is my local network information: http://imgur.com/a/cgJkW . What should I type for the ? s ? Where i can find DNS to type on push "dhcp-option ? Where I can find the IP to type on server ? Am I doing it right? What do I have to fix?
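In broad strokes, for a typical routed OpenVPN setup (all concrete values below are examples, not requirements): the server directive does not take an address from your existing LAN; it defines a brand-new private subnet that OpenVPN hands out to connecting clients, and it must not overlap your LAN (192.168.1.0/24 in your screenshot), so something like this works:

    server 10.8.0.0 255.255.255.0

For the DNS push lines you can use any resolvers the clients will be able to reach once connected; public ones are the simplest choice:

    push "dhcp-option DNS 8.8.8.8"
    push "dhcp-option DNS 8.8.4.4"

local x.x.x.x is the address of the interface the server should listen on, i.e. your machine's own IP (see ip addr). If clients will connect from the Internet, also forward UDP port 1194 on your router to that machine.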
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172326/" ] }
287,248
I am using Debian and when I give the command sudo named -v it returns command not found. I want to verify that my ssh server (sshd) is using bind version 9.1.3 or later so that it both is IPv6 capable and secure. How can I do this?
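named is the BIND name-server daemon, so named -v only exists once BIND is installed (on Debian that is the bind9 package, after which named -v prints the version string). More importantly, sshd neither contains nor depends on BIND: it resolves host names through the system's resolver library, so BIND's version has no bearing on whether your SSH server is IPv6-capable or secure. To check the SSH server itself, look at its package version or its protocol banner:

    ssh -V                                  # version of the local OpenSSH client/library
    dpkg -s openssh-server | grep Version
    nc localhost 22                         # the server announces itself, e.g. SSH-2.0-OpenSSH_6.7p1

For IPv6, also make sure AddressFamily in /etc/ssh/sshd_config is not restricted to inet.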
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
287,278
If I change the umask to 0000 , I'd expect a text file to be created with rwxrwxrwx permissions (based on my understanding of the umask, as described in the "possible duplicate" question ) However, when I try this, I get the following $ umask 0000$ touch /tmp/new.txt$ ls -ld /tmp/new.txt -rw-rw-rw- 1 alanstorm wheel 0 Jun 2 10:52 /tmp/new.txt That is, execute permission is omitted, and I end up with rw-rw-rw- for files (directories are rwxrwxrwx ). I tried this on my local OS X machine, an old BSD machine I have at a shared host, and a linux server running on linode. Why is this? Its my understanding that umask is the final arbiter of permissions -- is my understanding of this incorrect? If so, what else influences the default permissions of files on a unix system?
umask is subtractive, not prescriptive: permission bits set in umask are removed by default from modes specified by programs, but umask can't add permission bits. touch specifies mode 666 by default (the link is to the GNU implementation, but others behave in the same way; this is specified by POSIX ), so the resulting file ends up with that masked by the current umask : in your case, since umask doesn't mask anything, the result is 666. The mode of a file or directory is usually specified by the program which creates it; most system calls involved take a mode ( e.g. open(2) , creat(2) , mkdir(2) all have a mode parameter; but fopen(2) doesn't, and uses mode 666). Unless the parent directory specifies a default ACL, the process's umask at the time of the call is used to mask the specified mode (bitwise mode & ~umask ; effectively this subtracts each set of permissions in umask from the mode), so the umask can only reduce a mode, it can't increase it. If the parent directory specifies a default ACL, that's used instead of the umask: the resulting file permissions are the intersection of the mode specified by the creating program, and that specified by the default ACL. POSIX specifies that the default mode should be 666 for files, 777 for directories; but this is just a documentation default ( i.e. , when reading POSIX, if a program or function doesn't specify a file or directory's mode, the default applies), and it's not enforced by the system. Generally speaking this means that POSIX-compliant tools specify mode 666 when creating a file, and mode 777 when creating a directory, and the umask is subtracted from that; but the system can't enforce this, because there are many legitimate reasons to use other modes and/or ignore umask: compilers creating an executable try to produce a file with the executable bits set (they do apply umask though); chmod(1) obviously specifies the mode depending on its parameters, and it ignores umask when "who" is specified, or the mode is fully specified (so chmod o+x ignores umask, as does chmod 777 , but chmod +w applies umask); tools which preserve permissions apply the appropriate mode and ignore umask: e.g. cp -p , tar -p ; tools which take a parameter fully specifying the mode also ignore umask: install --mode , mknod -m ... So you should think of umask as specifying the permission bits you don't want to see set by default, but be aware that this is just a request. You can't use it to specify permission bits you want to see set, only those you want to see unset. Furthermore, any process can change its umask anyway, using the umask(2) system call! The umask(2) system call is also the only POSIX-defined way for a process to find out its current umask (inherited from its parent). On Linux, starting with kernel 4.7, you can see a process's current umask by looking for Umask in /proc/${pid}/status . (For the sake of completeness, I'll mention that the behaviour regarding setuid , setgid and sticky bits is system-dependent, and remote filesystems such as NFS can add their own twists.)
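As a worked example of the mode & ~umask arithmetic with the common default umask of 022:

    $ printf '%o\n' $(( 0666 & ~0022 ))    # files
    644
    $ printf '%o\n' $(( 0777 & ~0022 ))    # directories
    755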
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/287278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22521/" ] }
287,316
Is there a benefit of creating a file with touch prior to edit.. like: touch foovi foo versus getting it to editor straight-away? Like: vi foo I see quite a few tutorials using the former ( touch then vi ).
touch ing the file first confirms that you actually have the ability to create the file, rather than wasting time in an editor only to find out that the filesystem is read-only or some other problem.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/287316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129469/" ] }
287,330
I have ZSH as my primary shell, and I'm trying to randomise access to an array. I keep seeing feh called, but I don't have that command. I have _feh, but I don't know if it's the same thing. What is that command? Here's the reference:

    FILES=( .../files/* )
    feh $FILES[$RANDOM%$#FILES+1]

Here is my testing:

    test=(a b c); feh ${test[$RANDOM]}

I'm on OSX 10.10.x for reference. Ultimately I'll use this to randomise SSH access to some hosts that I have.
feh is an image viewer, just ignore that part... you want just the second part. Basically, to access a random array element you want something like ${arr[${ri}]} where ri is $(( $RANDOM % ${#arr[@]} + 1)) that is, ri is a random index of the array arr Now, $RANDOM % N resolves to a random number from 0 to N-1 . In this case N is the array length ${#arr[@]} (number of elements) but since array indexing starts from 1 in zsh you have to add one ( + 1 ) so that $(( $RANDOM % ${#arr[@]} + 1 )) returns a value from 1 to N . So e.g. to print a random element of the array: print -r -- ${arr[$(( $RANDOM % ${#arr[@]} + 1 ))]} Or simply, as array indices are parsed as arithmetic expressions: print -r -- "$arr[RANDOM % $#arr + 1]" When using that csh-style syntax (when the expansion is not in braces), the quotes are necessary in order for zsh to parse the subscript; alternatively, this could be written $arr[RANDOM%$#arr+1] or ${arr[RANDOM % $#arr + 1]} (ksh-style).
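(The _feh you found is just zsh's completion function for feh, not the program itself.) Applied to the SSH use case mentioned at the end, with placeholder host names:

    hosts=(host1.example.com host2.example.com host3.example.com)
    ssh "$hosts[RANDOM % $#hosts + 1]"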
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/287330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119851/" ] }
287,362
How can I mount my Samba HD from my stock Asus n66u router to Linux Mint laptop over the air? I’ve tried tutorials, but they are either very confusing (since I don’t use Linux too often) or using a command I don’t have in Linux (e.g., findsmb ). SMBCLIENT and SYSTEM-CONFIG-SAMBA have been installed from the software manager. I’ve set up Samba on my Windows machine and Android phones, but would like it on my Linux machine as well. Having a link to the shared folders on my desktop would be great; I would hate to have to use the terminal every time to access a file.
If you have a graphical desktop environment (like Gnome) you can use Nautilus (the file explorer) and use the address bar. You might need to use the 'toggle location entry' button or menu view option to make the address bar visible. Once you have the address bar you can enter the URI to the samba share using: smb://host-or-ipaddress/sharename When you entered the desired share, you can create a bookmark using menu Bookmarks/Add Bookmark (Ctrl+D). This will add a link in the left side bar for quick access. Using a terminal, you can mount (as root or sudo) using: mount -t cifs //hostname-or-ipaddress/sharename -o username='username',domain='domainname-or-workgroup' /mnt/mysamba Where /mnt/mysamba need to be an existing directory. Also note that you need to have the package cifs-utils installed.
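To make the mount permanent, an /etc/fstab line along these lines works (server address, share name and mount point are examples):

    //192.168.1.1/sharename  /mnt/mysamba  cifs  credentials=/root/.smbcredentials,uid=1000  0  0

where /root/.smbcredentials contains username= and password= lines; that keeps the password out of the world-readable fstab.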
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173364/" ] }
287,420
How do I use file to differentiate between ELFves and scripts as quickly as possible? I don't need any further details, just ELF , script ( /plaintext ), or other/error .
If it's just between ELF and script, you may not need file at all. With bash:

    IFS= LC_ALL=C read -rn4 -d '' x < file
    case $x in
        ($'\x7fELF') echo ELF;;
        ("#!"*)      echo script;;
        (*)          echo other;;
    esac

(-d '' (to use the NUL character as delimiter) is to work around the fact that bash's read otherwise just ignores the NUL bytes in the input.) See also: Searching for 32-bit ELF file; Fastest way to determine if shebang is present.
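If you do want to go through file(1) as the title suggests, its brief output is easy to branch on (the exact description strings vary a little between file versions, but ELF output always starts with "ELF" and scripts are reported as text):

    case $(file -b "$f") in
        (ELF*)   echo ELF;;
        (*text*) echo script;;
        (*)      echo other;;
    esac

Note that forking file for every check is slower than the pure-bash read above.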
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287420", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
287,442
I have been studying for my Unix exam and I often saw exclamation mark variable - $! If I write echo $! on my Mac shell tells me something like "You have mail in…" Is purpose of this to store whether I do or do not have any mails or is there something else? EDIT: Funny coincidence happened. Exactly when I wrote echo $! my Shell also notified me that I have some mail. So that's why I wrote that mail part.
$! is the PID of the most recent background command. See this excellent answer for other special parameters.
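A quick demonstration (the PID shown is of course arbitrary):

    $ sleep 60 &
    [1] 12345
    $ echo "$!"
    12345
    $ kill "$!"      # or: wait "$!"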
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/287442", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173419/" ] }
287,446
When I use Unetbootin to put a Linux ISO on a USB drive, it proceeds quite quickly until it gets to filesystem.squashfs , which takes longer to process than absolutely everything else combined. Is this writing a new filesystem to the USB, or is it copying some huge filesystem-dependent file? If so, is there a way to only do it once in the event that I will be trying many distros and want to speed this step up?
Most major distributions use squashfs to hold their live CD. squashfs is intended for read-only filesystems, which is exactly what a live CD is. Handling filesystem.squashfs takes longer than any other step simply because it contains the entire (compressed) system and is by far the largest file on the medium; Unetbootin is not writing a new filesystem, it is copying one huge file. For more information, look at the wikipedia page: https://en.wikipedia.org/wiki/SquashFS
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99989/" ] }
287,533
The actual data is:

    Dolibarr techpubl http://techpublications.org/erp
    tekstilworks.com WordPress tekstilw
    wbq.dandydesigns.co WordPress cbeqte
    WordPress cbeqte http://wbq.dandydesigns.co
    WordPress cbeqte http://qbd.dandydesigns.co
    WordPress cbeqte http://uqdq.dandydesigns.co
    dandydesigns.co WordPress cbeqte
    stunlockers.info WordPress nmmuop

What I want to get:

    tekstilworks.com WordPress tekstilw
    wbq.dandydesigns.co WordPress cbeqte
    dandydesigns.co WordPress cbeqte
    stunlockers.info WordPress nmmuop
Using awk:

    awk '$1 ~ /\./' input-file-here

The period in the awk expression has to be escaped with a backslash so that it's not treated as regular-expression syntax.
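For comparison, the same filter with grep: keep lines whose first whitespace-delimited field contains a dot (this assumes, as in your sample, that a space follows the first field):

    grep -E '^[^[:space:]]*\.[^[:space:]]*[[:space:]]' input-file-here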
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118190/" ] }
287,545
Any quick ideas on how to write a program that extends the terminal's basic functionality? I want to do everything the terminal does but additionally do some custom processing on whatever the user types on my terminal derivative.
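What you are describing is a terminal wrapper, and the mechanism behind it is the pseudo-terminal: a program opens the master side of a pty, starts a shell on the slave side, and then relays bytes between the user and the shell. That relay loop is exactly where your custom processing can be inserted; script(1), screen and tmux all work this way (see pty(7)). Below is a minimal sketch, assuming Linux; link with -lutil for forkpty(), and note that error handling and raw-mode setup of the outer terminal are omitted for brevity:

    #include <pty.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        int master;
        char buf[4096];
        pid_t pid = forkpty(&master, NULL, NULL, NULL);

        if (pid < 0)
            return 1;
        if (pid == 0) {                  /* child: run a shell on the pty slave */
            execlp("sh", "sh", (char *)NULL);
            _exit(127);
        }

        for (;;) {                       /* parent: relay bytes both ways */
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(STDIN_FILENO, &fds);
            FD_SET(master, &fds);
            if (select(master + 1, &fds, NULL, NULL, NULL) < 0)
                break;

            if (FD_ISSET(STDIN_FILENO, &fds)) {
                ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
                if (n <= 0)
                    break;
                /* custom processing of what the user typed goes here */
                write(master, buf, n);
            }
            if (FD_ISSET(master, &fds)) {
                ssize_t n = read(master, buf, sizeof buf);
                if (n <= 0)
                    break;
                write(STDOUT_FILENO, buf, n);
            }
        }
        return 0;
    }

A real wrapper would also switch the controlling terminal to raw mode with tcsetattr(3) so that keystrokes reach the inner shell unbuffered and line editing keeps working.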
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172679/" ] }
287,571
I'm a little confused (even after reading this ) post, about how *nix partitions work. From my understanding, sda usually refers to the disk and sda1 , sda2 , etc, refer to the first, second, and so forth, partitions located on the disk. This seems logical, but then I've also read that some directories (or so I thought they were directories), are actually partitions, like /boot/ , /var and /tmp . Where are these partitions located? If sda is the disk and lsblk shows the only partitions on sda are sda1 - sda8 and these 8 partitions start from the beginning and go all the way to the end of the disk, where could these other partitions exist? Are directories like /boot and /var actually partitions? If so, where are they located on the disk in reference to the sdaX partitions? I couldn't seem to find any information about these directories/partitions from parted, fdisk, or lsblk. How can I find out more about these on my machine? If /dev/sda is the disk, then what is / , and where is it on sda ?
Directories like /boot and /var are not themselves partitions; they are directories that may or may not have a separate partition mounted on them. /dev is a de facto standard place to keep all device nodes. Originally, /dev was a plain directory in the root file system (so the device nodes created there survived a reboot); nowadays most Linux distributions use a special RAM-backed virtual filesystem for it. There is no standard requiring a particular filesystem to live on a specific partition, or any particular number of partitions. There are, however, a number of good practices and distribution-specific conventions for placing parts of the system on separate partitions. You can find Linux installations that occupy a single partition for all their needs. In a multi-partition installation, /boot is usually kept as a separate partition so it stays readable by the BIOS and/or boot loader, and some boot loaders and kernels restrict which root filesystem types they can use. The rest is up to you: you split the disk into partitions according to your needs (data storage requirements, temporary files, logs, etc.), and / is simply whichever partition you mount as the root, one of the sdaX entries.
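To see how this maps out on your own machine, i.e. which of those directories are mount points for separate partitions and which are plain directories on the root filesystem, compare:

    findmnt                   # tree of mounted filesystems and their sources
    lsblk -f                  # partitions with filesystem type and mount point
    df /boot /var /tmp /      # which filesystem each path actually lives on

A directory that df reports on the same filesystem as / is just a directory; one reported on its own /dev/sdaX is a separate partition mounted there.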
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/287571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168122/" ] }
287,620
I do successfully in Raspbian Jessie of Raspberry Pi 3b % http://askubuntu.com/a/227513/25388masi@raspberrypi:~ $ sudo locale-gen en_US en_US.UTF-8masi@raspberrypi:~ $ sudo dpkg-reconfigure locales I run sudo deluser pi but I get perl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 929/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting. I also tried the fix if polylocale problem unsuccessfully for y in $(locale | cut -d '=' -f 2| sort |uniq );do sudo locale-gen $y; done I do masi@raspberrypi:~ $ ps -fp 929UID PID PPID C STIME TTY TIME CMDpi 929 1 0 08:54 ? 00:00:00 /usr/lib/menu-cache/menu-cached /tmp/. I do masi@raspberrypi:~ $ sudo kill 929masi@raspberrypi:~ $ sudo deluser piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 1174/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting.masi@raspberrypi:~ $ sudo deluser -remove-home piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Looking for files to backup/remove ...Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 1174/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting.masi@raspberrypi:~ $ I kill now the process id 1174 but I get exactly same output, but now giving a new process id 1202 . I do sudo killall -u pi -m . but I get Usage: killall [OPTION]... [--] NAME... I do masi@raspberrypi:~ $ sudo pkill -u pi masi@raspberrypi:~ $ sudo deluser piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", ... are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 3422/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting. 
I do masi@raspberrypi:~ $ ps -u pi PID TTY TIME CMD 3422 tty1 00:00:00 bashmasi@raspberrypi:~ $ sudo kill 3422masi@raspberrypi:~ $ sudo deluser piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 3496/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting. I do masi@raspberrypi:~ $ whopi tty1 Jun 4 09:38masi pts/0 Jun 4 09:06 (masi) I do masi@raspberrypi:~ $ sudo skill -KILL -u pimasi@raspberrypi:~ $ sudo deluser piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 3496/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting. I do Stephen's proposal but masi@raspberrypi:~ $ sudo service lightdm stopmasi@raspberrypi:~ $ sudo deluser piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.userdel: user pi is currently used by process 3496/usr/sbin/deluser: `/usr/sbin/userdel pi' returned error code 8. Exiting. My second iteration of the commands included again trying to change the locales by the first step of commands, again unsuccessfully. I do masi@raspberrypi:~ $ ps -ft tty1UID PID PPID C STIME TTY TIME CMDroot 3491 1 0 09:38 tty1 00:00:00 /bin/login -f pi 3507 3491 0 09:38 tty1 00:00:00 -bash I do sudo vim /etc/inittab but the file is empty. I do masi@raspberrypi:~ $ sudo grep -r "login -f" /etc but nothing as output. 
I do masi@raspberrypi:~ $ sudo grep -r autologin /etc/etc/lightdm/lightdm.conf:# pam-autologin-service = PAM service to use for autologin/etc/lightdm/lightdm.conf:# autologin-guest = True to log in as guest by default/etc/lightdm/lightdm.conf:# autologin-user = User to log in with by default (overrides autologin-guest)/etc/lightdm/lightdm.conf:# autologin-user-timeout = Number of seconds to wait before loading default user/etc/lightdm/lightdm.conf:# autologin-session = Session to load for automatic login (overrides user-session)/etc/lightdm/lightdm.conf:# autologin-in-background = True if autologin session should not be immediately activated/etc/lightdm/lightdm.conf:#pam-autologin-service=lightdm-autologin/etc/lightdm/lightdm.conf:#autologin-guest=false/etc/lightdm/lightdm.conf:autologin-user=pi/etc/lightdm/lightdm.conf:#autologin-user-timeout=0/etc/lightdm/lightdm.conf:#autologin-in-background=false/etc/lightdm/lightdm.conf:#autologin-session=UNIMPLEMENTED/etc/systemd/system/[email protected]:ExecStart=-/sbin/agetty --autologin pi --noclear %I $TERM Content of /etc/lightdm/lightdm.conf # General configuration## start-default-seat = True to always start one seat if none are defined in the configuration# greeter-user = User to run greeter as# minimum-display-number = Minimum display number to use for X servers# minimum-vt = First VT to run displays on# lock-memory = True to prevent memory from being paged to disk# user-authority-in-system-dir = True if session authority should be in the system location# guest-account-script = Script to be run to setup guest account# logind-load-seats = True to automatically set up multi-seat configuration from logind# logind-check-graphical = True to on start seats that are marked as graphical by logind# log-directory = Directory to log information to# run-directory = Directory to put running state in# cache-directory = Directory to cache to# sessions-directory = Directory to find sessions# remote-sessions-directory = Directory to find remote sessions# greeters-directory = Directory to find greeters#[LightDM]#start-default-seat=true#greeter-user=lightdm#minimum-display-number=0#minimum-vt=7#lock-memory=true#user-authority-in-system-dir=false#guest-account-script=guest-account#logind-load-seats=false#logind-check-graphical=false#log-directory=/var/log/lightdm#run-directory=/var/run/lightdm#cache-directory=/var/cache/lightdm#sessions-directory=/usr/share/lightdm/sessions:/usr/share/xsessions#remote-sessions-directory=/usr/share/lightdm/remote-sessions#greeters-directory=/usr/share/lightdm/greeters:/usr/share/xgreeters## Seat defaults## type = Seat type (xlocal, xremote)# xdg-seat = Seat name to set pam_systemd XDG_SEAT variable and name to pass to X server# pam-service = PAM service to use for login# pam-autologin-service = PAM service to use for autologin# pam-greeter-service = PAM service to use for greeters# xserver-command = X server command to run (can also contain arguments e.g. 
X -special-option)# xserver-layout = Layout to pass to X server# xserver-config = Config file to pass to X server# xserver-allow-tcp = True if TCP/IP connections are allowed to this X server# xserver-share = True if the X server is shared for both greeter and session# xserver-hostname = Hostname of X server (only for type=xremote)# xserver-display-number = Display number of X server (only for type=xremote)# xdmcp-manager = XDMCP manager to connect to (implies xserver-allow-tcp=true)# xdmcp-port = XDMCP UDP/IP port to communicate on# xdmcp-key = Authentication key to use for XDM-AUTHENTICATION-1 (stored in keys.conf)# unity-compositor-command = Unity compositor command to run (can also contain arguments e.g. unity-system-compositor -special-option)# unity-compositor-timeout = Number of seconds to wait for compositor to start# greeter-session = Session to load for greeter# greeter-hide-users = True to hide the user list# greeter-allow-guest = True if the greeter should show a guest login option# greeter-show-manual-login = True if the greeter should offer a manual login option# greeter-show-remote-login = True if the greeter should offer a remote login option# user-session = Session to load for users# allow-user-switching = True if allowed to switch users# allow-guest = True if guest login is allowed# guest-session = Session to load for guests (overrides user-session)# session-wrapper = Wrapper script to run session with# greeter-wrapper = Wrapper script to run greeter with# guest-wrapper = Wrapper script to run guest sessions with# display-setup-script = Script to run when starting a greeter session (runs as root)# display-stopped-script = Script to run after stopping the display server (runs as root)# greeter-setup-script = Script to run when starting a greeter (runs as root)# session-setup-script = Script to run when starting a user session (runs as root)# session-cleanup-script = Script to run when quitting a user session (runs as root)# autologin-guest = True to log in as guest by default# autologin-user = User to log in with by default (overrides autologin-guest)# autologin-user-timeout = Number of seconds to wait before loading default user# autologin-session = Session to load for automatic login (overrides user-session)# autologin-in-background = True if autologin session should not be immediately activated# exit-on-failure = True if the daemon should exit if this seat fails#[SeatDefaults]#type=xlocal#xdg-seat=seat0#pam-service=lightdm#pam-autologin-service=lightdm-autologin#pam-greeter-service=lightdm-greeter#xserver-command=X#xserver-layout=#xserver-config=#xserver-allow-tcp=false#xserver-share=true#xserver-hostname=#xserver-display-number=#xdmcp-manager=#xdmcp-port=177#xdmcp-key=#unity-compositor-command=unity-system-compositor#unity-compositor-timeout=60#greeter-session=example-gtk-gnome#greeter-hide-users=false#greeter-allow-guest=true#greeter-show-manual-login=false#greeter-show-remote-login=true#user-session=default#allow-user-switching=true#allow-guest=true#guest-session=#session-wrapper=lightdm-session#greeter-wrapper=#guest-wrapper=#display-setup-script=#display-stopped-script=#greeter-setup-script=#session-setup-script=#session-cleanup-script=#autologin-guest=falseautologin-user=pi#autologin-user-timeout=0#autologin-in-background=false#autologin-session=UNIMPLEMENTED#exit-on-failure=false## Seat configuration## Each seat must start with "Seat:".# Uses settings from [SeatDefaults], any of these can be overriden by setting them in this section.##[Seat:0]## XDMCP Server 
configuration## enabled = True if XDMCP connections should be allowed# port = UDP/IP port to listen for connections on# key = Authentication key to use for XDM-AUTHENTICATION-1 or blank to not use authentication (stored in keys.conf)## The authentication key is a 56 bit DES key specified in hex as 0xnnnnnnnnnnnnnn. Alternatively# it can be a word and the first 7 characters are used as the key.#[XDMCPServer]#enabled=false#port=177#key=## VNC Server configuration## enabled = True if VNC connections should be allowed# command = Command to run Xvnc server with# port = TCP/IP port to listen for connections on# width = Width of display to use# height = Height of display to use# depth = Color depth of display to use#[VNCServer]#enabled=false#command=Xvnc#port=5900#width=1024#height=768#depth=8 I do masi@raspberrypi:~ $ sudo systemctl disable autologin@tty1masi@raspberrypi:~ $ ps -fu piUID PID PPID C STIME TTY TIME CMDpi 3496 1 0 09:38 ? 00:00:00 /lib/systemd/systemd --userpi 3502 3496 0 09:38 ? 00:00:00 (sd-pam) pi 3507 3491 0 09:38 tty1 00:00:00 -bashmasi@raspberrypi:~ $ sudo kill 3496 3502 3507masi@raspberrypi:~ $ ps -fu piUID PID PPID C STIME TTY TIME CMDpi 7062 1 1 21:46 ? 00:00:00 /lib/systemd/systemd --userpi 7068 7062 0 21:46 ? 00:00:00 (sd-pam) pi 7073 7056 6 21:46 tty1 00:00:00 -bash The file /etc/systemd/system/[email protected] # This file is part of systemd.## systemd is free software; you can redistribute it and/or modify it# under the terms of the GNU Lesser General Public License as published by# the Free Software Foundation; either version 2.1 of the License, or# (at your option) any later version.[Unit]Description=Getty on %IDocumentation=man:agetty(8) man:systemd-getty-generator(8)Documentation=http://0pointer.de/blog/projects/serial-console.htmlAfter=systemd-user-sessions.service plymouth-quit-wait.serviceAfter=rc-local.service# If additional gettys are spawned during boot then we should make# sure that this is synchronized before getty.target, even though# getty.target didn't actually pull it in.Before=getty.targetIgnoreOnIsolate=yes# On systems without virtual consoles, don't start any getty. 
Note# that serial gettys are covered by [email protected], not this# unit.ConditionPathExists=/dev/tty0[Service]# the VT is cleared by TTYVTDisallocateExecStart=-/sbin/agetty --autologin pi --noclear %I $TERMType=idleRestart=alwaysRestartSec=0UtmpIdentifier=%ITTYPath=/dev/%ITTYReset=yesTTYVHangup=yesTTYVTDisallocate=yesKillMode=processIgnoreSIGPIPE=noSendSIGHUP=yes# Unset locale for the console getty since the console has problems# displaying some internationalized messages.Environment=LANG= LANGUAGE= LC_CTYPE= LC_NUMERIC= LC_TIME= LC_COLLATE= LC_MONETARY= LC_MESSAGES= LC_PAPER= LC_NAME= LC_ADDRESS= LC_TELEPHONE= LC_MEASUREMENT= LC_IDENTIFICATION=[Install]WantedBy=getty.targetDefaultInstance=tty1 I do masi@raspberrypi:~ $ sudo systemctl stop autologin@tty1masi@raspberrypi:~ $ ps -fu piUID PID PPID C STIME TTY TIME CMDmasi@raspberrypi:~ $ sudo deluser piperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = "en_US.UTF-8", LC_PAPER = "fi_FI.UTF-8", LC_ADDRESS = "fi_FI.UTF-8", LC_MONETARY = "fi_FI.UTF-8", LC_NUMERIC = "fi_FI.UTF-8", LC_TELEPHONE = "fi_FI.UTF-8", LC_IDENTIFICATION = "fi_FI.UTF-8", LC_MEASUREMENT = "fi_FI.UTF-8", LC_TIME = "fi_FI.UTF-8", LC_NAME = "fi_FI.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").Removing user `pi' ...Warning: group `pi' has no more members.Done. Problems it creates always a new process to the user pi after removing its current processes. How can you prevent this? Successful attempt [Stephen] Do masi@raspberrypi:~ $ sudo systemctl stop autologin@tty1masi@raspberrypi:~ $ ps -fu piUID PID PPID C STIME TTY TIME CMDmasi@raspberrypi:~ $ sudo deluser pi...masi@raspberrypi:~ & sudo vim /etc/passwd ... pi no longer here!masi@raspberrypi:~ & sudo deluser -remove-home pi Solution of locale problem is in the thread here . Replace autologin-user=pi with autologin-user=masi in /etc/lightdm/lightdm.conf . How can you remove the pi user successfully?
Your pi user is logged in, on tty1 ; you should log the user out before deleting it. (The menu-cached process is used by the LXDE desktop environment. There are probably other processes running for user pi .) If you don't have access to the GUI to log the user out ( i.e. , you're accessing the Raspberry Pi remotely), the safest bet is probably to stop the desktop manager: sudo service lightdm stop (assuming you're using the LXDE default); this should kill all of pi 's processes. You'll also need to de-activate the auto-login ( login -f ). If you have an old-style inittab , edit your /etc/inittab file, and replace the line looking something like 1:2345:respawn:/bin/login -f pi tty1 </dev/tty1 >/dev/tty1 2>&1 (the important part being /bin/login -f pi tty1 ) with 1:2345:respawn:/sbin/getty 115200 tty1 Then reload init by running sudo telinit q With a systemd unit for auto-login, such as your autologin service, disable the service: sudo systemctl --now disable autologin@tty1 This will also stop the unit and reload the systemd configuration. At this point, if pi still has any running processes (as indicated by ps -fu pi ), kill them — they shouldn't respawn any more.
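As for the recurring perl locale warnings: they are independent of deluser. The session asks for fi_FI.UTF-8 locales that were never generated. On Raspbian/Debian the usual fix is to uncomment them in /etc/locale.gen and regenerate (or use raspi-config's localisation menu):

    sudo sed -i 's/^# *fi_FI.UTF-8 UTF-8/fi_FI.UTF-8 UTF-8/' /etc/locale.gen
    sudo locale-gen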
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
287,629
I want to clean up my server, removing large log files and backups. I came up with this:

    find ./ -size +1M | xargs rm

But I do not want to include mp3 and mp4 files; I just want to do this for log and archive files (zip, tar, etc.). What should the command look like?
    find -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -delete

The \( \) construct allows grouping different filename patterns. By using the -delete option, we can avoid piping and trouble with xargs. See this, this and this. ./ or . is optional when using the find command for the current directory.

Edit: As Eric Renouf notes, if your version of find doesn't support the -delete option, use the -exec option:

    find -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -exec rm {} +

where all the files filtered by the find command are passed to the rm command.
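Since the question also mentions log files, add that pattern too; and it is worth previewing the match list with -print before switching to -delete:

    find . -type f \( -name '*.zip' -o -name '*.tar' -o -name '*.gz' -o -name '*.log' \) -size +1M -print
    find . -type f \( -name '*.zip' -o -name '*.tar' -o -name '*.gz' -o -name '*.log' \) -size +1M -delete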
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/287629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122146/" ] }
287,654
Problem statement: I want to extract an unknown string (the last string) from a given path name in a single one-line command.

Restrictions: The path is dynamic and can change with the user's input. Only the last string is to be extracted, using only one line of command.

Samples:

Eg1: /home/xyz/Desktop/tools - in this case, I need to extract just the word tools.

Eg2: /tmp/my_directory/my_big_dir/my_small/dir/cross - here again, I need to extract the last string, cross.

Is there a way to do this? I tried to use the cut command, but it didn't work, as the path length is dynamic.
I think basename is the command you are looking for.

    [me@host ~]# basename /home/xyz/Desktop/tools
    tools
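If you want to avoid spawning a process, plain POSIX parameter expansion in the shell does the same job:

    path=/tmp/my_directory/my_big_dir/my_small/dir/cross
    echo "${path##*/}"      # -> cross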
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171377/" ] }
287,721
Is there a program that I can use to list processes based on their current network I/O usage? top does CPU, and on FreeBSD at least, it will also do disk I/O if you pass it -m io (I assume that there's an equivalent of some kind on Linux, but I don't remember it off the top of my head). But what I'd like is specifically network I/O so that I can see which processes are using it and how much. Is there a program that I can use to list processes that way? And if not, what would be the best alternative?
There's nethogs , which groups live bandwidth usage by process — that's the closest match to what you're asking for — and ntop for a broader per-host/per-flow view. And on Linux there's iotop for disk I/O.
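A typical invocation, assuming your interface is eth0 (substitute your own): sudo nethogs eth0 This opens a live, top-like view with one row per process, sorted by sent/received bandwidth.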
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/287721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2712/" ] }
287,798
I am on Ubuntu 15.10 x64. When I am trying to edit the server.js file, it opens a blank nano editor and displays "File server.js is being edited (by root with nano 2.4.2, PID xxxx); continue?" with options - Yes, No, Cancel. I copied a backup file over this file but I still get the same message. Could you please suggest how to resolve this?
Check with tools like ps and htop whether this other nano instance is still running. If it's not, there's most likely a hidden dotfile in the same folder which leads nano to believe that the other instance is still running (at least vim works this way; I don't use nano). Try ls -lA and look for a file whose name begins with .server.js or something like that.
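If such a lock file exists and you're certain no other editor still has the file open, removing it should clear the warning — for example (the exact name is a guess; nano's lock files are normally dot-prefixed with a .swp suffix): ls -la .server.js* ; rm .server.js.swp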
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/287798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173659/" ] }
287,812
I have set up an Ubuntu server that I plan on using to back up my MacBook Air using rsync . But every time I use rsync , or even scp , the connection drops with one of the following errors: packet_write_wait: Connection to 192.168.1.202: Broken pipe ; packet_write_poll: Connection to 192.168.1.202: Broken pipe ; packet_write_poll: Connection to 192.168.1.202: Protocol wrong type for socket . Now I have searched for other questions like this, and usually people have this problem with long backups where the session times out. For me this always happens within 10 seconds of starting the file transfer. I get the same errors using scp and rsync. I guess it could be due to a faulty network connection, but I find it hard to believe that my connection to the server on the same LAN is that unstable. Anyone have any ideas? Examples of the commands I have used that result in the errors: scp -r /Users/Matt/Documents [email protected]:/media/matt/MattsBackups/ and /usr/local/bin/rsync -av -e ssh /Users/Matt/Documents [email protected]:/media/matt/MattsBackups/ I did some more testing today, and strangely enough it was working fairly reliably from outside my LAN. So I tried again from inside my home network, and it still doesn't work. Running grep 'sshd' /var/log/auth.log on the server shows the following error: fatal: ssh_dispatch_run_fatal: Connection from <My IP> port 49870: message authentication code incorrect Some more detailed information on my setup: MacBook Air — OS X 10.11.5, OpenSSH_6.9p1, LibreSSL 2.1.8, rsync version 3.1.2 protocol version 31; Ubuntu server — OpenSSH_7.2p2 Ubuntu-4ubuntu1, OpenSSL 1.0.2g-fips 1 Mar 2016. I did notice the difference in ssh versions but I was hoping it wouldn't be an issue. I can try and install a newer version with homebrew. UPDATE: OK I just updated ssh with homebrew to OpenSSH_7.2p2, OpenSSL 1.0.2g 1 Mar 2016, which appears to be the same version used on the Ubuntu box. However, rsync still results in the error when I run the command.
Heres the command I tried with the ssh -v flag: /usr/local/bin/rsync -a -e '/usr/local/bin/ssh -v -c aes128-ctr -m hmac-sha1' /Users/Matt/Documents [email protected]:/media/matt/MattsBackups/ The output is: OpenSSH_7.2p2, OpenSSL 1.0.2g 1 Mar 2016debug1: Reading configuration data /usr/local/etc/ssh/ssh_configdebug1: Connecting to 192.168.1.202 [192.168.1.202] port 22.debug1: Connection established.debug1: identity file /Users/Matt/.ssh/id_rsa type 1 debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /Users/Matt/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.2debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu1debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu1 pat OpenSSH* compat 0x04000000debug1: Authenticating to 192.168.1.202:22 as 'matt'debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: algorithm: [email protected]: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha1 compression: nonedebug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha1 compression: nonedebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:+zkrXNJENs5EobFwHa8wpMDe6zPDfj975qLcPp4b4sgdebug1: Host '192.168.1.202' is known and matches the ECDSA host key.debug1: Found key in /Users/Matt/.ssh/known_hosts:1debug1: rekey after 4294967296 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: rekey after 4294967296 blocksdebug1: SSH2_MSG_NEWKEYS receiveddebug1: SSH2_MSG_EXT_INFO receiveddebug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>debug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /Users/Matt/.ssh/id_rsadebug1: Server accepts key: pkalg rsa-sha2-512 blen 279debug1: Authentication succeeded (publickey).Authenticated to 192.168.1.202 ([192.168.1.202]:22).debug1: channel 0: new [client-session]debug1: Requesting [email protected]: Entering interactive session.debug1: pledge: networkdebug1: client_input_global_request: rtype [email protected] want_reply 0debug1: Sending command: rsync --server -logDtpre.iLsfxC . /media/matt/MattsBackups/MacAir/debug1: channel 0: free: client-session, nchannels 1debug1: fd 0 clearing O_NONBLOCKdebug1: fd 1 clearing O_NONBLOCKConnection to 192.168.1.202 closed by remote host.Transferred: sent 145304, received 13032 bytes, in 0.1 secondsBytes per second: sent 1373019.8, received 123143.2debug1: Exit status -1rsync: [sender] write error: Broken pipe (32)rsync error: error in socket IO (code 10) at io.c(820) [sender=3.1.2]
Well, as a last resort, I found an old 10/100 Ethernet card from a Windows 98 PC and installed it in the server. After configuring it, I have had no more errors over about 30 GB of data. I guess the built-in Ethernet chipset didn't work well with Ubuntu, or I had somehow configured it incorrectly. Edit: While I never found the root cause of my problem, be sure to check out the comment thread under @sourcejedi's answer. Big thanks to @sourcejedi, @sneep and @dentarg.
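For anyone else chasing the same message authentication code incorrect symptom: it generally points to packets being corrupted below the SSH layer, and buggy NIC checksum/segmentation offload is a frequent culprit — which would be consistent with a hardware swap curing it. A common diagnostic (not something verified in this particular thread) is to switch the offloads off on the server and retry the transfer: sudo ethtool -K eth0 tx off rx off tso off gso off gro off (replace eth0 with the actual interface). If the errors stop, the onboard NIC or its driver's offload path was mangling the traffic.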
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173664/" ] }
287,866
I have a bash script with an else branch in which I want to do nothing. What is the best way to do this?
The standard way to do it is using the colon null command ( : ): if condition; then command; else :; fi or true : if condition; then command; else true; fi But why not just skip the else part entirely: if condition; then command; fi In zsh and yash , you can even leave the else part empty (an else with nothing at all between it and the closing fi ).
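A concrete use, in case the placeholders obscure it: if grep -q '^admin:' /etc/passwd; then echo "admin exists"; else :; fi The : builtin does nothing and always exits with status 0, which is exactly what an empty branch needs.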
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/287866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159904/" ] }
287,903
I have inotifywait (version 3.14) on Linux to monitor a folder that is shared with Samba version 4.3.9-Ubuntu. It works if I copy a file from the Linux machine to the Samba share (which is on a different machine, also under Linux). But if I copy a file from a Windows machine, inotify won't detect anything. Spaces or no spaces, recursive or not, the result is the same. printDir="/media/smb_share/temp/monitor" ; inotifywait -m -r -e modify -e create "$printDir" | while read line ; do echo "$line" ; done Does anyone have any ideas of how to solve it?
inotify can only report events that pass through the local kernel, and /media/smb_share appears to be a CIFS/SMB mount of a share hosted on the other machine. When you copy a file from this Linux box, the write goes through this kernel's CIFS client, so inotifywait sees it; when the Windows machine copies a file, it talks directly to the file server and nothing ever passes through the watching kernel, so no event is produced. CIFS (like NFS and most other network filesystems) does not propagate change notifications from the server to its clients. The practical solutions are to run inotifywait on the machine that actually hosts the files — there every client's writes are local kernel events — or, if you can't run anything on the server, to fall back to polling the directory for changes.
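A minimal sketch of the server-side watch (the path is illustrative — use the directory the share actually exports on the file server): printDir="/srv/samba/temp/monitor" ; inotifywait -m -r -e modify -e create "$printDir" | while read line ; do echo "$line" ; done Run on the server itself, this catches writes from Windows and Linux clients alike.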
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/287903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147365/" ] }
287,913
I was trying to make a Linux server become a RADIUS client, so I downloaded pam_radius, following the steps from this website: openacs.org/doc/install-pam-radius.html — that is, these steps: cd /usr/local/src ; wget ftp://ftp.freeradius.org/pub/radius/pam_radius-1.3.16.tar ; tar xvf pam_radius-1.3.16 ; cd pam_radius ; make ; cp pam_radius_auth.so /lib/security I thought I could install it but I got stuck at "make". I get this error message: [root@zabbix pam_radius-1.4.0]# make cc -Wall -fPIC -c src/pam_radius_auth.c -o pam_radius_auth.o make: cc: Command not found make: *** [pam_radius_auth.o] Error 127 I googled this error message and someone said they installed pam-devel. But I get the same message even after installation of pam-devel. What can I do?
The build is failing because there is no C compiler on the machine: make: cc: Command not found means make cannot find cc , and pam-devel only supplies the PAM headers, not a compiler. Install gcc and run make again — on a RHEL/CentOS/Fedora-style system (which the root@zabbix prompt suggests) that's yum install gcc , or pull in the whole toolchain with yum groupinstall "Development Tools" ; on Debian/Ubuntu it would be apt-get install build-essential instead.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/287913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167898/" ] }
287,974
I have followed the instructions from their site. The command ls /usr/local/hdf5 returns bin include lib share but dpkg -s hdf5 gives dpkg-query: package 'hdf5' is not installed and no information is available Why? I have downloaded the source from here https://www.hdfgroup.org/HDF5/release/obtainsrc.html#conf and then I have followed the INSTALL file instructions, from make to make install. How can I know for sure whether HDF5 is installed or not?
The actual name of the hdf5 installation package in the Debian/Ubuntu repositories is "libhdf5-dev" (not "hdf5"), so running the following command should return package information if it was installed through apt: dpkg -s libhdf5-dev If that doesn't give any results, you can check for any packaged hdf5 installation by doing: dpkg -l | grep hdf5 Note, however, that dpkg only tracks software installed from .deb packages — a build from source that you installed with make install into /usr/local/hdf5 is invisible to dpkg, so an empty result doesn't mean HDF5 is absent.
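For a source build you can interrogate the installed tree directly — e.g., with the default /usr/local/hdf5 prefix from the INSTALL instructions: ls /usr/local/hdf5/lib/libhdf5* and /usr/local/hdf5/bin/h5cc -showconfig The latter prints the build configuration of the installed library, which is about as sure a sign as you can get that HDF5 is in place.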
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287974", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143955/" ] }
287,979
I use Lynx web browser from remote SSH (so I have no GUI ) to enter into the web interface of some router and access points. It works fine with some Ovislink and Buffalo models I have tested, but it shows no info on screen when trying to navigate to two D-Link models. Authentication should be the first step on both devices, but Lynx does not ask me for it. The OvisLink model does not ask for password, but the Buffalo does, and Lynx request it OK as it must. Case for D-Link DWL-2100AP : DWL-2100AP[Blank lines here]Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back. Arrow keys: Up and Down to move. Right to follow a link; Left to go back. H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list Case for D-Link DAP-2310 : REFRESH(0 sec): http://192.168.1.231/index.php[Blank lines here]Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back. Arrow keys: Up and Down to move. Right to follow a link; Left to go back. H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list This is the GUI first screen (the browser pops up a separate credentials window) for DWL-2100AP (older) model: And this one is the D-Link DAP-2310 (credentials inside the web page): Is there a way for Lynx to access these interfaces? Further tests: Adding -auth=ID:PASSWD to the lynx IP command, with same results. Further data about the interfaces: It seems both devices use Java . This is the starting HTML, obtained with curl , that DWL-2100AP seems to send: <html><head><script src="jsMain.js"> </script><title>DWL-2100AP</title></head><script language='JavaScript'>document.cookie = 'RpWebID=3c26d7c1';</script><script language='JavaScript'>function JumpToHmain(){location.replace('/html/HomeWizard.html');}window.setTimeout('JumpToHmain()',1);</script> Starting web page for DWL-2100AP (after inputing credentials): http://IPAddress/html/HomeWizard.html If I try to download it with curl (whether be it using -u username:password or not): <html><head><title>Object Not Found</title></head><body><h1>Object Not Found</h1>The requested URL '/html/HomeWizard.html' was not found on the RomPager server.<p>Return to <A HREF="">last page</A><p></body></html> Starting web page for DAP-2310 (requesting credentials page): http://IPAddress/login.php Attempt to reboot the DWL-2100AP via WGet (direct URL): luis@Fresoncio:~$ wget http://admin:[email protected]/Forms/RESET_Switch converted 'http://admin:[email protected]/Forms/RESET_Switch?FlagForReboot=' (ANSI_X3.4-1968) -> 'http://admin:[email protected]/Forms/RESET_Switch?FlagForReboot=' (UTF-8) --2016-06-07 00:46:42-- http://admin:*password*@192.168.1.232/Forms/RESET_Switch?FlagForReboot= Connecting to 192.168.1.232:80... connected. HTTP request sent, awaiting response... 303 See Other Location: http://192.168.1.232/html/HomeWizard.html [following] converted 'http://192.168.1.232/html/HomeWizard.html' (ANSI_X3.4-1968) -> 'http://192.168.1.232/html/HomeWizard.html' (UTF-8) --2016-06-07 00:46:42-- http://192.168.1.232/html/HomeWizard.html Reusing existing connection to 192.168.1.232:80. HTTP request sent, awaiting response... 404 Not Found 2016-06-07 00:46:42 ERROR 404: Not Found. Lynx does not report of any need to accept cookies when browsing to DWL-2100AP. The WireShark/TCPDump capture for the reboot attempt shows a GET to http://192.168.1.232/html/HomeWizard.html with the description Authorization: Basic XXXXXXXXXX and just below Credentials: admin:MyEditedPassword . I think this could be Base64 encoding. ... 
and then I try WGet with --post-data for RpWebId=3c268b4c and FlagForReboot=&Submit=+Restart+ : ~$ wget 192.168.1.232/Forms/RESET_Switch -post-data='RpWebId=3c268b4c' --post-data='FlagForReboot=&Submit=+Restart+'converted 'http://192.168.1.232/Forms/RESET_Switch' (ANSI_X3.4-1968) -> 'http://192.168.1.232/Forms/RESET_Switch' (UTF-8)--2016-06-07 10:29:25-- http://192.168.1.232/Forms/RESET_SwitchConnecting to 192.168.1.232:80... connected.HTTP request sent, awaiting response... 303 See OtherLocation: http://192.168.1.232/html/HomeWizard.html [following]converted 'http://192.168.1.232/html/HomeWizard.html' (ANSI_X3.4-1968) -> 'http://192.168.1.232/html/HomeWizard.html' (UTF-8)--2016-06-07 10:29:25-- http://192.168.1.232/html/HomeWizard.htmlReusing existing connection to 192.168.1.232:80.HTTP request sent, awaiting response... 404 Not Found2016-06-07 10:29:25 ERROR 404: Not Found. This is the capture for the moment of the login: GET /html/HomeWizard.html HTTP/1.1Host: 192.168.1.232User-Agent: Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.6.0Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8Accept-Language: en-US,en;q=0.5Accept-Encoding: gzip, deflateReferer: http://192.168.1.232/Cookie: RpWebID=3c267896Connection: keep-aliveAuthorization: Basic YWRtaWAAAAAhQXBpYQ== The password part is the final YWRta... part, ending on Q== (The AAAAA is mine, in order to obscure the real password). If I do echo YWRtaWAAAAAhQXBpYQ== | base64 --decode the shell gives me admin:MyPassword . This is the reboot device page: This is my attempt (see JigglyNaga answer) to access the reboot device page via CURL without sending the FlagForReboot data: $ IP_ADDRESS=192.168.1.232$ curl -b "$COOKIE" -u "admin:MySecretPassword" $IP_ADDRESS/html/MntRestartSystem.html<html><head>... etc And this one sending the FlagForReboot data (I would swear they are the same): $ IP_ADDRESS=192.168.1.232$ curl -b "$COOKIE" -u "admin:MySecretPassword" --data "FlagForReboot=&Submit=+Restart+" $IP_ADDRESS/html/MntRestartSystem.html<html><head><title>DWL-2100AP</title><meta HTTP-EQUIV="Content-Type" content="text/html; charset=iso-8859-1"><LINK REL=stylesheet TYPE="text/css" HREF="web_style.css"><script language="JavaScript" src="jsMain.js"></script><script language="JavaScript">function ShowMessage(s){ // alert("Switch is rebooting and Web will be disconnected!"); var Msg='Device will reboot and web will be disconnected! 
Continue?'; if(confirm(Msg)) { return true; } else return false;}</script><style type="text/css">font{font-family:"Arial";font-size:10pt;}</style></head><body BGCOLOR=#FFFFFF leftmargin="0" topmargin="0" onLoad="Change_Device_Name()"><table width="75%" border="0" cellspacing="0" cellpadding="0" align=center> <tr> <td><div align=center><img id="img_logo" src="" width="765" height="95"></div></td></tr><tr> <td><table width=765 border=0 cellpadding=0 cellspacing=0 align=center> <tr> <td rowspan=9 width="20" background="/Images/down_01.gif">&nbsp; </td><td rowspan=2 width="133"> <img id="img_ap" src="" width=133 height=75></td><td rowspan=2 width="25" background="/Images/down_03.jpg">&nbsp; </td><td width="21"> <img src="/Images/tools_04.jpg" width=21 height=49></td><td width="522"> <img src="/Images/tools_over_05.jpg" width=522 height=49 usemap="#MapMap" border="0"></td><td width="19"> <img src="/Images/tools_06.jpg" width=19 height=49></td><td width="25" background="/Images/down_11.gif">&nbsp; </td></tr><tr> <td width="21" background="/Images/down_14.gif">&nbsp; </td><td rowspan=8 width="522" valign=top><FORM METHOD="POST" ACTION="/Forms/RESET_Switch"><INPUT TYPE="HIDDEN" NAME="FlagForReboot" VALUE="" id="FlagForReboot"> <table width="100%" border="0" height="100"> <tr > <td colspan=2 align=left height="24" bordercolorlight="#FFFFFF" bordercolordark="#000000"><b><font face=Arial color=#8bacb1 size=2> System Settings</font> </b></td></tr><tr> <td align=left height="20" width=200> <font face=Arial size=2> Apply Settings and Restart</font> </td><td><INPUT TYPE="SUBMIT" NAME="Submit" VALUE=" Restart " onClick="return ShowMessage()"> </td></tr></table></form><FORM METHOD="POST" ACTION="/Forms/RESTORE_Switch"><INPUT TYPE="HIDDEN" NAME="FlagForReboot" VALUE="" id="FlagForReboot"> <table width="100%" border="0" height="110"> <tr> <td align=left height="25" width=200> <font face=Arial size=2> Restore factory settings </font> </td><td><INPUT TYPE="SUBMIT" NAME="Submit" VALUE=" Restore " onClick="return ShowMessage()"> </td></tr><tr><td height=20 colspan=2><div align=right><a href=/html/help_tools.html#02 target=_blank><img src=/Images/help_p.jpg width=36 height=52 border=0></a></div></td></tr></table></form></td><td width="19"> <img src="/Images/down_10.jpg" width=19 height=26></td><td width="25" background="/Images/down_11.gif">&nbsp; </td></tr><tr> <td class="style1" width="133" height="57" align=center onClick="javascript:Link('MgtUserAccount.html')">Admin</td><td width="25" background="/Images/down_03.jpg" height="42">&nbsp; </td><td width="21" background="/Images/down_14.gif" height="42">&nbsp; </td><td width="19" background="/Images/down_40.gif" height="42">&nbsp; </td><td width="25" background="/Images/down_11.gif" height="42">&nbsp; </td></tr><tr> <td class="style2" width="133" height="57" valign=middle align=center onClick="javascript:Link('MntRestartSystem.html')">System</td><td width="25" background="/Images/down_03.jpg">&nbsp; </td><td width="21" background="/Images/down_14.gif">&nbsp; </td><td width="19" background="/Images/down_40.gif">&nbsp; </td><td width="25" background="/Images/down_11.gif">&nbsp; </td></tr><tr> <td class="style1" width="133" height="57" valign=middle align=center onClick="javascript:Link('MntUpdateFirmware.html?1')">Firmware</td><td width="25" background="/Images/down_03.jpg">&nbsp; </td><td width="21" background="/Images/down_14.gif">&nbsp; </td><td width="19" background="/Images/down_40.gif">&nbsp; </td><td width="25" background="/Images/down_11.gif">&nbsp; 
</td></tr><tr> <td class="style1" width="133" height="57" valign=middle align=center onClick="javascript:Link('MntConfigurationFile.html?0,0,0,0,0,0,0,0,0')">Cfg File</td><td width="25" background="/Images/down_03.jpg" height="6">&nbsp; </td><td width="21" background="/Images/down_14.gif" height="6">&nbsp; </td><td width="19" background="/Images/down_40.gif" height="6">&nbsp; </td><td width="25" background="/Images/down_11.gif" height="6">&nbsp; </td></tr> <tr> <td width="133" height="57" valign=middle align=center background="/Images/down_37.gif">&nbsp;</td><td width="25" background="/Images/down_03.jpg">&nbsp;</td><td width="21" background="/Images/down_14.gif">&nbsp; </td><td width="19" background="/Images/down_40.gif">&nbsp; </td><td width="25" background="/Images/down_11.gif">&nbsp; </td></tr> <tr> <td width="133" background="/Images/down_37.gif">&nbsp;</td><td width="25" background="/Images/down_03.jpg">&nbsp;</td><td width="21" background="/Images/down_14.gif">&nbsp; </td><td width="19" background="/Images/down_40.gif">&nbsp; </td><td width="25" background="/Images/down_11.gif">&nbsp; </td></tr> <tr> <td width="133" background="/Images/down_37.gif">&nbsp;</td><td width="25" background="/Images/down_03.jpg">&nbsp;</td><td width="21" background="/Images/down_14.gif">&nbsp; </td><td width="19" background="/Images/down_40.gif">&nbsp; </td><td width="25" background="/Images/down_11.gif">&nbsp; </td></tr><tr> <td colspan=6 rowspan=2> <img src="/Images/down_43.jpg" width=740 height=44></td><td width="25"> <img src="/Images/down_45.gif" width="25" height="17"></td></tr><tr> <td width="25"> <img src="/Images/down_44.gif" width=25 height=27></td></tr></table></td></tr></table><map id="MapMap" name="MapMap"> <area shape="rect" coords="17,17,82,45" href="/html/HomeWizard.html" target="_self"/> <area shape="rect" coords="109,18,205,42" href="/html/CfgWLanParam.html?1" target="_self"/> <area shape="rect" coords="232,19,289,46" href="/html/MgtUserAccount.html" target="_self"/> <area shape="rect" coords="346,18,405,45" href="/html/DeviceInfo.html" target="_self"/> <area shape="rect" coords="455,18,501,47" href="/html/help_men.html" target="_self"/></map></body></html> And this is its TCPDump capture (the part that seems relevant) for that moment, as shown on WireShark: Frame 4: 517 bytes on wire (4136 bits), 517 bytes captured (4136 bits)Ethernet II, Src: 00:00:00_00:09:77 (00:00:00:00:09:77), Dst: D-LinkIn_24:f7:6d (c8:d3:a3:24:f7:6d)Internet Protocol Version 4, Src: 192.168.1.99, Dst: 192.168.1.232Transmission Control Protocol, Src Port: 44981 (44981), Dst Port: 80 (80), Seq: 1, Ack: 1, Len: 451Hypertext Transfer Protocol GET /html/MntRestartSystem.html HTTP/1.1\r\n Host: 192.168.1.232\r\n User-Agent: Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.6.0\r\n Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n Accept-Language: en-US,en;q=0.5\r\n Accept-Encoding: gzip, deflate\r\n Referer: http://192.168.1.232/html/MgtUserAccount.html\r\n Cookie: RpWebID=3c26e560\r\n Authorization: Basic YWRtaW46VmlhQXBpYQ==\r\n Connection: keep-alive\r\n \r\n [Full request URI: http://192.168.1.232/html/MntRestartSystem.html] [HTTP request 1/2] [Response in frame: 15] [Next request in frame: 17]0000 47 45 54 20 2f 68 74 6d 6c 2f 4d 6e 74 52 65 73 GET /html/MntRes0010 74 61 72 74 53 79 73 74 65 6d 2e 68 74 6d 6c 20 tartSystem.html 0020 48 54 54 50 2f 31 2e 31 0d 0a 48 6f 73 74 3a 20 HTTP/1.1..Host: 0030 31 39 32 2e 31 36 38 2e 31 2e 32 33 32 0d 0a 55 
192.168.1.232..U0040 73 65 72 2d 41 67 65 6e 74 3a 20 4d 6f 7a 69 6c ser-Agent: Mozil0050 6c 61 2f 35 2e 30 20 28 58 31 31 3b 20 4c 69 6e la/5.0 (X11; Lin0060 75 78 20 61 72 6d 76 37 6c 3b 20 72 76 3a 33 38 ux armv7l; rv:380070 2e 30 29 20 47 65 63 6b 6f 2f 32 30 31 30 30 31 .0) Gecko/2010010080 30 31 20 46 69 72 65 66 6f 78 2f 33 38 2e 30 20 01 Firefox/38.0 0090 49 63 65 77 65 61 73 65 6c 2f 33 38 2e 36 2e 30 Iceweasel/38.6.000a0 0d 0a 41 63 63 65 70 74 3a 20 74 65 78 74 2f 68 ..Accept: text/h00b0 74 6d 6c 2c 61 70 70 6c 69 63 61 74 69 6f 6e 2f tml,application/00c0 78 68 74 6d 6c 2b 78 6d 6c 2c 61 70 70 6c 69 63 xhtml+xml,applic00d0 61 74 69 6f 6e 2f 78 6d 6c 3b 71 3d 30 2e 39 2c ation/xml;q=0.9,00e0 2a 2f 2a 3b 71 3d 30 2e 38 0d 0a 41 63 63 65 70 */*;q=0.8..Accep00f0 74 2d 4c 61 6e 67 75 61 67 65 3a 20 65 6e 2d 55 t-Language: en-U0100 53 2c 65 6e 3b 71 3d 30 2e 35 0d 0a 41 63 63 65 S,en;q=0.5..Acce0110 70 74 2d 45 6e 63 6f 64 69 6e 67 3a 20 67 7a 69 pt-Encoding: gzi0120 70 2c 20 64 65 66 6c 61 74 65 0d 0a 52 65 66 65 p, deflate..Refe0130 72 65 72 3a 20 68 74 74 70 3a 2f 2f 31 39 32 2e rer: http://192.0140 31 36 38 2e 31 2e 32 33 32 2f 68 74 6d 6c 2f 4d 168.1.232/html/M0150 67 74 55 73 65 72 41 63 63 6f 75 6e 74 2e 68 74 gtUserAccount.ht0160 6d 6c 0d 0a 43 6f 6f 6b 69 65 3a 20 52 70 57 65 ml..Cookie: RpWe0170 62 49 44 3d 33 63 32 36 65 35 36 30 0d 0a 41 75 bID=3c26e560..Au0180 74 68 6f 72 69 7a 61 74 69 6f 6e 3a 20 42 61 73 thorization: Bas0190 69 63 20 59 57 52 74 61 57 34 36 56 6d 6c 68 51 ic YWRtaW46VmlhQ01a0 58 42 70 59 51 3d 3d 0d 0a 43 6f 6e 6e 65 63 74 XBpYQ==..Connect01b0 69 6f 6e 3a 20 6b 65 65 70 2d 61 6c 69 76 65 0d ion: keep-alive.01c0 0a 0d 0a ... Now trying to "press" that Reset button via CURL (same result: $ curl -v -u "admin:MySecretPassword" -b "$COOKIE" --data "FlagForReboot=&Submit=+Restart+" $IP_ADDRESS/Forms/RESET_Switch* Hostname was NOT found in DNS cache* Trying 192.168.1.232...* Connected to 192.168.1.232 (192.168.1.232) port 80 (#0)* Server auth using Basic with user 'admin'> POST /Forms/RESET_Switch HTTP/1.1> Authorization: Basic YWRtaW46VmlhQXBpYQ==> User-Agent: curl/7.38.0> Host: 192.168.1.232> Accept: */*> Cookie: RpWebID=3c270622> Content-Length: 31> Content-Type: application/x-www-form-urlencoded>* upload completely sent off: 31 out of 31 bytes< HTTP/1.1 303 See Other< Location: http://192.168.1.232/html/HomeWizard.html< Content-Length: 0* Server Allegro-Software-RomPager/4.06 is not blacklisted< Server: Allegro-Software-RomPager/4.06<* Connection #0 to host 192.168.1.232 left intact A rether different results with -L option for CURL: $ curl -b "$COOKIE" --data "FlagForReboot=&Submit=+Restart+" $IP_ADDRESS/Forms/RESET_Switch -L<html><head><title>DWL-2100AP</title><meta http-equiv="content-type" content="text/html; charset=iso-8859-1"/><link rel=stylesheet type="text/css" href="web_style.css"><script type="text/javascript" src="jsMain.js"></script><script type="text/javascript" src="WizardScript.js"></script><style type="text/css">font{font-family:"Arial";font-size:10pt;}td.h30{height:30px;}td.h60{height:60px;}td.h80{height:80px;}</style></head><body bgcolor="#ffffff" topmargin="0" onLoad="InitialSettings()"><table border="0" align=center cellpadding="0" cellspacing="0"> <tr> <td><img id="img_logo" src="" alt=""></td></tr><tr> <td><table border="0" align=center cellspacing="0" cellpadding="0"> <tr><!-- row 1 --> <td width=20 rowspan=3 background="/Images/down_01.gif"></td><td width=133 rowspan=2><img id="img_ap" src="" border="0" alt=""/></td><td width=25 
background="/Images/down_03.jpg"></td><td width=21 background="/Images/down_04.jpg"></td><td width=522><img src="/Images/down_05.jpg" border="0" usemap="#MapMap"/></td><td width=19 background="/Images/down_06.jpg"></td><td width=25 background="/Images/down_11.gif"></td></tr><tr><!-- row 2 --> <td background="/Images/down_03.jpg"></td><td height="26" background="/Images/down_14.gif"></td><td rowspan=3 valign=top><FORM METHOD="POST" ACTION="/Forms/FormWizard"><table border="0" width="100%" align=center cellpadding="0" cellspacing="0"><tr><td colspan=2><!-- Beginning of Contents --><table width=510 border="0"> <tr> <td colspan=2> <font color="#8bacb1"><b>Setup Wizard</b></font> <INPUT TYPE="HIDDEN" NAME="Run_Wizard" VALUE="0" id="Run_Wizard"> </td></tr> <tr> <td height="150"> <b><font> The <span id="APName0"></span>&nbsp;is a <span id="Device_Type"></span>. The setup wizard will guide you through the configuration of the <span id="APName1"></span>. The <span id="APName2"></span>'s easy setup will allow you to have wireless access within minutes. Please follow the setup wizard step by step to configure the <span id="APName3"></span>. </font></b> </td></tr><tr> <td>&nbsp;</td></tr><tr> <td align=center><INPUT TYPE="SUBMIT" NAME="Submit" VALUE=" Run Wizard " id="Run_Wizard" onClick="formSubmit(1)" align=right> </td></tr><tr> <td align=right> <a href="help_home.html#01" target=_blank> <img src="/Images/help_p.jpg" width="36" height="52" border="0"> </a></td></tr></table><!-- End of Contents --></td></tr></table></form> </td><td background="/Images/down_10.jpg"></td><td background="/Images/down_11.gif"></td></tr><tr><!-- row 3 --> <td valign=top background="/Images/down_37.gif"> <table width="100%" border="0" cellspacing="0" cellpadding="0" align=center> <tr> <td class="style2" align=center valign=middle onClick="javascript:Link('HomeWizard.html')">Wizard</td></tr><tr> <td class="style1" align=center valign=middle onClick="javascript:Link('Wireless.html?1')">Wireless</td></tr><tr> <td class="style1" align=center valign=middle onClick="javascript:Link('CfgIpSetup.html')">LAN</td></tr></table></td><td background="/Images/down_03.jpg"></td><td background="/Images/down_14.gif"></td><td background="/Images/down_40.gif"></td><td background="/Images/down_11.gif"></td></tr><tr><!-- row 4 --> <td height="150" background="/Images/down_37.gif"></td><td background="/Images/down_37.gif"></td><td background="/Images/down_03.jpg"></td><td background="/Images/down_14.gif"></td><td background="/Images/down_40.gif"></td><td background="/Images/down_11.gif"></td></tr><tr><!-- row 5 --> <td colspan=6 rowspan=2><img src="/Images/down_43.jpg" border="0"></td><td><img src="/Images/down_45.gif" border="0"></td></tr><tr> <td><img src="/Images/down_44.gif" border="0"></td></tr></table></td></tr></table><map id="MapMap" name="MapMap"> <area shape="rect" coords="17,17,82,45" href="/html/HomeWizard.html" target="_self"/> <area shape="rect" coords="109,18,205,42" href="/html/CfgWLanParam.html?1" target="_self"/> <area shape="rect" coords="232,19,289,46" href="/html/MgtUserAccount.html" target="_self"/> <area shape="rect" coords="346,18,405,45" href="/html/DeviceInfo.html" target="_self"/> <area shape="rect" coords="455,18,501,47" href="/html/help_men.html" target="_self"/></map><script type="text/javascript">var Run_Wizard = document.getElementById("Run_Wizard");var NewPwd = document.getElementById("NewPwd");var CfmNewPwd = document.getElementById("CfmNewPwd");var channel = document.getElementById("channel");var WizardRootSsid 
= document.getElementById("WizardRootSsid");var no = document.getElementById("No");var wpa = document.getElementById("wpa");var psk = document.getElementById("psk");var keytype = document.getElementById("Wizard_KeyType");var KeySize1 = document.getElementById("KeySize1");var KeySize2 = document.getElementById("KeySize2");var KeySize3 = document.getElementById("KeySize3");var key = document.getElementById("Key");var passphrase = document.getElementById("passphrase");var cipher = document.getElementById("cipher");var groupkey = document.getElementById("groupkey");</script></body></html> After that, there will be a confirmation screen for reboots: I can add more data upon requests.
The blank screens come down to JavaScript, which Lynx does not execute at all. Look at the HTML you captured from the DWL-2100AP: the body is empty apart from scripts that set the RpWebID cookie via document.cookie and then jump to /html/HomeWizard.html with location.replace() — and the DAP-2310 similarly depends on scripting (and a refresh to index.php / login.php ) to reach its login page. A graphical browser runs those scripts and lands on the login form; Lynx renders the (empty) static content and stops, which is why you see nothing and are never asked for credentials. The Ovislink and Buffalo interfaces work because they serve plain HTML, protected at most by ordinary HTTP Basic auth, which Lynx handles natively. There is no Lynx option that adds JavaScript support, so from a GUI-less SSH session your realistic options are a JavaScript-capable browser, or driving the interface non-interactively with curl / wget the way you already started: supply the RpWebID cookie and the Basic-auth credentials on every request, and POST the form fields straight to the /Forms/... handlers that the HTML reveals.
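A sketch of the scripted route, pieced together from the captures in the question (the cookie value is arbitrary — the device's own JavaScript just invents one — and PASSWORD is a placeholder): COOKIE="RpWebID=3c26d7c1" ; curl -u "admin:PASSWORD" -b "$COOKIE" --data "FlagForReboot=&Submit=+Restart+" -L http://192.168.1.232/Forms/RESET_Switch The 303 redirect you recorded suggests the form was accepted; whether the unit actually reboots will depend on the firmware, so treat this as a starting point rather than a guaranteed recipe.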
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/287979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
288,024
I want to get a list of packages that I have installed using apt-get . i.e. any packages that I have installed since my Linode (Debian) was initially created. Is this possible?
You can list installed packages using apt with the following command: apt list --installed Edit: use the following command to list the packages you installed by hand through apt-get : zcat /var/log/apt/history.log.*.gz | cat - /var/log/apt/history.log | grep -Po '^Commandline: apt-get install (?!.*--reinstall)\K.*' Note this can only reach as far back as the rotated history logs go. Source: Askubuntu
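If the relevant history logs have already been rotated away, apt-mark offers a longer-lived (if coarser) view: apt-mark showmanual This lists every package currently marked as manually installed — note it also includes packages that came with the base image, so it's a superset of what you explicitly ran apt-get install for.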
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/288024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173071/" ] }