201,751
I need to replace some non-printable characters with spaces in a file. Specifically, all characters from 0x00 up to 0x1F, except 0x09 (TAB), 0x0A (new line) and 0x0D (CR).

Until now, I only needed to replace the 0x00 character. Since my previous OS was AIX (without GNU commands), I couldn't use sed (well, I could, but it had some limitations). So I found the following command using perl, which worked as expected:

    perl -p -e 's/\x0/ /g' $FILE_IN > $FILE_OUT

Now I'm working on Linux, so I expected to be able to use the sed command. My questions:

Is this command appropriate to replace those characters? I tried it, and it seems to work, but I want to make sure:

    perl -p -e 's/[\x00-\x08\x0B\x0C\x0E-\x1F]/ /g' $FILE_IN > $FILE_OUT

I thought perl -p works like sed. So why does the previous command work (at least, it doesn't fail), and the next one doesn't?

    sed -e 's/[\x00-\x08\x0B\x0C\x0E-\x1F]/ /g' $FILE_IN > $FILE_OUT

It tells me:

    sed: -e expression #1, char 34: Invalid collation character
That's a typical job for tr:

    LC_ALL=C tr '\0-\10\13\14\16-\37' '[ *]' < in > out

In your case, it doesn't work with sed because you're in a locale where those ranges don't make sense. If you want to work with byte values, as opposed to characters, and where the order is based on the numerical value of those bytes, your best bet is to use the C locale. Your code would have worked with LC_ALL=C with GNU sed, but using sed (let alone perl) is a bit overkill here (and those \xXX are not portable across sed implementations, while this tr approach is POSIX).

You can also trust your locale's idea of what printable characters are with:

    tr -c '[:print:]\t\r\n' '[ *]'

But with GNU tr (as typically found on Linux-based systems), that only works in locales where characters are single-byte (so typically, not UTF-8). In the C locale, that would also exclude DEL (0x7f) and all byte values above (not in ASCII).

In UTF-8 locales, you could use GNU sed, which doesn't have the problem GNU tr has:

    sed 's/[^[:print:]\r\t]/ /g' < in > out

(note that those \r, \t are not standard, and GNU sed won't recognize them if POSIXLY_CORRECT is in the environment (it will treat them as backslash, r and t being part of the set, as POSIX requires)). It would not convert bytes that don't form valid characters, if any, though.
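A quick way to see this working (a minimal sketch; the \001 byte is just an arbitrary control character from the range to be replaced):

    $ printf 'a\001b\n' | LC_ALL=C tr '\0-\10\13\14\16-\37' '[ *]'
    a b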
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93422/" ] }
201,757
I have a device under test (DUT) and I measure its power usage with a Power Analyzer Datalogger, using the data from /dev/ttyUSB0. The problem is that the DUT is now remote from the workstation I used to gather data with, but on the same network. I need to use a 2nd PC, which is directly connected via USB to the Power Analyzer, as a sort of USB proxy, and use ssh to create a kind of symbolic link on the measuring machine to the USB device of the "proxy" machine.

Given the above diagram, how can the 1st PC access /dev/ttyUSB0 of the 2nd PC, which is directly connected, in a way that a program reading the stream from the 1st PC will not notice the difference?
socat might work here. On the 2nd PC you could let socat listen for data on /dev/ttyUSB0 and serve it to a tcp port, e.g:

    socat /dev/ttyUSB0,raw,echo=0 tcp-listen:8888,reuseaddr

Then on the 1st PC you can connect to the 2nd PC with socat and provide the data on a pseudo terminal /dev/ttyVUSB0 for your application:

    socat PTY,raw,echo=0,link=/dev/ttyVUSB0 tcp:<ip_of_pc2>:8888

This isn't tested and socat supports many options, so tweaking may be needed.
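To sanity-check the link end to end, you could attach any terminal program to the new pseudo terminal on the 1st PC and watch for data (a sketch; screen is one option if installed, and the baud rate is a placeholder for whatever the Power Analyzer uses):

    screen /dev/ttyVUSB0 9600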
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/201757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22558/" ] }
201,761
I have a large music collection stored on my hard drive, and browsing through it, I found that I have a lot of duplicate files in some album directories. Usually the duplicates exist alongside the original in the same directory. Usually the format is filename.mp3, and the duplicate file is filename 1.mp3. Sometimes there may be more than one duplicate file, and I have no idea if there are duplicate files across folders (for example, duplicates of album directories).

Is there any way I can scan for these duplicate files (for example by comparing file size, or comparing the entire files to check if they are identical), review the results, and then delete the duplicates? The ones that have a longer name, or the ones that have a more recent modified/created date, would usually be the targets of deletion.

Is there a program out there that can do this on Linux?
There is such a program, and it's called rdfind:

    SYNOPSIS
       rdfind [ options ] directory1 | file1 [ directory2 | file2 ] ...

    DESCRIPTION
       rdfind finds duplicate files across and/or within several directories.
       It calculates checksum only if necessary. rdfind runs in O(Nlog(N))
       time with N being the number of files.

       If two (or more) equal files are found, the program decides which of
       them is the original and the rest are considered duplicates. This is
       done by ranking the files to each other and deciding which has the
       highest rank. See section RANKING for details.

It can delete the duplicates, or replace them with symbolic or hard links.
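A typical session looks something like this (a sketch; review the results.txt report before deleting anything):

    rdfind -dryrun true ~/Music             # only report what would happen
    rdfind -deleteduplicates true ~/Music   # actually remove the duplicates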
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72554/" ] }
201,768
What is good common practice for storing shell scripts (bash, sh, ...)?

For now, I have a few bash and sh scripts in my $HOME directory and I invoke them with

    $ bash $HOME/script1.bash arg1, arg2, ...

or

    $ sh $HOME/script2.sh arg1, arg2, ...

May I store them in some standard location and invoke them as normal apps like ls, pwd, ...? E.g.

    $ script1 arg1, arg2, ...
    $ script2 arg1, arg2, ...

What is common practice here for advanced Linux users?
You could store your scripts where they belong in the filesystem, and create a bin directory in your home. Adding

    if [ -d "$HOME/bin" ] ; then
        export PATH="$HOME/bin:$PATH"
    fi

in your .bashrc makes any executable placed in ~/bin discoverable. Finally, you just need to add files to the directory. You can use symbolic links to whatever script you want to make discoverable in ~/bin, which allows you to virtually change the name of the script and leave it where you want on your filesystem.

As an example, for a file my_script.sh, first make sure the file is executable:

    chmod u+x my_script.sh

then create a symbolic link

    ln -s my_script.sh ~/bin/my_script

in the dedicated folder. Note that the extension was removed for convenience. You can now run your script from anywhere using the command my_script. You don't have to remake the symbolic link every time you edit the original my_script.sh file.

Edit: to make any text file executable via a certain interpreter, you can use a shebang. For a bash script, this means adding

    #!/bin/bash

as the first line of the file. Note that the technique is not restricted to bash scripts, but also applies to python, for instance, using

    #!/usr/bin/env python

Note: I personally use ~/.local/bin instead of ~/bin as a personal preference, but most people use a bin directory directly located in their home, not hidden. Many distributions integrate it directly, such as debian or ubuntu, which automatically add such a directory to the PATH if it exists (in the default .profile file they ship). My choice is based on the fact that many programs already use .local/share, that I consider it a configuration tool rather than a set of real files (only symbolic links), and that I don't want this folder to mess with completion.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
201,779
I am trying to use the solarized color scheme in VIM using gnome terminal (Ubuntu). When I run vim without tmux, it looks great (screenshot omitted).

If I add the following commands to my .bashrc

    # tmux configuration
    tmux attach &> /dev/null
    if [[ ! $TERM =~ screen ]]; then
        exec tmux
    fi

and start the terminal with tmux, the colors do not look right (screenshot omitted).

Here is the contents of the .tmux.conf file:

    source ~/.local/lib/python2.7/site-packages/powerline/bindings/tmux/powerline.conf
    set-option -g default-terminal "screen-256color"
    set-option -g history-limit 10000

I am using https://github.com/altercation/vim-colors-solarized for the vim color scheme, and the terminal is: https://github.com/Anthony25/gnome-terminal-colors-solarized .

EDIT:

With tmux:

    ~$ echo $TERM
    screen

Without tmux:

    ~$ echo $TERM
    xterm
The value of $TERM must be screen-256color, so that Vim correctly detects the availability of 256 colors. (tmux reuses the terminal definitions of screen, as this tool implements similar multiplexing.)

You either need to set the correct value for TERM inside tmux by adding the line

    set-option -g default-terminal "screen-256color"

to ~/.tmux.conf, or force 256 colors in your ~/.vimrc via

    set t_Co=256

(which would be a workaround, and best guarded by if $TERM == 'screen' if you also use non-high-color terminals).
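Spelled out, the guarded workaround in ~/.vimrc would look like this (a small sketch of the guard suggested above):

    if $TERM == 'screen'
        set t_Co=256
    endif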
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110304/" ] }
201,808
I would like to view the unique owners of all the files and directories underneath a certain directory. I have tried:

    ls -ltR <dir-path> | grep -P '^[d|\-]' | awk '{print $3}' | sort | uniq

which commits the cardinal sin of trying to parse ls output, but works -- until I try it on a directory with an immense number of files within a complex directory structure, where it bogs down and hangs. While I could work around this and simply run the command at lower levels and work up piece by piece, I was wondering if there is a more efficient way to do this in one fell swoop?
Here's a slightly shorter version that uses find:

    find <path> -printf "%u\n" | sort -u

Depending on the complexity of the directory structure, this may or may not be more efficient.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72058/" ] }
201,818
Is there any command in unix to check a POP3 account from the terminal? I mean, type the server/username/password of a POP3 account and see if the username/password is correct?
You can use telnet to connect to the mail server and talk POP3 to check your credentials:

    $ telnet pop.gmx.net 110
    Trying 212.227.17.185...
    Connected to pop.gmx.net.
    Escape character is '^]'.
    +OK POP server ready H migmx028 0MAbjW-1YwF4D0ml8-00BiVl
    USER [email protected]
    +OK password required for user "[email protected]"
    PASS typeyourpassword
    -ERR Error retrieving your GMX emails. Your connection is not encrypted. Enable SSL in your mail program. Instructions: https://ssl.gmx.net
    Connection closed by foreign host.

Well, this failed because most mail servers require an SSL/TLS encrypted session nowadays. So instead of using telnet you can use socat:

    $ socat - OPENSSL:pop.gmx.net:995
    +OK POP server ready H migmx113 0MC062-1Yzese0KO7-00AVNE
    USER [email protected]
    +OK password required for user "[email protected]"
    PASS typeyourpassword
    +OK mailbox "[email protected]" has 13518 messages (191718918 octets) H migmx113

If you type a wrong password, the server will probably say something like:

    -ERR authentication failed

Or instead of socat you probably have openssl laying around:

    $ openssl s_client -quiet -connect pop.gmx.net:995
    depth=2 C = DE, O = Deutsche Telekom AG, OU = T-TeleSec Trust Center, CN = Deutsche Telekom Root CA 2
    verify error:num=19:self signed certificate in certificate chain
    verify return:0
    +OK POP server ready H migmx108 0MWpjO-1YiwnK3ZfP-00XoK
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45335/" ] }
201,833
    hscroot@hmcserver:~> grep root /etc/group
    root:x:0:hscroot,ccfw
    hscroot@hmcserver:~> ls -la /etc/shadow
    -r-------- 1 root shadow 5252 2015-05-06 19:36 /etc/shadow
    hscroot@hmcserver:~> cat /etc/shadow
    cat: /etc/shadow: Permission denied
    hscroot@hmcserver:~> grep hscroot /etc/passwd
    hscroot:x:500:500:HMC Super User:/home/hscroot:/bin/hmcbash
    hscroot@hmcserver:~> echo $DISPLAY
    localhost:10.0
    hscroot@hmcserver:~> su
    -bash: su: command not found
    hscroot@hmcserver:~> sudo su
    -bash: sudo: command not found
    hscroot@hmcserver:~> bash
    bash: bash: command not found
    hscroot@hmcserver:~> chs
    bash: chs: command not found
    hscroot@hmcserver:~> ksh
    bash: ksh: command not found
    hscroot@hmcserver:~> ls /bin/bash
    /bin/bash
    hscroot@hmcserver:~> /bin/bash
    bash: /bin/bash: restricted: cannot specify `/' in command names
    hscroot@hmcserver:~> exit
    exit
    Connection to 1.2.3.4 closed.
    [user@notebook ~]$ ssh [email protected] /bin/bash
    Password:
    /bin/bash: /bin/bash: restricted: cannot specify `/' in command names
    [user@notebook ~]$

Question: How can I cat /etc/shadow? I only have the "hscroot" user. I have X forwarding if I use "ssh -X".
You submit a support call to IBM, who then give you the hscpe user password, which is good for one day. That user ID and password allow you to gain access to root (assuming you recorded the root password when you installed the HMC). Then you can cat /etc/shadow.

You can't do it without root access (by design), and you can't simply switch to root either (also by design) on an HMC.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112826/" ] }
201,838
Say I have a folder with 10K text files. I would like to remove every space, TAB and line break from each file. How can I do this efficiently?
You can use tr:

    LC_ALL=C tr -d '[:blank:]\n' < file_in > file_out

Since you have to work with 10k files, a better solution would be:

    find . -type f -exec perl -i.bak -pe 's/ |\t|\n//g' {} +
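If you prefer to stay with tr for the batch job, it cannot edit in place, so each file needs a temporary copy (a sketch; unlike perl -i.bak, this keeps no backups):

    find . -type f -exec sh -c '
        for f do
            LC_ALL=C tr -d "[:blank:]\n" < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done' sh {} +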
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
201,869
Cannot get why

    $ apt-cache policy foo
    N: Unable to locate package foo

but

    $ apt-cache policy foo 2>&1 | grep .

is empty. Where in the latter call am I making the wrong assumption?

The original task: I need to process the apt-cache policy output presumably :-)

UPD: foo used in my example may be substituted with any package name that does not exist in your apt-get index.

UPD 2: there is an answer with a workaround. An additional +50 bounty will be awarded to anyone who explains why the 2>&1 solution does not work.
If stdout is not a tty (i.e. it's a regular file or a pipe) and if no --quiet option has been specified, apt-cache acts as if you had passed it --quiet=1. A workaround is to pass it a --quiet=0 option.

    $ apt-cache --quiet=0 policy foo 2>&1 | grep .
    N: Unable to locate package foo
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31347/" ] }
201,889
Is a directory removed when its number of hard links becomes 0? A directory always has at least 2 as its number of hard links, because of " . ". When you rm -r a directory, does it decrease the number of hard links from 2 to 0, i.e. by 2 instead of 1? Can the number of hard links of a directory ever be 1? Thanks.
Firstly, not all filesystems use . and .. as hard links; this is documented in the GNU find manual. I am going to ignore those filesystems for the rest of my answer, because they were not designed for unix and only complicate things without adding clarity. I am also going to ignore the root directory and mount points for the same reason.

The number of links to a directory is never less than two, because of . and .. . The number of subdirectories is equal to the number of links minus two. Because of this you cannot link or unlink a directory, so rm -r will stat a file before deleting and use rmdir instead of unlink on directories. The two system calls use completely different code paths in the kernel.
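You can watch the link count track the subdirectory count yourself (a small sketch using GNU stat; %h prints the number of hard links):

    $ mkdir -p d/sub1 d/sub2
    $ stat -c %h d        # 2 (for . and the parent's entry) + 2 subdirectories
    4
    $ rmdir d/sub1 d/sub2
    $ stat -c %h d
    2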
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
201,900
If you have gnome-terminal running and want a new instance of the program, you might think that running gnome-terminal & from a shell would do the trick. Astonishingly, this new instance behaves like some insipid Windows or Mac program; it only sends a message to the existing, running gnome-terminal to create a new window. If this one gnome-terminal process crashes, you lose all of the terminal windows! (Of course, each window has its own shell, which is an independent process, but the actual terminal emulator and its GUI are managed from a single instance of the application.)

How can we create independent instances of gnome-terminal, each running in its own process, so that killing that process only destroys the window(s) associated with that process?
According to man gnome-terminal, the option you're looking for appears to be the confusingly-named

    --disable-factory
        Do not register with the activation name server, do not re-use an
        active terminal.

However, the option has apparently been removed in more recent releases, so it should not be relied on.
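So, on a version that still supports the option, each window started like this gets its own process:

    gnome-terminal --disable-factory &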
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16369/" ] }
202,043
I am no longer able to forward X11 using KiTTY/PuTTY to CygwinX. I am connecting to an Ubuntu Server 14.10 machine that is correctly configured to allow X11 forwarding. I am able to initiate X11 forwarding using Cygwin xterm and from other linux machines. I am using CygwinX [1.7.34(0.285/5/3)] and KiTTY 0.64.0.1 (PuTTY fork, I have also tried PuTTY), on Win7.

I have verified my display variable and have tried disabling xhost access control in Cygwin xterm.

    $ echo $DISPLAY
    :1
    $ xhost +
    access control disabled, clients can connect from any host

My KiTTY/PuTTY is configured to enable X11 forwarding and the correct display is set. I've tried :1 and :1.0. When I SSH to the server my DISPLAY variable is set and xauth is updated. I have deleted my .Xauthority and recreated it to verify.

    user@server:~$ echo $DISPLAY
    localhost:10.0
    user@server:~$ xauth list
    server/unix:10 MIT-MAGIC-COOKIE-1 3983b2d7f3d5f9f66d9796997771bf82

When I attempt to launch an X11 application I get the following error.

    user@server:~$ xterm
    KiTTY X11 proxy: unable to connect to forwarded X server: Network error: Connection refused
    xterm: Xt error: Can't open display: localhost:10.0

XWin.exe is listening on port 34576 if that matters.

    [XWin.exe] TCP 127.0.0.1:34576 0.0.0.0:0 LISTENING

I believe there is a software or configuration issue I am missing, as I am seeing this with multiple server and client machines. Any help would be appreciated.
Ok, I figured out the solution to my own problem. By default CygwinX no longer listens for tcp connections (Cygwin SSH is using Unix sockets to connect). To enable tcp connections, "-listen tcp" needs to be added to the command line parameters. In my case I changed the "XWin Server" icon to read:

    C:\cygwin64\bin\run.exe --quote /usr/bin/bash.exe -l -c "cd; /usr/bin/startxwin -- -multiwindow -listen tcp"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86315/" ] }
202,044
I have a fresh install of CentOS 7 on which I have installed syslog-ng from the EPEL repositories.

    ~: yum list | grep syslog
    syslog-ng.x86_64    3.5.6-1.el7    @epel

When I try to start it via systemctl, it fails as follows:

    /usr/lib/systemd/system: systemctl start syslog-ng
    Job for syslog-ng.service failed. See 'systemctl status syslog-ng.service' and 'journalctl -xn' for details.

When looking into the journals, we can see that there is a dependency on the socket, which "starts" fine, but that the process returns an error about the arguments being incorrect, as shown below:

    May 07 17:26:15 superserver.company.corp systemd[1]: Starting Syslog Socket.
    May 07 17:26:15 superserver.company.corp systemd[1]: Listening on Syslog Socket.
    May 07 17:26:15 superserver.company.corp systemd[1]: Starting System Logger Daemon...
    May 07 17:26:15 superserver.company.corp systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENT
    May 07 17:26:15 superserver.company.corp systemd[1]: Failed to start System Logger Daemon.
    May 07 17:26:15 superserver.company.corp systemd[1]: Unit syslog-ng.service entered failed state.
    May 07 17:26:15 superserver.company.corp systemd[1]: syslog-ng.service holdoff time over, scheduling restart.
    May 07 17:26:15 superserver.company.corp systemd[1]: Stopping System Logger Daemon...
    May 07 17:26:15 superserver.company.corp systemd[1]: Starting System Logger Daemon...
    May 07 17:26:15 superserver.company.corp systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENT

If we look into the service configuration file, we can confirm the dependency on the socket and the command that is used to start the service:

    [Service]
    Type=notify
    Sockets=syslog.socket
    ExecStart=/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid

The problem is that if I run the above-mentioned command myself, it starts up just fine and works as expected. My question is: what is the difference between me running the program startup command and systemd starting up the same program? What can I do to find out what is actually wrong with it?

Edit 1

I enabled the debug output as suggested by Raymond in the answers, and the output doesn't teach us much more.

    May 08 10:31:29 server.corp systemd[1]: Starting System Logger Daemon...
    May 08 10:31:29 server.corp systemd[1]: About to execute: /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
    May 08 10:31:29 server.corp systemd[1]: Forked /usr/sbin/syslog-ng as 3121
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service changed dead -> start
    May 08 10:31:29 server.corp systemd[1]: Set up jobs progress timerfd.
    May 08 10:31:29 server.corp systemd[1]: Set up idle_pipe watch.
    May 08 10:31:29 server.corp systemd[3121]: Executing: /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
    May 08 10:31:29 server.corp systemd[1]: Got notification message for unit syslog-ng.service
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: Got message
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: got STATUS=Starting up... (Fri May  8 10:31:29 2015
    May 08 10:31:29 server.corp systemd[1]: Got notification message for unit syslog-ng.service
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: Got message
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: got STATUS=Starting up... (Fri May  8 10:31:29 2015
    May 08 10:31:29 server.corp systemd[1]: Received SIGCHLD from PID 3121 (syslog-ng).
    May 08 10:31:29 server.corp systemd[1]: Child 3121 (syslog-ng) died (code=exited, status=2/INVALIDARGUMENT)
    May 08 10:31:29 server.corp systemd[1]: Child 3121 belongs to syslog-ng.service
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENT
    May 08 10:31:29 server.corp systemd[1]: syslog-ng.service changed start -> failed
    May 08 10:31:29 server.corp systemd[1]: Job syslog-ng.service/start finished, result=failed
    May 08 10:31:29 server.corp systemd[1]: Failed to start System Logger Daemon.

There are a few warnings displayed at the start of the syslog-ng process (nothing that keeps it from starting properly), so I redirected all output to /dev/null, but the end result is the same. Also, as a side note, my entire system does not boot anymore if systemd is unable to syslog. This can be disabled with kernel options to log to kmesg.
We had the same problem on Debian 8.1, but fixed it by changing our syslog-ng local configuration to use unix-dgram instead of unix-stream. I was clued in by this comment at RedHat Bugzilla:

    Note about custom syslog-ng configuration files

    People with custom syslog-ng configurations will most likely face
    upgrade problems due to the unix socket type mismatch between systemd
    and syslog-ng old configuration file:

      - systemd creates /dev/log as unix-dgram
      - syslog-ng < 3.2.5 expected /dev/log to be unix-stream
        (configuration file)

    If you use 'unix-stream ("/dev/log")' in one of your log messages
    sources, you will need to manually change it to 'unix-dgram ("/dev/log")'.
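In syslog-ng.conf terms, that's a one-word change in the source definition (a sketch; the source name s_src is arbitrary):

    # before
    source s_src { unix-stream("/dev/log"); internal(); };
    # after
    source s_src { unix-dgram("/dev/log"); internal(); };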
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1115/" ] }
202,084
I want to test an upload of files of content type application/x-gzip of various sizes, ranging from 100 MB to 999 MB. How can I go about creating .gz files of these predetermined sizes? If I do

    dd if=/dev/zero of=somefile bs=1 seek=100

the resultant file after zipping is usually very small.
You can create a 10MB gzip file like this:

    head -c 10M /dev/urandom | gzip -1 >10m.gz

This uses urandom to get a high-entropy stream of bytes: since this is incompressible, the gzipped version will be about the same size as the input.

You can then catenate copies of your gzip file together:

    cat $(perl -e "print '10m.gz ' x 30") >300m.gz

Thirty copies of the source file will be about 300MB, and 100 copies will be about a gigabyte.
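This works because concatenated gzip streams are themselves a valid gzip file, which you can confirm before uploading (a quick check):

    gzip -t 300m.gz && du -h 300m.gz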
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15333/" ] }
202,113
Given the following code, why does cat only print the contents of the pipe after I've typed \n or CTRL+D? What are the conditions for cat to actually print what it read?

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>   /* for wait() */

    int main(void)
    {
        int pid, p[2];
        char ch;
        pipe(p);
        if ((pid = fork()) == -1) {
            fprintf(stderr, "Error:can't create a child!\n");
            exit(2);
        }
        if (pid) {
            close(p[0]);
            while ((ch = getchar()) != EOF) {
                write(p[1], &ch, 1);
            }
            close(p[1]);
            wait(0);
        } else {
            close(p[1]);
            close(0);
            dup(p[0]);
            close(p[0]);
            execlp("cat", "cat", NULL);
        }
        return 0;
    }
That has nothing to do with cat, pipes or buffering. Your "problem" is upstream, at the terminal device.

If every character you entered at the keyboard in the terminal were available for reading immediately by your application and cat, then if you entered a Backspace b Return, your application would read a then ^? then b then ^M, which is not what you want and is not what you get. What you want and get is for it to read a^J instead.

You get that because, in the default canonical mode, the terminal device driver (the tty line discipline) implements a line editor that lets you enter text and edit it (with limited capabilities), and the text you enter is only available for reading when you press Return.

To avoid that, you can tell the terminal device to leave that canonical mode, for instance by doing stty raw, and you'll see your application and cat get the characters immediately, though the behaviour is probably not what you expect.

You'll also notice that for other types of input, like a pipe, characters are output as soon as they arrive, as in:

    (printf x; sleep 1; echo y) | that-command
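You can try raw mode safely by saving and restoring the terminal settings around the experiment (a sketch; ./a.out stands in for the compiled program from the question):

    saved=$(stty -g)   # remember current settings
    stty raw -echo     # character-at-a-time, no local echo
    ./a.out            # the pipe-to-cat program now sees each keystroke
    stty "$saved"      # restore sanity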
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114574/" ] }
202,161
I am writing a rolling upgrade playbook and would like to print out the hostname of the current host being upgraded. I put inventory_hostname and ansible_hostname in task names, but that did not work:

    - name: upgrade software on {{inventory_hostname}}
    - name: current host is {{ansible_hostname}}

debug works fine:

    - name: Test a variable
      debug: var=inventory_hostname

    TASK: [Test a variable] *******************************************************
    ok: [SERV14] => {
        "var": {
            "inventory_hostname": "SERV14"
        }
    }

So what should I do to be able to use those variables in task name descriptions? Thanks
Starting from v2.0, Ansible supports variable substitution in task/handler names: https://github.com/ansible/ansible/issues/10347 , so these examples will work as expected:

    - name: upgrade software on {{inventory_hostname}}
    - name: current host is {{ansible_hostname}}
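For a complete task, that might look like this (a sketch; the yum module and package name are placeholders for whatever your rolling upgrade actually does):

    - name: upgrade software on {{ inventory_hostname }}
      yum:
        name: mypackage
        state: latest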
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/202161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39569/" ] }
202,198
This is how the file looks in the gedit editor, and in the vim editor (screenshots omitted).

I then try to grep it. It greps successfully if I put Log instead of Tog, but the output is corrupted:

    [xiaobai@xiaobai grep]$ grep Tog test
    [xiaobai@xiaobai grep]$ grep Log test
     Dtring.valueOf
    [xiaobai@xiaobai grep]$

And then I cat the file; it's also corrupted:

    [xiaobai@xiaobai grep]$ cat test
     Dtring.valueOf
    [xiaobai@xiaobai grep]$

So I use hexdump:

    [xiaobai@xiaobai grep]$ hexdump -C test
    00000000  4c 6f 67 2e 64 28 22 6d  75 73 69 63 22 2c 20 22  |Log.d("music", "|
    00000010  4e 41 56 49 47 41 54 4f  52 3a 20 22 20 2b 20 53  |NAVIGATOR: " + S|
    00000020  74 72 69 6e 67 2e 76 61  6c 75 65 4f 66 0d 20 20  |tring.valueOf.  |
    00000030  20 20 20 20 20 20 20 20  20 20 20 20 20 20 20 20  |                |
    00000040  20 20 20 20 20 20 20 20  20 20 20 20 20 44 0d 0a  |             D..|
    00000050

I'm narrowing it down:

    [xiaobai@xiaobai grep]$ cat test3
     D
    [xiaobai@xiaobai grep]$ hexdump -C test3
    00000000  61 0d 20 20 20 20 20 20  20 20 20 20 20 20 20 20  |a.              |
    00000010  20 20 20 20 20 20 20 20  20 20 20 20 20 20 20 20  |                |
    00000020  20 44 0d 0a                                       | D..|
    00000024
    [xiaobai@xiaobai grep]$ echo -e '\x61'
    a
    [xiaobai@xiaobai grep]$ echo -e '\x61\x0d'
    a
    [xiaobai@xiaobai grep]$ echo -e '\x61\x0d\x20'

    [xiaobai@xiaobai grep]$ echo -e '\x61\x0d\x20\x62'
     b

As you can see, the 'a' is erased after I append one \x20 byte. So my question is: why is that happening, and how can I get rid of this without prior knowledge that some files might contain \x0d\x20, e.g. with grep -r?
Characters of code 0 to 31 in ASCII are control characters. When sent to a terminal, they're used to do special things. For instance, \a (BEL, 0x7) rings the terminal's bell. \b (BS, 0x8) moves the cursor backward. \n (LF, 0xa) moves the cursor one row down, \t (TAB, 0x9) moves the cursor to the next tabulation... \r (CR, 0xd) moves the cursor to the first column.

When you run at a shell prompt in a terminal:

    printf 'foo\nbar\n'

printf writes foo\nbar\n to /dev/tty<something>, the tty line discipline of that device translates that to foo\r\nbar\r\n, which is why you see bar on the next line after foo.

    printf 'foo\rbar\n'

would have the terminal overwrite foo with bar.

If your file contains control characters, you could either remove them, or give them a textual representation (for instance ^M or \r for the CR 0xd character) if you want to check for their presence. You may not want to do that for the LF and TAB characters though. So:

    LC_ALL=C tr -d '\0-\10\13-\37\177' < file   # to remove them
    cat -v < file    # to display as ^M
    sed -n l < file  # to display as \r (also converts TAB to \t)
                     # and marks the end of lines with $

Note that those sed and cat ones would also transform non-ASCII characters. You could do instead:

    LC_ALL=C sed "$(printf 's/[^\t -\176\200-\377]/^&/g')" < file |
      LC_ALL=C tr '\0-\10\13-\37\177' '@-HK-_?'

to only convert the ASCII control characters (except TAB and LF) to their ^X visual form (note though that not all sed implementations support input files with NUL characters in them).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64403/" ] }
202,214
Like in the title. On Windows, I can do this:

    explorer /select,"C:\folder\file.txt"

which will result in opening explorer.exe, which will immediately open C:\folder and select file.txt. I believe ROX had this functionality too. Can I do the same with thunar?
With a little digging, I discovered this is possible using D-Bus:

    #!/usr/bin/env python
    import dbus
    import os
    import sys
    import urlparse
    import urllib

    bus = dbus.SessionBus()
    obj = bus.get_object('org.xfce.Thunar', '/org/xfce/FileManager')
    iface = dbus.Interface(obj, 'org.xfce.FileManager')

    _thunar_display_folder = iface.get_dbus_method('DisplayFolder')
    _thunar_display_folder_and_select = iface.get_dbus_method('DisplayFolderAndSelect')

    def display_folder(uri, display='', startup_id=''):
        _thunar_display_folder(uri, display, startup_id)

    def display_folder_and_select(uri, filename, display='', startup_id=''):
        _thunar_display_folder_and_select(uri, filename, display, startup_id)

    def path_to_url(path):
        return urlparse.urljoin('file:', urllib.pathname2url(path))

    def url_to_path(url):
        return urlparse.urlparse(url).path

    def main(args):
        path = args[1]  # May be a path (from cmdline) or a file:// URL (from OS)
        path = url_to_path(path)
        path = os.path.realpath(path)
        url = path_to_url(path)
        if os.path.isfile(path):
            dirname = os.path.dirname(url)
            filename = os.path.basename(url)
            display_folder_and_select(dirname, filename)
        else:
            display_folder(url)

    if __name__ == '__main__':
        main(sys.argv)

Execute with:

    $ ./thunar-open-file.py /home/user/myfile.txt

And it will still open a folder, if you pass that:

    $ ./thunar-open-file.py /home/user/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41991/" ] }
202,219
I am on Windows, but I guess my question is still rightly placed here.

    C:\Users\User>grep --version
    GNU grep 2.6.3

    C:\Users\User>sed --version
    GNU sed version 4.2.1

I noticed that the following works (outputting here):

    echo here | grep -E "\w+"
    echo here | grep -E "[her]+"

But, this does not work (outputting nothing):

    echo here | grep -E "[\w]+"

This again does (outputting here):

    echo here | grep -P "[\w]+"

So [\w] is something specific to Perl regular expressions, I assume. Is that correct?

So, let's talk sed. This works (outputting gone):

    echo here | sed -r "s/\w+/gone/"
    echo here | sed -r "s/[her]+/gone/"

And again, this does not (outputting here):

    echo here | sed -r "s/[\w]+/gone/"

Now, how can I activate Perl regular expressions for sed -- is there any way?
Different tools and versions thereof support different variants of regular expressions. The documentation of each will tell you what they support.

Standards exist so that one can rely on a minimum set of features that are available across all conforming applications. For instance, all modern implementations of sed and grep implement basic regular expressions as specified by POSIX (at least one version or the other of the standard, but that standard has not evolved a lot in that regard in the last few decades).

In POSIX BRE and ERE, you have the [:alnum:] character class. That matches letters and digits in your locale (note that this often includes a lot more than a-zA-Z0-9, unless the locale is C). So:

    grep -x '[[:alnum:]_]\{1,\}'

matches one or more alnums or _.

[\w] is required by POSIX to match either backslash or w. So you won't find a grep or sed implementation where that's available (unless via non-standard options). The behaviour of \w alone is not specified by POSIX, so implementations are allowed to do what they want. GNU grep added that a long time ago. GNU grep used to have its own regexp engine; however, it now uses the GNU libc's one (though it does embed its own copy). It's meant to match alnums and underscore in your locale. However, it currently has a bug in that it only matches single-byte characters (for instance, not é in a UTF-8 locale, even though that's clearly a letter, and even though it does match é in all the locales where é is a single character).

There is also a \w regexp operator in perl regexps and in PCRE. PCRE/perl are not POSIX regular expressions; they're just another thing altogether. Now, with the way GNU grep -P uses PCRE, it's got the same issue as without -P. It can be worked around there, though, by using (*UCP) (though that also has side-effects in non-UTF8 locales).

GNU sed also uses the GNU libc's regexps for its own regexps. It uses it in such a way, though, that it doesn't have the same bug as GNU grep. GNU sed doesn't support PCREs. There's some evidence in the code that it has been attempted before, but it doesn't seem to be on the agenda anymore. If you want Perl's regular expressions, just use perl.

Otherwise, I'd say that rather than trying to rely on a bogus non-standard feature of your particular implementation of sed/grep, it would be better to stick with the standard and use [_[:alnum:]].
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114660/" ] }
202,238
For changing file permissions, I know I can use chmod. For changing the group owner, I can use chgrp. However, if I want to change both the permissions and the owner at the same time, is there any command I can use on Linux?

For example, there is a file with these permissions and owner:

    -rw-r--r--+ 1 raymondtau staff 0 May 8 16:38 WantToChangeThisFile

And now I want to change it to:

    ---x-w--wx+ 1 raymondtau admin 0 May 8 16:38 WantToChangeThisFile

I know I could use this command:

    chmod 123 WantToChangeThisFile && chgrp admin WantToChangeThisFile

but I want to know if there is any neat way to do that.
There is a concept known as the "UNIX way". Each tool should perform one simple function. If one needs a more complex function, one can combine smaller tools. The opposite is the monolithic design, where all functionality is aggregated within one huge tool. If you want to do something complex - just write a script invoking simple tools.
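In that spirit, the two-step command from the question can simply be wrapped once and reused (a minimal sketch; the function name is made up):

    # chmodgrp MODE GROUP FILE...
    chmodgrp() {
        mode=$1 group=$2
        shift 2
        chmod "$mode" "$@" && chgrp "$group" "$@"
    }

    chmodgrp 123 admin WantToChangeThisFile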
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33709/" ] }
202,243
I have a requirement: if I execute a script ./123 with an argument that is an empty directory, say /usr/share/linux-headers-3.16.0-34-generic/.tmp_versions (this directory is empty), it should display "directory is empty".

My code is:

    #!/bin/bash
    dir="$1"
    if [ $# -ne 1 ]
    then
        echo "please pass arguments"
        exit
    fi
    if [ -e $dir ]
    then
        printf "minimum file size: %s\n\t%s\n" \
            $(du $dir -hab | sort -n -r | tail -1)
        printf "maximum file size: %s\n\t%s\n" \
            $(du $dir -ab | sort -n | tail -1)
        printf "average file size: %s"
        du $dir -sk | awk '{s+=$1}END{print s/NR}'
    else
        echo " directory doesn't exists"
    fi
    if [ -d "ls -A $dir" ]
    then
        echo " directory is empty"
    fi

If I execute the script as ./123 /usr/src/linux-headers-3.16.0-34-generic/.tmp_versions (this directory is empty), I get:

    minimum file size: 4096
        /usr/src/linux-headers-3.16.0-34-generic/.tmp_versions
    maximum file size: 4096
        /usr/src/linux-headers-3.16.0-34-generic/.tmp_versions
    average file size: 4

Instead of showing only "directory is empty", it shows the above output. The below output should be displayed if I execute the script with correct arguments (I mean with a correct, non-empty directory path), say ./123 /usr/share:

    minimum file size: 196
        /usr/share
    maximum file size: 14096
        /usr/share
    average file size: 4000

My expected output is:

    ./123 /usr/src/linux-headers-3.16.0-34-generic/.tmp_versions
    directory is empty.
    if ls -A1q ./somedir/ | grep -q .
    then ! echo somedir is not empty
    else   echo somedir is empty
    fi

The above is a POSIX-compatible test - and should be very fast. ls will list all files/dirs in a directory excepting . and .. (from -A), one per line (from -1), and will -q uote all non-printable characters (including \n ewlines) in the output with a ? question mark. In this way, if grep receives even a single character of input it will return true - else false.

To do it in a POSIX shell alone:

    cd ./somedir/ || exit
    set ./* ./.[!.]* ./..?*
    if   [ -n "$4" ] ||
         for e do
             [ -L "$e" ] || [ -e "$e" ] && break
         done
    then ! echo somedir is not empty
    else   echo somedir is empty
    fi
    cd "$OLDPWD"

A POSIX shell (which has not earlier disabled -f ilename generation) will set the "$@" positional-parameter array to either the literal strings following the set command above, or else to the fields generated by the glob operators at the end of each. Whether it does so is dependent upon whether the globs actually match anything. In some shells you can instruct a non-resolving glob to expand to null - or nothing at all. This can sometimes be beneficial, but it is not portable and often comes with additional problems - such as having to set special shell options and afterwards unset them. The only portable means of handling null-valued arguments involve either empty or unset variables or ~ tilde-expansions. And the latter, by the way, is far safer than the former.

Above, the shell only tests any of the files for -e xistence if none of the three globs specified resolves to more than a single match. So the for loop is only ever run at all for three or fewer iterations, and only in the case of an empty directory, or in the case that one or more of the patterns resolves only to a single file. The for also break s if any of the globs represents an actual file - and as I have arranged the globs in the order of most likely to least likely, it should pretty much quit on the first iteration every time.

Either way you do it should involve only a single system stat() call - the shell and ls should both only need to stat() the directory queried and list out the files its dentries report that it contains. This is contrasted with the behavior of find, which would instead stat() every file you might list with it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113004/" ] }
202,255
I usually find the answers to all my Unix related problems already posted as questions and answers. However, this particular issue has had me stumped for the past hour, so I thought I'd ask my first question on this site.

Problem

I have a development / staging server running CentOS 5.11. Running locate as a regular user results in no output (not even an error message):

    locate readdir

However, running the command as the superuser prints a list of valid results:

    $ sudo locate readdir
    /home/anthony/repos/php-src/TSRM/readdir.h
    /home/anthony/repos/php-src/ext/standard/tests/dir/readdir_basic.phpt
    ... etc.

strace usually helps me debug any such issues, and running strace locate readdir shows:

    stat64("/var/lib/mlocate/mlocate.db", 0xbff65398) = -1 EACCES (Permission denied)
    access("/", R_OK|X_OK)                  = -1 EACCES (Permission denied)
    exit_group(1)                           = ?

Check permissions

I checked the ownership and permissions of the locate binary and its default database. As expected, the command is setgid with slocate as the group owner, while the database has the appropriate ownership and permissions.

    $ ls -l /usr/bin/locate
    -rwx--s--x 1 root slocate 22280 Sep  3  2009 /usr/bin/locate
    $ sudo ls -l /var/lib/mlocate/mlocate.db
    -rw-r----- 1 root slocate 78395703 May  8 04:02 /var/lib/mlocate/mlocate.db
    $ sudo ls -ld /var/lib/mlocate/
    drwxr-x--- 2 root slocate 4096 Sep  3  2009 /var/lib/mlocate/

There are also no unusual file attributes:

    $ sudo lsattr /usr/bin/locate /var/lib/mlocate/mlocate.db
    ------------- /usr/bin/locate
    ------------- /var/lib/mlocate/mlocate.db

Compare with working system

Meanwhile, everything works as expected on the Production server. Running locate readdir as a regular (non-root) user returns a list of results as it should:

    $ locate readdir
    /usr/include/php/TSRM/readdir.h
    /usr/lib/perl5/5.8.8/i386-linux-thread-multi/auto/POSIX/readdir.al
    /usr/share/man/man2/readdir.2.gz

For comparison, I also ran this command through strace, but I then got the same permission denied error as on the staging server. I was wondering how this could be, until I read the manual page for sudo. Listed in the Bugs section:

    Programs that use the setuid bit do not have effective user ID
    privileges while being traced.

So, unfortunately, I can't use strace for debugging.

I compared the results of all the above commands between the Staging and Production servers, and there's no difference between them. Both systems have the mlocate-0.15-1.el5.2 RPM with no modifications to their files, as shown by rpm -V mlocate.

Other considerations

I thought it might be related to the fact that on the problematic staging server my login is authenticated using Winbind, but I created a regular local user on the same box and I still have the same issue.

There's obviously something else that I'm missing, but I simply don't know what it is. I suspect it is related to the setgid file permission, maybe PAM, or possibly SELinux. I don't know much about either PAM or SELinux: I've only ever looked at PAM when configuring Winbind authentication, while SELinux was installed with the OS but I've never used it.

Note: the production server has been subject to far fewer modifications than the development server, which has had some experimentation.
The problem was the permissions on / (the root directory), and the clue for finding that was this line from your strace output:

    access("/", R_OK|X_OK)                  = -1 EACCES (Permission denied)

You were missing the group read permission on /. But because you still had x (execute) permission, which allows you to traverse a directory, you could still access all of the files on the filesystem, which is why almost everything continued working while those permissions were in effect. The only thing you were not allowed to do is list the contents of /. Most commands don't need to list /; they either use pathnames relative to the current directory or absolute pathnames that access specific well-known directories off the root (like /etc and /var).

For security reasons, locate, even though it has access to a complete inventory of filenames generated by a privileged user, insists on reporting only results that the calling user would be able to find by scanning the whole filesystem from the root. Since you couldn't list /, which makes scanning anything straight from the root a non-starter, locate would report nothing at all.
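Assuming the usual defaults for the root directory, the fix is restoring world read/execute (a sketch; compare ls -ld / against a healthy box first):

    sudo chmod 755 /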
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22812/" ] }
202,263
I just started learning how Everything Is A File™ on Linux, which made me wonder what would happen if I literally read from /dev/stdout:

    $ cat /dev/stdout
    ^C
    $ tail /dev/stdout
    ^C

(The ^C is me killing the program after it hangs.)

When I try with vim, I get the unthinkable message: "/dev/stdout" is not a file. Gasp!

So what gives? Why am I getting hangups or error messages when I try to read these "files"?
why am I getting hangups

You aren't getting "hangups" from cat(1) and tail(1); they're just blocking on read.

cat(1) waits for input, and prints it as soon as it sees a complete line:

    $ cat /dev/stdout
    foo
    foo
    bar
    bar

Here I typed foo Enter bar Enter CTRL-D.

tail(1) waits for input, and prints it only when it can detect EOF:

    $ tail /dev/stdout
    foo
    bar
    foo
    bar

Here I typed again foo Enter bar Enter CTRL-D.

or error messages

Vim is the only one that gives you an error. It does that because it runs stat(2) against /dev/stdout, and it finds it doesn't have the S_IFREG bit set.

/dev/stdout is a file, but not a regular file. In fact, there's some dance in the kernel to give it an entry in the filesystem. On Linux:

    $ ls -l /dev/stdout
    lrwxrwxrwx 1 root root 15 May  8 19:42 /dev/stdout -> /proc/self/fd/1

On OpenBSD:

    $ ls -l /dev/stdout
    crw-rw-rw- 1 root wheel 22, 1 May  7 09:05:03 2015 /dev/stdout

On FreeBSD:

    $ ls -l /dev/stdout
    lrwxr-xr-x 1 root wheel 4 May  8 21:35 /dev/stdout -> fd/1
    $ ls -l /dev/fd/1
    crw-rw-rw- 1 root wheel 0x18 May  8 21:35 /dev/fd/1
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99989/" ] }
202,286
I am currently running Fedora 16 on my laptop, which is like 3 years past EOL. The reason that's the case is that it is not easy to upgrade to the current version in Fedora (at least from v16), and it requires a complete reinstall. So everything on it is old and doesn't work too well, and I am just too lazy to move my data to an external medium and rebuild the OS. Ideally, I would like to upgrade by running a simple yum/apt-get command.

So my lesson learned from this experience is that I would like my next Linux distro to be idiotically easy to bring up-to-date, not because I am incompetent to deal with Linux, but simply because I am lazy and want to remain so. I also believe that, in this day and age, OSs should be very easily upgraded.

What Linux distros make upgrades easy enough to conform to this agenda? I specifically do not want to have to move the data or the file system to a separate medium before performing the upgrade. I also would like the distro to support KDE, which is my preferred interface.
Debian is probably one of the easiest to upgrade - even across major releases. From the Debian FAQ, Chapter 9, Keeping your Debian system up-to-date, there is this statement:

    A Debian goal is to provide a consistent upgrade path and a secure
    upgrade process. We always do our best to make upgrading to new
    releases a smooth procedure.

Opinion: I have just upgraded a number of systems from release 7 ("wheezy") to release 8 ("jessie"). For the most part it just worked. One had previously been upgraded from release 6 ("squeeze"). This is one of the major reasons I prefer to use Debian.

More information in answers to the question Will Debian Wheezy (stable) automatically upgrade to Jessie once Jessie becomes the stable release.

Update: since you have amended your Question to indicate a preference for KDE, you might like to review KDE's software in Debian and these Live install images.
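The major-release upgrade itself is a short, well-trodden sequence (a sketch of the wheezy-to-jessie case; always read the release notes first):

    sudo sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get dist-upgrade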
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23944/" ] }
202,292
When I run the perl script pdfannotextractor.pl in bash, I want to set the value of a variable TEXMFVAR needed by the script, and pass it to the script. I googled around and thought I'd found a solution:

    $ TEXMFVAR=/usr/local/texlive/2014/texmf-var sudo /usr/local/texlive/2014/texmf-dist/scripts/pax/pdfannotextractor.pl --install

But it seems TEXMFVAR is still empty in the script, because it creates a directory $TEXMFVAR under my current dir. I am puzzled.

    $ ls \$TEXMFVAR/*
    $TEXMFVAR/ls-R

    $TEXMFVAR/scripts:
    pax

Do I need to export the variable? Is this a problem with the usage of environment variables?

Note: My original problem is about texlive and is here: https://tex.stackexchange.com/questions/243889/error-installing-pdfbox-library-for-pax-package
sudo sanitizes the environment, so potentially harmful variables are not passed to the process running as the superuser. You can change this behavior with the -E or --preserve-env flag to sudo.
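Applied to the command from the question, that would look like this (a sketch; note that the sudoers security policy may still refuse to preserve the environment or strip specific variables):

    $ TEXMFVAR=/usr/local/texlive/2014/texmf-var sudo -E /usr/local/texlive/2014/texmf-dist/scripts/pax/pdfannotextractor.pl --install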
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
202,302
I'm learning bash scripting and found this in my /usr/share/bash-completion, line 305:

    local cword words=()

What does it do? All the tutorials online are just in the format local var=value.
Although I like the answer given by jordanm, I think it's equally important to show less experienced Linux users how to cope with such questions by themselves. The suggested way is faster and more versatile than looking for answers on random pages showing up in Google search results.

First, all commands that can be run in Bash without typing an explicit path such as ./command can be divided into two categories: Bash shell builtins and external commands. Bash shell builtins come installed with Bash and are part of it, while external commands are not part of Bash. This is important because Bash shell builtins are documented inside man bash and their documentation can also be invoked with the help command, while external commands are usually documented in their own man pages or take some kind of flag like -h, --help.

To check whether a command is a Bash shell builtin or an external command:

    $ type local
    local is a shell builtin

It will display how the command would be interpreted if used as a command name (from help type). Here we can see that local is a shell builtin. Let's see another example:

    $ type vim
    vim is /usr/bin/vim

Here we can see that vim is not a shell builtin, but an external command located in /usr/bin/vim. However, sometimes the same command could be installed both as an external command and as a shell builtin at the same time. Add -a to type to list all possibilities, for example:

    $ type -a echo
    echo is a shell builtin
    echo is /usr/bin/echo
    echo is /bin/echo

Here we can see that echo is both a shell builtin and an external command. However, if you just typed echo and pressed Return, the shell builtin would be called, because it appears first on this list. Note that all these versions of echo do not need to be the same. For example, on my system /usr/bin/echo takes the --help flag while the Bash builtin one doesn't.

Ok, now that we know that local is a shell builtin, let's find out how it works:

    $ help local
    local: local [option] name[=value] ...
        Define local variables.

        Create a local variable called NAME, and give it VALUE. OPTION can
        be any option accepted by `declare'.

        Local variables can only be used within a function; they are visible
        only to the function where they are defined and its children.

        Exit Status:
        Returns success unless an invalid option is supplied, an error occurs,
        or the shell is not executing a function.

Note the first line: name[=value]. Everything between [ and ] is optional. It's a common convention used in many man pages and other forms of documentation in the *nix world. That being said, the command you asked about in your question is perfectly legal. In turn, the ... character means that the previous argument can be repeated. You can also read about this convention in some versions of man man:

    The following conventions apply to the SYNOPSIS section and can be used
    as a guide in other sections.

    bold text          type exactly as shown.
    italic text        replace with appropriate argument.
    [-abc]             any or all arguments within [ ] are optional.
    -a|-b              options delimited by | cannot be used together.
    argument ...       argument is repeatable.
    [expression] ...   entire expression within [ ] is repeatable.

So, at the end of the day, I hope that now you'll have an easier time understanding how different commands in Linux work.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/202302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67546/" ] }
202,308
Background

It seems I only know enough Linux to get myself in trouble. I'm working on a couple of embedded systems (two different models), both running Linux. I've been troubleshooting modem support code that I had working on the newer one of the two. I stopped the code and tried to manually load the usbserial driver. On the newer device, when I load the usbserial driver, four devices appear as /dev/ttyUSB#.

What I did

I noted that there were actually 16 ttyUSB## devices listed all the time on the older device. They never disappear. I'm guessing [now] it is because the older kernel works differently or something. Unfortunately, I went ahead and deleted all 16 ttyUSB## devices. Now they are gone and won't come back. I don't know how to create character devices.

What can I do to get those devices back?

Kernel Version: uname -r returns 2.6.17.9-ep93xx-pxa-ads5

Additional Information

If there is some important piece of information I've left out, comment and I'll add it. Thanks in advance!
You can manually create the /dev entry using

    mknod /dev/ttyUSBn c 188 n

Parameters:

    mknod        : the widely known tool to create /dev entries
    /dev/ttyUSBn : the device name
    c            : char device
    188          : the major device number
    n            : the minor device number (0 for ttyUSB0, 1 for ttyUSB1, etc.)

But the device should be created automatically according to the udev rules.
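To recreate all 16 nodes in one go (a sketch, run as root; it assumes major number 188 as above, and spells out the list since embedded systems often lack seq):

    for n in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15; do
        mknod /dev/ttyUSB$n c 188 $n
    done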
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96733/" ] }
202,322
Inspired by one of the previously answered questions, I tried executing a program by a user who was not its owner -- and the process's RUID and EUID remained the same. (Unless I read the answer wrong and that's not how you can achieve the difference.) Then I tried opening a program as another user via sudo -- and still nothing. I've scanned through all already-existing processes (I think) via ps axo euid,ruid,comm -e g, and none of them had different RUIDs and EUIDs.

How can I achieve (or find the processes with) the difference? Some specific commands would help, because I could have possibly made some mistakes in some steps.
Invoking an executable that you don't own is nothing remarkable. Most executables on the system belong to root, and running them does not give the user any extra privileges. It's only setuid executables that start with the effective UID set to the owner of the executable, while the real UID remains the real UID of the invoking process.

sudo is setuid root, so it runs with the effective UID 0 and your real UID. But when it invokes another command, it sets both the effective UID and the real UID to the target user. You'd have to catch sudo itself in order to observe an EUID that differs from the RUID. This will be too quick to see unless sudo prompts you for a password.

You can easily observe the differing UIDs by running the passwd command as a non-root user. While the prompt is being displayed, run ps in another terminal:

    ps -o user,ruser -C passwd

To find all running processes with differing EUID and RUID, you can use

    ps -e -o user= -o ruser= | awk '$1 != $2'

It's normal not to find any; most setuid processes are short-lived.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87511/" ] }
202,332
$ source /etc/environment
$ sudo source /etc/environment
[sudo] password for t:
sudo: source: command not found

It seems that a different shell than bash is run to execute source /etc/environment, and that shell doesn't have source as a builtin. But my default shell and root's are both bash:

$ echo $SHELL
/bin/bash

If sudo indeed uses a different shell, why is that? I saw slm's reply, but don't understand it in my case.
source is a shell builtin, so it cannot be executed without the shell. However, by default, sudo does not run a shell. From the sudo manual's "Process model" section:

When sudo runs a command, it calls fork(2), sets up the execution environment as described above, and calls the execve system call in the child process.

If you want to explicitly execute a shell, use the -s option:

# sudo -s source /etc/environment

This is still of limited use, because once that shell exits, the environment changes are lost.
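For illustration, a sketch of one way to actually consume a variable from /etc/environment under sudo: source the file and use it within the same shell invocation (PATH here is just an example variable):

sudo sh -c '. /etc/environment; printenv PATH'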
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/202332", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
202,383
I basically need to do this:

DUMMY=dummy
sudo su - ec2-user -c 'echo $DUMMY'

This doesn't work. How can I pass the env variable $DUMMY to su? -p doesn't work with -l.
You can do it without calling a login shell:

sudo DUMMY=dummy su ec2-user -c 'echo "$DUMMY"'

or:

sudo DUMMY=dummy su -p - ec2-user -c 'echo "$DUMMY"'

The -p option of su preserves environment variables.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/202383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95383/" ] }
202,391
Using the following command, could someone please explain what exactly is the purpose for the ending curly braces ({}) and plus sign (+)? And how would the command operate differently if they were excluded from the command? find . -type d -exec chmod 775 {} +
The curly braces will be replaced by the results of the find command, and the chmod will be run on each of them. The + makes find attempt to run as few commands as possible (so, chmod 775 file1 file2 file3 as opposed to chmod 775 file1, chmod 775 file2, chmod 775 file3). Without them the command just gives an error. This is all explained in man find:

-exec command ;
       Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ';' is encountered. The string '{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.
…
-exec command {} +
       This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files.
…
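To make the difference concrete, here is a sketch contrasting the two terminators on the command from the question:

find . -type d -exec chmod 775 {} \;   # one chmod process per directory found
find . -type d -exec chmod 775 {} +    # directories batched into as few chmod calls as possible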
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/202391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64938/" ] }
202,396
I upgraded the system and did

sudo pacman -S virtualbox virtualbox-host-modules virtualbox-guest-iso virtualbox-host-dkms
yaourt virtualbox-ext-oracle
sudo depmod -a
sudo modprobe vboxdrv
modprobe: FATAL: Module vboxdrv not found.
The problem was that I followed tutorials on the web and YouTube videos instead of reading the Manjaro wiki. The correct way is not to install virtualbox virtualbox-host-modules; instead, first check the kernel version:

uname -r

In my case I'm using 3.16.7.10-1-MANJARO, so I have to do

sudo pacman -S linux316-virtualbox-host-modules

As time goes on, blogs get more and more popular and better ranked, leaving the official documentation way behind, so users like me get false information. Anyway, hopefully my answer helps future users.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202396", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15433/" ] }
202,400
I am testing a hard disk with SmartMonTools. Hard disk status prior to the testing (only one short test performed days ago):

$ sudo smartctl -l selftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%             5167  -

So I start the long test:

$ sudo smartctl -t long /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 130 minutes for test to complete.
Test will complete after Sat May  9 16:05:27 2015

Use smartctl -X to abort test.

The test is supposed to be running, then, but if I try to see its progress:

$ sudo smartctl -l selftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%             5167  -

... all I get is the same results, as if there were no test running right now. The '-H' parameter gives no more info:

$ sudo smartctl -H /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

And, since there is no separate process running (this test is performed by the hard disk controller alone), a ps -e style search won't help either. How can I know if there is some SMART self test running right now?
In smartctl -a <device> look for Self-test execution status. Example when no test is running:

Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.

Example while a test is running:

Self-test execution status:      ( 249) Self-test routine in progress...
                                        90% of test remaining.

When running a selective self-test (-t select) there will also be a progress shown here:

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA    MAX_LBA    CURRENT_TEST_STATUS
    1        0  125045423    Self_test_in_progress [90% left] (2881512-2947047)
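If you want to poll the progress while the long test runs, one possibility (a sketch, assuming /dev/sda as in the question) is:

sudo watch -n 60 'smartctl -a /dev/sda | grep -A 1 "Self-test execution status"'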
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/202400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
202,404
I have two folders on the same partition (EXT2). If I mv folder1/file folder2 and some interruption occurs (e.g. a power failure), could the file system ever end up being inconsistent? Isn't the mv operation atomic? Update: So far on IRC I got the following perspectives:

- it is atomic, so inconsistencies cannot happen
- first you copy the dir entry in the new dir and then erase the entry in the previous dir, so you may have the inconsistency of a file being referenced twice, but its ref count is 1
- it first erases the pointer and then copies the pointer, so the inconsistency is that the file has reference count 0

Can someone clarify?
First, let's dispel some myths.

"it is atomic so inconsistencies cannot happen"

Moving a file inside the same filesystem (i.e. the rename system call) is atomic with respect to the software environment. Atomicity means that any process that looks for the file will either see it at its old location or at its new location; no process will be able to observe that the file has a different link count, or that the file is present in the source directory after being present in the destination directory, or that the file is absent from the target directory after being absent in the source directory. However, if the system crashes due to a bug, a disk error or a power loss, there is no guarantee that the filesystem is left in a consistent state, let alone that the move isn't left half-done. Linux does not in general offer a guarantee of atomicity with respect to hardware events.

"first you copy the dir entry in the new dir and then erase entry on previous dir, so you may have the inconsistency of having a file referenced twice, but the ref count is 1"

This refers to a specific implementation technique. There are others. It so happens that ext2 on Linux (as of kernel 3.16) uses this particular technique. However, this does not imply that the disk content goes through the sequence [old location] → [both locations] → [new location], because the two operations (add new entry, remove old entry) are not atomic at the hardware level either: it is possible for one of them to be interrupted, leaving the filesystem in an inconsistent state. (Hopefully fsck will repair it.) Furthermore, the block layer can reorder writes, so the first half could be committed to disk just before the crash and the second half would then not have been performed. The reference count will never be observed to be different from 1 as long as the system doesn't crash (see above), but that guarantee does not extend to a system crash.

"it first erases the pointer and then copy the pointer so the inconsistency is that the file has reference 0"

Once again, this refers to a particular implementation technique. A dangling file cannot be observed if the system doesn't crash, but it is a possible consequence of a system crash, at least in some configurations. According to a blog post by Alexander Larsson, ext2 gives no guarantee of consistency on a system crash, but ext3 does in the data=ordered mode. (Note that this blog post is not about rename itself, but about the combination of writing to a file and calling rename on that file.) Theodore Ts'o, the principal author of the ext2, ext3 and ext4 filesystems, wrote a blog post on the same issue. This blog post discusses atomicity (with respect to the software environment only) and durability (which is atomicity with respect to crashes plus a guarantee of commitment, i.e. knowing that the operation has been performed). Unfortunately I can't find information about atomicity with respect to crashes alone. However, the durability guarantees given for ext4 require that rename is atomic. The kernel documentation for ext4 states that ext4 with the auto_da_alloc option (which is the default in modern kernels) provides a durability guarantee for a write followed by a rename, which implies that rename is atomic with respect to hardware crashes. For Btrfs, a rename that overwrites an existing file is guaranteed to be atomic with respect to crashes, but a rename that does not overwrite a file can result in neither or both of the files existing.
In summary, the answer to your question is that not only is moving a file not atomic with respect to crashes on ext2, but it isn't even guaranteed to leave the file in a consistent state (though failures that fsck cannot repair are rare) — pretty much nothing is, which is why better filesystems have been invented. Ext3, ext4 and btrfs do provide limited guarantees.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114752/" ] }
202,430
I want to create a "copy" of a directory tree where each file is a hardlink to the original file. Example: I have a directory structure:

dirA/
dirA/file1
dirA/x/
dirA/x/file2
dirA/y/
dirA/y/file3

Here is the expected result, a "copy" of the directory tree where each file is a hardlink to the original file:

dirB/          # normal directory
dirB/file1     # hardlink to dirA/file1
dirB/x/        # normal directory
dirB/x/file2   # hardlink to dirA/x/file2
dirB/y/        # normal directory
dirB/y/file3   # hardlink to dirA/y/file3
On Linux (more precisely, with the GNU and busybox implementations of cp as typically found on systems that have Linux as a kernel) and recent FreeBSD, this is how:

cp -al dirA dirB

For a more portable solution, see the answer using pax and cpio by Stéphane Chazelas.
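For reference, a sketch of the portable pax/cpio approach alluded to above (assuming the dirA/dirB layout from the question; the two commands are alternatives):

mkdir dirB
(cd dirA && pax -rwl . ../dirB)                   # POSIX pax: -l makes hard links instead of copies
(cd dirA && find . -print | cpio -pdl ../dirB)    # cpio pass-through: -l links, -d creates directories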
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/202430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114765/" ] }
202,514
How to split a large file into two parts, at a pattern? Given an example file.txt:

ABC
EFG
XYZ
HIJ
KNL

I want to split this file at XYZ such that file1 contains the lines up to XYZ and the rest of the lines go in file2.
With awk you can do:

awk '{print >out}; /XYZ/{out="file2"}' out=file1 largefile

Explanation: The first awk argument (out=file1) defines a variable with the filename that will be used for output while the subsequent argument (largefile) is processed. The awk program prints all lines to the file specified by the variable out ({print >out}). If the pattern XYZ is found, the output variable is redefined to point to the new file ({out="file2"}), which will be used as the target for printing the subsequent data lines. References: gawk manual: Redirection http://www.gnu.org/software/gawk/manual/html_node/Redirection.html#Redirection
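As an aside (not part of the answer above), POSIX csplit can produce the same two pieces, written as xx00 and xx01:

csplit largefile '/XYZ/+1'   # xx00 holds everything up to and including the XYZ line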
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114678/" ] }
202,566
I wrote the following .desktop file for my application named Qtag:

[Desktop entry]
Name=Qtag
Comment=Audio tag editor
Exec=qtag
Icon=/usr/share/pixmaps/Qtag.png
Terminal=false
Categories=Multimedia;
Version=1.0
Type=Application

I copied it to /usr/share/applications, but I still cannot find my app in the menu (I use the KDE Plasma 5 application launcher). When I try to open the file in Dolphin (the KDE file manager), it says that there is no Type=... entry in the file. I use KDE Plasma 5. The executable and the icon are in the right places (qtag is in /usr/local/bin).
The first line needs to be [Desktop Entry], with a capital E. Otherwise the file isn't recognized as a desktop entry. Dolphin is looking for the Type= line in the [Desktop Entry] section — this could use a more explicit error message! You shouldn't put files under /usr (except under /usr/local); that's for your distribution. For your own desktop entry files, use ~/.local/share/applications. If you put .desktop files in random places, they need to be executable — that's a security measure, to avoid accidentally running arbitrary code from files downloaded from the Internet. That doesn't apply if you put the file in a directory that's dedicated to desktop entry files, such as /usr/share/applications or ~/.local/share/applications. You can add #!/usr/bin/xdg-open at the beginning to make the file a valid, executable script which will launch the application when executed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114853/" ] }
202,570
I've got two issues with my script that copies files and adds a timestamp to the name. cp -ra /home/bpacheco/Test1 /home/bpacheco/Test2-$(date +"%m-%d-%y-%T") The above adds Test2 as the filename, but I want it to keep the original source file's file name which in this example is named Test . cp -ra /home/bpacheco/Test1 /home/bpacheco/Test2-$(date +"%m-%d-%y-%r") The other issue is when I add the %r as the timestamp code I get the error stating that target "PM" is not a directory. I'm trying to get the timestamp as 12-hour clock time.
One of your problems is that you left out the double quotes around the command substitution, so the output from the date command was split at spaces. See Why does my shell script choke on whitespace or other special characters? This is a valid command:

cp -a /home/bpacheco/Test1 "/home/bpacheco/Test2-$(date +"%m-%d-%y-%r")"

If you want to append to the original file name, you need to have that in a variable.

source=/home/bpacheco/Test1
cp -a -- "$source" "$source-$(date +"%m-%d-%y-%r")"

If you're using bash, you can use brace expansion instead.

cp -a /home/bpacheco/Test1{,"-$(date +"%m-%d-%y-%r")"}

If you want to copy the file to a different directory, and append the timestamp to the original file name, you can do it this way — ${source##*/} expands to the value of source without the part up to the last / (it removes the longest prefix matching the pattern */):

source=/home/bpacheco/Test1
cp -a -- "$source" "/destination/directory/${source##*/}-$(date +"%m-%d-%y-%r")"

If Test1 is a directory, it's copied recursively, and the files inside the directory keep their name: only the toplevel directory gets a timestamp appended (e.g. Test1/foo is copied to Test1-05-10-15-07:19:42 PM). If you want to append a timestamp to all the file names, that's a different problem. Your choice of timestamp format is a bad idea: it's hard for humans to read and hard to sort. You should use a format that's easier to read and that can be sorted easily, i.e. with the parts in decreasing order of importance: year, month, day, hour, minute, second, and with a separation between the date part and the time part.

cp -a /home/bpacheco/Test1 "/home/bpacheco/Test2-$(date +"%Y%m%d-%H%M%S")"
cp -a /home/bpacheco/Test1 "/home/bpacheco/Test2-$(date +"%Y-%m-%dT%H%M%S%:z")"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114685/" ] }
202,578
Can anybody explain me how UUID (Universally unique identifier) of a partition is generated in Linux based distributions ?
Both the UUIDs of GPT partitions and the UUIDs of filesystems are generated randomly when the partition or filesystem is created. You can check that they're version 4 UUIDs.
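To see this for yourself (a sketch; substitute your own device), print a filesystem UUID and check the version nibble: the first hex digit of the third group is 4 for a randomly generated (version 4) UUID:

blkid -s UUID -o value /dev/sda1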
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
202,586
I want to run a task with limits on the kernel objects that they will indirectly trigger. Note that this is not about the memory, threads, etc. used by the application, but about memory used by the kernel. Specifically, I want to limit the amount of inode cache that the task can use. My motivating example is updatedb . It can use a considerable amount of inode cache, for things that mostly won't be needed afterwards. Specifically, I want to limit the value that is indicated by the ext4_inode_cache line in /proc/slabinfo . (Note that this is not included in the “buffers” or “cache” lines shown by free : that's only file content cache, the slab content is kernel memory and recorded in the “used” column.) echo 2 >/proc/sys/vm/drop_caches afterwards frees the cache, but that doesn't do me any good: the useless stuff has displaced things that I wanted to keep in memory, such as running applications and their frequently-used files. The system is Linux with a recent (≥ 3.8) kernel. I can use root access to set things up. How can I run a command in a limited environment (a container?) such that the contribution of that environment to the (ext4) inode cache is limited to a value that I set?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
202,588
I run Gnome, which has pretty good support for my HiDPI screen. However, when I run QT apps I can't seem to find a way to scale the fonts. Is there a way to do this without installing a full version of KDE?
Updated: Since Qt 5.6, Qt 5 applications can be instructed to honor screen DPI by setting the QT_AUTO_SCREEN_SCALE_FACTOR environment variable. If automatic detection of DPI does not produce the desired effect, scaling can be set manually per-screen (QT_SCREEN_SCALE_FACTORS) or globally (QT_SCALE_FACTOR). You can also use QT_FONT_DPI to adjust scaling of text. Original: You can try this recipe from the archwiki: Qt5 applications can often be run at higher dpi by setting the QT_DEVICE_PIXEL_RATIO environment variable. Note that the variable has to be set to a whole integer, so setting it to 1.5 will not work. This can for instance be enabled by creating a file /etc/profile.d/qt-hidpi.sh containing

export QT_DEVICE_PIXEL_RATIO=2

And set the executable bit on it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65517/" ] }
202,598
Imagine I want to access the blocks of the file /hello/file. How many inodes should I walk through? I guess two, since I should not go through the root inode, right?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114752/" ] }
202,643
I am running Raspbian on a Pi and installed cron to schedule a job. I wrote a Python script and I set it to run every 5 minutes. The job is happening every 5 minutes, no problems, but when I run crontab -l as root and pi , it says there are no jobs. When I run crontab -e as root and as pi they are blank. I honestly can't remember the exact details of when I set up the job. I know I wrote a line on a document that was formatted like a crontab and I am pretty sure it was done as root . I have discovered this as I was going to add some more jobs, and would like to locate the other one I made before I get going on adding more.
There are two lists of scheduled tasks (crontabs). Each user (including root) has a per-user crontab which they can list with crontab -l and edit with crontab -e . The usual Linux implementation of cron stores these files in /var/spool/cron/crontabs . You shouldn't modify these files directly (run crontab -e as the user instead), but it's safe to list them to see what's inside. You need to be root to list them. There is a system crontab as well. This one is maintained by root, and the jobs can run as any user. The system crontab consists of /etc/crontab and, on many systems, files in /etc/cron.d . These files have an additional column: after the 5 date/time fields, they have a “user” field, which is the user that the job will run as. It's common to set up /etc/crontab to run scripts from directories /etc/cron.hourly , /etc/cron.daily , etc. and that's how it's done on Raspbian. So look in all these places: /var/spool/cron/crontabs/* (you need to be root for this one), /etc/crontab , /etc/cron.* . You can also get information in the system logs. They won't tell you where the job was listed, but they tell you exactly what command is being executed, so you can search for the command text. For example, this is the entry that runs commands in /etc/cron.hourly every hour: May 11 07:17:01 darkstar CRON[2480]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
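For example, to search the logs on Raspbian (a sketch, assuming the default rsyslog setup):

grep CRON /var/log/syslog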
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114879/" ] }
202,671
Basically the packages kbd, console-setup, console-setup-linux, and keyboard-configuration all depend on each other. So I cannot remove, configure, or purge them. When attempting to run:

dpkg --configure <package name>

it returns the error:

dpkg: dependency problems prevent configuration of

It goes on to list the specific dependencies. Upon trying to configure those, it eventually ends up back at the original package. Am I missing something? Edit: It appears I can configure keyboard-configuration, but it gives me this error:

user@ip:~$ sudo dpkg --configure keyboard-configuration
Setting up keyboard-configuration (1.108ubuntu5) ...
/var/lib/dpkg/info/keyboard-configuration.postinst: 1: /var/lib/dpkg/info/keyboard-configuration.postinst: udpkg: not found
/usr/bin/ckbcomp: Can not find file "symbols/en" in any known directory
dpkg: error processing package keyboard-configuration (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 keyboard-configuration
I had exactly the same problem after upgrading a server from 14.10 to 15.04 yesterday. I solved it with these commands:

sudo apt-get remove keyboard-configuration
sudo apt-get install keyboard-configuration
sudo apt-get update && sudo apt-get upgrade
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115167/" ] }
202,689
Given input of the form

XY981743	foobarlkasdf saflkas asf
ZR!sgfad	asdSAD asdsadf SAdfasdf
46lk	lksad bar foolkasjfdrte

how can I truncate only the second column? The delimiter is TAB and the second column must be at most 75 characters long.
Using awk, split the file using tabs and output the first field in full and the first 75 characters (at most) of the second:

awk -F "\t" 'BEGIN { OFS=FS }; { print $1, substr($2, 1, 75); }'

As pointed out by fedorqui, you can handle files with more than two fields by replacing the fields you need to truncate:

awk -F "\t" 'BEGIN { OFS=FS }; { $2=substr($2, 1, 75); print }'

You could apply the substr to multiple fields by looping over them if necessary.
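A sketch of that loop, truncating every field from the second one onwards:

awk -F "\t" 'BEGIN { OFS=FS }; { for (i = 2; i <= NF; i++) $i = substr($i, 1, 75); print }'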
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112826/" ] }
202,698
On Ubuntu 15.04 I have this file: /usr/local/bin/myscript (it's a script I made). If I run this command under my account, it will do what I need it to do as a root user: sudo /usr/local/bin/myscript I now want to make /usr/local/bin/myscript run on machine startup, but as a root user (as if I was running the sudo command but without having to type any password). How is this done on Ubuntu 15.04?
And now, the systemd answer. You're using Ubuntu version 15. You have systemd. /etc/rc.local is at best a backwards compatibility mechanism in systemd. And as shown by the mess in the AskUbuntu question hyperlinked below, using it can go horribly wrong. So make a proper systemd service unit. You are creating a local, non-system, non-package service unit, so the unit file will go in /etc/systemd/system/, which is where that type of unit belongs. Let us call it /etc/systemd/system/myscript.service. It contains:

[Unit]
Description=user2580's script
Documentation=https://unix.stackexchange.com/questions/202698/

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript

[Install]
WantedBy=multi-user.target

If your script forks "in order to dæmonize", then stop it from doing so. That's completely unnecessary. Run systemctl preset myscript.service (as the superuser) to have it start automatically at bootstrap. Run systemctl start myscript.service (as the superuser) to manually start it right now. Run systemctl status myscript.service to see its status. Note that this does not execute your script in a context where it will be able to talk to an X server. It could be run before an X server is even started up. But you don't mention any requirement for being an X client, nor for other complexities that bite novices, like a HOME environment variable. And those are subjects for other questions, in any case. So I'll leave it at that. Further reading: https://unix.stackexchange.com/a/200281/5132 ; Jonathan de Boyne Pollard (2001). "Don't fork() in order to "put the dæmon into the background"." Mistakes to avoid when designing Unix dæmon programs. Frequently Given Answers.
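One practical note (my addition, not from the answer): after creating or editing the unit file, tell systemd to reload its configuration before running the systemctl commands above:

sudo systemctl daemon-reload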
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114903/" ] }
202,749
I've got a USB pen drive and I'd like to turn it into a bootable MBR device. However, at some point in its history, that device had a GPT on it, and I can't seem to get rid of that. Even after I ran mklabel dos in parted, grub-install still complains: "Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet." I don't want to preserve any data. I only want to clear all traces of the previous GPT, preferably using some mechanism which works faster than a dd if=/dev/zero of=… to zero out the whole drive. I'd prefer a terminal-based (command line or curses) approach, but some common and free graphical tool would be fine as well.
If you don't want to fiddle with dd, gdisk can do it:

$ sudo gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): ?
b	back up GPT data to a file
<snip>
w	write table to disk and exit
x	extra functionality (experts only)
?	print this menu

Command (? for help): x

Expert command (? for help): ?
a	set attributes
<snip>
w	write table to disk and exit
z	zap (destroy) GPT data structures and exit
?	print this menu

Expert command (? for help): z
About to wipe out GPT on /dev/sdb. Proceed? (Y/N): Y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): Y

Verify:

$ sudo gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help):
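A noninteractive equivalent, if you prefer a one-liner (sgdisk ships in the same gdisk package):

sudo sgdisk --zap-all /dev/sdb   # destroys both the GPT structures and the MBR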
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202749", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20807/" ] }
202,796
What is the name of the font that is used in Linux Console TTY 1-6?
The font in the image you supplied is the VGA font (I believe people refer to it as the VGA 437 font, but it's ambiguous; take a look at the wikipedia page .) This rendering is not something specific to Linux – it's your graphics card's rendition. Every graphics card I've used has used this particular rendering by default. I found a TTF clone of it here . The Linux TTY has other fonts and sizes too. If you want to customize it, try sudo dpkg-reconfigure console-setup
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114975/" ] }
202,797
If we use echo 1234 >> some-file then Documentation says that the output is appended. My guess is that, if some-file does not exist, then O_CREAT will make a new file. If > was used, then O_TRUNC will truncate existing file. In case of >> :Will the file be opened as O_WRONLY (or O_RDWR) and seeked to end and write operation is done , simulating O_APPEND ?Or will the file be opened as O_APPEND , leaving it to the kernel to make sure appending happens ? I am asking this because a conserver process is overwriting some markers inserted by echo, when the output file is from NFS mount point, & NFS Documentation says O_APPEND is not supported on server, so client kernel will have to handle it. I guess conserver process is using O_APPEND , but not sure of bash >> on linux, hence asking the question here.
I ran this:

strace -o spork.out bash -c "echo 1234 >> some-file"

to figure out your question. This is what I found:

open("some-file", O_WRONLY|O_CREAT|O_APPEND, 0666) = 3

No file named "some-file" existed in the directory in which I ran the echo command.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/202797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54246/" ] }
202,819
Today the IT manager got angry because I used nmap on the 3 servers I manage to see what ports they had open. I know I could have used netstat inside the host's shell. He told me that "if the network goes down because of nmap", I would be punished. I would like to know, technically, how much network bandwidth / how many bytes a nmap 192.168.1.x would take, which outputs:

Starting Nmap 6.40 ( http://nmap.org ) at 2015-05-11 13:33 ART
Nmap scan report for 192.168.x.53
Host is up (0.0043s latency).
Not shown: 983 closed ports
PORT      STATE SERVICE
1/tcp     open  tcpmux
22/tcp    open  ssh
79/tcp    open  finger
80/tcp    open  http
111/tcp   open  rpcbind
119/tcp   open  nntp
143/tcp   open  imap
1080/tcp  open  socks
1524/tcp  open  ingreslock
2000/tcp  open  cisco-sccp
6667/tcp  open  irc
12345/tcp open  netbus
31337/tcp open  Elite
32771/tcp open  sometimes-rpc5
32772/tcp open  sometimes-rpc7
32773/tcp open  sometimes-rpc9
32774/tcp open  sometimes-rpc11

Nmap done: 1 IP address (1 host up) scanned in 3.28 seconds
This is easy enough to measure, at least if you nmap a host your machine is not otherwise communicating with. Just use tcpdump or wireshark to capture the traffic, limited to that IP address. You could also use iptables counters, etc. I did so (using wireshark); the machine I tested on has fewer open TCP ports (5), but the totals were 2009 packets and 118,474 bytes. That took 1.4 seconds, so 1435 pps or 677 kbps. Neither should take down a reasonably-configured network. Scanning additional targets could potentially overwhelm a stateful firewall's connection tracking, if the scan went through a firewall. And of course running nmap is likely to cause any intrusion detection system to alarm—potentially wasting someone's time investigating. Finally, nmap (by default) doesn't check all ports, and host-based IDSs may detect and respond to the scan—both mean you don't necessarily get accurate answers.
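A sketch of such a measurement (the interface name and target address are placeholders):

sudo tcpdump -i eth0 -w scan.pcap host 192.168.1.53   # capture while the scan runs
capinfos scan.pcap                                    # packet/byte totals; capinfos comes with wireshark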
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21575/" ] }
202,839
I am working on a system that I did not create the BusyBox build for. I do not want to recompile BusyBox for fear that my configuration will not completely match the original, and besides, the system is functioning well enough on this build. I could be swayed to do this if I knew of a way to pull the configuration of a running BusyBox install, much like a running kernel. I am trying to figure out how to disable the switches used to call udhcpc from the ifup command. I can see the defaults compiled into the build that I am using. They are -R -n -p. I want this process to fork into the background, and I thought using udhcpc_opts -b in /etc/network/interfaces would solve the issue. I get the fork to the background and then the process dies. If I just call udhcpc -b it forks to the background indefinitely. Is there a way to override the -n switch through something I can put into udhcpc_opts? Thank you.
Having the same problem, I did not want to recompile busybox and wanted to use the flags "-t 0 -b" to let udhcpc try forever in the background, but I could not avoid the "-n" flag that is passed by default by ifup. As a super-hack (but it worked for me) I used the following options for udhcpc_opts in /etc/network/interfaces:

udhcpc_opts -t 0 -T 10 -A 20 -S &

The final "&" did the trick, as it launches udhcpc as a background task and is almost the same as the "-b" flag, but it also works if "-n" is specified on the command line. Note that it has to be added to the iface option to work, e.g.:

iface eth0 inet dhcp
    udhcpc_opts -t 0 -T 10 -A 20 -S &
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109133/" ] }
202,855
Can someone please explain the following options of the readlink command to me in simple language:

-f, --canonicalize
        canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist

-e, --canonicalize-existing
        canonicalize by following every symlink in every component of the given name recursively, all components must exist

-m, --canonicalize-missing
        canonicalize by following every symlink in every component of the given name recursively, without requirements on components existence
I think it's quite self-explanatory, so I don't really know which part sounds ambiguous to you... Let's see with an example:

--canonicalize

$ mkdir /tmp/realdir
$ mkdir /tmp/subdir
$ ln -s /tmp/realdir /tmp/subdir/link
$ cd /tmp
$ readlink -f ./subdir/link/nonexistentdir/
/tmp/realdir/nonexistentdir
$ readlink -f ./subdir/link/nonexistentfile.txt
/tmp/realdir/nonexistentfile.txt

Whatever the options are, readlink will:
- translate the relative path to an absolute path
- translate the symlink name to the real path

And as you can see above, with -f, readlink doesn't care whether the last part of this path (here nonexistentfile.txt) exists or not. If another part of this path does not exist, readlink will output nothing and will have a return code different from 0 (which means an error occurred). See:

$ readlink -f /tmp/fakedir/foo.txt
$ echo $?
1

--canonicalize-existing

If you try the same with -e:

$ readlink -e ./subdir/link
/tmp/realdir
$ readlink -e ./subdir/link/nonexistentfile.txt
$ echo $?
1

With -e, if any component of the path doesn't exist, readlink will output nothing and will have a return code different from 0.

--canonicalize-missing

The -m option is the opposite of -e. No test will be made to check whether the components of the path exist:

$ readlink -m ./subdir/link/fakedir/fakefile
/tmp/realdir/fakedir/fakefile
$ ln -s /nonexistent /tmp/subdir/brokenlink
$ readlink -m ./subdir/brokenlink/foobar
/nonexistent/foobar
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
202,880
I have a text file named foo.txt with root permission on one Linux distribution. I copied it to another Linux distribution on another computer. Would the file permissions still be maintained for foo.txt? If yes, how does Unix/Linux know, and duplicate, the permissions of the file? Does it add extra bytes (which indicate the permissions) to the file?
To add to Eric's answer (don't have the rep to comment): permissions are not stored in the file itself but in the file's inode (the filesystem's pointer to the file's physical location on disk), as metadata along with the owner and timestamps. This means that copying the file to a non-POSIX filesystem like NTFS or FAT will drop the permission and owner data. File owner and group are just a pair of numbers, the user ID (UID) and group ID (GID) respectively. The root UID is 0 as standard, so the file will show up as owned by root on (almost) every unix-compliant system. On the other hand, a non-root owner will not be saved in a meaningful way. So in short, root ownership will be preserved if tarball'd or copied via an extX usbstick or the like. Non-root ownership is unreliable.
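You can inspect that inode metadata directly, for example with GNU stat (a sketch, using the file from the question):

stat -c '%a %U (%u) %G (%g) %n' foo.txt   # permissions, owner, group, name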
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101461/" ] }
202,883
I have a custom xmodmap file I use, including useful multi-language diacriticals, english quotes, dashes &c. I want to use this map with kmscon , so I need to create a xkb configuration from it. Is there an automated method to do it? Or even a straightforward manual process, since I won't need to do this frequently?
Make your own xkb configuration file. The idea is to "read" the current keyboard config (do not call xmodmap), and write your own symbols file based on it. First:

xkbcomp -xkb $DISPLAY

This creates server-0_0.xkb. In this file, take the following block:

xkb_symbols "pc+inet(evdev)+compose(menu)+whatever(option)" {
    key <ESC> { [ Escape ] };
    ...
};

change the first line into:

default xkb_symbols "my_symbols" {
    include "pc+inet(evdev)"
    include "compose(menu)+whatever(option)"

(I think you can break options into as many "include" lines as you like). Change the keys you want to modify and prepend them with override:

override key <AE10> { [ 0, parenright, degree ] };

Remove all unchanged keys. System-wide installation: Put all this into /usr/share/X11/xkb/symbols/my_terrific_kb. Now users may load it with setxkbmap my_terrific_kb (in .xinitrc or .xsessionrc). Probably you can put Option "XkbLayout" "my_terrific_kb" in xorg.conf for a system-wide change. Single-user installation: Put all this into ~/anywhere/my_terrific_kb. Find the XInput id of your keyboard with xinput list. Then run:

xkbcomp -i <XInput_id> ~/anywhere/my_terrific_kb $DISPLAY
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/202883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115004/" ] }
202,891
So I just installed the latest Kali Linux on my laptop, which was based on Debian 7 (oldstable). I then dist-upgraded the whole thing to Debian 8. I've always wanted Wayland instead of X11, so I installed the necessary packages. Then I created a minimal ~/.config/weston.ini configuration. Now, from the Gnome log-in screen I can boot to Gnome on Wayland or LXDE (among others): the former with very limited success, the latter (LXDE) almost perfectly, though the panel needs setting up (I have to look up freedesktop). Anyway, in LXDE, the GUI is more responsive than it was on the oldstable and possibly as fast as when it was running Windows 7. I was pleased. But I want to know if this is because of all the library/module upgrades from Debian 7 to 8, or from using Wayland (if I really am using Wayland at all). I skimmed through htop and found a /usr/bin/Xorg running and no process named "wayland". So which one am I currently running?
Obtain the session ID to pass in by issuing:

loginctl

That will show you something like:

   SESSION        UID USER             SEAT             TTY
        c2       1000 yourusername     seat0

1 sessions listed.

In that example, c2 is the session ID. Then:

loginctl show-session <SESSION_ID> -p Type

If you want all this in a single command:

loginctl show-session $(awk '/tty/ {print $1}' <(loginctl)) -p Type | awk -F= '{print $2}'

Use the session corresponding to your user name. Refer to: https://fedoraproject.org/wiki/How_to_debug_Wayland_problems

So, for me it is:

$ loginctl show-session 2 -p Type
Type=wayland
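On many systemd-based desktops there is also a shortcut (an aside, not from the answer above): the session type is exported to the session's own processes:

echo $XDG_SESSION_TYPE   # prints wayland or x11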
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/202891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100633/" ] }
202,894
I'm writing a script to compare two directories recursively and run vimdiff when it finds a difference:

#!/bin/bash
dir1=${1%/}
dir2=${2%/}

find "$dir1/" -type f -not -path "$dir1/.git/*" | while IFS= read line; do
    file1="$line"
    file2=${line/$dir1/$dir2}
    isdiff=$(diff -q "$file1" "$file2")
    if [ -n "$isdiff" ]; then
        vimdiff "$file1" "$file2"
    fi
done

This doesn't work because vim throws a warning: "Input is not from a terminal." I understand that I need to supply the - argument, which is kind of tricky, but I have it more or less working:

#!/bin/bash
dir1=${1%/}
dir2=${2%/}

find "$dir1/" -type f -not -path "$dir1/.git/*" | while IFS= read line; do
    file1="$line"
    file2=${line/$dir1/$dir2}
    isdiff=$(diff -q "$file1" "$file2")
    if [ -n "$isdiff" ]; then
        cat "$file1" | vim - -c ":vnew $file2 | windo diffthis"
    fi
done

The problem with this is that the right side of the diff window is a new file. I want to compare the original file in dir1 to the original file in dir2. How can I do this?
vim, and therefore vimdiff, seems to assume that stdin is a tty. You can work around this with something like this in your script:

</dev/tty vimdiff "$file1" "$file2"
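Applied to the script from the question, the relevant branch becomes:

if [ -n "$isdiff" ]; then
    </dev/tty vimdiff "$file1" "$file2"
fi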
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22172/" ] }
202,918
I have to edit some files placed on some server I could reach via ssh. I would prefer to edit these files in customized vim on my workstation (I have not rights to change vim settings on remote server). Sometimes I would like to edit a file with sublime text or other GUI editor. Of course, I can download these files, edit them locally and upload them back to server. Is there more elegant solution?
You could do this by mounting the remote folder as a file-system using sshfs. To do this, first some pre-requisites:

# issue all these cmds on the local machine
sudo apt-get install sshfs
sudo adduser <username> fuse   # not required on newer Linux versions (including Ubuntu > 18.04)

Now, do the mounting:

mkdir ~/remoteserv
sshfs -o idmap=user <username>@<ipaddress>:/remotepath ~/remoteserv

After this, just go into the mounted folder and use your own local customized vim.
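When you are done editing, unmount with (a sketch, matching the mount point above):

fusermount -u ~/remoteserv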
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/202918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39370/" ] }
202,936
To list all the files ending with 10 or 11 or 12 I have tried ls *[10-12] and ls *[10,11,12] but these are not working. I don't know why. Can anyone help me?
[...] matches one character ( collating element in some shells) if it's in the specified set. [10-12] means either character 1 or characters from 0 to 1 or character 2 so the same as [102] or [0-2] ¹. [10,11,12] means either 1 , 0 , , , 1 ... So the same as [,0-2] . Here you want: ls -d -- *1[0-2] That is, (non-hidden) filenames that end in 1 followed by any of the 0 , 1 or 2 characters. Now beware that it also matches foo110 or foo112345612 . If you don't want that, then you'd need something like: ls -d -- *[!0-9]1[0-2] 1[0-2] That would however not match foo010 If out of foo10 , foo00010 , foo110 you want foo10 and foo00010 to match, then with ksh or bash -O extglob or zsh -o kshglob : ls -d -- !(*[0-9])*(0)1[0-2] Note that except with zsh , if any of those pattern's don't match, the patterns will be passed as-is to ls , so you may end up listing unwanted files if there are files named like that. With bash , you can set the failglob option to work around that. zsh is the only shell that has a glob operator to match ranges of numbers. ls -d -- *<10-12> would match files ending in a number from 10 to 12. It would also match foo110 since that's foo1 followed by 10 . You could do (with extendedglob ): ls -d -- (^*[0-9])<10-12> though. zsh is also the shell that introduced the {10..12} type of brace expansion (copied by ksh93 and bash a few years later). However, like the {10,11,12} equivalent, that is not globbing. That's a form of expansion that is done before globbing. ls -d -- *{10..12} is first expanded to: ls -d -- *10 *11 *12 If any of those 3 globs fails, then the command is aborted in zsh (and bash -O failglob ). ¹ though note that in several shells and locales, including bash on a GNU system in a typical UTF-8 locale such as en_US.UTF-8 , [0-1] may match on a lot more characters than just 0 and 1 including things like ٠۰߀०০੦૦୦௦౦౸೦൦෦๐໐༠༳၀႐០៰᠐᥆᧐᪀᪐᭐᮰᱀᱐⁰₀↉⓪⓿〇㍘꘠꣐꤀꧐꧰꩐꯰0
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202936", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115053/" ] }
202,945
I'm trying to run some experiments with Linux and look for the smallest distribution by installation size. (RAM, CPU doesn't really matter)
Update: ttylinux is unmaintained at the moment! If you're still interested start here or here . Depending on your platform, ttylinux is maybe something for you: This smallest ttylinux system has an 8 MB file system and runs on i486 computers within 28 MB of RAM, but provides a complete command line environment and is ready for Internet access. Started in 2001 and latest release is from 2015-03-05 so it is still maintained.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/202945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115043/" ] }
202,979
I am using ksh. I have a file temp.txt with some pipe-delimited data:

one|two|three|four|five
abc|def|pqr|lmn|xyz

As is clear from the example, each record ends with a newline character after the data value in the last column. However, I want each record to end with a pipe delimiter and a newline character, as below:

one|two|three|four|five|
abc|def|pqr|lmn|xyz|

I tried the following commands but was still unsuccessful:

tr '\n' '|\n' < temp.txt

and

sed -i 's/\n/|\n/g' temp.txt

Am I missing something?
Unless you have joined lines, the \n doesn't appear in sed's pattern space: the end-of-line anchor is $. So with GNU sed:

sed -i 's/$/|/' temp.txt
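If your sed lacks -i (it is an extension, not POSIX), a portable variant along the same lines is:

sed 's/$/|/' temp.txt > temp.txt.new && mv temp.txt.new temp.txt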
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/202979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57370/" ] }
203,043
I have the following files:

Codigo-0275_tdim.matches.tsv
Codigo-0275_tdim.snps.tsv
FloragenexTdim_haplotypes_SNp3filter17_single.tsv
FloragenexTdim_haplotypes_SNp3filter17.tsv
FloragenexTdim_SNP3Filter17.fas
S134_tdim.alleles.tsv
S134_tdim.snps.tsv
S134_tdim.tags.tsv

I want to count the number of files that have the word snp (case sensitive) in their name. I tried using grep -a 'snp' | wc -l, but then I realized that grep searches within the files. What is the correct command to scan through the file names?
Do you mean you want to search for snp in the file names ? That would be a simple shell glob (wildcard), used like this: ls -dq *snp* | wc -l Omit the -q flag if your version of ls doesn't recognise it. It handles filenames containing "strange" characters (including newlines).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/203043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114559/" ] }
203,074
I know that in the vim editor, pressing f a will locate the character a, and it's a great shortcut for me. But I can't find a similar shortcut in Linux bash (emacs mode). I use Xshell as my terminal.
If you don't want to use vi mode (despite being used to vi commands), in emacs mode (e.g. in bash or ksh ) there's Ctrl-] c to find the character c on the current history line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88545/" ] }
203,129
I want to find files which are greater than 1 GB and older than 6 months on the entire server. How do I write a command for this?
Use find:

find /path -mtime +180 -size +1G

-mtime means search for modification times that are greater than 180 days (+180). And the -size parameter searches for files greater than 1GB.
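For example, to list the matches with their sizes once the criteria look right (a sketch):

find /path -mtime +180 -size +1G -exec ls -lh {} +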
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/203129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114762/" ] }
203,136
I am using Trisquel GNU/Linux-Libre, which comes with the GNOME 3 Flashback desktop environment. I know that I can run GUI applications as root with sudo and gksudo, but I want to know how to run GUI applications as root with the help of pkexec. When I try to run gedit (or any other application, like nautilus) with pkexec gedit, it prompts for a password for authentication. After entering the password, it exits with an error:

$ pkexec gedit
error: XDG_RUNTIME_DIR not set in the environment.

(gedit:6135): Gtk-WARNING **: cannot open display:

So, it seems something is going wrong with the display environment. I've also tried DISPLAY=:0 pkexec gedit, but that doesn't work. The following information is available from man pkexec:

The environment that PROGRAM will run it, will be set to a minimal known and safe environment in order to avoid injecting code through LD_LIBRARY_PATH or similar mechanisms. In addition the PKEXEC_UID environment variable is set to the user id of the process invoking pkexec. As a result, pkexec will not allow you to run X11 applications as another user since the $DISPLAY and $XAUTHORITY environment variables are not set. These two variables will be retained if the org.freedesktop.policykit.exec.allow_gui annotation on an action is set to a nonempty value; this is discouraged, though, and should only be used for legacy programs.

Now I don't know what to do to accomplish this. Help me find out how to run GUI applications as root by means of pkexec. Or is this not possible at all? BTW, I was inspired by the gparted-pkexec command, which works fine. How does gparted use pkexec?
It can be done by adding custom actions to policykit. If you want to run gedit as root with pkexec, you have to create a new file /usr/share/polkit-1/actions/org.freedesktop.policykit.gedit.policy, for example:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC
 "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">
<policyconfig>
  <action id="org.freedesktop.policykit.pkexec.gedit">
    <description>Run gedit program</description>
    <message>Authentication is required to run the gedit</message>
    <icon_name>accessories-text-editor</icon_name>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>auth_admin</allow_active>
    </defaults>
    <annotate key="org.freedesktop.policykit.exec.path">/usr/bin/gedit</annotate>
    <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
  </action>
</policyconfig>

Finally, pkexec gedit should work as expected. See the manpage or the Reference Manual, which explains it with an EXAMPLE like:

$ man pkexec | grep -i ^Example -A 60
EXAMPLE
    To specify what kind of authorization is needed to execute the program
    /usr/bin/pk-example-frobnicate as another user, simply write an action
    definition file like this

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE policyconfig PUBLIC
     "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
     "http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">
    <policyconfig>
      <vendor>Examples for the PolicyKit Project</vendor>
      <vendor_url>http://hal.freedesktop.org/docs/PolicyKit/</vendor_url>
      <action id="org.freedesktop.policykit.example.pkexec.run-frobnicate">
        <description>Run the PolicyKit example program Frobnicate</description>
        <description xml:lang="da">Kør PolicyKit eksemplet Frobnicate</description>
        <message>Authentication is required to run the PolicyKit example program Frobnicate (user=$(user), program=$(program), command_line=$(command_line))</message>
        <message xml:lang="da">Autorisering er påkrævet for at afvikle PolicyKit eksemplet Frobnicate (user=$(user), program=$(program), command_line=$(command_line))</message>
        <icon_name>audio-x-generic</icon_name>
        <defaults>
          <allow_any>no</allow_any>
          <allow_inactive>no</allow_inactive>
          <allow_active>auth_self_keep</allow_active>
        </defaults>
        <annotate key="org.freedesktop.policykit.exec.path">/usr/bin/pk-example-frobnicate</annotate>
      </action>
    </policyconfig>

    and drop it in the /usr/share/polkit-1/actions directory under a suitable
    name (e.g. matching the namespace of the action). Note that in addition to
    specifying the program, the authentication message, description, icon and
    defaults can be specified. Note that occurences of the strings $(user),
    $(program) and $(command_line) in the message will be replaced with
    respectively the user (of the form "Real Name (username)" or just
    "username" if there is no real name for the username), the binary to
    execute (a fully-qualified path, e.g. "/usr/bin/pk-example-frobnicate")
    and the command-line, e.g. "pk-example-frobnicate foo bar". For example,
    for the action defined above, the following authentication dialog will be
    shown:

    +----------------------------------------------------------+
    | Authenticate                                         [X] |
    +----------------------------------------------------------+
    |                                                          |
    | [Icon] Authentication is required to run the PolicyKit   |
    |        example program Frobnicate                        |
    |                                                          |
    |        An application is attempting to perform an        |
    |        action that requires privileges. Authentication   |
    |        is required to perform this action.               |
    |                                                          |
    |        Password: [__________________________________]    |
    |                                                          |
    | [V] Details:                                             |
    |  Command: /usr/bin/pk-example-frobnicate                 |
    |  Run As:  Super User (root)                              |
    |  Action:  org.fd.pk.example.pkexec.run-frobnicate        |
    |  Vendor:  Examples for the PolicyKit Project             |
    |                                                          |
    |                          [Cancel] [Authenticate]         |
    +----------------------------------------------------------+
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
203,141
I want to execute a script after an outbound ssh. I have two servers (A) and (B), and a computer (C). (C) can connect to (A) via SSH but can't connect to (B), and (A) can connect to (B) via SSH. But (C) can do a multi-hop ssh to get to (B): C -> A -> B. So I want to log the outbound ssh on (A) with the variable ${SSH_CLIENT%% *} to know whether (C) did a multi-hop to connect to (B).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114314/" ] }
203,168
I'm trying to run a docker image that works on other systems (and you can even pull it from dockerhub, if you'd like: it's dougbtv/asterisk ) however, on my general workstation, it's complaining about free space when it (looks like) it's untarring the docker images. I try to run it, and when I do I get an error stating that it's out of space. Here's an example of me trying to run it, and it complaining about space.. [root@localhost docker]# docker run -i -t dougbtv/asterisk /bin/bashTimestamp: 2015-05-13 07:50:58.128736228 -0400 EDTCode: System errorMessage: [/usr/bin/tar -xf /var/lib/docker/tmp/70c178005ccd9cc5373faa8ff0ff9c7c7a4cf0284bd9f65bbbcc2c0d96e8565d410879741/_tmp.tar -C /var/lib/docker/devicemapper/mnt/70c178005ccd9cc5373faa8ff0ff9c7c7a4cf0284bd9f65bbbcc2c0d96e8565d/rootfs/tmp .] failed: /usr/bin/tar: ./asterisk/utils/astdb2sqlite3: Wrote only 512 of 10240 bytes/usr/bin/tar: ./asterisk/utils/conf2ael.c: Cannot write: No space left on device/usr/bin/tar: ./asterisk/utils/astcanary: Cannot write: No space left on device/usr/bin/tar: ./asterisk/utils/.astcanary.o.d: Cannot write: No space left on device/usr/bin/tar: ./asterisk/utils/check_expr.c: Cannot write: No space left on device[... another few hundred similar lines] Of course, I check how much space is available, and through googling I find that sometimes this happens because you're out of inodes. So I take a look at both, and I can see that there's plenty of inodes as well. [root@localhost docker]# df -hFilesystem Size Used Avail Use% Mounted ondevtmpfs 3.9G 0 3.9G 0% /devtmpfs 3.9G 20M 3.9G 1% /dev/shmtmpfs 3.9G 1.2M 3.9G 1% /runtmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup/dev/mapper/fedora-root 36G 9.4G 25G 28% /tmpfs 3.9G 5.2M 3.9G 1% /tmp/dev/sda3 477M 164M 285M 37% /boot/dev/mapper/fedora-home 18G 7.7G 8.9G 47% /hometmpfs 793M 40K 793M 1% /run/user/1000/dev/sdb1 489G 225G 265G 46% /mnt/extradoze[root@localhost docker]# df -iFilesystem Inodes IUsed IFree IUse% Mounted ondevtmpfs 1012063 585 1011478 1% /devtmpfs 1015038 97 1014941 1% /dev/shmtmpfs 1015038 771 1014267 1% /runtmpfs 1015038 15 1015023 1% /sys/fs/cgroup/dev/mapper/fedora-root 2392064 165351 2226713 7% /tmpfs 1015038 141 1014897 1% /tmp/dev/sda3 128016 429 127587 1% /boot/dev/mapper/fedora-home 1166880 145777 1021103 13% /hometmpfs 1015038 39 1014999 1% /run/user/1000/dev/sdb1 277252836 168000 277084836 1% /mnt/extradoze And so you can see a bit what's going on here's my /etc/fstab [root@localhost docker]# cat /etc/fstab ## /etc/fstab# Created by anaconda on Tue Mar 17 20:11:16 2015## Accessible filesystems, by reference, are maintained under '/dev/disk'# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info#/dev/mapper/fedora-root / ext4 defaults 1 1UUID=2e2535da-907a-44ec-93d8-1baa73fb6696 /boot ext4 defaults 1 2/dev/mapper/fedora-home /home ext4 defaults 1 2/dev/mapper/fedora-swap swap swap defaults 0 0 And I also asked someone with a similar stack exchange question asked for the results of the lvs command, which shows: [root@localhost docker]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home fedora -wi-ao---- 17.79g root fedora -wi-ao---- 36.45g swap fedora -wi-ao---- 7.77g It's a Fedora 21 system: [root@localhost docker]# cat /etc/redhat-release Fedora release 21 (Twenty One)[root@localhost docker]# uname -aLinux localhost.localdomain 3.19.5-200.fc21.x86_64 #1 SMP Mon Apr 20 19:51:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Storage driver: [doug@localhost cs]$ sudo docker info|grep Driver:Storage Driver: 
devicemapperExecution Driver: native-0.2 Docker version: [doug@localhost cs]$ sudo docker -vDocker version 1.6.0, build 3eac457/1.6.0 Per this recommended article I tried to change docker to /etc/sysconfig/docker OPTIONS='--selinux-enabled --storage-opt dm.loopdatasize=500GB --storage-opt dm.loopmetadatasize=10GB' And restarted docker, to no avail. I have changed it back to just --selinux-enabled (note: I have selinux disabled) Additionally I noticed that the article mentioned looking at the spare data file, which looks like: [root@localhost doug]# ls -alhs /var/lib/docker/devicemapper/devicemappertotal 3.4G4.0K drwx------ 2 root root 4.0K Mar 20 13:37 .4.0K drwx------ 5 root root 4.0K Mar 20 13:39 ..3.4G -rw------- 1 root root 100G May 13 14:33 data9.7M -rw------- 1 root root 2.0G May 13 14:33 metadata Is it a problem that the sparse file is larger than the size of the disk? My lsblk looks like: [root@localhost doug]# lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 111.8G 0 disk ├─sda1 8:1 0 100M 0 part ├─sda2 8:2 0 49.2G 0 part ├─sda3 8:3 0 500M 0 part /boot├─sda4 8:4 0 1K 0 part └─sda5 8:5 0 62G 0 part ├─fedora-swap 253:0 0 7.8G 0 lvm [SWAP] ├─fedora-root 253:1 0 36.5G 0 lvm / └─fedora-home 253:2 0 17.8G 0 lvm /homesdb 8:16 0 1.8T 0 disk └─sdb1 8:17 0 489G 0 part /mnt/extradozeloop0 7:0 0 100G 0 loop └─docker-253:1-1051064-pool 253:3 0 100G 0 dm loop1 7:1 0 2G 0 loop └─docker-253:1-1051064-pool 253:3 0 100G 0 dm
If you are using a Red Hat-based operating system, you should know that "devicemapper" is limited to 10 GB per image, and if you are trying to run an image larger than 10 GB you may get that error. That may be your issue. Try this, it worked for me: https://docs.docker.com/engine/reference/commandline/daemon/#storage-driver-options sudo systemctl stop docker.service (or sudo service docker stop), then rm -rvf /var/lib/docker (take a backup of any important data first; containers and images will be deleted). Run this command: docker daemon --storage-opt dm.basesize=20G where "20G" refers to the new size you want devicemapper to use. Then restart docker: sudo systemctl start docker.service or sudo service docker start. Check that it took effect by running docker info. Hope this works!
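To make the change survive reboots instead of starting the daemon by hand, the same option can go into the file the question already touches on Fedora/RHEL; a sketch of /etc/sysconfig/docker:
OPTIONS='--selinux-enabled --storage-opt dm.basesize=20G'
followed by a normal sudo systemctl restart docker.service.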
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38721/" ] }
203,172
Recently my Unison started throwing up some strange error whenever I tried to sync between my laptop and my PC. I realized that I had added a line in bashrc that would print my pending tasks whenever I would open a terminal. The line added in my bashrc : task list #this command comes from a small utility called taskwarrior The error is here: Received unexpected header from the server: expected "Unison 2.40\n" but received "\nID Proj Age Description\n-- -------- --- -----------------------------\n 2 11d Do the research work\n 3 Life 11d Get stickynotes from stationary\n 1 Technical 11d Fix the error\n\n3 tasks\n", which differs at "\n".This can happen because you have different versions of Unisoninstalled on the client and server machines, or becauseyour connection is failing and somebody is printing an errormessage, or because your remote login shell is printingsomething itself before starting Unison. As mentioned in the error log, my login shell is printing something itself before starting Unison . This is indeed the root of the problem. So, now I have 2 questions: How do I make my bashrc to print "task-list" message AFTER the Unison header? Alternatively, can I make the ssh sessions to load separate RC file so that the "task-list" is not printed at all? Will it be safe to print anything at all? I mean if I am somehow manage to print my task-list after the Unison header, is their any chance of data corruption during syncing, due to the additional information in the header? PS: Unison uses ssh for communication between the two systems.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89385/" ] }
203,195
I need to grep words like these: ABC-DEF, AB2-DEF, AB3-DEF, AB-DEF. So I was trying: grep AB*-DEF, grep -w -AB*-DEF, grep -w AB*DEF. But none of them are working.
* in a regex is not like a filename glob: it means 0 or more of the previous character/pattern. So your examples would be looking for an A, then 0 or more Bs, then -DEF. A . (dot) in regex means "any character", so you could fix your pattern by using grep 'AB.*DEF'
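If you only want those exact tokens (AB, optionally one character, then -DEF), a tighter hedged sketch using extended regular expressions:
grep -E '^AB.?-DEF$' file    # matches AB-DEF, ABC-DEF, AB2-DEF, AB3-DEF on lines of their own
grep -Ew 'AB.?-DEF' file     # or match them as whole words inside longer lines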
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74221/" ] }
203,222
I have a list file File_Transfer_List.txt which contains a list of files to scp. My requirement is to scp the files given in the list file and then delete them from the source location. I tried this: scp File_Name user@server:/destination && rm File_Name ; I am unable to test it since I don't have scp ready to test with; can anyone correct me if I am wrong?
You have two requirements here ( files from filelist and remove source files ) that scream for using rsync . Depending on what your filelist contains (relative or absolute paths, preserve paths on backup, etc) you could just do: rsync --files-from=filelist.txt --remove-source-files -avz \ . user@remotehost:/path/to/backup/folder
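If rsync isn't available, a plain shell loop over the list is a workable sketch (assuming one path per line in File_Transfer_List.txt):
while IFS= read -r f; do
  scp "$f" user@server:/destination && rm -- "$f"
done < File_Transfer_List.txt
The && ensures each file is only removed after its transfer succeeded.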
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203222", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111277/" ] }
203,251
So I have a 4k display, and for some reason Ubuntu decides that it's a good idea to give me a huge cursor instead of something normal. I don't have any DPI settings on the 4k monitor, and I don't want any, so why is the cursor so huge? This is how it looks like: This is on Ubuntu 15.04 with XFCE4 with Nvidia drivers. It only looks like that when the mouse is over system-dependant things (or something in that nature), such as the desktop, window titles, menu bar (File, Edit, View, ...) and context menus. In Firefox it seems to work just fine, except in the bookmarks dropdown. What I've already tried: Running update-alternatives to force the cursor theme. This changes the cursor theme, but it doesn't change the cursor size. Modify the cursor size in dconf-editor . This doesn't do anything. Put Xcursor.size: 24 in ~/.Xdefaults . This also doesn't appear to do anything. xrdb -query returns the following: *customization: -colorXft.dpi: 96Xft.hintstyle: hintnoneXft.rgba: noneXcursor.theme: DMZ-BlackXcursor.size: 24Xcursor.theme_core: 1
I ended up solving it myself (kind of). It's not the ultimate way, but it's a workaround that I can live with myself. Essentially, I took the original sources of the DMZ-Cursors package and created a fork of DMZ-Black, then I removed the 32x32 and 42x42 images, and am now using that as my cursor set. For convenience sake, I've put up my version of DMZ-Black on Github: https://github.com/codecat/dmzblack-96dpi If you wish to do the same with DMZ-White, simply download the sources here , copy DMZ-White, and remove all lines mentioning 32x32 and 42x42 in the *.in files. You can also remove the folders for those images if you want. Then simply run make.sh and copy the generated cursor files (in ../xcursors ) to your cursors folder. (You can take my install script and change_cursor.sh as an example.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203251", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115264/" ] }
203,280
I am aware that the touch command is used to update the date of last modification on a file. It is also used to create a new file if the requested file does not exist on the file system. Since touch (as its name implies) should just update the last modification date, why does it also try to create a new file? Is it just a check written into touch's code, or is it something else that causes a file to be created?
Using strace touch t yields: open("t", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3 It is in touch 's code and I wouldn't call it a check though. The timestamp is updated by opening the file for writing and then just closing it.
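If you only ever want the update-without-create behaviour, touch has a switch for exactly that:
touch -c file    # -c / --no-create: update timestamps if the file exists, silently do nothing otherwise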
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115278/" ] }
203,284
I have a machine with a built-in NIC (eth0), which serves as a DHCP server for a Raspberry Pi. I also have a USB 3G modem, which shows up as ethernet device eth1. eth0 has the static ip 192.168.100.1 in /etc/network/interfaces .When I connect the Pi to the server, /var/log/syslog shows NetworkManager[2366]: <info> Policy set 'Ifupdown (eth0)' (eth0) as default for IPv4 routing and DNS. and after, ip route show gives default via 192.168.1.100 dev eth0 proto static I then need to manually ip route delete defaultip route add default via 192.168.1.1 to get it to connect to the internet via the 3G modem again.I am using CrunchBang Linux, based on Debian 7 wheezy, on the server, and the latest Raspbian on the Pi. How can I choose the default pathway for NetworkManager to prefer? Edit: here's my /etc/network/interfaces : # This file describes the network interfaces available on your system# and how to activate them. For more information, see interfaces(5).# The loopback network interfaceauto loiface lo inet loopbackallow-hotplug eth0auto eth0iface eth0 inet static address 192.168.100.1 netmask 255.255.255.0allow-hotplug eth1auto eth1iface eth1 inet dhcp Note that I've changed /etc/NetworkManager/NetworkManager.conf to have [ifupdown]managed=true because I want to be able to disconnect eth1, the 3G Modem, using nm-applet.Here's /etc/NetworkManager/NetworkManager.conf : [main]plugins=ifupdown,keyfile[ifupdown]managed=true
If using the GUI, try checking the "Use only for resources on this connection" checkbox. If using the config files (like you are :) ), add never-default=true in the [ipv4] section. If using command-line tools, run sudo nmcli con mod "connection name" ipv4.never-default yes. This way NetworkManager stops installing that connection's default route, so you can delete it once and add your own without it coming back.
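For the keyfile case, an illustrative fragment for the static connection (file under /etc/NetworkManager/system-connections/; the address is taken from the question, the rest is assumed):
[ipv4]
method=manual
address1=192.168.100.1/24
never-default=true
Then sudo nmcli con reload and re-activate the connection; ip route show should now keep the default via 192.168.1.1 on eth1.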
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39759/" ] }
203,290
I am using Linux Mint 17.1 Rebecca for about 2 days and accidentally typed my password into the terminal which is now displayed in the history list of commands I have previously typed. I want to clear the terminal history completely. I have tried using the following commands in the terminal which I thought would clear the history forever but they do not: history -cresettput reset The above commands "will" clear the history from the terminal but when I exit and bring up a new one all my previous history is still there and can all be listed again using the - history command and also by pressing the UP arrow on my keyboard. I do not want this to happen until I have totally cleared my history, then I want to continue using it. How can I clear my terminal history completely - forever and start fresh? Please Note: I do not want to exit the terminal without saving history just clear it forever in this one instance.
reset or tput reset only does things to the terminal. The history is entirely managed by the shell, which remains unaffected. history -c clears your history in the current shell. That's enough (but overkill) if you've just typed your password and haven't exited that shell or saved its history explicitly. When you exit bash, the history is saved to the history file, which by default is .bash_history in your home directory. More precisely, the history created during the current session is appended to the file; entries that are already present are unaffected. To overwrite the history file with the current shell's history, run history -w . Instead of removing all your history entries, you can open .bash_history in an editor and remove the lines you don't want to keep. You can also do that inside bash, less conveniently, by using history to display all the entries, then history -d to delete the entries you don't want, and finally history -w to save. Note that if you have multiple running bash instances that have read the password, each of them might save it again. Before definitively purging the password from the history file, make sure that it is purged from all running shell instances. Note that even after you've edited the history file, it's possible that your password is still present somewhere on the disk from an earlier version of the file. It can't be retrieved through the filesystem anymore, but it might still be possible (but probably not easy) to find it by accessing the disk directly. If you use this password elsewhere and your disk gets stolen (or someone gets access to the disk), this could be a problem.
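A minimal sequence for purging one entry from the current shell before it is ever saved (the entry number below is illustrative):
history | grep passw    # locate the number of the offending entry
history -d 1042         # delete that entry from the in-memory history
history -w              # overwrite ~/.bash_history with the cleaned history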
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/203290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115277/" ] }
203,307
I am using some custom Linux distribution without any UI. I would like to find out the Bluez version through the command-line. How can this be done?
If you have a rough idea (or are fine with covering the last 10 years), BlueZ provides tools in bluez-utils that report the version. Unfortunately, these tools changed between version 4 and 5, so you may have to check which of the two is installed. For BlueZ 4.0: bluetoothd --version Since BlueZ 5.0, there is a new command-line tool, bluetoothctl: bluetoothctl --version
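If neither tool is on the PATH, the daemon binary can usually be asked directly; its location varies by build, so treat these paths as guesses to adapt:
/usr/libexec/bluetooth/bluetoothd --version
/usr/lib/bluetooth/bluetoothd --version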
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115297/" ] }
203,309
I'm working with select and case in bash. I currently have nine options, which makes a nice, tidy, 3x3 grid of options, but it displays like so: 1) show all elements 4) write to file 7) clear elements 2) add elements 5) generate lines 8) choose file 3) load file 6) clear file 9) exit I'd prefer if it displayed in rows before columns: 1) show all elements 2) add elements 3) load file4) write to file 5) generate lines 6) clear file 7) clear elements 8) choose file 9) exit Is there any way to accomplish this? Preferably something easy to set and unset within a script, like a shell option. If it matters, the options are stored in an array, and referenced in the case blocks by the index of the array. OPTIONS=("show all elements" "add elements" "load file" "write to file" "generate lines" "clear file" "clear elements" "choose file" "exit")...select opt in "${OPTIONS[@]}"docase $opt in "${OPTIONS[0]}")... "${OPTIONS[8]}") echo "Bye bye!" exit 0 break ;; *) echo "Please enter a valid option."esacdone
Create your own "select": #!/bin/bashoptions=("show all elements" "add elements" "load file" "write to file" "generate lines" "clear file" "clear elements" "choose file" "exit")width=25cols=3for ((i=0;i<${#options[@]};i++)); do string="$(($i+1))) ${options[$i]}" printf "%s" "$string" printf "%$(($width-${#string}))s" " " [[ $(((i+1)%$cols)) -eq 0 ]] && echodonewhile true; do echo read -p '#? ' opt case $opt in 1) echo "${options[$opt-1]}" ;; 2) echo "${options[$opt-1]}" ;; 9) echo "Bye bye!" break ;; esacdone Output: 1) show all elements 2) add elements 3) load file 4) write to file 5) generate lines 6) clear file 7) clear elements 8) choose file 9) exit #?
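If all you want is to defeat the column-major ordering and can live with one option per line, there is a much smaller trick: bash lays out the select menu based on the COLUMNS variable, so forcing a single column works without replacing select:
COLUMNS=1
select opt in "${OPTIONS[@]}"; do
  ...
done
Set COLUMNS back afterwards if other code in the script relies on it.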
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67459/" ] }
203,329
Hello everyone I was wondering if it's possible to use sendmail in a way that I can string multiple email addresses in the To: as such From: sendmailTo: [email protected];[email protected]: Did You Both Receive It?I hope you did Instead of using From: sendmailTo: [email protected]: [email protected]: Did You Both Receive It?I hope you did
To put multiple addresses on the To: or Cc: or Bcc: line, separate them by a comma (plus optional spaces). There are mail readers that allow typing a semicolon to separate addresses and show addresses separated by semicolons, but this is not standard syntax. From: sendmailTo: [email protected], [email protected]: Did You Both Receive It? You can split the header onto multiple lines after the comma (and at some other places in the address, but this is trickier). The continuation line must start with at least one space or tab. From: sendmailTo: [email protected], [email protected]: Did You Both Receive It?
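An end-to-end sketch using sendmail's standard -t flag, which reads the recipients from the headers themselves (addresses are placeholders):
/usr/sbin/sendmail -t <<'EOF'
From: sendmail
To: one@example.com, two@example.com
Subject: Did You Both Receive It?

I hope you did
EOF
Note the blank line between the headers and the body; it is required.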
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89568/" ] }
203,338
Yesterday I add a SSD to my PC configuration and I make on it a fresh installation. At the moment of installation I replace my old HDD and there was only the SSD. When the installation finish I make a manually shutdown to attach the HDD with cables and then turn on the pc. After that I can't open my information on the HDD but in BIOS everything seems fine. From second HDD I can mount only boot partition which is 524MB from 500GB HDD. When I check with fdisk -l what is the situation the answer looks fine: Disk /dev/sda: 128.0 GB, 128035676160 bytes255 heads, 63 sectors/track, 15566 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x000d66f4 Device Boot Start End Blocks Id System/dev/sda1 * 1 64 512000 83 LinuxPartition 1 does not end on cylinder boundary./dev/sda2 64 15567 124521472 8e Linux LVMDisk /dev/sdb: 500.1 GB, 500107862016 bytes255 heads, 63 sectors/track, 60801 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x16481d17 Device Boot Start End Blocks Id System/dev/sdb1 * 1 64 512000 83 LinuxPartition 1 does not end on cylinder boundary./dev/sdb2 64 60802 487873536 8e Linux LVMDisk /dev/mapper/vg_andromeda-lv_root: 53.7 GB, 53687091200 bytes255 heads, 63 sectors/track, 6527 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000Disk /dev/mapper/vg_andromeda-lv_swap: 8136 MB, 8136949760 bytes255 heads, 63 sectors/track, 989 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000Disk /dev/mapper/vg_andromeda-lv_home: 65.7 GB, 65682800640 bytes255 heads, 63 sectors/track, 7985 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000 Here is a screenshot of computer:/// When I execute mount /dev/sdb2 /storage as root I get the following error: mount: unknown filesystem type 'LVM2_member' When I run vgs here is the answer: WARNING: Duplicate VG name vg_andromeda: Existing gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73WARNING: Duplicate VG name vg_andromeda: Existing gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73WARNING: Duplicate VG name vg_andromeda: gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73WARNING: Duplicate VG name vg_andromeda: gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73 So can anyone helps me because I can't open my information from the HDD. I've tried to mount /dev/sdb and /dev/sdb2 (there is no problem with /dev/sdb1 because there is the boot partition). On the fresh installation I use the same username and hostname as on the old. Also on the old HDD there is a other CentOS installation but there is a lot of information and I want copy it first to the SSD and then I'll format the HDD. Best regards,Georgi!
Volume group name should be unique on system, by design. Problem occurs when a disk is moved from one system to another. So you have few options (detailed below) Rename the VG externally [not mounted] disk(s). Rename the VG of your system (not realistic) Merge both volume group into a single one (probably needs to rename first) Option 1 - Rename the VG externally on the unmounted disk(s) Use the command vgrename . You need to use vgdisplay or vgs ,to retrieve the volume group UUID. $ vgs -o vg_name,vg_attr,vg_uuidVG Attr VG UUID vg_andromeda wz--n- gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY???? ?????? bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73$ vgrename bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73 vg_andromeda_old$ vgchange -ay vg_andromeda_old (please, edit/update this post with the actual ouput of the command vgs) Option 2 - Rename the VG of your system This is not realistic. You can't rename an active volume group, so you would have to boot on a CD/DVD, rename the VG, and fix your system configuration in various places (fstab, bootloader)... However, since your installation is fresh, you could reinstall your system with another name. Option 3 - Merge both volume group into a single one You could merge both VGs, but it has a few caveats; It only makes sense if both drives are meant to remain on the system. You can't have two LVs with the same name in a single VG. You have an SSD and an HDD, It's recommended to keep them on distinct VG for clarity. The vgmerge command seems to only merge two VGs by Name (not UUID), so you have to rename the duplicate VG anyway.
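After option 1, the old logical volumes become addressable under the new VG name; for example, using the LV names from the question's output:
mkdir -p /storage
mount /dev/vg_andromeda_old/lv_home /storage
You mount the logical volumes, not /dev/sdb2 itself; that is why mount complained about the 'LVM2_member' filesystem type.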
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
203,339
I'd like to run a script if the XFCE session is locked and unlocked. Is there a way that I can intercept this and perform certain actions when the desktop is locked or unlocked? I have found following solutions: for Gnome - Run script on screen lock/unlock for xscreensaver - How do I run a script on unlock? But I'm using light-locker and no screen saver. I was trying to monitor DBUS but it doesn't seem the light-locker emits any signals. One option would be to modify xflock4 but that would help only with screen locking. Is there any way for light-locker ?
The previous answer helped me write this fragment of bash script that handles Lock and Unlock session events for the current session. I use it to suspend browser processes when the session is locked and to resume them when it unlocks. Tested under Debian unstable (Xfce 4.12) Enjoy! session=/org/freedesktop/login1/session/$XDG_SESSION_IDiface=org.freedesktop.login1.Sessiondbus-monitor --system "type=signal,path=$session,interface=$iface" 2>/dev/null | while read signal stamp sender arrow dest rest; do case "$rest" in *Lock) echo LOCKED at $stamp pause $@;; *Unlock) echo UNLOCKED at $stamp resume $@;; #unknown Session signal received *)# echo $signal $stamp $sender $arrow $dest $rest esacdone
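Here pause and resume are whatever functions or scripts you supply; purely illustrative stubs in the spirit of the browser use case mentioned above:
pause()  { pkill -STOP -x firefox; }   # freeze the browser when the session locks
resume() { pkill -CONT -x firefox; }   # thaw it again on unlock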
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43656/" ] }
203,364
On Debian 8 jessie I've removed python: perry@perry:~$ sudo apt-get remove pythonReading package lists... DoneBuilding dependency tree Reading state information... DonePackage 'python2.7' is not installed, so not removed0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded. But somehow I can still launch python from the terminal. perry@perry:~$ pythonPython 2.7.9 (default, Apr 29 2015, 18:34:06) [GCC 4.9.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>> I haven't installed it from source or from any other place but apt. How is this possible and how can I remove python completely?
It turned out that the additional package python-minimal had python installed. One does then not only have to do: sudo apt-get remove python but also: sudo apt-get remove python-minimal
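On Debian/Ubuntu, a quick way to discover which package owns a stray binary like this:
dpkg -S "$(command -v python)"    # e.g. prints: python-minimal: /usr/bin/python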
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/203364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115042/" ] }
203,371
When I try to run ./script.sh I get Permission denied, but when I run bash script.sh everything is fine. What did I do wrong?
Incorrect POSIX permissions It means you don't have the execute permission bit set for script.sh . When running bash script.sh , you only need read permission for script.sh . See What is the difference between running “bash script.sh” and “./script.sh”? for more info. You can verify this by running ls -l script.sh . You may not even need to start a new Bash process. In many cases, you can simply run source script.sh or . script.sh to run the script commands in your current interactive shell. You would probably want to start a new Bash process if the script changes current directory or otherwise modifies the environment of the current process. Access Control Lists If the POSIX permission bits are set correctly, the Access Control List (ACL) may have been configured to prevent you or your group from executing the file. E.g. the POSIX permissions would indicate that the test shell script isexecutable. $ ls -l t.sh-rwxrwxrwx+ 1 root root 22 May 14 15:30 t.sh However, attempting to execute the file results in: $ ./t.shbash: ./t.sh: Permission denied The getfacl command shows the reason why: $ getfacl t.sh# file: t.sh# owner: root# group: rootuser::rwxgroup::r--group:domain\040users:rw-mask::rwxother::rwx In this case, my primary group is domain users which has had execute permissions revoked by restricting the ACL with sudo setfacl -m 'g:domain\040users:rw-' t.sh . This restriction can be lifted by either of the following commands: sudo setfacl -m 'g:domain\040users:rwx' t.shsudo setfacl -b t.sh See: Access Control Lists, Arch Linux Wiki Using ACLs with Fedora Core 2 Filesystem mounted with noexec option Finally, the reason in this specific case for not being able to run the script is that the filesystem the script resides on was mounted with the noexec option. This option overrides POSIX permissions to prevent any file on that filesystem from being executed. This can be checked by running mount to list all mounted filesystems; the mount options are listed in parentheses in the entry corresponding to the filesystem, e.g. /dev/sda3 on /tmp type ext3 (rw,noexec) You can either move the script to another mounted filesystem or remount the filesystem allowing execution: sudo mount -o remount,exec /dev/sda3 /tmp Note: I’ve used /tmp as an example here since there are good security reasons for keeping /tmp mounted with the noexec,nodev,nosuid set of options.
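For the common first case, the direct fix is simply to add the execute bit for yourself:
chmod u+x script.sh
./script.sh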
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/203371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59854/" ] }
203,376
I'm a NOVICE user of Linux and using Linux mint 17.1 I've reinstalled Linux Mint 20 time in last 3 days for that issue but could not fix it I'm trying to install ruby on rails using rvm what happened is if start a new Linux installation and try installing gems and ruby and stuff in one terminal session it installed successfully as soon as I close the terminal session I can't get those gems. By reinstalling it over 20 times in last 3 days I've searched on the web too. I've been told to put environment variable in /etc/environment I echoed path in that session and pasted that in /etc/environment file. even now I don't get my installed gems when I typed rvm -v I get the following errors Warning: PATH set to RVM ruby but GEM_HOME and/or GEM_PATH not set, see: https://github.com/wayneeseguin/rvm/issues/3212Warning! PATH is not properly set up, $GEM_HOME is not set, usually this is caused by shell initialization files - check them for 'PATH=...' entries, it might also help to re-add RVM to your dotfiles: 'rvm get stable --auto-dotfiles', to fix temporarily in this shell session run: 'rvm use ruby-2.2.2'.rvm 1.26.11 (latest) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/] I'm a very novice person when it comes to working with terminal I can run few basic commands to work with ruby. I would like to know How to set PATH How to set GEM_HOME How to set GEM_PATH Below is my full path echo $PATH/home/sharif/.rvm/gems/ruby-2.2.2/bin:/home/sharif/.rvm/gems/ruby-2.2.2@global/bin:/home/sharif/.rvm/rubies/ruby-2.2.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/sharif/.rvm/bin
Sounds like you want the command export for setting environment variables: export PATH=$PATH':/path/to/add'export GEM_HOME=$HOME/.gemexport GEM_PATH=$HOME/.gem That will only take effect for the current session, though. To make them more permanent, add those lines to your ~/.bashrc .
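Since the warning in the question quotes the commands itself, the RVM-native repair is often all that's needed (--default persists the selection across sessions):
rvm get stable --auto-dotfiles   # re-add the RVM lines to your shell startup files
rvm use ruby-2.2.2 --default     # select the ruby and make it the default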
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112991/" ] }
203,386
I have a question after reading about extended glob. After using shopt -s extglob , what is the difference between the following? ?(list): Matches zero or one occurrence of the given patterns. *(list): Matches zero or more occurrences of the given patterns. +(list): Matches one or more occurrences of the given patterns. @(list): Matches one of the given patterns. Yes, I have read the description that accompanies them, but for practical purposes I can't see situations where people would prefer ?(list) over *(list). That is, I don't see any difference. I've tried the following: $ ls > test1.in test2.in test1.out test2.out $ echo *(*.in) > test1.in test2.in $ echo ?(*.in) > test1.in test2.in I'd expect $ echo ?(*.in) to output test1.in only, from the description, but it does not appear to be the case. Thus, could anyone give an example where it makes a difference which type of extended glob is used? Source: http://mywiki.wooledge.org/BashGuide/Patterns#Extended_Globs
$ shopt -s extglob$ lsabbc abc ac$ echo a*(b)cabbc abc ac$ echo a+(b)cabbc abc$ echo a?(b)cabc ac$ echo a@(b)cabc
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115344/" ] }
203,401
I have recently installed CentOS (onto a machine with only one hard drive) and I would like to know how to partition the main hard drive into two. As it is a fresh install there is no data to lose, and I am using linux rescue from a live CD. Roughly: /dev/sda (50GB) split into /dev/sda1 (25GB) and /dev/sda2 (25GB). These are bogus numbers at the moment and I doubt the result will be exactly what I expect, but anything close or any ideas would be really great.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89568/" ] }
203,408
I'm running KDE on Fedora Workstation 21. When I wake up my machine after suspending it, it always prompts for my password. Is there any way to disable this? I've looked through the settings but can't see how this is done.
Power management -> Advanced settings -> Lock screen on resume. EDIT: In later Plasma versions, it's in System settings -> Desktop Behaviour -> Screen locking. In even later versions: System settings -> Workspace Behaviour -> Screen locking. Quick tip: the System Settings window has a useful search field; search for "lock" and it will highlight the relevant icons.
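If you prefer the command line, in Plasma 5 the same switch lives in kscreenlockerrc and can be flipped with kwriteconfig5; a sketch that assumes a stock Plasma 5 setup for the group/key names:
kwriteconfig5 --file kscreenlockerrc --group Daemon --key LockOnResume false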
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90367/" ] }
203,410
This is a follow-up question to A list of available DBus services . The following python code will list all available DBus services. import dbusfor service in dbus.SystemBus().list_names(): print(service) How do we list out the object paths under the services in python? It is ok if the answer does not involve python although it is preferred. I am using Ubuntu 14.04
QT setups provide the most convenient way to do it, via qdbus : qdbus --system org.freedesktop.UPower prints //org/org/freedesktop/org/freedesktop/UPower/org/freedesktop/UPower/Wakeups/org/freedesktop/UPower/devices/org/freedesktop/UPower/devices/line_power_ADP0/org/freedesktop/UPower/devices/DisplayDevice/org/freedesktop/UPower/devices/battery_BAT0 As to the python way... per the official docs (under standard interfaces ): There are some standard interfaces that may be useful across various D-Bus applications. org.freedesktop.DBus.Introspectable This interface has one method: org.freedesktop.DBus.Introspectable.Introspect (out STRING xml_data) Objects instances may implement Introspect which returns an XML description of the object, including its interfaces (with signals and methods), objects below it in the object path tree, and its properties. So here's a very simplistic example that should get you started. It uses xml.etree.ElementTree and dbus : #!/usr/bin/env pythonimport dbusfrom xml.etree import ElementTreedef rec_intro(bus, service, object_path): print(object_path) obj = bus.get_object(service, object_path) iface = dbus.Interface(obj, 'org.freedesktop.DBus.Introspectable') xml_string = iface.Introspect() for child in ElementTree.fromstring(xml_string): if child.tag == 'node': if object_path == '/': object_path = '' new_path = '/'.join((object_path, child.attrib['name'])) rec_intro(bus, service, new_path)bus = dbus.SystemBus()rec_intro(bus, 'org.freedesktop.UPower', '/org/freedesktop/UPower') It recursively introspects org.freedesktop.UPower starting from e.g. /org/freedesktop/UPower and prints all object paths (node names): /org/freedesktop/UPower/org/freedesktop/UPower/Wakeups/org/freedesktop/UPower/devices/org/freedesktop/UPower/devices/DisplayDevice/org/freedesktop/UPower/devices/battery_BAT0/org/freedesktop/UPower/devices/line_power_ADP0 which is pretty much what you'd get if you used d-feet (not that you'd need it):
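On systemd-based systems, busctl prints the same object tree without any extra tooling:
busctl tree org.freedesktop.UPower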
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115297/" ] }
203,418
For what I have tried, TAB and C-i in .inputrc seems to mean the same thing, whatever I bind to one is bound to the other. I know that originally, it was the same thing and that this behavior is kind of inherited from the old times but nowadays, apart from terminal emulators, all X applications makes the difference between a C-i and a TAB press. So is there a way to run a terminal command ("complete" for example) when I press the TAB key and run another command when I press C-i ? (the same question applies for C-m and ENTER , C-z , C-d , and all these control sequences that I would like to send by other means than their original binding and apply my own commands to these precious keybindings) And by the way, if you could explain a little bit the process from a keypress to a shell interpretation that would help me understand. For now I understood that keyboard events are translated by Xmodmap, then by .inputrc and that the result is interpreted by the shell or something like this. I am currently using Guake, and sometimes gnome-terminal, as terminal emulators. After following the link proposed in a comment, it appears that the terminal emulator is the element of the chain that transforms TAB keysym from X server into C-i , and sends it to the bash shell because it doesn't understand such things as TAB , ENTER and siblings. So configuring readline itself won't work as it comes after the terminal emulator and before the bash shell. The question could be then precised like this: How to configure my terminal emulator so it translates TAB and C-i , ENTER and C-m , etc, to different pairs of character sequences? Maybe make TAB and ENTER send a new custom escape sequence, that could be mapped in .inputrc later to the original commands, and finally be able to use C-i and C-m for other purposes. Or leave TAB and ENTER and make C-i and C-m send escape sequences instead.
The terminal emulator translates events like “the Tab key was pressed” into sequences of characters that the application running in the terminal (bash, in your case) reads. See How do keyboard input and text output work? for a more detailed presentation of this topic. For historical reasons, a few of keys send a character that's the same as pressing Ctrl with some other character: Tab = Ctrl + I , Return = Ctrl + M , Esc = Ctrl + [ . This is because historical physical terminals did this, so applications that run in terminals expect it, so terminals do it. Both Guake and Gnome-terminal use the VTE library , which does not allow the mapping from key chords to character sequences to be configured. You have the same problem as bash - wrong key sequence bindings with control+alt+space Xterm has fully configurable key bindings. You can make the Tab key send a tab character (that's the default), or make it send the string hello , or whatever you choose. Xterm is configured via X resources . For example, to make Tab send the escape sequence \e[t when pressed and \e]t when released, put this in your ~/.Xresources : XTerm.vt100.translations: #override \ <Key>Tab: string("\033[t") \n\ <KeyRelease>Tab: string("\033]t") \n\ Or maybe you would leave Tab sending the tab character and make Ctrl + I send something else: XTerm.vt100.translations: #override \ Ctrl~Meta~Shift<Key>I: string("\033[a5i") \n\ Ctrl~Meta Shift<Key>I: string("\033[a6i") \n\ You can then bind \e[a5i to whatever you want in bash and other terminal applications with configurable keybindings. Note that by convention, multi-character escape sequences start with the escape character (often represented as \e or \033 or \x1b in programming languages and configuration files); some applications may have trouble with escape sequences starting with other characters, and of course you can't have a character that's both an escape sequence and a key of its own, unless you're willing to accept a timeout (that's how it works in applications such as vi where Esc on its own is bound to some functionality). If you define your own key sequences, take care not to clash with the ones sent by function and cursor keys, which are more or less de facto standardized .
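Once the terminal emits such a private sequence, wiring it up in bash is one line per key; illustrative bindings for the \e[a5i and \e[a6i examples above:
bind '"\e[a5i": menu-complete'               # attach a readline function to Ctrl+I
bind -x '"\e[a6i": echo Ctrl+Shift+I hit'    # or run an arbitrary shell command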
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115371/" ] }
203,420
I have a computer with Debian at home, acting as a server with two Ethernet cards: eth0 connected to the router in DHCP mode and eth1 to a switch (static address) that holds four more computers. I'm using the PC to be the gateway-firewall of the others. Since I only have four more PCs in the internal network, I don't want to set up BIND on the server. It is easier to use the file hosts to solve the names of the four PCs, but I can't make the server look into the file /etc/hosts . The server has no configuration at all; it's only using the defaults gotten from my ISP. How can I make the server resolve the addresses in the file hosts ?
That's because /etc/hosts is simply a file on your Debian server that it uses for its own name resolution; it doesn't serve DNS to other machines. Since you don't want to set up BIND, can I recommend that you look at dnsmasq instead? It's lightweight and can act as a DNS and DHCP server, simply by making use of your hosts file.
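A minimal sketch of /etc/dnsmasq.conf for this layout (eth1 matches the question; the DHCP range is an assumption, adjust to your eth1 subnet):
interface=eth1                                 # only serve the internal LAN
dhcp-range=192.168.100.50,192.168.100.150,12h  # hand out internal addresses
# no DNS stanza needed: dnsmasq answers from /etc/hosts by default
Add the four PCs to /etc/hosts on the server and point them at it as their DNS server.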
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203420", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72530/" ] }
203,446
I often run scripts to transform a CSV file which I then preview in LibreOffice. I often open the CSV with xdg-open file.csv . However, if I run that command when the file is already open, LibreOffice simply focuses that window - it doesn't reload the file from disk. Is there a way, from the command line, that I can specify a window to close in the GUI? I can't just kill the process, since LibreOffice shares a single pid for all its windows. I'm running the latest version of Cinnamon on Mint 17.1.
You could use xkill, xdotool or wmctrl. Type xkill in a terminal and then click on the window you want to close.
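For a non-interactive, scriptable close, matching the window by its title is a hedged but usually sufficient approach:
wmctrl -c 'file.csv'                                               # gracefully close the first window whose title matches
xdotool search --name 'file.csv' windowactivate --sync key ctrl+w  # same idea: focus it, then send Ctrl+W
Both ask the document window to close (like clicking its X button) rather than killing the whole LibreOffice process.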
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95808/" ] }
203,449
#!/bin/bashsearch_string="\/sbin\/iptables -A INPUT -p tcp --dport 12443 -j ACCEPT";delimeters=$(cat /root/firewall/firewall.txt);sed -i "s/$search_string/$delimeters$search_string/" /root/result.txt I want to add the contents of the /root/firewall/firewall.txt into /root/result.txt file before a line which is saved in search_string variable. If /root/firewall/firewall.txt contains one line above script works. But if the firewall.txt contains multiple lines, script breaks as: sed: -e expression #1, char 64: unterminated `s' command I think, new line characters causing the problem but I could not properly backslash it. search_string="\/sbin\/iptables -A INPUT -p tcp --dport 12443 -j ACCEPT";delimeters=$(cat /root/firewall/firewall.txt);replaced= "$delimeters" | sed -r 's/\\n/\\\\n/g'sed -i "s/$search_string/$replaced$search_string/" /root/result.txt How can I fix this issue?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81390/" ] }
203,467
I have two shells open. The first is in directory A. In the second, I remove directory A, and then recreate it. When I go back to the first shell, and type ls , the output is: ls: cannot open directory .: Stale file handle Why? I thought the first shell (the one that remained open inside a non-existent directory) would "freeze" while waiting for the next command, and wouldn't have "realized" that the directory was deleted and recreated. Does the shell hold a "deeper" reference to its current working directory other than the string $PWD ?
A directory (like any file) is not defined by its name. Think of the name as the directory's address . When you move the directory, it's still the same directory, just like if you move to a different house, you're still the same person. If you remove a directory and create a new one by the same name, it's a new directory, just like someone who moves into the house where you used to live isn't you. Each process has a working directory . The cd command in the shell changes the shell's current working directory. The pwd command prints the¹ path to the current working directory. When you removed the directory A, what this did was to remove the entry for A in its parent directory. The directory A itself remained in the filesystem, but in a detached state, with no name. It was not deleted yet because it was in use by a process, namely the first shell. When you changed the directory in the first shell, the directory was finally deleted. The same thing happens when a file is deleted while a process still has it open: the file's directory entry is removed immediately, and the file itself is removed when it stops being in use. Similarly, observe what happens when you move directories around. mkdir one twotouch one/1 two/2cd onels In another shell: mv one tmpmv two onemv tmp two In the first shell: ls The file 1 is in the directory that was originally called one and is now called two . The file 2 is in the directory that was originally called two and is now called one . ¹ More precisely, a path, which may not be unique if symbolic links or other subtleties are involved.
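A practical recovery in the first shell, once a new directory exists at the same path, is cd "$PWD": $PWD is just the old address stored as a string, so cd resolves it afresh from the root and lands you in the new directory, whereas the shell's notion of . still points at the deleted one.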
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56104/" ] }
203,497
I've been using Windows and Mac OS for the past 5 years and now I'm considering to use Linux on a daily basis. I've installed Ubuntu on a virtual machine and trying to understand how I can use Linux for my daily job (as a js programmer / web designer). Sorry for the novice question but it occurs to me that sometimes when I install a program through make config & make install it changes my system in ways that is not revertible easily. In windows when you install a program, you can uninstall it and hopefully if it plays by the book there will be no traces of the program left in the file system or registery, etc. In Mac OS you simply delete an App like a file. But in Linux there is apt-get and then there is make . I didn't quite understand how I can keep my Linux installation clean and tidy. It feels like any new app installation may break my system. But then Linux has a reputation of being very robust, so there must be something I don't understand about how app installation and uninstallation affects the system. Can anyone shed some light into this? Update: when installing an app, its files can spread anywhere really (package managers handle part of the issue) but there is a cool hack around that: use Docker for installing apps and keep them in their sandbox, specially if you're not gonna use them too often. It is also possible to run GUI apps like Firefox entirely in a Docker "sandbox".
A new install will seldom break your system (unless you do weird stuff like mixing source and binary). If you use precompiled binaries in Ubuntu then you can remove them and not have to worry about breaking your system, because a binary should list what it requires to run and your package manager will list what programs rely on that program for you to review. When you use source, you need to be more careful so you don't remove something critical (like glib). There are no warnings or anything else when you uninstall from source. This means you can completely break your machine. If you want to uninstall using apt-get then you'll use apt-get remove package as previously stated. Any programs that rely on that package will be uninstalled as well and you'll have a chance to review them. If you want to uninstall then generally the process is make uninstall . There is no warning (as I said above). make config will not alter your system, but make install will. As a beginner, I recommend using apt-get or whatever distro you use for binary packages. It keeps things nice and organized and unless you really want to it won't break your system. Hopefully, that clears everything up.
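One common middle ground when you do have to build from source on a Debian-family system is checkinstall, which wraps the install step in a package the manager can later remove; a sketch:
./configure && make
sudo checkinstall              # builds and installs a .deb instead of loose files
sudo apt-get remove <name>     # later, removes it like any other package
That keeps source builds from leaving untracked files scattered around the system.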
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/203497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115426/" ] }
203,606
CoreOS does not include a package manager but my preferred text editor is nano , not vi or vim . Is there any way around this? gcc is not available so its not possible to compile from source: core@core-01 ~/nano-2.4.1 $ ./configurechecking build system type... x86_64-unknown-linux-gnuchecking host system type... x86_64-unknown-linux-gnuchecking for a BSD-compatible install... /usr/bin/install -cchecking whether build environment is sane... yeschecking for a thread-safe mkdir -p... /usr/bin/mkdir -pchecking for gawk... gawkchecking whether make sets $(MAKE)... nochecking whether make supports nested variables... nochecking for style of include used by make... nonechecking for gcc... nochecking for cc... nochecking for cl.exe... noconfigure: error: in `/home/core/nano-2.4.1':configure: error: no acceptable C compiler found in $PATH To put this in context, I was following this guide when I found I wanted to use nano .
To do this on a CoreOS box, following the hints from the guide here : Boot up the CoreOS box and connect as the core user Run the /bin/toolbox command to enter the stock Fedora container. Install any software you need. To install nano in this case, it would be as simple as doing a dnf -y install nano (dnf has replaced yum) Use nano to edit files. "But wait -- I'm in a container!" Don't worry -- the host's file system is mounted at /media/root when inside the container. So just save a sample text file at /media/root/home/core/test.txt , then exit the container, and finally go list the files in /home/core . Notice your test.txt file? If any part of this is too cryptic or confusing, please ask follow up questions. :-) In the recent CoreOS 47.83.202103292105-0, the host is placed in /host instead of /media/root .
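Put together, an illustrative session for the original goal:
/bin/toolbox                          # enter the stock Fedora container
dnf -y install nano
nano /media/root/home/core/test.txt   # the host filesystem is under /media/root
exit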
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/203606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50131/" ] }
203,622
I've just made a mistake and removed the 'Name' column from the Archive Manager interface (see the screenshot of the current interface). Now I can't see the 'Name' column. I've tried modifying everything in the options, but I couldn't get it back. How do I restore the 'Name' column? If the program has a config file, where can I find it? I'm using Ubuntu 14.04.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/203622", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115508/" ] }
203,628
I am root. I would like to know whether a non-root user has write access to some files (thousands of them). How can I do this efficiently while avoiding process creation?
TL;DR

find / ! -type l -print0 |
  sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -w'

You need to ask the system if the user has write permission. The only reliable way is to switch the effective uid, effective gid and supplementary gids to those of the user and use the access(W_OK) system call (even that has some limitations on some systems/configurations). And bear in mind that not having write permission to a file does not necessarily guarantee that you can't modify the content of the file at that path.

The longer story

Let's consider what it takes, for instance, for $user to have write access to /foo/file.txt (assuming neither /foo nor /foo/file.txt is a symlink). He needs:

- search access to / (no need for read)
- search access to /foo (no need for read)
- write access to /foo/file.txt

You can see already that approaches (like @lcd047's or @apaul's) that check only the permission of file.txt won't work, because they could say file.txt is writable even if the user doesn't have search permission to / or /foo. And an approach like:

sudo -u "$user" find / -writable

won't work either, because it won't report the files in directories the user doesn't have read access to (as find running as $user can't list their content), even if he can write to them.

If we forget about ACLs, read-only file systems, FS flags (like immutable) and other security measures (AppArmor, SELinux, which can even distinguish between different types of writing), and only focus on traditional permission and ownership attributes, getting a given (search or write) permission is already quite complicated and hard to express with find. You need:

- if the file is owned by you, that permission for the owner (or uid 0)
- if the file is not owned by you, but the group is one of yours, that permission for the group (or uid 0)
- if it's not owned by you, and not in any of your groups, the other permissions apply (unless your uid is 0)

In find syntax, here as an example with a user of uid 1 and gids 1 and 2, that would be:

find / -type d \
  \( \
    -user 1 \( -perm -u=x -o -prune \) -o \
    \( -group 1 -o -group 2 \) \( -perm -g=x -o -prune \) -o \
    -perm -o=x -o -prune \
  \) -o -type l -o \
  -user 1 \( ! -perm -u=w -o -print \) -o \
  \( -group 1 -o -group 2 \) \( ! -perm -g=w -o -print \) -o \
  ! -perm -o=w -o -print

That one prunes the directories the user doesn't have search rights for and, for the other types of files (symlinks excluded as they're not relevant), checks for write access.

If you also want to consider write access to directories:

find / -type d \
  \( \
    -user 1 \( -perm -u=x -o -prune \) -o \
    \( -group 1 -o -group 2 \) \( -perm -g=x -o -prune \) -o \
    -perm -o=x -o -prune \
  \) ! -type d -o -type l -o \
  -user 1 \( ! -perm -u=w -o -print \) -o \
  \( -group 1 -o -group 2 \) \( ! -perm -g=w -o -print \) -o \
  ! -perm -o=w -o -print

Or, for an arbitrary $user with its group membership retrieved from the user database:

groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "
find / -type d \
  \( \
    -user "$user" \( -perm -u=x -o -prune \) -o \
    \( -group $groups \) \( -perm -g=x -o -prune \) -o \
    -perm -o=x -o -prune \
  \) ! -type d -o -type l -o \
  -user "$user" \( ! -perm -u=w -o -print \) -o \
  \( -group $groups \) \( ! -perm -g=w -o -print \) -o \
  ! -perm -o=w -o -print

(that's 3 processes in total: id, sed and find)

The best here would be to descend the tree as root and check the permissions as the user for each file:

find / ! -type l -exec sudo -u "$user" sh -c '
  for file do
    [ -w "$file" ] && printf "%s\n" "$file"
  done' sh {} +

(that's one find process plus one sudo and one sh process every few thousand files; [ and printf are usually built into the shell)

Or with perl:

find / ! -type l -print0 |
  sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -w'

(3 processes in total: find, sudo and perl)

Or with zsh:

files=(/**/*(D^@))
USERNAME=$user
for f ($files) {
  [ -w $f ] && print -r -- $f
}

(0 processes in total, but it stores the whole file list in memory)

Those solutions rely on the access(2) system call. That is, instead of reproducing the algorithm the system uses to check for access permission, we ask the system to do that check with the same algorithm (which takes into account permissions, ACLs, immutable flags, read-only file systems...) that it would use if you tried to open the file for writing, so this is the closest you're going to get to a reliable solution.

To test the solutions given here with the various combinations of user, group and permissions, you could do:

perl -e '
  for $u (1,2) {
    for $g (1,2,3) {
      $d1 = "u${u}g$g"; mkdir $d1;
      for $m (0..511) {
        $d2 = $d1.sprintf "/%03o", $m;
        mkdir $d2; chown $u, $g, $d2; chmod $m, $d2;
        for $uu (1,2) {
          for $gg (1,2,3) {
            $d3 = "$d2/u${uu}g$gg"; mkdir $d3;
            for $mm (0..511) {
              $f = $d3.sprintf "/%03o", $mm;
              open F, ">", "$f"; close F;
              chown $uu, $gg, $f; chmod $mm, $f
            }
          }
        }
      }
    }
  }'

That varies the user between 1 and 2 and the group between 1, 2 and 3, and limits itself to the lower 9 bits of the permissions, as that's already 9458694 files created; that's for directories, and then again for files. It creates all the possible combinations of u<x>g<y>/<mode1>/u<z>g<w>/<mode2>. The user with uid 1 and gids 1 and 2 would have write access to u2g1/010/u2g3/777 but not to u1g2/677/u1g1/777, for instance.

Now, all those solutions try to identify the paths of files that the user may open for writing; that's different from the paths where the user may be able to modify the content. To answer that more generic question, there are several things to take into account:

1. $user may not have write access to /a/b/file, but if he owns file (and has search access to /a/b, and the file system is not read-only, and the file doesn't have the immutable flag, and he's got shell access to the system), then he would be able to change the permissions of file and grant himself access.
2. Same thing if he owns /a/b but doesn't have search access to it.
3. $user may not have access to /a/b/file because he doesn't have search access to /a or /a/b, but that file may have a hard link at /b/c/file for instance, in which case he may be able to modify the content of /a/b/file by opening it via its /b/c/file path.
4. Same thing with bind mounts. He may not have search access to /a, but /a/b may be bind-mounted on /c, so he could open file for writing via its other path, /c/file.
5. He may not have write permission to /a/b/file, but if he has write access to /a/b he can remove or rename file in there and replace it with his own version. He would change the content of the file at /a/b/file even if that would be a different file.
6. Same thing if he's got write access to /a (he could rename /a/b to /a/c, create a new /a/b directory and a new file in it).

To find the paths that $user would be able to modify:

To address 1 and 2, we can't rely on the access(2) system call anymore. We could adjust our find -perm approach to assume search access to directories, or write access to files, as soon as you're the owner:

groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "
find / -type d \
  \( \
    -user "$user" -o \
    \( -group $groups \) \( -perm -g=x -o -prune \) -o \
    -perm -o=x -o -prune \
  \) ! -type d -o -type l -o \
  -user "$user" -print -o \
  \( -group $groups \) \( ! -perm -g=w -o -print \) -o \
  ! -perm -o=w -o -print

We can address 3 and 4 by recording the device and inode numbers of all the files $user has write permission to, and then reporting all the file paths that have those dev+inode numbers. This time we can use the more reliable access(2)-based approaches. Something like:

find / ! -type l -print0 |
  sudo -u "$user" perl -Mfiletest=access -0lne 'print 0+-w,$_' |
  perl -l -0ne '
    ($w,$p) = /(.)(.*)/;
    ($dev,$ino) = stat$p or next;
    $writable{"$dev,$ino"} = 1 if $w;
    push @{$p{"$dev,$ino"}}, $p;
    END {
      for $i (keys %writable) {
        for $p (@{$p{$i}}) {
          print $p;
        }
      }
    }'

5 and 6 are, at first glance, complicated by the t bit of the permissions. When applied to directories, that's the restricted deletion bit which prevents users (other than the owner of the directory) from removing or renaming the files they don't own (even though they have write access to the directory).

For instance, going back to our earlier example: if you have write access to /a, then you should be able to rename /a/b to /a/c and then recreate an /a/b directory and a new file in there. But if the t bit is set on /a and you don't own /a, then you can only do it if you own /a/b. That gives:

- If you own a directory, as per 1, you can grant yourself write access, and the t bit doesn't apply (and you could remove it anyway), so you can delete/rename/recreate any files or dirs in there; all file paths under there are yours to rewrite with any content.
- If you don't own it but have write access, then:
  - either the t bit is not set, and you're in the same case as above (all file paths are yours),
  - or it's set, and then you can't modify the files you don't own or don't have write access to, so, for our purpose of finding the file paths you can modify, that's the same as not having write permission at all.

So we can address all of 1, 2, 5 and 6 with:

find / -type d \
  \( \
    -user "$user" -prune -exec find {} + -o \
    \( -group $groups \) \( -perm -g=x -o -prune \) -o \
    -perm -o=x -o -prune \
  \) ! -type d -o -type l -o \
  -user "$user" \( -type d -o -print \) -o \
  \( -group $groups \) \( ! -perm -g=w -o \
    -type d ! -perm -1000 -exec find {} + -o -print \) -o \
  ! -perm -o=w -o \
  -type d ! -perm -1000 -exec find {} + -o \
  -print

That and the solution for 3 and 4 are independent; you can merge their output to get a complete list:

{
  find / ! -type l -print0 |
    sudo -u "$user" perl -Mfiletest=access -0lne 'print 0+-w,$_' |
    perl -0lne '
      ($w,$p) = /(.)(.*)/;
      ($dev,$ino) = stat$p or next;
      $writable{"$dev,$ino"} = 1 if $w;
      push @{$p{"$dev,$ino"}}, $p;
      END {
        for $i (keys %writable) {
          for $p (@{$p{$i}}) {
            print $p;
          }
        }
      }'
  find / -type d \
    \( \
      -user "$user" -prune -exec sh -c 'exec find "$@" -print0' sh {} + -o \
      \( -group $groups \) \( -perm -g=x -o -prune \) -o \
      -perm -o=x -o -prune \
    \) ! -type d -o -type l -o \
    -user "$user" \( -type d -o -print0 \) -o \
    \( -group $groups \) \( ! -perm -g=w -o \
      -type d ! -perm -1000 -exec sh -c 'exec find "$@" -print0' sh {} + -o -print0 \) -o \
    ! -perm -o=w -o \
    -type d ! -perm -1000 -exec sh -c 'exec find "$@" -print0' sh {} + -o \
    -print0
} | perl -l -0ne 'print unless $seen{$_}++'

As should be clear if you've read everything thus far, part of it at least deals only with permissions and ownership, not with the other features that may grant or restrict write access (read-only FS, ACLs, immutable flag, other security features...). And as we process it in several stages, some of that information may be wrong if files/directories are being created/deleted/renamed or their permissions/ownership modified while the script is running, as on a busy file server with millions of files.

Portability notes

All that code is standard (POSIX, Unix for the t bit) except:

- -print0 is a GNU extension now also supported by a few other implementations. With find implementations that lack support for it, you can use -exec printf '%s\0' {} + instead, and replace -exec sh -c 'exec find "$@" -print0' sh {} + with -exec sh -c 'exec find "$@" -exec printf "%s\0" {\} +' sh {} + .
- perl is not a POSIX-specified command but is widely available. You need perl-5.6.0 or above for -Mfiletest=access .
- zsh is not a POSIX-specified command. The zsh code above should work with zsh-3 (1995) and above.
- sudo is not a POSIX-specified command. The code should work with any version, as long as the system configuration allows running perl as the given user.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22534/" ] }
203,634
I am developing a program whose command executes other programs and returns their output into a text field. The parameters to the command are the command itself, as it would be typed on the command line, and the directory. So the routine that executes the operation first switches to the directory, then executes the command. For example, if I want to execute the command some.cmd in the directory /home/user , the parameters are command = 'some.cmd' and directory = '/home/user' . What I have found is that some.cmd does not work, but if I change command to /home/user/some.cmd the command works. However, the command ls -l works. I also notice that the cd command is not recognized. If I run it remotely via ssh, such as setting command to ssh user@localhost 'cd /home/user && ./some.cmd' , it works. It seems that some settings which are present when the command is executed in a shell are not present when it is run directly, but doing it via ssh seems to create the settings for it to work. Is there some explanation for this? UPDATE: After some enquiries, I learned that the API used for executing the commands was not running them in a shell, and so not with the normal environment available from the console. After executing the commands with /bin/sh -c "cd ..." the problem went away. Running ssh user@localhost 'command ...' showed me the environment such a session gets. I am not sure of the technical details, but apparently the environment available when you execute in your normal shell is not always available to commands executed directly by the OS.
When you try to execute a file, the system needs to know where to find that file. That's why it works if you specify the full path to it. The shell also has a PATH environment variable that stores a list of directories to look in to find an executable. That's why you don't have to specify the full path for ls . (And cd is not an external program at all; it's a shell built-in, which is why it isn't recognized when commands are executed without a shell.)
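A quick hedged demonstration you can try in a terminal ( some.cmd and its location are just the example names from the question; exact paths and output vary by system):

$ command -v ls                 # the shell finds ls by searching PATH
/bin/ls
$ cd /home/user
$ ./some.cmd                    # works: an explicit path bypasses the PATH search
$ some.cmd                      # fails unless /home/user is in PATH
sh: some.cmd: command not found
$ /bin/sh -c 'cd /home/user && ./some.cmd'   # roughly what the fixed API call does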
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26026/" ] }
203,637
How can I run a script-based tool that will process files continuously downloaded to a given directory, as they arrive? I'd like to minimize delay (~1 second is OK); the script can have its own infinite loop. I know a few ways, like:

- autologin a user, with .bashrc or .profile calling my script
- fork the script from cron, then bail out if it is already running
- use init scripts somehow (I guess it varies between distributions)

Which method would work best?
use init scripts somehow (I guess it varies between distributions) It does indeed. Here's the systemd way, which doesn't involve System 5 rc scripts at all. It's two units. Because they are non-package non-system units, they go in /etc/systemd/system . The first is a service unit that describes running your program as a dæmon:

# /etc/systemd/system/example-spooler.service
[Unit]
Description=Process files in /var/spool/example/
Documentation=http://unix.stackexchange.com/questions/203637/

[Service]
ExecStart=/usr/local/bin/example-spooler /var/spool/example/

Note that you don't have to explicitly start or stop this service. It is path activated. The path unit that describes the path that systemd monitors, and what it looks for, is the second unit file:

# /etc/systemd/system/example-spooler.path
[Unit]
Description=Watch /var/spool/example/ and activate example-spooler.service
Documentation=http://unix.stackexchange.com/questions/203637/

[Path]
DirectoryNotEmpty=/var/spool/example/

[Install]
WantedBy=multi-user.target

To auto-start this at bootstrap, run systemctl preset example-spooler.path . To start it now, run systemctl start example-spooler.path . Further reading: Lennart Poettering (2013-10-07). systemd.path . systemd manual pages. freedesktop.org.
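The answer leaves /usr/local/bin/example-spooler itself unspecified. As a hedged sketch (the script body, the .done directory convention and the process-one-file helper are assumptions for illustration, not part of the answer), a shell spooler compatible with DirectoryNotEmpty= activation could look like:

#!/bin/sh
# example-spooler: process every file currently in the spool directory, then exit;
# systemd re-activates the service whenever the directory becomes non-empty again.
spool=${1:?usage: example-spooler /path/to/spool}
done_dir=${spool%/}.done          # processed files must leave the watched directory
mkdir -p "$done_dir"
for f in "$spool"/*; do
    [ -f "$f" ] || continue       # skip when the glob matched nothing
    process-one-file "$f" &&      # hypothetical per-file handler
        mv -- "$f" "$done_dir/"   # move aside so the spool can empty out
done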
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115515/" ] }
203,641
I have written a shell script to get files from another UNIX server, but the files are not being copied. Could someone help me spot what I am doing wrong?

sftp username@server:$path
get ubpbilp* ./
mget cust.cmp* bunc.cmp* ./
echo "Your files are copied."
Perhaps like this:

sftp username@server <<EOT
cd $path
get ubpbilp*
get cust.cmp*
get bunc.cmp*
quit
EOT

as sftp (at least in older OpenSSH releases) doesn't support mget .
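If your sftp supports batch mode ( -b , available in OpenSSH), an equivalent hedged sketch keeps the commands in a separate file. Note that -b generally requires non-interactive (e.g. key-based) authentication, and the paths here are just examples:

cat > /tmp/sftp-batch <<'EOF'
cd /remote/path
get ubpbilp*
get cust.cmp*
get bunc.cmp*
EOF
sftp -b /tmp/sftp-batch username@server && echo "Your files are copied."

In batch mode, sftp aborts on the first failing command unless the command is prefixed with - , which makes failures easier to detect in a script.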
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69047/" ] }
203,723
I tried both netstat and lsof , but it appears it's not possible to see the connections to my LXC guests. Is there a way to achieve this, ideally for all guests at once? Essentially, what throws me off here is that I can see the processes of the guests as long as I run as superuser, and I can also see the veth interfaces that get dynamically created per guest. Why can I not see the connections of processes that are otherwise visible?
The kernel exposes connection state in /proc/net/tcp , /proc/net/udp , etc., but the namespaces separate the network stack, so if an application is running inside a container (a different namespace) and is connected to the network, the host's /proc/net/tcp won't show its connection. conntrack can be used to show the whole machine's connections, but it doesn't work for some interfaces such as WireGuard. The ip -all netns exec command can be used to run a command inside every network namespace, but it is limited to namespaces created with the ip command.

From the perspective of an application running in a container, its network stack state is still visible on the host at /proc/$pid/net/tcp . So, as a workaround until I write a proper tool in C, I wrote a little bash script that loops over /proc/$pid/net/tcp (or udp ) and joins all the states, to be able to list the whole machine's connections.

The script first joins all the /proc/$pid/net/tcp or /proc/$pid/net/udp files, sorts them, removes duplicates, translates the values into readable text and prints them (the script requires find , grep , xargs , awk with strtonum , sort and uniq ).

For TCP:

find /proc/ 2>/dev/null | grep tcp | grep -v task | grep -v sys/net |
  xargs grep -v rem_address 2>/dev/null |
  awk '{x=strtonum("0x"substr($3,index($3,":")-2,2)); y=strtonum("0x"substr($4,index($4,":")-2,2)); for (i=5; i>0; i-=2) x = x"."strtonum("0x"substr($3,i,2)); for (i=5; i>0; i-=2) y = y"."strtonum("0x"substr($4,i,2))}{printf ("%s\t:%s\t ----> \t %s\t:%s\t%s\n",x,strtonum("0x"substr($3,index($3,":")+1,4)),y,strtonum("0x"substr($4,index($4,":")+1,4)),$1)}' |
  sort | uniq --check-chars=25

For UDP:

find /proc/ 2>/dev/null | grep udp | grep -v task | grep -v sys/net |
  xargs grep -v rem_address 2>/dev/null |
  awk '{x=strtonum("0x"substr($3,index($3,":")-2,2)); y=strtonum("0x"substr($4,index($4,":")-2,2)); for (i=5; i>0; i-=2) x = x"."strtonum("0x"substr($3,i,2)); for (i=5; i>0; i-=2) y = y"."strtonum("0x"substr($4,i,2))}{printf ("%s\t:%s\t ----> \t %s\t:%s\t%s\n",x,strtonum("0x"substr($3,index($3,":")+1,4)),y,strtonum("0x"substr($4,index($4,":")+1,4)),$1)}' |
  sort | uniq --check-chars=25

The output looks like this (note that the pid is not accurate and is just used to identify the container):

127.0.0.1   :80     ---->  0.0.0.0         :0     /proc/10176/net/tcp:
192.168.0.2 :33882  ---->  192.30.253.125  :443   /proc/10176/net/tcp:
192.168.0.2 :34020  ---->  192.30.253.125  :443   /proc/10176/net/tcp:
192.168.0.2 :34162  ---->  192.30.253.125  :443   /proc/10176/net/tcp:
192.168.0.2 :36242  ---->  192.30.253.124  :443   /proc/10176/net/tcp:
192.168.0.2 :37324  ---->  192.30.253.124  :443   /proc/10176/net/tcp:
192.168.0.2 :40122  ---->  216.239.38.21   :80    /proc/10176/net/tcp:
192.168.0.2 :40124  ---->  216.239.38.21   :80    /proc/10176/net/tcp:

Also, I found a great tool for managing namespaces, with some very useful commands: nsutils .
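As a hedged aside (not part of the script above): when you already know the pid of one process inside a given guest, util-linux's nsenter can enter that process's network namespace and run an ordinary socket-listing tool there, one guest at a time:

# list the sockets as seen inside the container that holds pid 10176
nsenter -t 10176 -n ss -tunap

This doesn't give the whole-machine view the script produces, but it is handy for inspecting a single container.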
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/203723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5462/" ] }
203,768
I have updated my KVM management script for Ubuntu 14.04 KVM hosts to support Debian 8 guests. After a manual installation (my preseed script does not work yet), I am stuck with the following message on bootup (see screenshot). During the installation, I:

- selected only the SSH server and base system utilities
- set the GRUB bootloader to install to the only listed option
- used the guided partitioning mode with everything on one partition
- used the local UK mirror

Is there some step I need to be careful to take, or can Debian 8 not yet be installed as a KVM guest?

Update: After giving up and deciding to just upgrade a Debian 7 VM to Debian 8 by updating all of the lines in /etc/apt/sources.list to jessie instead of wheezy , I found that I eventually got the same behaviour. However, this instance had a static IP and I found that I could still SSH into the server on that IP, so it looks like this is some sort of graphics issue where the server does manage to boot up; we just can't see the login text. How can I resolve this?

Update: This time, on the Debian installation created by upgrading Debian 7, I can choose "Advanced" from the GRUB menu and select the option with (sysvinit), which works for now. I am hoping this can lead to an explanation of what is going wrong with the version that normally gets booted.
Thanks to the link @Someone posted in the comments to the question, I was able to pull this content, which fixed the issue for me:

1. On the GRUB boot screen, press the e key to edit the configuration.
2. You will be shown an edit screen. Scroll down using the keyboard down arrow; you want the line that starts with linux .
3. Add the text console=ttyS0 after the word quiet , then press Ctrl+x to proceed.
4. Now, as root or using sudo, run the command systemctl enable getty@ttyS0 in order to never have to go through all those steps again.
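To make the kernel argument itself survive future kernel and GRUB updates (a common companion step, though not part of the quoted fix), you would typically edit /etc/default/grub inside the guest and regenerate the configuration:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet console=ttyS0"

then run, as root:

update-grub    # regenerates /boot/grub/grub.cfg on Debian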
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/203768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64440/" ] }