Columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
159,530
I never really understood why a window system must have a server. Why do desktop environments, display managers and window managers need xorg-server? Is it only to have a layer of abstraction on top of the graphics card? Why do window systems employ a client-server model? Wouldn't inter-process communication via named pipes be simpler?
I think you've already noticed that some sort of "server" is needed. Each client (desktop environment, window manager, or windowed program) needs to share the display with all of the others, and they need to be able to display things without knowing the details of the hardware, or knowing who else is using the display. So the X11 server provides the layer of abstraction and sharing that you mentioned, by providing an IPC interface. X11 could probably be made to run over named pipes, but there are two big things that named pipes can't do: they only communicate in one direction, and if two processes start putting data into the "sending" end of a named pipe, the data will get intermingled. In fact, most X clients talk to the server using a "new and improved" named pipe called a UNIX-domain socket. It's a lot like a named pipe except that it lets processes talk in both directions, and it keeps track of who said what. These are the same sorts of things that the network has to do, and so UNIX-domain sockets use the same programming interface as the TCP/IP sockets that provide network communications. But from there, it's really easy to say "What if I ran the server on a different host than the client?" Just use a TCP socket instead of the UNIX socket, and voila: a remote-desktop protocol that predates Windows RDP by decades. I can ssh to four different remote hosts and run synaptic (graphical package manager) on each of them, and all four windows appear on my local computer's display.
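For instance, assuming the remote host allows X11 forwarding (X11Forwarding yes in its sshd_config) and has synaptic installed, that last trick boils down to something like this, where user@remotehost is just a placeholder:
$ ssh -X user@remotehost
remotehost$ synaptic &
The program runs on the remote machine, but its window is drawn by the X server on your local display.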
{ "source": [ "https://unix.stackexchange.com/questions/159530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84985/" ] }
159,531
I am trying to compile nginx with a module: https://github.com/leev/ngx_http_geoip2_module . Before the nginx compilation, this library: https://github.com/maxmind/libmaxminddb needs to be installed. I followed the instructions ( https://github.com/maxmind/libmaxminddb/blob/master/README.md#installing-from-a-tarball ), compiled and installed the library. After the installation, ldconfig -p | grep maxminddb gives: libmaxminddb.so.0 (libc6,x86-64) => /usr/local/lib/libmaxminddb.so.0 libmaxminddb.so (libc6,x86-64) => /usr/local/lib/libmaxminddb.so However, when I configure nginx with ngx_http_geoip2_module, it complains during configure: adding module in /home/cilium/ngx_http_geoip2_module checking for MaxmindDB library ... not found ./configure: error: the geoip2 module requires the maxminddb library. which is exactly the library I've already installed. This error seems to come from the config file of ngx_http_geoip2_module : ngx_feature="MaxmindDB library" ngx_feature_name= ngx_feature_run=no ngx_feature_incs="#include <maxminddb.h>" ngx_feature_libs=-lmaxminddb . auto/feature if [ $ngx_found = yes ]; then ngx_addon_name=ngx_http_geoip2_module HTTP_MODULES="$HTTP_MODULES ngx_http_geoip2_module" NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_geoip2_module.c" CORE_LIBS="$CORE_LIBS -lmaxminddb" else cat << END $0: error: the geoip2 module requires the maxminddb library. END exit 1 fi Does anyone know what may have gone wrong here? UPDATE: some relevant output by sh -x ./configure .. : + echo adding module in /home/cilium/ngx_http_geoip2_module adding module in /home/cilium/ngx_http_geoip2_module + test -f /home/cilium/ngx_http_geoip2_module/config + . /home/cilium/ngx_http_geoip2_module/config + ngx_feature=MaxmindDB library + ngx_feature_name= + ngx_feature_run=no + ngx_feature_incs=#include <maxminddb.h> + ngx_feature_libs=-lmaxminddb + . auto/feature + echo checking for MaxmindDB library ...\c checking for MaxmindDB library ...+ cat + ngx_found=no + test -n ... 
+ [ -x objs/autotest ] + echo not found not found + echo ---------- + cat objs/autotest.c + echo ---------- + echo cc -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/chromium/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/google-sparsehash/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/google-sparsehash/gen/arch/linux/x64/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/protobuf/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/re2/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/out/Debug/obj/gen -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/out/Debug/obj/gen/protoc_out/instaweb -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/apr/src/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/aprutil/src/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/apr/gen/arch/linux/x64/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/aprutil/gen/arch/linux/x64/include -o objs/autotest objs/autotest.c -Wl,-Bsymbolic-functions -Wl,-z,relro -lmaxminddb + echo ---------- + rm -rf objs/autotest.c + [ no = yes ] + cat ./configure: error: the geoip2 module requires the maxminddb library. + exit 1
{ "source": [ "https://unix.stackexchange.com/questions/159531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37082/" ] }
159,540
I'm trying to install php-5.3 on Arch Linux, but bison is too new, so I built an older version of bison from source. It appears it installs itself into /usr/local by default. (Is this some kind of convention?) So I'm now wondering if I can install more than one version of bison, i.e., side by side with the default system one and the one I just installed. These kinds of things are likely to be rarely needed. I'm just curious about it. Is this a hard thing to do? How should I go about it?
{ "source": [ "https://unix.stackexchange.com/questions/159540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29867/" ] }
159,557
In a shell script, how do I easily and non-invasively test for write access to a file without actually attempting to modify the file? I could parse the output of stat , but that seems really complex, and perhaps brittle, though I'm not sure how much stat output differs across implementations and time. I could append to the end of the file and see if that succeeds, but that's potentially dangerous, for two reasons I can think of: I now have to remove the addition, and in case some other process writes to the file, this immediately becomes non-trivial as my line is no longer the last one. Any process reading the file may have arbitrary requirements on the contents of that file, and I may just have broken that application.
Just use the -w flag of the test utility: [ -w /path/to/file ] && echo "writeable" || echo "write permission denied" Note that if you're going to write to the file later, it's still possible that you won't be able to write to it. The file may have moved, the permissions may have changed, etc. It can also happen that -w detects write permissions but some other factor intervenes to make the file not writable.
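Because of that window for things to change, a defensive sketch (the path is a placeholder) is to test first but still check the result of the actual write:
file=/path/to/file
if [ -w "$file" ]; then
    printf 'new data\n' >>"$file" || echo "write failed despite -w" >&2
else
    echo "write permission denied" >&2
fi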
{ "source": [ "https://unix.stackexchange.com/questions/159557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18863/" ] }
159,569
I've been given this one-liner to know how much memory my processes consume. $ ps -u $USER -o pid,rss,command | \ awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}' Now I'd like to create an alias for that command, but have an issue escaping ' characters: $ alias mm=ps -u $USER -o pid,rss,command | \ awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}' bash: alias: -u: not found bash: alias: myuser: not found bash: alias: -o: not found bash: alias: pid,rss,command: not found Total 0 MB I tried to escape the single quotes, but still it doesn't work. $ alias mm='ps -u $USER -o pid,rss,command | \ awk \'{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}\'' > Can you help me understand how to create this alias?
To make this an alias, which is possible, you need to use double quotes around the entire value for the alias. You'll also need to escape a few things within the alias as a result, and you need to escape any of the field arguments to awk since these will get interpreted as arguments by Bash as well when you're setting the alias. This worked for me: $ alias mm="ps -u $USER -o pid,rss,command | \ awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"}'" In the above I've done the following: Double quotes around alias' value alias mm="ps -u ...." Escaped awk's double quotes awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"} Escaped awk's fields awk '{print \$0}{sum+=\$2} END Would I use this? Probably not, I'd switch this to a Bash function instead, since it'll be easier to maintain and understand what's going on, but here's the alias if you still want it.
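For reference, a rough sketch of that function version; the single quotes inside the body survive untouched because nothing is expanded when the function is defined, only when it runs:
mm() {
    ps -u "$USER" -o pid,rss,command |
        awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}'
}
Put it in your ~/.bashrc and call it as mm, just like the alias.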
{ "source": [ "https://unix.stackexchange.com/questions/159569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8115/" ] }
159,594
I have recently come across a file whose name begins with the character '♫'. I wanted to copy this file, feed it into ffmpeg , and reference it in various other ways in the terminal. I usually auto-complete weird filenames but this fails as I cannot even type the first letter. I don't want to switch to the mouse to perform a copy-paste maneuver. I don't want to memorize a bunch of codes for possible scenarios. My ad hoc solution was to switch into vim , paste !ls and copy the character in question, then quit and paste it into the terminal. This worked but is quite horrific. Is there an easier way to deal with such scenarios? NOTE: I am using the fish shell if it changes things.
If the first character of the file name is printable but neither alphanumeric nor whitespace, you can use the [[:punct:]] glob pattern: $ ls *.txt f1.txt f2.txt ♫abc.txt $ ls [[:punct:]]*.txt ♫abc.txt
{ "source": [ "https://unix.stackexchange.com/questions/159594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83019/" ] }
159,672
I'm just trying to review basic terminal commands. Having said that, how do I create a text file using the terminal only?
You can't use a terminal to create a file. You can use an application running in a terminal. Just invoke any non-GUI editor ( emacs -nw , joe , nano , vi , vim , …). If you meant using the command line, then you are asking how to create a file using the shell. See What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? The basic way to create a file with the shell is with output redirection . For example, the following command creates a file called foo.txt containing the line Hello, world. echo 'Hello, world.' >foo.txt If you want to write multiple lines, here are a few possibilities. You can use printf . printf '%s\n' 'First line.' 'Second line.' 'Third line.' >foo.txt You can use a string literal containing newlines. echo 'First line. Second line. Third line.' >foo.txt or echo $'First line.\nSecond line.\nThird line.' >foo.txt Another possibility is to group commands. { echo 'First line.' echo 'Second line.' echo 'Third line.' } >foo.txt On the command line, you can do this more directly with cat . Redirect its output to the file and type the input line by line on cat 's standard input. Press Ctrl + D at the beginning of the line to indicate the end of the input. $ cat >foo.txt First line. Second line. Third line. Ctrl+D In a script you would use a here document to achieve the same effect: cat <<EOF >foo.txt First line. Second line. Third line. EOF If you just want to create an empty file, you can use the touch command: it creates the file if it doesn't exist, and just updates its last-modified date if it exists. touch foo.txt Equivalently: >>foo.txt i.e. open foo.txt for appending, but write 0 bytes to it — this creates the file but doesn't modify it. Unlike touch , this doesn't update the file's last-modified date if it already existed. To create an empty file, and remove the file's content if the file already existed, you can use >foo.txt
{ "source": [ "https://unix.stackexchange.com/questions/159672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68557/" ] }
159,695
This one-liner removes duplicate lines from text input without pre-sorting. For example: $ cat >f q w e w r $ awk '!a[$0]++' <f q w e r $ The original code I have found on the internets read: awk '!_[$0]++' This was even more perplexing to me as I took _ to have a special meaning in awk, like in Perl, but it turned out to be just a name of an array. Now, I understand the logic behind the one-liner: each input line is used as a key in a hash array, thus, upon completion, the hash contains unique lines in the order of arrival. What I would like to learn is how exactly this notation is interpreted by awk. E.g. what the bang sign ( ! ) means and the other elements of this code snippet. How does it work?
Here is an "intuitive" answer; for a more in-depth explanation of awk's mechanism see @Cuonglm's. In this case, !a[$0]++ , the post-increment ++ can be set aside for a moment; it does not change the value of the expression. So, look at only !a[$0] . Here: a[$0] uses the current line $0 as key to the array a , taking the value stored there. If this particular key was never referenced before, a[$0] evaluates to the empty string. !a[$0] The ! negates the value from before. If it was empty or zero (false), we now have a true result. If it was non-zero (true), we have a false result. If the whole expression evaluated to true, meaning that a[$0] was not set to begin with, the whole line is printed as the default action. Also, regardless of the old value, the post-increment operator adds one to a[$0] , so the next time the same value in the array is accessed, it will be positive and the whole condition will fail.
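A small demonstration of that bookkeeping, tracing what the array holds as each line of the question's sample input arrives:
$ printf 'q\nw\ne\nw\nr\n' | awk '{ print $0, "->", (a[$0] ? "seen before, suppressed" : "new, printed"); a[$0]++ }'
q -> new, printed
w -> new, printed
e -> new, printed
w -> seen before, suppressed
r -> new, printed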
{ "source": [ "https://unix.stackexchange.com/questions/159695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27985/" ] }
159,696
I really tried searching but could not find anything (it's hard to know what exactly to search for). I know how to do this with sed : print from current line until the line that matches SOMETHING: sed -n '/1/,/SOMETHING/p' But how do I do the same thing, but print from current line until the line that does not match SOMETHING? e.g. pipe this into sed : blah blah SOMETHING blah blah blah blah SOMETHINGblahblahblah SOMETHING blah blah NO MATCH HERE Then I want to filter out and print only the first 3 lines (but "3" can vary).
{ "source": [ "https://unix.stackexchange.com/questions/159696", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86919/" ] }
159,859
As per my knowledge/understanding, both help and man came about at the same time, or with very little time difference between them. Then GNU Info came in, and from what I have seen it is much more verbose, much more detailed and arguably much better than man. Many entries in man are cryptic even today. I have often wondered why Info, which is superior to man in many ways, didn't supersede man at all. I still see people producing man pages rather than info pages. Was it due to a lack of helpful tools for info? Something in the licenses of the two? Or some other factor which didn't get info the success it richly deserved? I did see a few questions on unix stackexchange, notably What is GNU Info for? and Difference between help, info and man command, among others.
To answer your question with at least a hint of factual background I propose to start by looking at the timeline of creation of man , info and other documentation systems. The first man page was written in 1971 using troff (nroff was not around yet), at a time when working on a CRT-based terminal was not common and printing of manual pages was the norm. The man pages use a simple linear structure. The man pages normally give a quick overview of a command, including its commandline options/switches. The info command actually processes the output from Texinfo typesetting syntax. This had its initial release in February 1986, a time when working on a text-based CRT was the norm for Unix users, but graphical workstations were still exclusive. The .info output from Texinfo provides basic navigation of text documents. From the outset it has had a different goal of providing complete documentation (for the GNU Project). Things like the use of the command and the commandline switches are only a small part of what a Texinfo file for a program contains. Although there is overlap, the (Tex)info system was designed to complement the man pages, and not to replace them. HTML and web browsers came into existence in the early 90s and relatively quickly replaced text-based information systems based on WAIS and gopher. Web browsers utilised the by then available graphical systems, which allow for more information (like underlined text for a hyperlink) than text-only systems allow. As the functionality info provides can be emulated in HTML and a web browser (possibly after conversion), the browser-based systems allow for greater ease of navigation (or at least less experience/learning). HTML was expanded and could do more things than Texinfo can. So for new projects (other than GNU software) a whole range of documentation systems has evolved (and is still evolving), most of them generating HTML pages. A recent trend for these is to make their input (i.e. what the human documenter has to provide) human readable, whereas Texinfo (and troff) is more geared to efficient processing by the programs that transform them.¹ info was not intended to be a replacement for the man pages, but they might have replaced them if the GNU software had included an info2man-like program to generate the man pages from a (subset of a larger) Texinfo file. Combine that with the fact that fully utilising the facilities that systems like Texinfo, (La)TeX, troff, HTML (+CSS) and reStructuredText provide takes time to learn, and that some of those are arguably easier to learn and/or more powerful, and there is little chance of market dominance for (Tex)info. ¹ E.g. reStructuredText , which can also be used to write man pages
{ "source": [ "https://unix.stackexchange.com/questions/159859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
159,873
I want to set up the CentOS 7 firewall such that all incoming requests are blocked except those from the originating IP addresses that I whitelist. For the whitelisted IP addresses, all ports should be accessible. I'm able to find a few solutions (not sure whether they will work) for iptables, but CentOS 7 uses firewalld . I can't find a way to achieve something similar with the firewall-cmd command. The interfaces are in the Public zone. I have also moved all the services to the Public zone already.
I'd accomplish this by adding sources to a zone. First check out which sources there are for your zone: firewall-cmd --permanent --zone=public --list-sources If there are none, you can start to add them; this is your "whitelist": firewall-cmd --permanent --zone=public --add-source=192.168.100.0/24 firewall-cmd --permanent --zone=public --add-source=192.168.222.123/32 (That adds a whole /24 and a single IP, just so you have a reference for both a subnet and a single IP.) Set the range of ports you'd like open: firewall-cmd --permanent --zone=public --add-port=1-22/tcp firewall-cmd --permanent --zone=public --add-port=1-22/udp This just does ports 1 through 22. You can widen this if you'd like. Now, reload what you've done: firewall-cmd --reload And check your work: firewall-cmd --zone=public --list-all Side note / editorial: It doesn't matter, but I like the "trusted" zone for a white-listed set of IPs in firewalld. You can make a further assessment by reading redhat's suggestions on choosing a zone . See also: RHEL 7 using Firewalls article Fedora FirewallD docs (fairly good, fedora's been using firewalld for some while) If you'd like to DROP packets outside this source, here's an example for dropping those outside the /24 I used as an example earlier; you can use rich rules for this, I believe. This is conceptual, I have not tested it (further than seeing that centos 7 accepts the command), but it should be easy enough to do a pcap and see if it behaves how you'd expect: firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.100.0/24" invert="True" drop'
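If that drop rule behaves as you expect, the same rule can presumably be made persistent by adding --permanent and reloading; this is untested here, so check the result with --list-rich-rules afterwards:
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.100.0/24" invert="True" drop'
firewall-cmd --reload
firewall-cmd --zone=public --list-rich-rules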
{ "source": [ "https://unix.stackexchange.com/questions/159873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87039/" ] }
160,019
I was trying to install the bsd-mailx utility. The package got installed, but I am wondering about the error. This is the error I get: Preconfiguring packages ... dpkg: warning: 'ldconfig' not found in PATH or not executable. dpkg: warning: 'start-stop-daemon' not found in PATH or not executable. dpkg: error: 2 expected programs not found in PATH or not executable. Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin. E: Sub-process /usr/bin/dpkg returned an error code (2)
First of all, the lines you are truly interested in are: dpkg: warning: 'ldconfig' not found in PATH or not executable. dpkg: warning: 'start-stop-daemon' not found in PATH or not executable. These errors have been reported several times by Debian and Ubuntu users (you can actually Google them for more information). It seems like the PATH variable isn't correctly set when the user tries to execute a command through sudo , which is probably what you are trying to do. Solution 1: Set sudo 's default secure path Open /etc/sudoers by running visudo in your terminal, and make sure the file includes the following line: Defaults env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" More information about this problem may be found here (Problems and tips > PATH not set). Solution 2: use the root account directly Don't use sudo , just switch to root to run your commands. Run one of the following commands to do so: $ sudo -i $ su Once you are logged in as root, just run your apt-get commands again: # apt-get ... You might have to set root's PATH first though. Edit /root/.bashrc (with root privileges of course), and add the following line: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Solution 3: try to pass the PATH variable to sudo at execution time. Just prefix the sudo call with the redefinition of the PATH variable: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin sudo apt-get ...
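To confirm that sudo's PATH really is the culprit before editing anything, a quick check along these lines can help (output will vary per system):
$ sudo sh -c 'echo "$PATH"; which ldconfig; which start-stop-daemon'
If the sbin directories are missing from the printed PATH and which finds nothing, one of the solutions above should take care of it.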
{ "source": [ "https://unix.stackexchange.com/questions/160019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87142/" ] }
160,212
I'm looking for a way to watch YouTube videos in terminal (not in a browser or another window, but right there, in any bash session). Is there a simple way to do this? I imagine something like this: $ youtube <video-url> I already know how to play a video using mplayer : $ mplayer -vo caca local-file.avi However, this opens a new window. It would be cool to play it in terminal. Also, it should be compatible with tmux sessions. I asked another question for how to prevent opening a new window . For those that wonder where I need such a functionality, I started an experimental project named TmuxOS -- with the concept that everything should run inside of a tmux session . So, indeed I need a video player for local and remote videos. :-)
You can download videos and/or just the audio and then watch/listen to them using youtube-dl . The script is written in Python and makes use of ffmpeg I believe. $ youtube-dl --help Usage: youtube-dl [options] url [url...] Options: General Options: -h, --help print this help text and exit --version print program version and exit -U, --update update this program to latest version. Make sure that you have sufficient permissions (run with sudo if needed) ... ... To download videos you simply give it the URL from the page you want the video on and the script does the rest: $ youtube-dl https://www.youtube.com/watch?v=OwvZemXJhF4 [youtube] Setting language [youtube] OwvZemXJhF4: Downloading webpage [youtube] OwvZemXJhF4: Downloading video info webpage [youtube] OwvZemXJhF4: Extracting video information [youtube] OwvZemXJhF4: Encrypted signatures detected. [youtube] OwvZemXJhF4: Downloading js player 7N [youtube] OwvZemXJhF4: Downloading js player 7N [download] Destination: Joe Nichols - Yeah (Audio)-OwvZemXJhF4.mp4 [download] 100% of 21.74MiB in 00:16 You can then use vlc or mplayer to watch these locally: $ vlc "Joe Nichols - Yeah (Audio)-OwvZemXJhF4.mp4" VLC media player 2.1.5 Rincewind (revision 2.1.4-49-gdab6cb5) [0x1cd1118] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface. Fontconfig warning: FcPattern object size does not accept value "0" Fontconfig warning: FcPattern object size does not accept value "0" Fontconfig warning: FcPattern object size does not accept value "0" Fontconfig warning: FcPattern object size does not accept value "0" OK but I want to watch these videos as they're streamed & in ASCII I found this blog article titled: On ascii, youtube and letting go (Wayback Machine) that demonstrates the method that I discussed in the chatroom, mainly using youtube-dl as the "backend" which does the downloading of the YouTube stream and then redirects it to some other app. This article shows it being done with mplayer : $ youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \ mplayer -vo aa -monitorpixelaspect 0.5 - The video being downloaded by youtube-dl is redirected via STDOUT above, -o - . There's a demo of the effect here . With the installation of additional libraries the ASCII video can be enhanced further. OK but I want the video in my actual terminal? I found this trick which allows video to be played in an xterm in the O'Reilly article titled: Watch Videos in ASCII Art . $ xterm -fn 5x7 -geometry 250x80 -e "mplayer -vo aa:driver=curses j.mp4" The above results in an xterm window being opened where the video plays. So I thought, why not put the peanut butter and the chocolate together like this: $ xterm -fn 5x7 -geometry 250x80 -e \ "youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \ mplayer -vo aa:driver=curses -" This almost works! I'm not sure why the video cannot play in the window, but it would seem like it should be able to. The window comes up and starts to play but then closes. I see video for a brief few seconds and then nothing. Perhaps the above will get you closer to your ultimate solution, or perhaps it just needs to be tweaked a bit on the options.
Additional libraries If you have libcaca installed (the colorized version of aalib ) and you reduce the font size in your gnome-terminal to something really small, like say 3, the following command will display a much better-looking ASCII video directly within the terminal: $ CACA_DRIVER=ncurses mplayer -vo caca video.mp4 Terminals It would seem that the choice of terminal can make a big difference as to whether mplayer can play directly inside the terminal or whether it opens a separate window. Caching in mplayer also made a dramatic difference in being able to play directly in one's terminal. Using this command I was able to play in terminator , at least for the first 1/4 of the video before it cut out: $ youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \ mplayer -cache 32767 -vo aa:driver=curses - The colored version used this command: $ youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \ CACA_DRIVER=ncurses mplayer -cache 64000 -vo caca - These same commands could play in gnome-terminal & xterm too. NOTE: The accompanying screenshot showed (from left to right) xterm , terminator , gnome-terminal , and terminology .
{ "source": [ "https://unix.stackexchange.com/questions/160212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45370/" ] }
160,356
Is there a key combination for bash to auto-complete a command from the history? In ipython and matlab, for example, this is achieved by pressing the up arrow after typing a few characters.
First of all, hitting tab in bash is even better since it autocompletes all executables in your PATH irrespective of whether they're in the history. That said, there are various ways of getting a command from your history: Use its number. If you know that the command you want was 3 commands ago, you can just run !-3 That will re-execute the command you ran three commands ago. Search for it. Type Ctrl r and start typing any text. The first command from the history that matches your text will be shown and hitting enter will execute it. Hit ▲ (up arrow). That will bring up the last command, press it again and you will go up your command history. When you find the one you want, hit enter . Add these lines to your ~/.inputrc (create the file if it doesn't exist): "\e[A": history-search-backward "\e[B": history-search-forward To immediately load the file, run bind -f ~/.inputrc ( source ). Now, type the first few characters of one of the commands you've previously run and hit ▲ . The first command from your history that starts with those characters will be shown. Hit ▲ again to see the rest and hit enter when you've found the one you want. Use the history command. As @Isaac explained, that will list all of the commands stored in your history file.
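If you just want to try the arrow-key search in the current session without editing ~/.inputrc, the same bindings can be set on the fly (bash only):
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'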
{ "source": [ "https://unix.stackexchange.com/questions/160356", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77305/" ] }
161,674
While I understand the greatness of udev and appreciate the developers' effort, I was simply wondering if there is an alternative to it. For instance, I might imagine there should be a way to make a startup script that creates most of the device nodes, which on my system (no changing hardware) are mostly the same anyway. The benefit or reason I would like to skip udev would be the same as for skipping dbus , namely reducing complexity and thereby increasing my chances of setting up the system more safely.
There are various alternatives to udev out there. Seemingly Gentoo can use something called mdev . Another option would be to attempt to use udev 's predecessor devfsd . Finally, you can always create all the device files you need with mknod . Note that with the latter there is no need to create everything at boot time since the nodes can be created on disk and not in a temporary file system as with the other options. Of course, you lose the flexibility of having dynamically created device files when new hardware is plugged in (eg a USB stick). I believe the standard approach in that era was to have every device file you could reasonably need already created under /dev (ie a lot of device files). Of course the difficulty in getting any of these approaches to work in a modern distro is probably quite high. The Gentoo wiki mentions difficulties in getting mdev to work with a desktop environment (let alone outside of Gentoo). The last devfsd release was in 2002, and I have no idea if it will even work at all with modern kernels. Creating the nodes manually is probably the most viable approach, but even disabling udev could be a challenge, particularly in distros using systemd ( udev is now part of systemd , which suggests a strong dependency). My advice is stick with udev ;)
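To illustrate the mknod approach, static nodes are created with the major/minor numbers listed in the kernel's devices.txt; for example, using the conventional Linux numbers and running as root:
# mknod /dev/null c 1 3
# mknod /dev/zero c 1 5
# mknod /dev/sda b 8 0
# chmod 666 /dev/null /dev/zero
The c or b argument selects a character or block device, followed by the major and minor numbers.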
{ "source": [ "https://unix.stackexchange.com/questions/161674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
161,675
When I try to install Jekyll on Elementary OS Luna with the command sudo gem install jekyll --no-rdoc --no-ri I get the following error. -- rbconfig (LoadError) from /usr/lib/ruby/vendor_ruby/1.8/rubygems.rb:29 from /usr/bin/gem:8:in `require' from /usr/bin/gem:8 Can anybody help me make sense of the error and maybe suggest a fix?
{ "source": [ "https://unix.stackexchange.com/questions/161675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87559/" ] }
161,682
I'm using Xubuntu 14.04 and typing on a keyboard without "Windows" and "context menu" keys (Unicomp Model M). Currently, I get context menu (mouse right-click) with Shift + F10 , but when touch-typing, I often miss F10 key. I wanted to create keyboard shortcut ( Alt + F1 ) for context menu, so when I opened keyboard settings in Xfce I expected to find definition of that shortcut (like I did for Whisker Menu, which I remapped to Alt + ` ), but it wasn't there.
{ "source": [ "https://unix.stackexchange.com/questions/161682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87563/" ] }
161,698
I have a fresh VPS I just bought, for playing only; no risk of any kind is involved. I noticed that due to my slow connection I have to wait seconds to finish writing commands and opening/closing files. So, I would like to know if I can connect to my VPS without ssh. PuTTY gives me an option to connect as raw , but when I choose it I cannot log in to my VPS.
{ "source": [ "https://unix.stackexchange.com/questions/161698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77324/" ] }
161,821
How can I delete all lines in a file using vi? At the moment I do that using something like this to remove all lines in a file: echo > test.txt How can I delete all lines using vi ? Note: Using dd is not a good option. There can be many lines.
In vi do :1,$d to delete all lines. The : introduces a command (and moves the cursor to the bottom). The 1,$ is an indication of which lines the following command ( d ) should work on. In this case the range from line one to the last line (indicated by $ , so you don't need to know the number of lines in the document). The final d stands for delete the indicated lines. There is a shorter form ( :%d ) but I find myself never using it. The :1,$d can be more easily "adapted" to e.g. :4,$-2d leaving only the first 3 and last 2 lines, deleting the rest.
{ "source": [ "https://unix.stackexchange.com/questions/161821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87669/" ] }
161,905
I'm trying to add unzipped files to an existing already zipped folder say new folder.zip . Is it possible to use zip -r new folder.zip after adding some unzipped files to them? Will this command compress the folder? Is there any alternative to do this?
Use the update flag: -u Example: zip -ur existing.zip myFolder This command will compress and add myFolder (and its contents) to existing.zip . Advanced Usage: The update flag actually compares the incoming files against the existing ones and will either add new files or update existing ones. Therefore, if you want to add/update a specific subdirectory within the zip file, just update the source as desired, and then re-zip the entire source with the -u flag. Only the changed files will be zipped. If you don't have access to the source files, you can unzip the zip file, then update the desired files, and then re-zip with the -u flag. Again, only the changed files will be zipped. Example: Original Source Structure ParentDir ├── file1.txt ├── file2.txt ├── ChildDir │ ├── file3.txt │ ├── Logs │ │ ├── logs1.txt │ │ ├── logs2.txt │ │ ├── logs3.txt Updated Source Structure ParentDir ├── file1.txt ├── file2.txt ├── ChildDir │ ├── file3.txt │ ├── Logs │ │ ├── logs1.txt │ │ ├── logs2.txt │ │ ├── logs3.txt │ │ ├── logs4.txt <-- NEW FILE Usage $ zip -ur existing.zip ParentDir > updating: ParentDir/ChildDir/Logs (stored 0%) > adding: ParentDir/ChildDir/Logs/logs4.txt (stored 96%)
{ "source": [ "https://unix.stackexchange.com/questions/161905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87738/" ] }
162,131
I would like to create a patch from a specific gcc branch by comparing it to the official release; so when I unpack the tarball of the stable release, I can apply the patch and get the equivalent of what was in that specific branch. It's the first time I need to create a patch, and my main concern is to get the options and the parsing right, since we are talking about an extremely important piece of software: diff -crB GccStable GccGit > /tmp/fromStabletoBranch.patch Is this enough, and is it the best way of doing it?
Yes, this is a good way to create a patch. In short: To create a patch for a single file, your command may look like diff -Naru file_original file_updated > file.patch where -N : treat absent files as empty -a : treat all files as text -r : recursively compare any subdirectories found -u : output NUM (default 3) lines of unified context To create a patch for a whole directory: diff -crB dir_original dir_updated > dfile.patch where -c : output NUM (default 3) lines of copied context -r : recursively compare any subdirectories -B : ignore changes whose lines are all blank Finally, to apply this patch one can run patch -p1 --dry-run < dfile.patch where the switch -p instructs patch to strip the path prefix so that files will be identified correctly. In most cases it should be 1 . Remove --dry-run if you are happy with the result printed on the screen.
{ "source": [ "https://unix.stackexchange.com/questions/162131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41194/" ] }
162,133
I want to run a bash script in a detached screen. The script calls a program a few times, each of which takes too long to wait for. My first thought was to simply open a screen and then call the script, but it appears that I can't detach (by ctrl-a d ) while the script is running. So I did some research and found this instruction to replace the shebang with the following: #!/usr/bin/screen -d -m -S screenName /bin/bash But that doesn't work either (the options are not recognized). Any suggestions? PS It occurs to me just now that screen -dmS name ./script.sh would probably work for my purposes, but I'm still curious about how to incorporate this into the script. Thank you.
The shebang line you've seen may work on some unix variants, but not on Linux. Linux's shebang lines are limited: you can only have one option. The whole string -d -m -S screenName /bin/bash is passed as a single option to screen , instead of being passed as different words. If you want to run a script inside screen and not mess around with multiple files or quoting, you can make the script a shell script which invokes screen if not already inside screen. #!/bin/sh if [ -z "$STY" ]; then exec screen -dm -S screenName /bin/bash "$0"; fi do_stuff more_stuff
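With the script set up this way, you run it normally and it re-launches itself inside a detached session; assuming the session name used above, you can list and reattach to it later with:
$ screen -ls
$ screen -r screenName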
{ "source": [ "https://unix.stackexchange.com/questions/162133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87866/" ] }
162,221
There are many ways to replace characters in a variable. The shortest way I have found so far is tr : OUTPUT=a\'b\"c\`d_123and_a_lot_more OUTPUT=$(echo "$OUTPUT"|tr -d "'\`\"") echo $OUTPUT Is there a faster way? And is this quoting-safe for quotes like ' , " and ` itself?
Let's see. The shortest I can come up with is a tweak of your tr solution: OUTPUT="$(tr -d "\"\`'" <<<$OUTPUT)" Other alternatives include the already mentioned variable substitution which can be shorter than shown so far: OUTPUT="${OUTPUT//[\'\"\`]}" And sed of course though this is longer in terms of characters: OUTPUT="$(sed s/[\'\"\`]//g <<<$OUTPUT)" I'm not sure if you mean shortest in length or in terms of time taken. In terms of length these two are as short as it gets (or as I can get it anyway) when it comes to removing those specific characters. So, which is fastest? I tested by setting the OUTPUT variable to what you had in your example but repeated several dozen times: $ echo ${#OUTPUT} 4900 $ time tr -d "\"\`'" <<<$OUTPUT real 0m0.002s user 0m0.004s sys 0m0.000s $ time sed s/[\'\"\`]//g <<<$OUTPUT real 0m0.005s user 0m0.000s sys 0m0.000s $ time echo ${OUTPUT//[\'\"\`]} real 0m0.027s user 0m0.028s sys 0m0.000s As you can see, the tr is clearly the fastest, followed closely by sed . Also, it seems like using echo is actually slightly faster than using <<< : $ for i in {1..10}; do ( time echo $OUTPUT | tr -d "\"\`'" > /dev/null ) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0025 $ for i in {1..10}; do ( time tr -d "\"\`'" <<<$OUTPUT > /dev/null ) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0029 Since the difference is tiny, I ran the above tests 10 times for each of the two and it turns out that the fastest is indeed the one you had to begin with: echo $OUTPUT | tr -d "\"\`'" However, this changes when you take into account the overhead of assigning to a variable, here, using tr is slightly slower than the simple replacement: $ for i in {1..10}; do ( time OUTPUT=${OUTPUT//[\'\"\`]} ) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0032 $ for i in {1..10}; do ( time OUTPUT=$(echo $OUTPUT | tr -d "\"\`'")) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0044 So, in conclusion, when you simply want to view the results, use tr but if you want to reassign to a variable, using the shell's string manipulation features is faster since they avoid the overhead of running a separate subshell.
{ "source": [ "https://unix.stackexchange.com/questions/162221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
162,393
I've heard that changing the hostname in new versions of fedora is done with the hostnamectl command. In addition, I recently (and successfully) changed my hostname on Arch Linux with this method. However, when running: [root@localhost ~]# hostnamectl set-hostname --static paragon.localdomain [root@localhost ~]# hostnamectl set-hostname --transient paragon.localdomain [root@localhost ~]# hostnamectl set-hostname --pretty paragon.localdomain The changes are not preserved after a reboot (contrary to many people's claims that it does). What is wrong? I really don't want to edit /etc/hostname manually. I should also note that this is a completely stock fedora. I haven't even gotten around to installing my core apps yet.
The command to set the hostname is definitely, hostnamectl . root ~ # hostnamectl set-hostname --static "YOUR-HOSTNAME-HERE" Here's an additional source that describes this functionality a bit more, titled: Correctly setting the hostname - Fedora 20 on Amazon EC2 . Additionally the man page for hostnamectl : HOSTNAMECTL(1) hostnamectl HOSTNAMECTL(1) NAME hostnamectl - Control the system hostname SYNOPSIS hostnamectl [OPTIONS...] {COMMAND} DESCRIPTION hostnamectl may be used to query and change the system hostname and related settings. This tool distinguishes three different hostnames: the high-level "pretty" hostname which might include all kinds of special characters (e.g. "Lennart's Laptop"), the static hostname which is used to initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and the transient hostname which is a default received from network configuration. If a static hostname is set, and is valid (something other than localhost), then the transient hostname is not used. Note that the pretty hostname has little restrictions on the characters used, while the static and transient hostnames are limited to the usually accepted characters of Internet domain names. The static hostname is stored in /etc/hostname, see hostname(5) for more information. The pretty hostname, chassis type, and icon name are stored in /etc/machine-info, see machine-info(5). Use systemd-firstboot(1) to initialize the system host name for mounted (but not booted) system images. There is a bug in Fedora 21 where SELinux prevents hostnamectl access, found here, titled: Bug 1133368 - SELinux is preventing systemd-hostnam from 'unlink' accesses on the file hostname . This bug seems to be related. There's an issue with the SELinux contexts not being applied properly to the file /etc/hostname upon installation. This manifests in the tool hostnamectl not being able to manipulate the file /etc/hostname . That same thread offered this workaround: $sudo restorecon -v /etc/hostname NOTE: That patches were applied to Anaconda (the installation tool) so that this issue should go away in the future for new users.
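You can confirm the change (and whether it survives a reboot) with:
$ hostnamectl status
$ cat /etc/hostname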
{ "source": [ "https://unix.stackexchange.com/questions/162393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78754/" ] }
162,402
I'd like to get a list of fail logs in the current directory for use in an outside script. The logs are identified by the pattern FAIL in the filename, so I've been using FAIL* to feed my script the files to open and process. However, for each FAIL file there are two types, a compressed file and an uncompressed one. I just want to open the uncompressed file. Is it possible to chain find FAIL* but not if *.gz/bz2/whatever exists?
{ "source": [ "https://unix.stackexchange.com/questions/162402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88039/" ] }
162,411
I am trying to understand how to use find -maxdepth 0 option. I have the below directory structure. --> file1 --> parent --> child1 --> file1 --> file2 --> child2 --> file1 --> file2 --> file1 Now, I execute my find command as below. find ./parent -maxdepth 0 -name "file1" find ./ -maxdepth 0 -name "file1" find . -maxdepth 0 -name "file1" With none of the above find commands, file1 gets returned. From man page of find , I see the below information. -maxdepth 0 means only apply the tests and actions to the command line arguments. I searched for some examples with -maxdepth 0 option and couldn't find any proper example. My find version is, find --version find (GNU findutils) 4.4.2 Can someone please provide me some pointers on which cases -maxdepth 0 option would be useful? EDIT When I execute the below command, I get the file1 getting listed twice. Is this intended to work this way? find . file1 -maxdepth 1 -name "file1" ./file1 file1
Let us suppose that we have file1 in the current directory.  Then: $ find . -maxdepth 0 -name "file1" $ find . file1 -maxdepth 0 -name "file1" file1 Now, let's look at what the documentation states: -maxdepth 0 means only apply the tests and actions to the command line arguments. In my first example above, only the directory . is listed on the command line.  Since . does not have the name file1 , nothing is listed in the output.  In my second example above, both . and file1 are listed on the command line, and, because file1 matches -name "file1" , it was returned in the output. In other words, -maxdepth 0 means do not search directories or subdirectories. Instead, only look for a matching file among those explicitly listed on the command line. In your examples, only directories were listed on the command line and none of them were named file1 . Hence, no output. In general, many files and directories can be named on the command line.  For example, here we try a find command with nine files and directories on the command line: $ ls d1 file1 file10 file2 file3 file4 file5 file6 file7 $ find d1 file1 file10 file2 file3 file4 file5 file6 file7 -maxdepth 0 -name "file1" file1 Overlapping paths Consider: $ find . file1 -maxdepth 0 -iname file1 file1 $ find . file1 file1 -maxdepth 0 -iname file1 file1 file1 $ find . file1 file1 -maxdepth 1 -iname file1 ./file1 file1 file1 find will follow each path specified on the command line and look for matches even if the paths lead to the same file, as in . file , or even if the paths are exact duplicates, as in file1 file1 .
{ "source": [ "https://unix.stackexchange.com/questions/162411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
162,476
For example, $PATH and $HOME . When I type echo $PATH it returns my $PATH , but I want to echo the word $PATH and not what the actual variable stands for; echo "$PATH" doesn't work either.
You just need to escape the dollar sign $ : echo \$PATH $PATH Or surround it in single quotes: echo '$PATH' $PATH This will ensure the word is not interpreted by the shell.
{ "source": [ "https://unix.stackexchange.com/questions/162476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
162,478
Everybody seems to be talking about the POODLE vulnerability today. And everybody recommends disabling SSLv3 in Apache using the following configuration directive: SSLProtocol All -SSLv2 -SSLv3 instead of the default SSLProtocol All -SSLv2 I've done that, and no joy – after testing repeatedly with various tools ( here's a fast one ), I find that SSLv3 is happily accepted by my server. Yes, I did restart Apache. Yes, I did a recursive grep on all configuration files, and I don't have any override anywhere. And no, I'm not using some ancient version of Apache: [root@server ~]# apachectl -v Server version: Apache/2.2.15 (Unix) Server built: Jul 23 2014 14:17:29 So, what gives? How does one really disable SSLv3 in Apache?
I had the same problem... You have to include SSLProtocol all -SSLv2 -SSLv3 within every VirtualHost stanza in httpd.conf . The VirtualHost stanzas are generally towards the end of the httpd.conf file. So for example: ... ... <VirtualHost your.website.example.com:443> DocumentRoot /var/www/directory ServerName your.website.example.com ... SSLEngine on ... SSLProtocol all -SSLv2 -SSLv3 ... </VirtualHost> Also check ssl.conf or httpd-ssl.conf or similar, because the SSLProtocol directives may be set there, not necessarily in httpd.conf
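Once Apache has been restarted, a quick way to re-test from the shell is to attempt an SSLv3-only handshake (this assumes your local openssl build still supports SSLv3; the hostname is a placeholder):
$ openssl s_client -connect your.website.example.com:443 -ssl3 < /dev/null
If the directive is effective, the handshake should fail with an error such as "ssl handshake failure" instead of printing the server certificate.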
{ "source": [ "https://unix.stackexchange.com/questions/162478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88086/" ] }
162,528
A rather annoying feature of vim is that if you are in insert mode and do an autocomplete (Ctrl-N), arrow key down to the desired item, and press the Enter key, then it inserts the item but also inserts a newline which you then have to delete. Is there a way to select an item out of the autocomplete list without getting an additional unwanted newline?
It depends on which popup menu state you are in (see :help popupmenu-completion ). I understand from your question that you're in state 2 (since you've pressed arrow keys to find a completion). However, the default behavior for Enter in state 2 is to insert the completion without newline; what you describe is normally the behavior of state 1 (which is when you use Ctrl + N / Ctrl + P .) A way that works consistently in all states is to use Ctrl + Y . I like to remember the Y as standing for "yes, accept that word." It's also possible to just start typing the text that should come after the completed word, unless you've remapped things as in geedoubleya's answer. In the same context, you can press Ctrl + E to cancel the menu and leave your text as it was before you invoked it. If you're used to the pairings of Ctrl + E and Ctrl + Y in other contexts (e.g. to scroll up or down in normal mode, or to insert the character below or above the cursor in insert mode), that's one way to remember it here. I guess you could also think of it as "exiting" the menu or similar. See :help popupmenu-keys for more.
{ "source": [ "https://unix.stackexchange.com/questions/162528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
162,531
I'm confused about the following script ( hello.go ). //usr/bin/env go run $0 $@ ; exit package main import "fmt" func main() { fmt.Printf("hello, world\n") } It can execute (on MacOS X 10.9.5): $ chmod +x hello.go $ ./hello.go hello, world I haven't heard about a shebang starting with // . And it still works when I insert a blank line at the top of the script. Why does this script work?
It isn't a shebang, it is just a script that gets run by the default shell. The shell executes the first line //usr/bin/env go run $0 $@ ; exit which causes go to be invoked with the name of this file, so the result is that this file is run as a go script and then the shell exits without looking at the rest of the file. But why start with // instead of just / or a proper shebang #! ? This is because the file needs to be a valid go script, or go will complain. In go, the characters // denote a comment, so go sees the first line as a comment and does not attempt to interpret it. The character # , however, does not denote a comment, so a normal shebang would result in an error when go interprets the file. The reason for the syntax is just to build a file that is both a shell script and a go script without one stepping on the other.
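As a side note, if the file name or arguments could ever contain spaces, a slightly more defensive variant of that first line (same idea, just a sketch) would quote them:
//usr/bin/env go run "$0" "$@"; exit
The bare exit already propagates go run's exit status, since exit without an argument uses the status of the last command.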
{ "source": [ "https://unix.stackexchange.com/questions/162531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88111/" ] }
162,769
Is xeyes purely for fun? What is the point of having it installed by default in many linux distrubutions (in X)?
xeyes is not for fun, at least not only. The purpose of this program is to let you follow the mouse pointer which is sometimes hard to see. It is very useful on multi-headed computers, where monitors are separated by some distance, and if someone (say teacher at school) wants to present something on the screen, the others on their monitors can easily follow the mouse with xeyes .
{ "source": [ "https://unix.stackexchange.com/questions/162769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
162,779
Short story I'm looking for the command to enter the first found foo-something directory, like: cd foo-* but without using a wildcard (or other special shell characters). Long story As part of a remote drush build script, I'm trying to find a way of entering a folder whose name could change, but which has a common prefix. For example: drush -y dl ads or drush -y dl ads --dev downloads either ads-7.x-1.0-alpha1 or ads-7.x-1.x-dev . To make things more tricky, the command can't contain either a wildcard or an escaped semicolon , because drush is heavily escaping shell aliases. So ls * is escaped into ls '\''*'\''' and ends up with a Command ls '*' failed. error. I've also tried using find , but I can't use the -exec primary, because the semicolon needs to be escaped , and drush is double escaping it into ( '\''\;'\'' ). Therefore I'm looking to enter the foo-* folder without using a wildcard (or any other special characters, parameter expansion, command substitution, arithmetic expansion, etc.) if possible. I believe the logic of shell escaping is here and it is intended to work the same way that escapeshellarg() does on Linux. What it does is escape each parameter.
{ "source": [ "https://unix.stackexchange.com/questions/162779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21471/" ] }
162,891
I need to append a directory to PKG_CONFIG_PATH . Normally, I would use the standard export PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:$(pyenv prefix)/lib/pkgconfig but PKG_CONFIG_PATH has not been previously set on my system. Therefore, the variable begins with a : character, which tells it to look in the current directory first. I do not want that. I settled on the following, export PKG_CONFIG_PATH=${PKG_CONFIG_PATH}${PKG_CONFIG_PATH:+:}$(pyenv prefix)/lib/pkgconfig but that just seems so ugly. Is there a better way? What is the appropriate way to conditionally append the colon if and only if the variable has already been set?
You are on the right track with the ${:+} expansion operator, you just need to modify it slightly: V=${V:+${V}:}new_V The first braces expand to $V and the colon iff V is set already otherwise to nothing - which is exactly what you need (and probably also one of the reasons for the existence of the operator). Thus in your case: export "PKG_CONFIG_PATH=${PKG_CONFIG_PATH:+${PKG_CONFIG_PATH}:}$(pyenv prefix)/lib/pkgconfig"
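If you do this for several directories, you could wrap the expansion in a small helper function (the function name here is just an illustration):
pkgconfig_append() { PKG_CONFIG_PATH="${PKG_CONFIG_PATH:+${PKG_CONFIG_PATH}:}$1"; export PKG_CONFIG_PATH; }  # append $1, adding ":" only if the variable is already non-empty
pkgconfig_append "$(pyenv prefix)/lib/pkgconfig"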
{ "source": [ "https://unix.stackexchange.com/questions/162891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37941/" ] }
162,900
What is this folder: /run/user/1000 on my Fedora system and what does it do? ~ $ df -h Filesystem Size Used Avail Use% Mounted on tmpfs 1.2G 20K 1.2G 1% /run/user/1000 EDIT: 7 june 2019. My two answers don't agree on what directory or where the files stored in this place were: Patrick : Prior to systemd , these applications typically stored their files in /tmp . And again here: /tmp was the only location specified by the FHS which is local, and writable by all users. Braiam : The purposes of this directory were once served by /var/run . In general, programs may continue to use /var/run to fulfill the requirements set out for /run for the purposes of backwards compatibility. And again here: Programs which have migrated to use /run should cease their usage of /var/run , except as noted in the section on /var/run . So which one is it that is the father of /run/user/1000 , why is there no mention in either answer of what the other says about the directory used before /run/user .
/run/user/$uid is created by pam_systemd and used for storing files used by running processes for that user. These might be things such as your keyring daemon, pulseaudio, etc. Prior to systemd , these applications typically stored their files in /tmp . They couldn't use a location in /home/$user as home directories are often mounted over network filesystems, and these files should not be shared among hosts. /tmp was the only location specified by the FHS which is local, and writable by all users. However storing all these files in /tmp is problematic as /tmp is writable by everyone, and while you can change the ownership & mode on the files being created, it's more difficult to work with. So systemd came along and created /run/user/$uid . This directory is local to the system and only accessible by the target user. So applications looking to store their files locally no longer have to worry about access control. It also keeps things nice and organized. When a user logs out, and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory out. With various files scattered around /tmp , you couldn't do this.
{ "source": [ "https://unix.stackexchange.com/questions/162900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
162,908
A non-root user X cannot message a user Y. This is despite both users having successfully run mesg y . I've tried following advice for similar problems on Ubuntu described in this question . No luck. A root user can message anybody. I have a rough feeling that appropriate configuration of /etc/login.defs or PAM configuration files would solve the problem, but don't know enough to troubleshoot further. Any suggestions? I am locally logged in as user picrin on tty1 and as user iva on tty2. User iva is also sshed into the box. EDIT #1 For the sake of completeness here's more info. This is returned by who : picrin tty1 2014-10-18 22:10 iva pts/1 2014-10-19 10:09 (hostXXX-XXX-XX-X.rangeXXX-XXX.btcentralplus.com) iva tty2 2014-10-19 10:13 This is returned when user picrin executes write iva tty2 : write: iva has messages disabled on tty2 This is returned when user picrin executes write iva pts/1 : write: iva has messages disabled on pts/1 This is returned when user iva runs mesg : is y I'm running Fedora 20.
{ "source": [ "https://unix.stackexchange.com/questions/162908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88361/" ] }
162,922
In GIMP, I can import a PDF, and use the GUI to flatten it (if it was made with many layers) by selecting Flatten Image in the Image dropdown menu. I can then export the PDF with a new filename. I would like to automate this. Is there some way to do it via the terminal?
I found these 2 methods via Google, in this thread titled: Re: Flattening PDF Files at the UNIX Command Line . Method #1 - using Imagemagick's convert: $ convert -density 300 orig.pdf flattened.pdf NOTE: The quality is reported to be so-so with this approach. Method #2 - Using pdf2ps -> ps2pdf: $ pdf2ps orig.pdf - | ps2pdf - flattened.pdf NOTE: This method is reported to retain the image quality.
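If you have a whole directory of PDFs to flatten, a simple shell loop around the second method should work (the output file names are just an example):
$ for f in *.pdf; do pdf2ps "$f" - | ps2pdf - "flat-$f"; done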
{ "source": [ "https://unix.stackexchange.com/questions/162922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36103/" ] }
162,960
I would like to download some files from my server onto my laptop, and the thing is that I want this communication to be as stealthy and secure as it can be. So far, I came up with using a VPN; that way I redirect the whole internet traffic of my laptop via my server. Additionally, I tried to send a file using ftp while observing Wireshark at the same time. The communication seems to be encrypted; however, I would also like to encrypt the file itself (as a 2nd layer of security or something like that). My server is a RasPi running Raspbian. My laptop is a Macbook Air. I want to first encrypt a file on my RasPi and then download it. How can I do that?
You can use openssl to encrypt and decrypt using key based symmetric ciphers. For example: openssl enc -in foo.bar \ -aes-256-cbc \ -pass stdin > foo.bar.enc This encrypts foo.bar to foo.bar.enc (you can use the -out switch to specify the output file, instead of redirecting stdout as above) using a 256 bit AES cipher in CBC mode. There are various other ciphers available (see man enc ). The command will then wait for you to enter a password and use that to generate an appropriate key. You can see the key with -p or use your own in place of a password with -K (actually it is slightly more complicated than that since an initialization vector or source is needed, see man enc again). If you use a password, you can use the same password to decrypt, you do not need to look at or keep the generated key. To decrypt this: openssl enc -in foo.bar.enc \ -d -aes-256-cbc \ -pass stdin > foo.bar Notice the -d . See also man openssl .
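Putting it together for your setup, one possible workflow (hostnames and file names are placeholders) is to encrypt on the Pi, copy the ciphertext over, and decrypt on the laptop:
pi $ openssl enc -aes-256-cbc -salt -in foo.bar -out foo.bar.enc    # run on the RasPi, prompts for a password
mac $ scp pi@raspberrypi.local:foo.bar.enc .                        # pull only the encrypted file over SSH
mac $ openssl enc -d -aes-256-cbc -in foo.bar.enc -out foo.bar      # decrypt locally with the same password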
{ "source": [ "https://unix.stackexchange.com/questions/162960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72520/" ] }
162,973
I've configured dnsmasq as a caching-only DNS server on a Debian server, and it's working well (I'm seeing improved DNS response times via dig). However, I'd like to understand what dnsmasq is caching at any one time, so that I can start to think about the efficiency (i.e. hit rate) that I'm achieving. I've had a look around the man pages, and web, and can't find how I see what dnsmasq is caching at any point (unlike you can do for the leases for example, which are kept in a dnsmasq.lease file). Is the dnsmasq DNS cache held in memory only? Or do I have to do some log file munging?
I do not have access to dnsmasq but according to this thread titled: dnsmasq is it caching? you can send the signal USR1 to the dnsmasq process, causing it to dump statistics to the system log. $ sudo pkill -USR1 dnsmasq Then consult the system logs: $ sudo tail /var/log/syslog Jan 21 13:37:57 dnsmasq[29469]: time 1232566677 Jan 21 13:37:57 dnsmasq[29469]: cache size 150, 0/475 cache insertions re-used unexpired cache entries. Jan 21 13:37:57 dnsmasq[29469]: queries forwarded 392, queries answered locally 16 Jan 21 13:37:57 dnsmasq[29469]: server 208.67.222.222#53: queries sent 206, retried or failed 12 Jan 21 13:37:57 dnsmasq[29469]: server 208.67.220.220#53: queries sent 210, retried or failed 6 NOTE: I believe that dnsmasq retains its cache in RAM. So if you want to dump the cache you'll need to enable the -q switch when dnsmasq is invoked. This is mentioned in the dnsmasq man page: -d, --no-daemon Debug mode: don't fork to the background, don't write a pid file, don't change user id, generate a complete cache dump on receipt on SIGUSR1, log to stderr as well as syslog, don't fork new processes to handle TCP queries. Note that this option is for use in debugging only, to stop dnsmasq daemonising in production, use -k. -q, --log-queries Log the results of DNS queries handled by dnsmasq. Enable a full cache dump on receipt of SIGUSR1.
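On distributions where dnsmasq runs under systemd and logs to the journal rather than /var/log/syslog , the equivalent check would be something like (the unit name may differ on your system):
$ sudo journalctl -u dnsmasq | tail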
{ "source": [ "https://unix.stackexchange.com/questions/162973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88401/" ] }
163,120
In my shell script I cannot invoke the ant , mv or cp commands, but the same commands execute in the terminal. Below is my script: sample.sh file #! /bin/sh cp filename.so filename_org.so android update project -p . ant clean ant release PATH is set in the .bashrc file: export PATH=$PATH:/usr/bin/ cp , mv and ant work only in the terminal, not via the script.
As your script is a shell script ( /bin/sh ), your PATH entries in .bashrc will not be read, as that file is for the bash ( /bin/bash ) interactive shell. To make your PATH entries available to /bin/sh scripts run by a specific user, add the PATH entry to the .profile file in that user's home directory. Additionally you could add the full path for each of your commands within the script: /bin/cp filename.so filename_org.so Or set the PATH variable, including all the required paths, at the beginning of your script: PATH=$PATH:/bin:/usr/bin:xxx export PATH
{ "source": [ "https://unix.stackexchange.com/questions/163120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88502/" ] }
163,124
Running a server machine with CentOS 7, I've noticed that the avahi service is running by default. I am kind of wondering what the purpose of it is. One thing it seems to do (in my environment) is randomly disabling IPv6 connectivity, which looks like this in the logs: Oct 20 12:23:29 example.org avahi-daemon[779]: Withdrawing address record for fd00::1:2:3:4 on eno1 Oct 20 12:23:30 example.org Withdrawing address record for 2001:1:2:3:4:5:6:7 Oct 20 12:23:30 example.org Registering new address record for fe80::1:2:3:4 on eno1.*. (the suffixes 1:2:3... are made up) And indeed, after that the public 2001:1:2:3:4:5:6:7 IPv6 address is not accessible anymore. Because of that I've disabled the avahi service via: # systemctl disable avahi-daemon.socket avahi-daemon.service # systemctl mask avahi-daemon.socket avahi-daemon.service # systemctl stop avahi-daemon.socket avahi-daemon.service So far I haven't noticed any limitations. Thus, my question about the use-case(s) of avahi on a server system.
Avahi is the opensource implementation of Bonjour/Zeroconf. excerpt - http://avahi.org/ Avahi is a system which facilitates service discovery on a local network via the mDNS/DNS-SD protocol suite. This enables you to plug your laptop or computer into a network and instantly be able to view other people who you can chat with, find printers to print to or find files being shared. Compatible technology is found in Apple MacOS X (branded ​Bonjour and sometimes Zeroconf). A more detailed description is here along with the Wikipedia article . The ArchLinux article is more useful, specifying the types of services that can benefit from Avahi. In the past I'd generally disable it on servers, since every server I've managed in the past was explicitly told about the various resources that it needed to access. The two big benefits of Avahi are name resolution & finding printers, but on a server, in a managed environment, it's of little value.
{ "source": [ "https://unix.stackexchange.com/questions/163124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
163,145
How can I get the command arguments or the whole command line from a running process using its process name? For example this process: # ps PID USER TIME COMMAND 1452 root 0:00 /sbin/udhcpc -b -T 1 -A 12 -i eth0 -p /var/run/udhcpc.eth0.pid And what I want is /sbin/udhcpc -b -T 1 -A 12 -i eth0 -p /var/run/udhcpc.eth0.pid or the arguments. I know the process name and want its arguments. I'm using Busybox on SliTaz.
You could use the -o switch to specify your output format: $ ps -eo args From the man page : Command with all its arguments as a string. Modifications to the arguments may be shown. [...] You may also use the -p switch to select a specific PID: $ ps -p [PID] -o args pidof may also be used to switch from process name to PID, hence allowing the use of -p with a name: $ ps -p $(pidof dhcpcd) -o args Of course, you may also use grep for this (in which case, you must add the -e switch): $ ps -eo args | grep dhcpcd | head -n -1 GNU ps will also allow you to remove the headers (of course, this is unnecessary when using grep ): $ ps -p $(pidof dhcpcd) -o args --no-headers On other systems, you may pipe to AWK or sed: $ ps -p $(pidof dhcpcd) -o args | awk 'NR > 1' $ ps -p $(pidof dhcpcd) -o args | sed 1d Edit: if you want to catch this line into a variable, just use $(...) as usual: $ CMDLINE=$(ps -p $(pidof dhcpcd) -o args --no-headers) or, with grep : $ CMDLINE=$(ps -eo args | grep dhcpcd | head -n -1)
{ "source": [ "https://unix.stackexchange.com/questions/163145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81319/" ] }
163,148
I'd like to know if it is possible to configure postfix to redirect to many email addresses (including the original recipient) instead of only one? Here is my scenario: When an e-mail is: Sent from: [email protected] Addressed to: [email protected] Result: redirect e-mail to [email protected] and deliver to the original recipient The question is partly answered here: https://serverfault.com/questions/284702/redirect-specific-e-mail-address-sent-to-a-user-to-another-user
{ "source": [ "https://unix.stackexchange.com/questions/163148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20867/" ] }
163,271
So I was told to do last > lastloggedin which creates a file that shows the class's last logins since the last system reboot, and now I am asked to write an Awk script which is named myawk that counts/determines how many lines of lastloggedin contain the string CFS264 . I've done grep -c CFS264 lastloggedin
To get you started you can use awk to search for lines in a file that contain a string like so: $ awk '/CFS264/ { .... }' lastloggedin The bits in the { .... } will be the commands required to tally up the number of lines with that string. To confirm that the above is working you could use a print $0 in there to simply print those lines that contain the search string. $ awk '/CFS264/ { print $0 }' lastloggedin As to the counting, if you search for "awk counter" you'll stumble upon this SO Q&A titled: using awk to count no of records . The method shown there would suffice for what you describe: $ awk '/CFS264/ {count++} END{print count}' lastloggedin Example $ last > lastloggedin $ awk '/slm/ {count++} END {print count}' lastloggedin 758 $ grep slm lastloggedin | wc -l 758 $ grep -c slm lastloggedin 758 NOTE: You don't say which field CFS264 pertains to in the last output. Assuming it's a username then you could further restrict the awk command to search only that field like so: $ awk '$1=="CFS264" { print $0 }' lastloggedin
{ "source": [ "https://unix.stackexchange.com/questions/163271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86930/" ] }
163,346
I'm using debian live-build to work on a bootable system. By the end of the process I get the typical files used to boot a live system: a squashfs file, some GRUB modules and config files, and an initrd.img file. I can boot just fine using those files, passing the initrd to the kernel via initrd=/path/to/my/initrd.img on the bootloader command line. But when I try to examine the contents of my initrd image, like so: $ file initrd.img initrd.img: ASCII cpio archive (SVR4 with no CRC) $ mkdir initTree && cd initTree $ cpio -idv < ../initrd.img the file tree I get looks like this: $ tree --charset=ASCII . `-- kernel `-- x86 `-- microcode `-- GenuineIntel.bin Where is the actual filesystem tree, with the typical /bin , /etc, /sbin ... containing the actual files used during boot?
The cpio block skip method given doesn't work reliably. That's because the initrd images I was getting myself didn't have both archives concatenated on a 512 byte boundary. Instead, do this: apt-get install binwalk legolas [mc]# binwalk initrd.img DECIMAL HEXADECIMAL DESCRIPTION -------------------------------------------------------------------------------- 0 0x0 ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000" 120 0x78 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000" 244 0xF4 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000" 376 0x178 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x00005000" 21004 0x520C ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000" 21136 0x5290 gzip compressed data, from Unix, last modified: Sat Feb 28 09:46:24 2015 Use the last number (21136) which is not on a 512 byte boundary for me: legolas [mc]# dd if=initrd.img bs=21136 skip=1 | gunzip | cpio -tdv | head drwxr-xr-x 1 root root 0 Feb 28 09:46 . drwxr-xr-x 1 root root 0 Feb 28 09:46 bin -rwxr-xr-x 1 root root 554424 Dec 17 2011 bin/busybox lrwxrwxrwx 1 root root 7 Feb 28 09:46 bin/sh -> busybox -rwxr-xr-x 1 root root 111288 Sep 23 2011 bin/loadkeys -rwxr-xr-x 1 root root 2800 Aug 19 2013 bin/cat -rwxr-xr-x 1 root root 856 Aug 19 2013 bin/chroot -rwxr-xr-x 1 root root 5224 Aug 19 2013 bin/cpio -rwxr-xr-x 1 root root 3936 Aug 19 2013 bin/dd -rwxr-xr-x 1 root root 984 Aug 19 2013 bin/dmesg
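For what it's worth, newer Debian/Ubuntu releases ship an unmkinitramfs helper (in initramfs-tools-core) that understands these concatenated archives, so if it is available you can skip the offset hunting entirely:
$ mkdir extracted && unmkinitramfs initrd.img extracted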
{ "source": [ "https://unix.stackexchange.com/questions/163346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88640/" ] }
163,352
I am reading an article about crontab There is something about disabling automatically sending emails. Disable Email By default cron jobs sends an email to the user account executing the cronjob. If this is not needed put the following command At the end of the cron job line. >/dev/null 2>&1 What is the detailed meaning for 2 > & and 1 ? Why putting this to the end of a crontab file would turn off the email-sending thing?
> is for redirect /dev/null is a black hole where any data sent will be discarded 2 is the file descriptor for Standard Error > is for redirect & is the symbol for file descriptor (without it, the following 1 would be considered a filename) 1 is the file descriptor for Standard Out Therefore >/dev/null 2>&1 redirects the output of your program to /dev/null . This includes both Standard Error and Standard Out . Much more information is available at The Linux Documentation Project's I/O Redirection page. cron will only email you if there is some output from your job. With everything redirected to null , there is no output and hence cron will not email you.
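For example, a crontab entry that discards all output (the script path and schedule are placeholders) looks like:
0 2 * * * /path/to/backup.sh >/dev/null 2>&1
If you only want to silence normal output but still be mailed about errors, redirect just stdout:
0 2 * * * /path/to/backup.sh >/dev/null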
{ "source": [ "https://unix.stackexchange.com/questions/163352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
163,370
Or is shutdown -h now the fastest it can get? I am looking for some syscall or similar that will allow skipping lots of the stuff done prior to shutdown (in particular caring about the running processes). At best I would like a kernel-related solution that is ignorant of the init middleware (like systemd or upstart ). I would not care about the risks related to e.g. directly killing all services like cups/apache/pulseaudio etc. Remark: the solution should be software-wise. Pressing buttons on the device is not what I am looking for.
It doesn't get much faster than using the System Request (SysRq) functionality and then triggering an immediate reboot . This is a key combination understood by the kernel. Enable SysRq: echo 1 > /proc/sys/kernel/sysrq Now, send it into reboot. echo b > /proc/sysrq-trigger b - Immediately reboot the system, without unmounting or syncing filesystems. Note: Although this is a reboot it will behave like the power has been cut off, which is not recommended. If you want to sync and umount the filesystems before hand then use: echo s > /proc/sysrq-trigger echo u > /proc/sysrq-trigger or if you just want to power off the system then: echo o > /proc/sysrq-trigger Magic key combinations There are also key combinations to use that are interpreted by the kernel: Alt + SysRq / Print Screen + Command Key Command Keys: R - Take control of keyboard back from X. E - Send SIGTERM to all processes, allowing them to terminate gracefully. I - Send SIGKILL to all processes, forcing them to terminate immediately. S - Flush data to disk. U - Remount all filesystems read-only. B - Reboot. Quoting from the Magic SysRq Key Wiki : A common use of the magic SysRq key is to perform a safe reboot of a Linux computer which has otherwise locked up. Hold down the Alt and SysRq (Print Screen) keys. While holding those down, type the following keys in order, several seconds apart: REISUB . Computer should reboot. A way to remember these are: " R eboot E ven I f S ystem U tterly B roken" or simply the word " BUSIER " read backwards. References Magic SysRq Key Wiki Fedora SysRq
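To make the SysRq setting survive a reboot, you can set it through sysctl instead of writing to /proc directly (the file name under /etc/sysctl.d is an arbitrary choice):
# sysctl -w kernel.sysrq=1
# echo "kernel.sysrq = 1" > /etc/sysctl.d/90-sysrq.conf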
{ "source": [ "https://unix.stackexchange.com/questions/163370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
163,399
I have a big file and need to split it into two files. Suppose the first 1000 lines should be selected from the original file, put into another file, and then deleted from the original. I tried using split but it creates multiple chunks.
The easiest way is probably to use head and tail : $ head -n 1000 input-file > output1 $ tail -n +1001 input-file > output2 That will put the first 1000 lines from input-file into output1 , and all lines from 1001 till the end in output2
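With the split point in a variable it is easier to adjust later:
$ N=1000
$ head -n "$N" input-file > output1
$ tail -n +"$((N + 1))" input-file > output2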
{ "source": [ "https://unix.stackexchange.com/questions/163399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69047/" ] }
163,481
I know that the cut command can print the first n characters of a string but how to select the last n characters? If I have a string with a variable number of characters, how can I print only the last three characters of the string. eg. "unlimited" output needed is "ted" "987654" output needed is "654" "123456789" output needed is "789"
Why has nobody given the obvious answer? sed 's/.*\(...\)/\1/' … or the slightly less obvious grep -o '...$' Admittedly, the second one has the drawback that lines with fewer than three characters vanish; but the question didn’t explicitly define the behavior for this case.
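If the value is already in a shell variable, bash/ksh93/zsh parameter expansion avoids spawning a process at all (note the space before -3 , which bash needs so it isn't parsed as a default-value expansion):
$ s=unlimited
$ echo "${s: -3}"
ted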
{ "source": [ "https://unix.stackexchange.com/questions/163481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88709/" ] }
163,726
I have to grep through some JSON files in which the line lengths exceed a few thousand characters. How can I limit grep to display context up to N characters to the left and right of the match? Any tool other than grep would be fine as well, so long as it available in common Linux packages. This would be example output, for the imaginary grep switch Ф : $ grep -r foo * hello.txt: Once upon a time a big foo came out of the woods. $ grep -Ф 10 -r foo * hello.txt: ime a big foo came of t
With GNU grep : N=10; grep -roP ".{0,$N}foo.{0,$N}" . Explanation: -o => Print only what you matched -P => Use Perl-style regular expressions The regex says match 0 to $N characters followed by foo followed by 0 to $N characters. If you don't have GNU grep : find . -type f -exec \ perl -nle ' BEGIN{$N=10} print if s/^.*?(.{0,$N}foo.{0,$N}).*?$/$ARGV:$1/ ' {} \; Explanation: Since we can no longer rely on grep being GNU grep , we make use of find to search for files recursively (the -r action of GNU grep ). For each file found, we execute the Perl snippet. Perl switches: -n Read the file line by line -l Remove the newline at the end of each line and put it back when printing -e Treat the following string as code The Perl snippet is doing essentially the same thing as grep . It starts by setting a variable $N to the number of context characters you want. The BEGIN{} means this is executed only once at the start of execution not once for every line in every file. The statement executed for each line is to print the line if the regex substitution works. The regex: Match any old thing lazily 1 at the start of line ( ^.*? ) followed by .{0,$N} as in the grep case, followed by foo followed by another .{0,$N} and finally match any old thing lazily till the end of line ( .*?$ ). We substitute this with $ARGV:$1 . $ARGV is a magical variable that holds the name of the current file being read. $1 is what the parens matched: the context in this case. The lazy matches at either end are required because a greedy match would eat all characters before foo without failing to match (since .{0,$N} is allowed to match zero times). 1 That is, prefer not to match anything unless this would cause the overall match to fail. In short, match as few characters as possible.
{ "source": [ "https://unix.stackexchange.com/questions/163726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
163,747
Under the assumption that disk I/O and free RAM is a bottleneck (while CPU time is not the limitation), does a tool exist that can calculate multiple message digests at once? I am particularly interested in calculating the MD-5 and SHA-256 digests of large files (size in gigabytes), preferably in parallel. I have tried openssl dgst -sha256 -md5 , but it only calculates the hash using one algorithm. Pseudo-code for the expected behavior: for each block: for each algorithm: hash_state[algorithm].update(block) for each algorithm: print algorithm, hash_state[algorithm].final_hash()
Check out pee (" tee standard input to pipes ") from moreutils . This is basically equivalent to Marco's tee command, but a little simpler to type. $ echo foo | pee md5sum sha256sum d3b07384d113edec49eaa6238ad5ff00 - b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c - $ pee md5sum sha256sum <foo.iso f109ffd6612e36e0fc1597eda65e9cf0 - 469a38cb785f8d47a0f85f968feff0be1d6f9398e353496ff7aa9055725bc63e -
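The tee equivalent relies on process substitution, so it needs bash, zsh or ksh; something along these lines:
$ tee < foo.iso >(md5sum) >(sha256sum) > /dev/null
The two checksums are printed by the md5sum and sha256sum processes themselves, so their output order may vary.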
{ "source": [ "https://unix.stackexchange.com/questions/163747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8250/" ] }
163,810
Let's say I have a variable line="This is where we select from a table." now I want to grep how many times does select occur in the sentence. grep -ci "select" $line I tried that, but it did not work. I also tried grep -ci "select" "$line" It still doesn't work. I get the following error. grep: This is where we select from a table.: No such file or directory
Have grep read on its standard input. There you go, using a pipe ... $ echo "$line" | grep select ... or a here string ... $ grep select <<< "$line" Also, you might want to replace spaces by newlines before grepping : $ echo "$line" | tr ' ' '\n' | grep select ... or you could ask grep to print the match only: $ echo "$line" | grep -o select This will allow you to get rid of the rest of the line when there's a match. Edit: Oops, read a little too fast, thanks Marco . In order to count the occurences, just pipe any of these to wc(1) ;) Another edit made after lzkata 's comment, quoting $line when using echo .
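For example, to get the actual count of occurrences rather than just matching lines:
$ echo "$line" | grep -o -i select | wc -l
1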
{ "source": [ "https://unix.stackexchange.com/questions/163810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68738/" ] }
163,845
I have the below JSON file, with the data stored as columns enumerated by rank in an array: { "data": [ { "displayName": "First Name", "rank": 1, "value": "VALUE" }, { "displayName": "Last Name", "rank": 2, "value": "VALUE" }, { "displayName": "Position", "rank": 3, "value": "VALUE" }, { "displayName": "Company Name", "rank": 4, "value": "VALUE" }, { "displayName": "Country", "rank": 5, "value": "VALUE" } ] } I would like to have a CSV file in this format, where the header come from the value of a column's displayName and the data in the column is the singular value key's value: First Name, Last Name, Position, Company Name, Country VALUE, VALUE, VALUE, VALUE, VALUE Is this possible by using only jq ? I don't have any programming skills.
jq has a filter, @csv, for converting an array to a CSV string. This filter takes into account most of the complexities associated with the CSV format, beginning with commas embedded in fields. (jq 1.5 has a similar filter, @tsv, for generating tab-separated-value files.) Of course, if the headers and values are all guaranteed to be free of commas and double quotation marks, then there may be no need to use the @csv filter. Otherwise, it would probably be better to use it. For example, if the 'Company Name' were 'Smith, Smith and Smith', and if the other values were as shown below, invoking jq with the "-r" option would produce valid CSV: $ jq -r '.data | map(.displayName), map(.value) | @csv' so.json2csv.json "First Name","Last Name","Position","Company Name","Country" "John (""Johnnie"")","Doe","Director, Planning and Posterity","Smith, Smith and Smith","Transylvania"
{ "source": [ "https://unix.stackexchange.com/questions/163845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88962/" ] }
163,872
I am setting up a server where there are multiple developers working on multiple applications. I have figured out how to give certain developers shared access to the necessary application directories using the setgid bit and default ACLs to give anyone in a group access. Many of these applications run under a terminal while in development for easy access. When I work alone, I set up a user for an application and run screen as that user. This has the downside that every developer to use the screen session needs to know the password and it is harder to keep user and application accounts separate. One way that could work is using screen multiuser features. They do not work out-of-the-box however, screen complains about needing suid root . Does giving that have any downsides? I am pretty careful about using suid root anything. Maybe there is a reason why it isn't the default? Should I do it with screen or is there some other intelligent way of doing what I want?
Yes, you can do it with screen which has multiuser support. First, create a new session: screen -d -m -S multisession Attach to it: screen -r multisession Turn on multiuser support: Press Ctrl-a and type (NOTE: Ctrl+a is needed just before each single command, i.e. twice here) :multiuser on :acladd USER ← use username of user you want to give access to your screen Now, Ctrl-a d and list the sessions: $ screen -ls There is a screen on: 4791.multisession (Multi, detached) You now have a multiuser screen session. Give the name multisession to acl'd user, so he can attach to it: screen -x youruser/multisession And that's it. The only drawback is that screen must run as suid root. But as far as I know is the default, normal situation. Another option is to do screen -S $screen_id -X multiuser on , screen -S $screen_id -X acladd authorized_user Hope this helps.
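If screen complains about needing suid root, the usual fix (paths and the socket directory may differ per distribution, so treat this as a sketch) is:
# chmod u+s "$(command -v screen)"
# chmod 755 /var/run/screen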
{ "source": [ "https://unix.stackexchange.com/questions/163872", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17737/" ] }
163,898
The issue: I need to assign a variable a value that is decently long. All the lines of my script must be under a certain number of columns. So, I am trying to assign it using more than one line. It's simple to do without indents: VAR="This displays without \ any issues." echo "${VAR}" Result: This displays without any issues. However with indents: VAR="This displays with \ extra spaces." echo "${VAR}" Result: This displays with extra spaces. How can I elegantly assign it without these spaces?
Here the issue is that you are surrounding the variable with double quotes (""). Remove it and things will work fine. VAR="This displays with \ extra spaces." echo ${VAR} Output This displays with extra spaces. Here the issue is that double quoting a variable preserves all white space characters. This can be used in case if you explicitly need it. For example, $ echo "Hello World ........ ... ...." will print Hello World ........ ... .... And on removing quotes, its different $ echo Hello World ........ ... .... Hello World ........ ... .... Here the Bash removes extra spaces in the text because in the first case the entire text is taken as a "single" argument and thus preserving extra spaces. But in the second case echo command receives the text as 5 arguments. Quoting a variable will also be helpful while passing arguments to commands. In the below command, echo only gets single argument as "Hello World" $ variable="Hello World" $ echo "$variable" But in case of the below scenario echo gets two arguments as Hello and World $ variable="Hello World" $ echo $variable
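If you would rather keep the double quotes (for instance because the text contains characters the shell would otherwise expand), bash can also build the string piecewise with += :
VAR="This displays with "
VAR+="no extra spaces."
echo "${VAR}"
Output: This displays with no extra spaces.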
{ "source": [ "https://unix.stackexchange.com/questions/163898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88594/" ] }
163,955
Yesterday I read this SO comment which says that in the shell (at least bash ) >&- "has the same result as" >/dev/null . That comment actually refers to the ABS guide as the source of its information. But that source says that the >&- syntax "closes file descriptors". It is not clear to me whether the two actions of closing a file descriptor and redirecting it to the null device are totally equivalent. So my question is: are they? On the surface of it it seems that closing a descriptor is like closing a door but redirecting it to a null device is opening a door to limbo! The two don't seem exactly the same to me because if I see a closed door, I won't try to throw anything out of it, but if I see an open door I will assume I can. In other words, I have always wondered if >/dev/null means that cat mybigfile >/dev/null would actually process every byte of the file and write it to /dev/null which forgets it. On the other hand, if the shell encounters a closed file descriptor I tend to think (but am not sure) that it will simply not write anything, though the question remains whether cat will still read every byte. This comment says >&- and >/dev/null " should " be the same, but it is not so resounding answer to me. I'd like to have a more authoritative answer with some reference to standard or source core or not...
No, you certainly don't want to close file descriptors 0, 1 and 2. If you do so, the first time the application opens a file, it will become stdin/stdout/stderr... For instance, if you do: echo text | tee file >&- When tee (at least some implementations, like busybox') opens the file for writing, it will be open on file descriptor 1 (stdout). So tee will write text twice into file : $ echo text | strace tee file >&- [...] open("file", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 1 read(0, "text\n", 8193) = 5 write(1, "text\n", 5) = 5 write(1, "text\n", 5) = 5 read(0, "", 8193) = 0 exit_group(0) = ? That has been known to cause security vulnerabilities. For instance: chsh 2>&- And chsh (a setuid application) may end up writing error messages in /etc/passwd . Some tools and even some libraries try to guard against that. For instance GNU tee will move the file descriptor to one above 2 if the files it opens for writing are assigned 0, 1, 2 while busybox tee won't. Most tools, if they can't write to stdout (because for instance it's not open), will report an error message on stderr (in the language of the user which means extra processing to open and parse localisation files...), so it will be significantly less efficient, and possibly cause the program to fail. In any case, it won't be more efficient. The program will still do a write() system call. It can only be more efficient if the program gives up writing to stdout/stderr after the first failing write() system call, but programs generally don't do that. They generally either exit with an error or keep on trying.
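A quick way to see the difference from an interactive bash shell (the exact error text may vary between shells):
$ echo hi >&-
bash: echo: write error: Bad file descriptor
$ echo hi >/dev/null
$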
{ "source": [ "https://unix.stackexchange.com/questions/163955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54067/" ] }
164,005
I want to boot to console instead of a GUI using systemd . How can I do that?
Open a terminal and (as root) run: systemctl set-default multi-user.target or with --force systemctl set-default -f multi-user.target to overwrite any existing conflicting symlinks 1 . Double-check with: systemctl get-default Another way is to add the following parameter to your kernel boot line: systemd.unit=multi-user.target
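To reverse the change later, or to switch the running system without rebooting:
systemctl set-default graphical.target    # boot into the GUI again
systemctl isolate multi-user.target       # drop to the console right now, no reboot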
{ "source": [ "https://unix.stackexchange.com/questions/164005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
164,025
I have several files with the same base filename. I'd like to remove all but one foo.org #keep foo.tex #delete foo.fls #delete foo.bib #delete etc If I didn't need to keep one, I know I could use rm foo.* . TLDP demonstrates ^ to negate a match. Through trial and error, I was able to find that rm foo.*[^org] does what I need, but I don't really understand the syntax. Also, while not a limitation in my use case, I think this pattern also ignores foo.o and foo.or . How does this pattern work, and what would a glob that ignores only foo.org look like?
shopt -s extglob echo rm foo.!(org) This is "foo." followed by anything NOT "org" ref: https://www.gnu.org/software/bash/manual/bashref.html#Pattern-Matching
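If you'd rather not enable extglob , GNU find can express the same thing (run it without -delete first to check the list):
find . -maxdepth 1 -type f -name 'foo.*' ! -name 'foo.org'
find . -maxdepth 1 -type f -name 'foo.*' ! -name 'foo.org' -delete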
{ "source": [ "https://unix.stackexchange.com/questions/164025", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67205/" ] }
164,210
During linux installation I selected "minimal" option: When I went to run the nslookup command to look up an IP address I got the error message nslookup: command not found as shown in the example below. $ nslookup www.google.com bash: nslookup: command not found
The minimal install likely did not come with the bind-utils package, which I believe contains nslookup . You can install bind-utils with: sudo yum install bind-utils In general, you can search for what package provides a command using the yum provides command: sudo yum provides '*bin/nslookup'
{ "source": [ "https://unix.stackexchange.com/questions/164210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16253/" ] }
164,391
cat < file prints the contents of file to stdout. cat > file reads stdin until Ctrl + D is detected and the input text is written to file . cat <> file , at least in my version of Bash, prints the contents of file happily (without error), but doesn't modify the file nor does it update the modification timestamp. How does the Bash standard justify the seemingly ignored > in the third statement - and, more importantly, is it doing anything?
Bash uses <> to create a read-write file descriptor : The redirection operator [n]<>word causes the file whose name is the expansion of word to be opened for both reading and writing on file descriptor n, or on file descriptor 0 if n is not specified. If the file does not exist, it is created. cat <> file opens file read-write and binds it to descriptor 0 (standard input). It's essentially equivalent to < file for any sensibly-written program, since nobody's likely to try writing to standard input ordinarily, but if one did it'd be able to. You can write a simple C program to test that out directly - write(0, "hello", 6) will write hello into file via standard input. <> should also work in any other POSIX-compliant shell with the same effect.
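A small bash demonstration that file descriptor 0 really is writable when opened with <> :
$ echo hi > file
$ { echo "written via fd 0" >&0; } <> file
$ cat file
written via fd 0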
{ "source": [ "https://unix.stackexchange.com/questions/164391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89283/" ] }
164,508
I have a text file named links.txt which looks like this link1 link2 link3 I want to loop through this file line by line and perform an operation on every line. I know I can do this using a while loop but since I am learning, I thought I would use a for loop. I actually used command substitution like this a=$(cat links.txt) Then used the loop like this for i in $a; do ###something###;done Also I can do something like this for i in $(cat links.txt); do ###something###; done Now my question is: when I substituted the cat command output into a variable a , the newline characters between link1 , link2 and link3 are removed and replaced by spaces. echo $a outputs link1 link2 link3 and then I used the for loop. Is a newline always replaced by a space when we do a command substitution? Regards
Newlines get swapped out at some points because they are special characters. In order to keep them, you need to make sure they're always interpreted, by using quotes: $ a="$(cat links.txt)" $ echo "$a" link1 link2 link3 Now, since I used quotes whenever I was manipulating the data, the newline characters ( \n ) always got interpreted by the shell, and therefore remained. If you forget to use them at some point, these special characters will be lost. The very same behaviour will occur if you use your loop on lines containing spaces. For instance, given the following file... mypath1/file with spaces.txt mypath2/filewithoutspaces.txt The output will depend on whether or not you use quotes: $ for i in $(cat links.txt); do echo $i; done mypath1/file with spaces.txt mypath2/filewithoutspaces.txt $ for i in "$(cat links.txt)"; do echo "$i"; done mypath1/file with spaces.txt mypath2/filewithoutspaces.txt Now, if you don't want to use quotes, there is a special shell variable which can be used to change the shell field separator ( IFS ). If you set this separator to the newline character, you will get rid of most problems. $ IFS=$'\n'; for i in $(cat links.txt); do echo $i; done mypath1/file with spaces.txt mypath2/filewithoutspaces.txt For the sake of completeness, here is another example, which does not rely on command output substitution. After some time, I found out that this method was considered more reliable by most users due to the very behaviour of the read utility. $ cat links.txt | while read i; do echo $i; done Here is an excerpt from read 's man page: The read utility shall read a single line from standard input. Since read gets its input line by line, you're sure it won't break whenever a space shows up. Just pass it the output of cat through a pipe, and it'll iterate over your lines just fine. Edit: I can see from other answers and comments that people are quite reluctant when it comes to the use of cat . As jasonwryan said in his comment, a more proper way to read a file in shell is to use stream redirection ( < ), as you can see in val0x00ff's answer here . However, since the question isn't " how to read/process a file in shell programming ", my answer focuses more on the quotes behaviour, and not the rest.
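For completeness, the redirection-based form looks like this; IFS= and -r keep leading whitespace and backslashes intact:
while IFS= read -r line; do
    echo "$line"
done < links.txt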
{ "source": [ "https://unix.stackexchange.com/questions/164508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63486/" ] }
164,577
I have a node.js process that uses fs.appendFile to add lines to file.log . Only complete lines of about 40 chars per line are appended, e.g. calls are like fs.appendFile("start-end") , not 2 calls like fs.appendFile("start-") and fs.appendFile("end") . If I move this file to file2.log can I be sure that no lines are lost or copied partially?
As long as you don't move the file across file-system borders, the operation should be safe. This is due to the mechanism by which »moving« is actually done. If you mv a file on the same file-system, the file isn't actually touched, but only the file-system entry is changed. $ mv foo bar actually does something like $ ln foo bar $ rm foo This would create a hard link (a second directory entry) for the file (actually the inode pointed to by the file-system entry) foo named bar and remove the foo entry. Since there is now a second file-system entry pointing to foo 's inode, removing the old entry foo doesn't actually remove any blocks belonging to the inode. Your program would happily append to the file anyway, since its open file-handle points to the inode of the file, not the file-system entry. Note: If your program closes and reopens the file between writes, you would end up having a new file created with the old file-system entry! Cross file-system moves: If you move the file across file-system borders, things get ugly. In this case you couldn't guarantee that your file stays consistent, since mv would actually create a new file on the target file-system, copy the contents of the old file to the new file, and remove the old file, or $ cp /path/to/foo /path/to/bar $ rm /path/to/foo resp. $ touch /path/to/bar $ cat < /path/to/foo > /path/to/bar $ rm /path/to/foo Depending on whether the copying reaches end-of-file during a write of your application, it could happen that you have only half of a line in the new file. Additionally, if your application does not close and reopen the old file, it would continue writing to the old file, even if it seems to be deleted: the kernel knows which files are open and although it would delete the file-system entry, it won't delete the old file's inode and associated blocks until your application closes its open file-handle.
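If you want to check up front whether the rename-style move applies, compare the device numbers of the source and the destination directory (GNU stat shown; the paths are placeholders):
$ stat -c %d file.log /path/to/destination/.
If both lines print the same number, the two are on the same file-system and mv will be a pure rename.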
{ "source": [ "https://unix.stackexchange.com/questions/164577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48453/" ] }
164,826
This answer and comments mention --rfc-3339 and a "hidden" --iso-8601 option that I have used for a long time and now seems to be undocumented. When did that option documentation get removed from the --help text? Will the option go away anytime soon?
The option was introduced in the coreutils date (which is probably what you have) in 1999 (Apr. 8). The documentation was removed in 2005 without much explanation in the commit. In 2011 , the help for --iso-8601 was reintroduced with the following explanation: We deprecated and undocumented the --iso-8601 (-I) option mostly because date could not parse that particular format. Now that it can, it's time to restore the documentation. * src/date.c (usage): Document it. * doc/coreutils.texi (Options for date): Reinstate documentation. Reported by Hubert Depesz Lubaczewski in http://bugs.gnu.org/7444. It looks like the help was taken out in version 5.90 and put back in in version 8.15 (it is not in my 8.13) and the comment above suggests that it is now back to stay and not likely to be disappearing any time soon. In version 8.31 (as provided by Solus July 2020) the man page descriptions for the two options are: -I[FMT], --iso-8601[=FMT] output date/time in ISO 8601 format. FMT='date' for date only (the default), 'hours', 'minutes', 'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14T02:34:56-06:00 --rfc-3339=FMT output date/time in RFC 3339 format. FMT='date', 'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14 02:34:56-06:00
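For reference, typical invocations look like this (the output is illustrative):
$ date --iso-8601=seconds
2014-10-25T14:30:00+02:00
$ date --rfc-3339=seconds
2014-10-25 14:30:00+02:00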
{ "source": [ "https://unix.stackexchange.com/questions/164826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89609/" ] }
164,842
I am new to software development, and over the course of compiling about 20 programs and dependencies from source I have seen a rough pattern, but I don't quite get it. I'm hoping you could shed some light on it. I am SSHing on a SLC6 machine and without root permissions, I have to install all the software dependencies and the most difficult part - to LINK them to the right place. For instance: I need to install log4cpp. I download a tarball and unpack it ./autogen.sh (if there isn't this one, just continue to next) ./configure make So It is installed in the folder itself along with the source code, just lying there dormant, until I can call it in the right way. Then there is an other program which I need to install, and it requires me to specify the lib and include dirs for some dependencies --with-log4cpp-inc= --with-log4cpp-lib= For SOME source compilations, the folder has a lib, bin and inc or include dir - Perfect! For some, the folder has just lib and inc dir. For some, the folder has just inc dir. I have no problem when they all have a nice folder, easy to find. But I often run into problems, like with the log4cpp. locate log4cpp.so returns null (The lib dirs have .so files in it? or do they?) So I have a problem, in this specific instance, that the library dir is missing and I cannot find it. But I want to know how to solve the problem every time, and also have some background information. However my googling skills seem to return nothing when searching for how library, include and bin environment variables work. I have also tried looking up the documentation for the program, but it seems that the questions I have:"Where is the lib dir, where is the include dir, where is the bin dir?" are so trivial, that they do not even need to communicate it. So: What is an include dir, what does it do, contain, how do I find it. What is a library dir, what does it do, contain, how do I find it - every time - useful commands perhaps. What is a binary dir, what does it do, contain, how do I find it.
{ "source": [ "https://unix.stackexchange.com/questions/164842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89583/" ] }
164,873
The command $ find ~ -name .DS_Store -ls -delete works on Mac OS X, but $ find ~ -name __pycache__ -type d -ls -delete does not - the directories are found but not deleted. Why? PS. I know I can do $ find ~ -name __pycache__ -type d -ls -exec rm -rv {} + the question is why find -delete does not work.
find 's -delete flag behaves like rmdir when it removes directories: if a directory isn't empty when it is reached, it can't be deleted, so you need to empty it first. Since you are specifying -type d , find won't do that for you. You can solve this by doing two passes: first delete everything within dirs named __pycache__ , then delete the (now empty) dirs named __pycache__ : find ~ -path '*/__pycache__/*' -delete find ~ -type d -name '__pycache__' -empty -delete Somewhat less tightly controlled, but in a single line: find ~ -path '*/__pycache__*' -delete This will delete anything within your home that has __pycache__ as part of its path.
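If you want to double-check before removing anything, it may be worth previewing each pass by swapping -delete for -print first (same patterns, nothing deleted): find ~ -path '*/__pycache__/*' -print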
{ "source": [ "https://unix.stackexchange.com/questions/164873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31443/" ] }
164,944
One of the applications I use at work sometimes screws with my bash and so I don't see my own input anymore. I.e. $ echo foo foo $ becomes $ foo $ I incorrectly tried to run stty -echo which made matters worse: now it stopped accepting commands altogether, and my input ended up in some state where every line break just produces a > and nothing else. What should I have done?
The usual remedy for things like this is stty sane The stty -echo should not have made this worse, as that just turns off echoing of input, and you already had that. The fact that you say pressing Return just causes > to appear means that you've started something that continues over the next lines, e.g. echo ' will do that because the shell is waiting for the closing ' to terminate the string. Other things will cause this as well, such as if something ; where it's waiting for the then ... fi part. You could probably have hit ctrl - c at that stage to stop it waiting for the rest of the command, unless the terminal was so messed up that interrupts were also not being generated.
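A common blind-recovery trick when echo is off: press Ctrl-C to abandon whatever half-typed command the shell is waiting on, type stty sane without looking, and finish it with Ctrl-J (a literal newline) in case Return itself is no longer being translated; the command still takes effect even though you can't see what you type.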
{ "source": [ "https://unix.stackexchange.com/questions/164944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3125/" ] }
164,955
I have a file name called temp.csv in my script one of the intermediate step is echo "some info" > final.csv | cat temp.csv >> final.csv at times the file final.csv is created without data which is in temp.csv (when it is running through scheduler). And then, when I rerun the job, then the final.csv is created as I expected. Why it is happening like this (what exactly happening in the command echo "some info" > final.csv | cat temp.csv >> final.csv )? If I replace the command in the following way: echo "some info" > final.csv ; cat temp.csv >> final.csv will this modification be helpful?
{ "source": [ "https://unix.stackexchange.com/questions/164955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69047/" ] }
165,155
In vim I want to substitute a long path /a/b/f/g/d/g with another /s/g/w/d/g/r in a text file. Usually I use \/ but there are too many slashes here. Is there an easier way?
You can use another character instead of slash / as delimiter to substitution command. Example using # : :%s#/a/b/f/g/d/g#/s/g/w/d/g/r#
{ "source": [ "https://unix.stackexchange.com/questions/165155", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
165,214
Are there any relatively straightforward options with top to track a specific process? Ideally by identifying the process by a human-readable value, e.g. chrome or java . In other words, I want to view all the typical information top provides, but with the results filtered to the names provided, i.e. 'chrome' or 'java'
You can simply use grep : NAME grep, egrep, fgrep, rgrep - print lines matching a pattern SYNOPSIS grep [OPTIONS] PATTERN [FILE...] grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...] DESCRIPTION grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name) for lines containing a match to the given PATTERN. By default, grep prints the matching lines. Run the following command to get the output you want (example: chrome): top | grep chrome Here we are using grep in a pipeline | , so top and grep run in parallel: top 's output is given to grep as input, and grep chrome keeps only the lines matching chrome until top is stopped.
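Alternatively, top itself can be restricted to specific processes with -p ; a common idiom (assuming pgrep is available, and chrome is just an example name): top -p "$(pgrep -d, chrome)"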
{ "source": [ "https://unix.stackexchange.com/questions/165214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89437/" ] }
165,256
Is there a multi-threaded cp command on Linux? I know how to do this on Windows, but I don't know how this is approached in a Linux environment.
As Celada mentioned, there would be no point to using multiple threads of execution since a copy operation doesn't really use the cpu. As ryekayo mentioned, you can run multiple instances of cp so that you end up with multiple concurrent IO streams, but even this is typically counter-productive. If you are copying files from one location to another on the same disk, trying to do more than one at a time will result in the disk wasting time seeking back and forth between each file, which will slow things down. The only time it is really beneficial to copy multiple files at once is if you are, for instance, copying several files from several different slow, removable disks onto your fast hard disk, or vice versa.
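If you really are copying from several independent slow devices onto one fast disk, the simplest way to get concurrent streams is just to background the copies (paths here are placeholders): cp -a /mnt/usb1/data /fast/ & cp -a /mnt/usb2/data /fast/ & wait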
{ "source": [ "https://unix.stackexchange.com/questions/165256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
165,423
I use this rsync invocation to backup my home directory: rsync -aARrx --info= --force --delete --info=progress2 -F "$USER_HOME" "$BACKUP_MNTPOINT" rsync man page says that -a implies -g and -o (among other switches), which should preserve ownership. However I've noticed that if a directory does not exist under $BACKUP_MNTPOINT/$USER_HOME , it is created with root:root ownership instead of the correct one. (This only happens with directories right under $BACKUP_MNTPOINT/$USER_HOME ). Why is that? $BACKUP_MNTPOINT is a localy mounted drive. $BACKUP_MNTPOINT/$USER_HOME does have the right ownership and permissions. Neither $USER_HOME nor $BACKUP_MNTPOINT end with a slash. Both the source and the target filesystems are XFS and running mkdir $BACKUP_MNTPOINT/$USER_HOME creates a directory with the expected ownership.
I had a similar problem when using rsync to backup my system to my server. I used: rsync -aAXSHPr \ -e ssh \ --rsync-path="sudo /usr/bin/rsync/" \ --numeric-ids \ --delete \ --progress \ --exclude-from="/path/to/file/that/lists/excluded/folders.txt" \ --include-from="/path/to/file/that/lists/included/folders.txt" \ / USER@SERVER:/path/to/folder/where/backup/should/go/ The solution is that there is not really a problem. I suspect that you aborted the rsync process once you saw that it creates folders with wrong permissions set. The crux is that rsync only sets the permissions of a parent-folder once it is done syncing all subfolders and files of it.
{ "source": [ "https://unix.stackexchange.com/questions/165423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74534/" ] }
165,589
I'm trying to edit my nginx.conf file programmatically, which contains a line like this: access_log /var/log/nginx/access.log; which I want to look like this: access_log /dev/stdout; I believe the regex ^\s*access_log([^;]*); will capture the /var/log/nginx/access.log part in a capture group, but I'm not sure how to correctly replace the capture group with sed? I've tried echo " access_log /var/log/nginx/access.log;" | sed 's/^\s*access_log([^;]*);/\1\\\/dev\\\/stdout/' but I'm getting the error: sed: -e expression #1, char 45: invalid reference \1 on `s' command's RHS if I try sed -r then there is no error, but the output is not what I expect: /var/log/nginx/access.log\/dev\/stdout I'm trying to be smart with the capture group and whatnot and not search directly for "access_log /var/log/nginx/access.log;" in case the distribution changes the default log file location.
A couple of mistakes there. First, since sed uses basic regular expressions, you need \( and \) to make a capture group. The -r switch enables extended regular expressions which is why you don't get the error. See Why do I need to escape regex characters in sed to be interpreted as regex characters? . Second, you are putting the capture group in the wrong place. If I understand you correctly, you can do this: sed -e 's!^\(\s*access_log\)[^;]*;!\1 /dev/stdout;!' your_file Note the use of ! as regex delimiters to avoid having to escape the forward slashes in /dev/stdout .
{ "source": [ "https://unix.stackexchange.com/questions/165589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17420/" ] }
165,858
I have two servers. One of them has 15 million text files (about 40 GB). I am trying to transfer them to another server. I considered zipping them and transferring the archive, but I realized that this is not a good idea. So I used the following command: scp -r usrname@ip-address:/var/www/html/txt /var/www/html/txt But I noticed that this command just transfers about 50,000 files and then the connection is lost. Is there any better solution that allows me to transfer the entire collection of files? I mean to use something like rsync to transfer the files which didn't get transferred when the connection was lost. When another connection interrupt would occur, I would type the command again to transfer files, ignoring those which have already been transferred successfully. This is not possible with scp , because it always begins from the first file.
As you say, use rsync : rsync -azP /var/www/html/txt/ username@ip-address:/var/www/html/txt The options are: -a : enables archive mode, which preserves symbolic links and works recursively -z : compress the data transfer to minimise network usage -P : to display a progress bar and enables you to resume partial transfers As @aim says in his answer, make sure you have a trailing / on the source directory (on both is fine too). More info from the man page
{ "source": [ "https://unix.stackexchange.com/questions/165858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48597/" ] }
165,862
For a few days now (I suspect since I upgraded to GNOME 3.14) I can't print anymore on Arch Linux. If I open the printing panel of the GNOME control center I get a message like (translated from Italian): "The system service for printing seems not to be available" So from a terminal I tried: $ sudo systemctl start cups Failed to start cups.service: Unit cups.service failed to load: No such file or directory. I also tried reinstalling cups, but no luck. I also googled around and tried the various solutions proposed, but none of them works for me.
As of cups v. 2.0.0 the service name has been changed . You'll have to disable the old service: systemctl disable cups.service before enabling and starting the new one: systemctl enable org.cups.cupsd.service systemctl daemon-reload systemctl start org.cups.cupsd.service
{ "source": [ "https://unix.stackexchange.com/questions/165862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57287/" ] }
165,875
How do I resume a partially downloaded file using a Linux commandline tool? I downloaded a large file partially, i.e. 400 MB out of 900 MB due to power interruption, but when I start downloading again it resumes from scratch. How do I start from 400 MB itself?
Since you didn't specify, I'm assuming you are using wget to download the file. If so, try using it with the -c option (e.g. wget -c <URL> ). Please note that if the protocol used is ftp (the URL looks like ftp://... ), there is a chance the remote server runs an old ftp daemon which doesn't support resuming downloads (ftp daemons have supported it for more than a decade, so this is only a small chance). If that is the case, though, you may be out of luck. On the other hand, you should have no issues if the protocol used is http. (UPDATE: according to other experts (including Gilles in the comments below), resuming over http is also subject to server support, so this applies to both ftp and http.) Good luck.
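If you prefer curl , it can also continue a partial download: -C - tells it to work out the resume offset from the partial file already on disk (this still depends on the server supporting range requests): curl -C - -O <URL>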
{ "source": [ "https://unix.stackexchange.com/questions/165875", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89599/" ] }
166,220
What command could print pi for me? I want to specify how many digits it prints, I couldn't find anything online. I just want to be able to print pi.
You can use this command: echo "scale=5; 4*a(1)" | bc -l 3.14159 Where scale is the number of digits after decimal point. Reference: http://www.tux-planet.fr/calculer-le-chiffre-pi-en-ligne-de-commande-sous-linux/
{ "source": [ "https://unix.stackexchange.com/questions/166220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
166,359
I need to retrieve the expiry date of an SSL cert. The curl application does provide this information: $ curl -v https://google.com/ * Hostname was NOT found in DNS cache * Trying 212.179.180.121... * Connected to google.com (212.179.180.121) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server key exchange (12): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using ECDHE-ECDSA-AES128-GCM-SHA256 * Server certificate: * subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com * start date: 2014-10-22 13:04:07 GMT * expire date: 2015-01-20 00:00:00 GMT * subjectAltName: google.com matched * issuer: C=US; O=Google Inc; CN=Google Internet Authority G2 * SSL certificate verify ok. > GET / HTTP/1.1 > User-Agent: curl/7.35.0 > Host: google.com > Accept: */* > < HTTP/1.1 302 Found < Cache-Control: private < Content-Type: text/html; charset=UTF-8 < Location: https://www.google.co.il/?gfe_rd=cr&ei=HkxbVMzCM-WkiAbU6YCoCg < Content-Length: 262 < Date: Thu, 06 Nov 2014 10:23:26 GMT * Server GFE/2.0 is not blacklisted < Server: GFE/2.0 < <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> <TITLE>302 Moved</TITLE></HEAD><BODY> <H1>302 Moved</H1> The document has moved <A HREF="https://www.google.co.il/?gfe_rd=cr&amp;ei=HkxbVMzCM-WkiAbU6YCoCg">here</A>. </BODY></HTML> * Connection #0 to host google.com left intact However, when piping the output via grep the result is not less information on the screen, but rather much more : $ curl -v https://google.com/ | grep expire * Hostname was NOT found in DNS cache % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 212.179.180.84... * Connected to google.com (212.179.180.84) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): } [data not shown] * SSLv3, TLS handshake, Server hello (2): { [data not shown] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* SSLv3, TLS handshake, CERT (11): { [data not shown] * SSLv3, TLS handshake, Server key exchange (12): { [data not shown] * SSLv3, TLS handshake, Server finished (14): { [data not shown] * SSLv3, TLS handshake, Client key exchange (16): } [data not shown] * SSLv3, TLS change cipher, Client hello (1): } [data not shown] * SSLv3, TLS handshake, Finished (20): } [data not shown] * SSLv3, TLS change cipher, Client hello (1): { [data not shown] * SSLv3, TLS handshake, Finished (20): { [data not shown] * SSL connection using ECDHE-ECDSA-AES128-GCM-SHA256 * Server certificate: * subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com * start date: 2014-10-22 13:04:07 GMT * expire date: 2015-01-20 00:00:00 GMT * subjectAltName: google.com matched * issuer: C=US; O=Google Inc; CN=Google Internet Authority G2 * SSL certificate verify ok. 
> GET / HTTP/1.1 > User-Agent: curl/7.35.0 > Host: google.com > Accept: */* > < HTTP/1.1 302 Found < Cache-Control: private < Content-Type: text/html; charset=UTF-8 < Location: https://www.google.co.il/?gfe_rd=cr&ei=IkxbVMy4K4OBbKuDgKgF < Content-Length: 260 < Date: Thu, 06 Nov 2014 10:23:30 GMT * Server GFE/2.0 is not blacklisted < Server: GFE/2.0 < { [data not shown] 100 260 100 260 0 0 714 0 --:--:-- --:--:-- --:--:-- 714 * Connection #0 to host google.com left intact I suspect that curl detects that it is not printing to a terminal and is thus gives different output, not all of which is recognized by grep as being stdout and is thus passed through to the terminal. However, the closest thing to this that I could find in man curl (don't ever google for that!) is this: PROGRESS METER curl normally displays a progress meter during operations, indicating the amount of transferred data, transfer speeds and estimated time left, etc. curl displays this data to the terminal by default, so if you invoke curl to do an operation and it is about to write data to the terminal, it disables the progress meter as otherwise it would mess up the output mixing progress meter and response data. If you want a progress meter for HTTP POST or PUT requests, you need to redirect the response output to a file, using shell redirect (>), -o [file] or similar. It is not the same case for FTP upload as that operation does not spit out any response data to the terminal. If you prefer a progress "bar" instead of the regular meter, -# is your friend. How can I get just the expiry line out of the curl output? Furthermore, what should I be reading to understand the situation better? Seems like this would be a good use case for a "stdmeta" file descriptor .
curl writes the verbose and progress information to stderr, so redirect that and also suppress the progress meter: curl -v --silent https://google.com/ 2>&1 | grep expire The reason why curl writes that information to stderr is so you can do: curl <url> | someprogram without that information clobbering the input of someprogram
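If all you actually need is the certificate's expiry date, it may be cleaner to ask openssl directly instead of curl (a sketch; add -servername google.com to the s_client options if the host needs SNI): openssl s_client -connect google.com:443 </dev/null 2>/dev/null | openssl x509 -noout -enddate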
{ "source": [ "https://unix.stackexchange.com/questions/166359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
166,473
I am running Debian 7 Wheezy and I need to start some screens on startup as soon as there is a fully functional internet connection. However, not, if the internet connection broke and was connected again. So only on the first functional internet connection after boot. Could you please post a dummy script for this and tell me where to put it and make it be executed under the given conditions? The script only needs to start the screen and then terminate but the screen should continue. EDIT I have already heard of the /etc/network/if-up.d/ folder. But how can I make sure that the script is not executed again if the internet connection is lost and then re-established?
Put your script in /etc/network/if-up.d and make it executable. It will be automatically run each time a network interface comes up. To make it do work only the first time it is run on every boot, have it check for existence of a flag file which you create after the first time. Example: #!/bin/sh FLAGFILE=/var/run/work-was-already-done case "$IFACE" in lo) # The loopback interface does not count. # only run when some other interface comes up exit 0 ;; *) ;; esac if [ -e $FLAGFILE ]; then exit 0 else touch $FLAGFILE fi : here, do the real work.
{ "source": [ "https://unix.stackexchange.com/questions/166473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81949/" ] }
166,541
I am running the following command on my ubuntu server root@slot13:~# lxc-stop --name pavan --logfile=test1.txt --logpriority=trace It seems to hang indefinitely. Whenever this happened on AIX, I simply used to get the PID of the offending process and say $ procstack <pid_of_stuck_process> and it used to show the whole callstack of the process. Is there any equivalent of procstack in linux/ubuntu?
My first step would be to run strace on the process, best strace -s 99 -ffp 12345 if your process ID is 12345. This will show you all syscalls the program is doing. How to strace a process tells you more. If you insist on getting a stacktrace, google tells me the equivalent is pstack. But as I do not have it installed I use gdb: tweedleburg:~ # sleep 3600 & [2] 2621 tweedleburg:~ # gdb (gdb) attach 2621 (gdb) bt #0 0x00007feda374e6b0 in __nanosleep_nocancel () from /lib64/libc.so.6 #1 0x0000000000403ee7 in ?? () #2 0x0000000000403d70 in ?? () #3 0x000000000040185d in ?? () #4 0x00007feda36b8b05 in __libc_start_main () from /lib64/libc.so.6 #5 0x0000000000401969 in ?? () (gdb)
{ "source": [ "https://unix.stackexchange.com/questions/166541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17829/" ] }
166,558
When I check free in one of Prod server it showing 70% of memory is being used: total used free shared buffers cached Mem: 164923172 141171860 23751312 0 4555616 20648048 -/+ buffers/cache: 115968196 48954976 Swap: 8388600 0 8388600 But I didn’t find what process is using the memory, I tried the top command and it is showing process using memory only 1.1 and 5.4 % How can I find which process is using the memory? Below are the top command results: 15085 couchbas 25 0 2784m 2.4g 40m S 183.7 1.5 299597:00 beam.smp 28248 tibco 18 0 124m 100m 3440 S 20.9 0.1 2721:45 tibemsd 15334 couchbas 15 0 9114m 8.6g 3288 S 9.0 5.4 12996:28 memcached 15335 couchbas 18 0 6024 600 468 S 2.0 0.0 1704:54 sigar_port 15319 couchbas 15 0 775m 2516 944 S 0.7 0.0 269:13.41 i386-linux-godu 12167 tibco 16 0 11284 1464 784 R 0.3 0.0 0:00.04 top 12701 root 15 0 451m 427m 2140 S 0.3 0.3 18:25.02 controller 13163 root 11 -5 0 0 0 S 0.3 0.0 289:58.58 vxglm_thread
This will show you the top 10 processes using the most memory: ps aux --sort=-%mem | head Using top : when you open top , pressing M (uppercase) will sort processes by memory usage. But this alone will not solve your problem: in Linux everything is either a file or a process, so the files you have opened are eating memory too, and a per-process view will not show that. lsof will give you all opened files with the size of the file or the file offset in bytes.
{ "source": [ "https://unix.stackexchange.com/questions/166558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90639/" ] }
166,686
I understand that "Everything is a file" is one of the major concepts of Unix, but sockets use different APIs that are provided by the kernel (like socket, sendto, recv, etc.), not like normal file system interfaces. How does this "Everything is a file" apply here?
sockets use different APIs That's not entirely true. There are some additional functions for use with sockets, but you can use, e.g., normal read() and write() on a socket fd. how does this "Everything is a file" apply here? In the sense that a file descriptor is involved. If your definition of "file" is a discrete sequence of bytes stored in a filesystem, then not everything is a file. However, if your definition of file is more handle like -- a conduit for information, i.e., an I/O connection -- then "everything is a file" starts to make more sense. These things inevitably do involve sequences of bytes, but where they come from or go to may differ contextually. It's not really intended literally, however. A daemon is not a file, a daemon is a process; but if you are doing IPC your method of relating to another process might well be mitigated by file style entities.
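A quick way to see this "handle" view in practice: bash can expose a TCP connection as an ordinary file descriptor via its /dev/tcp pseudo-path (a bash feature, not a real file; the host and port here are arbitrary), after which plain redirection reads and writes the socket: exec 3<>/dev/tcp/example.com/80; printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3; cat <&3; exec 3>&-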
{ "source": [ "https://unix.stackexchange.com/questions/166686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85790/" ] }
166,817
I have a process that needs root privileges when run by a normal user. Apparently I can use the "setuid bit" to accomplish this. What is the proper way of doing this on a POSIX system? Also, how can I do this with a script that uses an interpreter (bash, perl, python, php, etc)?
The setuid bit can be set on an executable file so that when run, the program will have the privileges of the owner of the file instead of the real user, if they are different. This is the difference between effective uid (user id) and real uid. Some common utilities, such as passwd , are owned root and configured this way out of necessity ( passwd needs to access /etc/shadow which can only be read by root). The best strategy when doing this is to do whatever you need to do as superuser right away then lower privileges so that bugs or misuse are less likely to happen while running root. To do this, you set the process's effective uid to its real uid. In POSIX C: #define _POSIX_C_SOURCE 200112L // Needed with glibc (e.g., linux). #include <stdio.h> #include <sys/types.h> #include <unistd.h> void report (uid_t real) { printf ( "Real UID: %d Effective UID: %d\n", real, geteuid() ); } int main (void) { uid_t real = getuid(); report(real); seteuid(real); report(real); return 0; } The relevant functions, which should have an equivalent in most higher level languages if they are used commonly on POSIX systems: getuid() : Get the real uid. geteuid() : Get the effective uid. seteuid() : Set the effective uid. You can't do anything with the last one inappropriate to the real uid except in so far as the setuid bit was set on the executable . So to try this, compile gcc test.c -o testuid . You then need to, with privileges: chown root testuid chmod u+s testuid The last one sets the setuid bit. If you now run ./testuid as a normal user you'll see the process by default runs with effective uid 0, root. What about scripts? This varies from platform to platform , but on Linux, things that require an interpreter, including bytecode, can't make use of the setuid bit unless it is set on the interpreter (which would be very very stupid). Here's a simple perl script that mimics the C code above: #!/usr/bin/perl use strict; use warnings FATAL => qw(all); print "Real UID: $< Effective UID: $>\n"; $> = $<; # Not an ASCII art greedy face, but genuine perl... print "Real UID: $< Effective UID: $>\n"; True to it's *nixy roots, perl has build in special variables for effective uid ( $> ) and real uid ( $< ). But if you try the same chown and chmod used with the compiled (from C, previous example) executable, it won't make any difference. The script can't get privileges. The answer to this is to use a setuid binary to execute the script: #include <stdio.h> #include <unistd.h> int main (int argc, char *argv[]) { if (argc < 2) { puts("Path to perl script required."); return 1; } const char *perl = "perl"; argv[0] = (char*)perl; return execv("/usr/bin/perl", argv); } Compile this gcc --std=c99 whatever.c -o perlsuid , then chown root perlsuid && chmod u+s perlsuid . You can now execute any perl script with with an effective uid of 0, regardless of who owns it. A similar strategy will work with php, python, etc. But... # Think hard, very important: >_< # Genuine ASCII art "Oh tish!" face PLEASE PLEASE DO NOT leave this kind of thing lying around . Most likely, you actually want to compile in the name of the script as an absolute path , i.e., replace all the code in main() with: const char *args[] = { "perl", "/opt/suid_scripts/whatever.pl" } return execv("/usr/bin/perl", (char * const*)args); Them make sure /opt/suid_scripts and everything in it are read-only for non-root users. Otherwise, someone could swap in anything for whatever.pl . 
In addition, beware that many scripting languages allow environment variables to change the way they execute a script . For example, an environment variable might cause a library supplied by the caller to be loaded, allowing the caller to execute arbitrary code as root. Thus, unless you know that both the interpreter and the script itself are robust against all possible environment variables, DON'T DO THIS . So what should I do instead? A safer way to allow a non-root user to run a script as root is to add a sudo rule and have the user run sudo /path/to/script . Sudo strips most environment variables, and also allows the administrator to finely select who can run the command and with what arguments. See How to run a specific program as root without a password prompt? for an example.
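For example, a rule along these lines (hypothetical user and script path, added with visudo ) lets one user run just that one script as root: alice ALL=(root) NOPASSWD: /usr/local/sbin/maintenance.sh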
{ "source": [ "https://unix.stackexchange.com/questions/166817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
167,004
All shell builtins share the same manual page: BUILTIN(1) BSD General Commands Manual BUILTIN(1) NAME builtin, ! etc. Then there is a little text describing what shell builtins are, and then a list that looks something like this: Command External csh(1) sh(1) ! No No Yes % No Yes No But if we do man grep we get sections such as Bugs History See also Standards Description etc. Don't shell builtins have their own history, description and arguments like -A or -r ? Why isn't that provided in the manual pages and how would I learn to use them correctly and efficiently?
Because builtins are part of the shell. Any bugs or history they have are bugs and history of the shell itself. They are not independent commands and don't exist outside the shell they are built into. The equivalent, for bash at least, is the help command. For example: $ help while while: while COMMANDS; do COMMANDS; done Execute commands as long as a test succeeds. Expand and execute COMMANDS as long as the final command in the `while' COMMANDS has an exit status of zero. Exit Status: Returns the status of the last command executed. All bash builtins have help pages. Even help itself: $ help help help: help [-dms] [pattern ...] Display information about builtin commands. Displays brief summaries of builtin commands. If PATTERN is specified, gives detailed help on all commands matching PATTERN, otherwise the list of help topics is printed. Options: -d output short description for each topic -m display usage in pseudo-manpage format -s output only a short usage synopsis for each topic matching PATTERN Arguments: PATTERN Pattern specifiying a help topic Exit Status: Returns success unless PATTERN is not found or an invalid option is given. Inspired by @mikeserv's sed script, here's a little function that will print the relevant section of a man page using Perl. Add this line to your shell's initialization file ( ~/.bashrc for bash): manperl(){ man "$1" | perl -00ne "print if /^\s*$2\b/"; } Then, you run it by giving it a man page and the name of a section: $ manperl bash while while list-1; do list-2; done until list-1; do list-2; done The while command continuously executes the list list-2 as long as the last command in the list list-1 returns an exit status of zero. The until command is identical to the while command, except that the test is negated; list-2 is exe‐ cuted as long as the last command in list-1 returns a non-zero exit status. The exit status of the while and until commands is the exit status of the last command executed in list-2, or zero if none was executed. $ manperl grep SYNOPSIS SYNOPSIS grep [OPTIONS] PATTERN [FILE...] grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...] $ manperl rsync "-r" -r, --recursive This tells rsync to copy directories recursively. See also --dirs (-d).
{ "source": [ "https://unix.stackexchange.com/questions/167004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
167,038
Is there any way to know the size of L1, L2, L3 caches and RAM in Linux?
If you have lshw installed: $ sudo lshw -C memory Example $ sudo lshw -C memory ... *-cache:0 description: L1 cache physical id: a slot: Internal L1 Cache size: 32KiB capacity: 32KiB capabilities: asynchronous internal write-through data *-cache:1 description: L2 cache physical id: b slot: Internal L2 Cache size: 256KiB capacity: 256KiB capabilities: burst internal write-through unified *-cache:2 description: L3 cache physical id: c slot: Internal L3 Cache size: 3MiB capacity: 8MiB capabilities: burst internal write-back *-memory description: System Memory physical id: 2a slot: System board or motherboard size: 8GiB *-bank:0 description: SODIMM DDR3 Synchronous 1334 MHz (0.7 ns) product: M471B5273CH0-CH9 vendor: Samsung physical id: 0 serial: 67010644 slot: DIMM 1 size: 4GiB width: 64 bits clock: 1334MHz (0.7ns) *-bank:1 description: SODIMM DDR3 Synchronous 1334 MHz (0.7 ns) product: 16JTF51264HZ-1G4H1 vendor: Micron Technology physical id: 1 serial: 3749C127 slot: DIMM 2 size: 4GiB width: 64 bits clock: 1334MHz (0.7ns)
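If lshw isn't available, the same numbers can usually be read with stock tools: lscpu reports the L1/L2/L3 cache sizes, free -h (or grep MemTotal /proc/meminfo ) reports the installed RAM, and per-cache details live under /sys/devices/system/cpu/cpu0/cache/ .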
{ "source": [ "https://unix.stackexchange.com/questions/167038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90929/" ] }
167,042
I have the following command to delete files in a folder that are 15 days or older: find /var/www/App/app/var/sessions* -mtime +15 -exec rm {} \; What is the best way to speed this up and run it in the background? I heard rm is a pretty slow operation.
{ "source": [ "https://unix.stackexchange.com/questions/167042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73847/" ] }
167,077
Shouldn't it be possible? Let's assume I don't need a response, I just want to send a request. Shouldn't we be able to alter tcp/ip headers, because our computer sends it? I am probably missing something, just really curious, learning about it in the uni.
You can use the -H/--header argument. You can spoof the client address that the web server sees in the X-Forwarded-For header (note this only changes an HTTP header the application reads; the packet's real source IP is unchanged): curl --header "X-Forwarded-For: 192.168.0.2" http://example.com Example: client $ curl http://webhost.co.uk web host $ tailf access.log | grep 192.168.0.54 192.168.0.54 - - [10/Nov/2014:15:56:09 +0000] "GET / HTTP/1.1" 200 14328 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" client with ip address changed $ curl --header "X-Forwarded-For: 192.168.0.99" http://webhost.co.uk web host $ tailf access.log | grep 192.168.0.99 192.168.0.99 - - [10/Nov/2014:15:56:43 +0000] "GET / HTTP/1.1" 200 14328 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" man curl -H/--header <header> (HTTP) Extra header to use when getting a web page. You may specify any number of extra headers. Note that if you should add a custom header that has the same name as one of the internal ones curl would use, your externally set header will be used instead of the internal one. This allows you to make even trickier stuff than curl would normally do. You should not replace internally set headers without knowing perfectly well what you’re doing. Remove an internal header by giving a replacement without content on the right side of the colon, as in: -H "Host:". References: Modify_method_and_headers
{ "source": [ "https://unix.stackexchange.com/questions/167077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90963/" ] }
167,201
Might be a very silly question to many folks out there, but I'm dense! Ex: Applying predefined layouts: C-a M-1 switch to even-horizontal layout C-a M-2 switch to even-vertical layout C-a M-3 switch to main-horizontal layout C-a M-4 switch to main-vertical layout C-a M-5 switch to tiled layout C-a space switch to the next layout What is M? If it's just shift+m then please take away my neckbeard right now. I thought it might be alt + key but that doesn't seem to be it.
It's the meta key . So M-1 is meta-1. (Just like how C-1 is control-1). Now, when you look at your keyboard, you probably notice the distinct lack of any key actually labeled meta, at least if you have a normal PC keyboard. Depending on how your keyboard layout is set up, meta is typically either the alt key or the logo (Windows) key. In short, C-a M-1 is telling you to press and hold Control and press A ; then release both; then press and hold Alt (or Windows ) and press 1 . The release them, of course.
{ "source": [ "https://unix.stackexchange.com/questions/167201", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37656/" ] }
167,216
I have a patch with absolute paths that I wish to use. i.e. the first few lines are as follows. --- /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml 2014-10-10 18:47:23.000000000 +1100 +++ /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml.mod 2014-11-11 09:44:17.786200477 +1100 However, it fails unless I am in the root directory. ~$ cd ~$ sudo patch -i /tmp/fix_kde_icons.patch -p0 Ignoring potentially dangerous file name /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml Ignoring potentially dangerous file name /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml.mod can't find file to patch at input line 3 Perhaps you used the wrong -p or --strip option? ... ~$ cd /tmp /tmp$ sudo patch -i /tmp/fix_kde_icons.patch -p0 ... #same error as above /tmp$ cd /usr /usr$ sudo patch -i /tmp/fix_kde_icons.patch -p0 ... #same error as above /usr$ cd / /$ sudo patch -i /tmp/fix_kde_icons.patch -p0 patching file /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml Is there a way to make patch use the absolute path with any working directory?
Looking at the source code of GNU patch, this behavior is built in since version 2.7. As of GNU patch 2.7.1, only relative paths not containing .. are accepted, unless the current directory is the root directory. To apply a patch containing absolute paths, you can use (cd / && sudo patch -p0) <foo.patch In recent versions of GNU patch, you can simply sudo patch -d/ -p0 <foo.patch
{ "source": [ "https://unix.stackexchange.com/questions/167216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18887/" ] }
167,266
I thought I was familiar with sftp commands in unix, and to my knowledge the command used to download a compressed file from the server is sftp get filename.tar.gz But when I tried this command the file gets emptied, I mean the file size goes to 0, and I don't know why it happens. Is there any other command specifically for .tar.gz type of files? What is the procedure to get these compressed files? I'm using the Mac terminal.
get is a legal sftp command, but can't be used that way. The correct syntax to download filename.tar.gz to your Mac is: sftp user@host:/path/to/filename.tar.gz /some/local/folder (Replace user with the user you use to connect to the remote server, replace host with the remote server name.) There's nothing special about tar.gz files in the above command; it works the same for any extension. To use get you have to enter interactive mode first: Make a connection to the remote server: sftp user@host Wait until > , the sftp prompt, appears on a new line - you can now type your commands. Change the remote path to /path/to : cd /path/to/ Change the local path to /some/local/folder : lcd /some/local/folder Use get to download the file: get filename.tar.gz
{ "source": [ "https://unix.stackexchange.com/questions/167266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67557/" ] }
167,519
This code generates a error line 3: [: missing `]' . So why am I getting such error? #!/bin/bash read x if [ $x == "Y" ] || [ $x == "y"] then echo "YES" else echo "NO" fi Thanks in advance.
you need to add a space between " and ] $ ./test.sh Y YES $ cat test.sh #!/bin/bash read x if [ $x == "Y" ] || [ $x == "y" ] then echo "YES" else echo "NO" fi Cheers.
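Two small robustness tweaks worth folding in (general shell advice, not specific to this script): quote the variable and use the portable = comparison, e.g. if [ "$x" = "Y" ] || [ "$x" = "y" ] so an empty or multi-word answer doesn't make [ fail with a syntax error; in bash 4+ you could also collapse the whole test into one case-insensitive check such as if [[ ${x^^} == Y ]]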
{ "source": [ "https://unix.stackexchange.com/questions/167519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91243/" ] }
167,527
Upgraded tomcat 6 to 8 on Centos 7. I get this error in logs: /usr/local/ctera/apache-tomcat-8.0.14/bin/catalina.sh: line 421: -Djava.endorsed.dirs=/usr/local/ctera/apache-tomcat-8.0.14/endorsed: No such file or directory This is the only entry in logs... This directory didn't exist, so I created it, with permission 777. Still get same error. Tomcat 6 did not produce such an error. I read a little about the endorsed directory - http://tomcat.apache.org/tomcat-8.0-doc/class-loader-howto.html and it shouldn't be a critical issue, but it is. What should I do..?
{ "source": [ "https://unix.stackexchange.com/questions/167527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67047/" ] }
167,582
I've noticed this on occasion with a variety of applications. I've often thought it was because the output was cancelled early (ctrl+c, for example) or something similar, and zsh is filling in a new line character. But now curiosity has gotten the best of me, since it doesn't seem to do this in bash. zsh bash The Sequence program is something I pulled from a book while reading on Java certifications and just wanted to see if it would compile and run. I did notice that it does not use the println() method from the System.out package/class. Instead it uses plain old print() . Is the lack of a new line character the reason I get this symbol?
Yes, this happens because it is a "partial line". And by default zsh goes to the next line to avoid covering it with the prompt . When a partial line is preserved, by default you will see an inverse+bold character at the end of the partial line: a "%" for a normal user or a "#" for root. If set, the shell parameter PROMPT_EOL_MARK can be used to customize how the end of partial lines are shown.
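If you'd rather not see the mark at all, you can simply blank it in your ~/.zshrc while keeping the newline-preserving behaviour: PROMPT_EOL_MARK=''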
{ "source": [ "https://unix.stackexchange.com/questions/167582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47355/" ] }
167,610
I'm creating a shell script that would take a filename/path to a file and determine if the file is a symbolic link or a hard link. The only thing is, I don't know how to see if they are a hard link. I created 2 files, one a hard link and one a symbolic link, to use as a test file. But how would I determine if a file is a hard link or symbolic within a shell script? Also, how would I find the destination partition of a symbolic link? So let's say I have a file that links to a different partition, how would I find the path to that original file?
Jim's answer explains how to test for a symlink: by using test 's -L test. But testing for a "hard link" is, well, strictly speaking not what you want. Hard links work because of how Unix handles files: each file is represented by a single inode. Then a single inode has zero or more names or directory entries or, technically, hard links (what you're calling a "file"). Thankfully, the stat command, where available, can tell you how many names an inode has. So you're looking for something like this (here assuming the GNU or busybox implementation of stat ): if [ "$(stat -c %h -- "$file")" -gt 1 ]; then echo "File has more than one name." fi The -c '%h' bit tells stat to just output the number of hardlinks to the inode, i.e., the number of names the file has. -gt 1 then checks if that is more than 1. Note that symlinks, just like any other files, can also be linked to several directories so you can have several hardlinks to one symlink.
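A related trick once you have one name of a file and want to find its other hard links (assuming GNU find ): find /home -xdev -samefile /home/user/file2 Start the search at the mount point of the filesystem that holds the file, since hard links can only exist within a single filesystem; -xdev keeps find from wandering onto other mounts.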
{ "source": [ "https://unix.stackexchange.com/questions/167610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91166/" ] }
167,631
So let's say I have a symbolic link of a file in my home directory to another file on a different partition. How would I find the target location of the linked file? By this, I mean, let's say I have file2 in /home/user/ ; but it's a symbolic link to another file1 . How would I find file1 without manually having to go through each partition/directory to find the file?
Use readlink : readlink -f /path/file ( last target of your symlink if there's more than one level ) If you just want the next level of symbolic link, use: readlink /path/file You can also use realpath on modern systems with GNU coreutils (e.g. Linux ), FreeBSD , NetBSD , OpenBSD or DragonFly : realpath /path/file which is similar to readlink -f .
{ "source": [ "https://unix.stackexchange.com/questions/167631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91166/" ] }
167,727
I ran this command yesterday, I thought on a test machine, but it was a File-Server connected through SSH. sudo rm -rf /tmp/* !(lost+found) My terminal emulator is Konsole. My system is Debian 7. Question: Did this command delete other files than the files in /tmp?
The correct syntax in bash is the following: rm /tmp/!(lost+found) As @goldilocks wrote in the comments, the shell expanded both patterns in the original command: it deleted all the files in the /tmp folder and then went on to delete all the files in the current working directory, in your case the home folder. You can try to check whether you can recover some of your data. There is a question about Linux data recovery here .
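Note that !( ) is an extended glob, so if bash reports a syntax error on that command you may need to enable the feature first (many interactive setups already have it on): shopt -s extglob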
{ "source": [ "https://unix.stackexchange.com/questions/167727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
167,755
I have a string like rev00000010 and I only want the last number, 10 in this case. I have tried this: TEST='rev00000010' echo "$TEST" | sed '/^[[:alpha:]][0]*/d' echo "$TEST" | sed '/^rev[0]*/d' both return nothing, although the regex seems to be correct (tried with regexr )
The commands you passed to sed mean: if a line matches the regex, delete it . That's not what you want. echo "$TEST" | sed 's/rev0*//' This means: on each line, remove rev followed by any number of zeroes. Also, you don't need sed for such a simple thing. Just use bash and its parameter expansion : shopt -s extglob # Turn on extended globbing. echo "${TEST##rev*(0)}" # Remove everything from the beginning up to `rev` # followed by the maximal number of zeroes.
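Another option that avoids extended globbing entirely is to strip the prefix and let arithmetic expansion drop the leading zeroes; the 10# prefix forces base 10 so that 00000010 isn't treated as octal: echo "$(( 10#${TEST#rev} ))"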
{ "source": [ "https://unix.stackexchange.com/questions/167755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20479/" ] }
167,814
I have a text file I'm outputting to a variable in my shell script. I only need the first 50 characters however. I've tried using cat ${filename} cut -c1-50 but I'm getting far more than the first 50 characters? That may be due to cut looking for lines (not 100% sure), while this text file could be one long string-- it really depends. Is there a utility out there I can pipe into to get the first X characters from a cat command?
head -c 50 file This returns the first 50 bytes. Mind that the command is not always implemented the same on all OS. On Linux and macOS it behaves this way. On Solaris (11) you need to use the gnu version in /usr/gnu/bin/
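Since the goal is a shell variable, in bash you can also do the truncation with plain parameter expansion and no external command (note this counts characters rather than bytes, and command substitution strips trailing newlines): content=$(<"$filename"); echo "${content:0:50}"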
{ "source": [ "https://unix.stackexchange.com/questions/167814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88163/" ] }
167,823
I'm trying to understand the difference between these two commands: sudo find / -name .DS_Store -delete and sudo find / -name ".DS_Store" -exec rm {} \; I noticed that the -exec ... {} method is preferred. Why? Which one is safer/faster/better? I've used both on my Macbook and everything appears to work well.
-delete will perform better because it doesn't have to spawn an external process for each and every matched file, but make sure to use it after -name , otherwise it will delete the entire specified file tree. For example, find . -name .DS_Store -type f -delete It is possible that you may see -exec rm {} + often recommended because -delete does not exist in all versions of find . I can't check right now but I'm pretty sure I've used a find without it. Both methods should be "safe". A common method for avoiding the overhead of spawning an external process for each matched file is: find / -name .DS_Store -print0 | xargs -0 rm (but note that there is a portability problem here too: not all versions of find have -print0 !)
{ "source": [ "https://unix.stackexchange.com/questions/167823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44651/" ] }
167,826
How can I update my Buildroot without losing my configuration, packages, etc.? And how can I update the Linux kernel that is configured? Is it just a matter of changing the git repository URL in menuconfig? If someone helps me I will be grateful.
{ "source": [ "https://unix.stackexchange.com/questions/167826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91426/" ] }
167,935
From which log can I get details about sudo commands executed by any user? It should contain the working directory, command, and user. It would be helpful if you could provide a shell script to do so.
Depending on your distro; simply: $ sudo grep sudo /var/log/secure or $ sudo grep sudo /var/log/auth.log which gives: Nov 14 09:07:31 vm1 sudo: pam_unix(sudo:auth): authentication failure; logname=gareth uid=1000 euid=0 tty=/dev/pts/19 ruser=gareth rhost= user=gareth Nov 14 09:07:37 vm1 sudo: gareth : TTY=pts/19 ; PWD=/home/gareth ; USER=root ; COMMAND=/bin/yum update Nov 14 09:07:53 vm1 sudo: gareth : TTY=pts/19 ; PWD=/home/gareth ; USER=root ; COMMAND=/bin/grep sudo /var/log/secure The user running the command is after the sudo: - gareth in this case. PWD is the directory. USER is the user that gareth is running as - root in this example. COMMAND is the command ran. Therefore, in the example above, gareth used sudo to run yum update and then ran this example. Before that he typed in the incorrect password. Note also that there may be rolled log files, like /var/log/secure* On newer systems: $ sudo journalctl _COMM=sudo gives a very similar output.
{ "source": [ "https://unix.stackexchange.com/questions/167935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91015/" ] }
168,034
I'm attempting to modify an npm package with multiple dependencies. As such, npm install -g . takes a long time to execute. Do I have other options besides removing the dependencies from package.json?
--no-optional option is now implemented according to this documentation https://docs.npmjs.com/cli/install : The --no-optional argument will prevent optional dependencies from being installed.
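Applied to the command from the question, that would be, for example: npm install -g . --no-optional Keep in mind this only skips packages listed under optionalDependencies ; regular dependencies in package.json are still installed.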
{ "source": [ "https://unix.stackexchange.com/questions/168034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15581/" ] }