Dataset schema: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
102,829
I have several folders defined in Mutt: mailboxes "~/Mail/inbox" mailboxes "~/Mail/sent" mailboxes "~/Mail/archive" I can save (move) a message to archive, by pressing s and then ? to see a list of folders, and then I can choose archive from the list. Since I always want only to save to archive I would like to have a macro, so that pressing s automatically saves the selected message to archive , without asking me. Can somebody please help? EDIT: I now have the following macro, to save messages to my "archive" folder: macro index,pager S "<tag-prefix><save-message>=archive<enter>\ :set delete=yes<enter><sync-mailbox>:set delete=no<enter>" the problem is, the messages stay in the index marked as deleted. They are not "synced" immediately. Second, the <enter> at the end acts as <display-message> , so that when I press S , I end up in the pager of the current message. In a similar way, I am trying to implement the trash folder in mutt. The following is taken from the Mutt MacroSamples set maildir_trash=yes set wait_key=no folder-hook . 'bind index q quit' folder-hook inbox 'macro index q ":unset maildir_trash;push \"T~D\\n<tag-prefix-cond>m=trash\\n<end-cond><quit>\"\n"' but this does not work either. Instead mutt asks me: Append messages to etmaildir_trash;push"T~D\n<tag-prefix-cond>m=trash\n<end-cond><quit>"/maildir_trash;push"T~D\n<tag-prefix-cond>m=trash\n<end-co ([yes]/no): whatever I press, nothing happens (the folders trash/{cur,new,tmp} do exist)
tagged mails: macro index S ":set confirmappend=no delete=yes\n<tag-prefix-cond><save-message>=archive\n<sync-mailbox>:set confirmappend=yes delete=ask-yes\n" current only: macro index A ":set confirmappend=no delete=yes\n<save-message>=archive\n<sync-mailbox>:set confirmappend=yes delete=ask-yes\n" edit macro index S ":set confirmappend=no delete=yes\n<tag-prefix><save-message>=archive\n<sync-mailbox>:set confirmappend=yes delete=ask-yes\n" My fault: using tag-prefix instead of tag-prefix-cond will apply the macro to the tagged messages if any are present, and otherwise to the currently selected one.
{ "source": [ "https://unix.stackexchange.com/questions/102829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36112/" ] }
102,838
If a script should be executed in the current shell, it can be achieved by adding a dot before the command: . ./somescript.sh Is there a way to do this without typing the dot every time? For example a command to change to the parent shell from the script itself?
It may not be exactly what you want but you could do: alias somescript.sh='. ./somescript.sh'
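To see why the dot matters at all, here is a tiny sketch contrasting executing a script (child shell, its changes are lost) with sourcing it (current shell, its changes stick); the script name and variable are made up for the demo:

```shell
#!/bin/sh
cat > somescript.sh <<'EOF'
#!/bin/sh
MYVAR=set-by-script
EOF
chmod +x somescript.sh

MYVAR=original
./somescript.sh        # runs in a child shell; its variables vanish with it
echo "$MYVAR"          # -> original

. ./somescript.sh      # sourced: runs in the current shell
echo "$MYVAR"          # -> set-by-script
```

This is exactly what the alias in the answer automates: it makes typing the script's name expand to the sourced form.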
{ "source": [ "https://unix.stackexchange.com/questions/102838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53298/" ] }
102,905
If I want to tail a 25 GB textfile, does the tail command read the whole file? Since a file might be scattered on a disk I imagine it has to, but I do not understand such internals well.
No, tail doesn't read the whole file: it seeks to the end, then reads blocks backwards until the expected number of lines has been reached, then it displays the lines in the proper direction until the end of the file, and possibly stays monitoring the file if the -f option is used. Note however that tail has no choice but to read all the data if given non-seekable input, for example when reading from a pipe. Similarly, when asked to print lines starting from the beginning of the file, using the tail -n +linenumber syntax or the non-standard tail +linenumber option where supported, tail obviously reads the whole file (unless interrupted).
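The seekable vs. non-seekable distinction, and the two line-number syntaxes, can be sketched on a throwaway file (the file name is made up for the demo):

```shell
seq 1 100000 > nums.txt

tail -n 2 nums.txt          # seekable input: tail can jump to the end
# -> 99999
# -> 100000

cat nums.txt | tail -n 2    # pipe: tail has to read everything streamed to it

tail -n +99999 nums.txt     # '+N' form: print from line 99999 to the end,
                            # which requires reading from the start of the file
```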
{ "source": [ "https://unix.stackexchange.com/questions/102905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26674/" ] }
102,956
Can somebody show me how to make a program to do this action: after 5 minutes echo "80" > /sys/class/leds/blue/brightness I want this program to run in the background (like the rngd service). I can't do this because I don't know much about Linux.
( sleep 300 ; echo "80" > /sys/class/leds/blue/brightness ) & That way your script continues, or you get control back immediately, while a new background task starts with two commands: sleep, and echo. The common error is trying to give either sleep or echo or both the & , which will not work as intended. Launching a series of commands in () , though, spawns them in a separate shell process, which you can then send whole into the background with & . Case in point, here is where I found it actively useful. In an embedded device I develop there's the main application that works with a watchdog. If it fails in a way that triggers the watchdog reset soon after startup, repeatedly, it's hard to fix remotely, as the period between the OS start and the reset is quite short: not enough to ssh in and block the app startup. So I need a way to determine whether the system restarted too quickly, and introduce a delay if it did, to give myself time to fix it manually. [ -f /tmp/startdelay ] && sleep 30 touch /tmp/startdelay ( sleep 30 ; rm /tmp/startdelay ) & [ -f /tmp/noautostart ] && exit 0 start_app If I log in and perform touch /tmp/noautostart the main app won't start. If the watchdog kicks in, rm /tmp/startdelay won't be performed and the next time the system starts, it will give me an extra 30s to stop it. Otherwise the restart will be quick, without delay.
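A minimal, runnable sketch of the same pattern, writing to a plain file instead of the sysfs path and with a 1-second delay so it is quick to try:

```shell
#!/bin/sh
# The whole ( ... ) group becomes ONE background job, so the echo
# only runs after the sleep inside the group has finished.
( sleep 1 ; echo "80" > brightness.txt ) &

echo "the script carries on immediately"
wait                     # only for the demo: let the background job finish
cat brightness.txt       # -> 80
```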
{ "source": [ "https://unix.stackexchange.com/questions/102956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53337/" ] }
103,004
How do I match only the word between parentheses? Input : this is (test.com) Desired Output : test.com
Here are a few options, all of which print the desired output: Using grep with the -o flag (only print matching part of line) and Perl compatible regular expressions ( -P ) that can do lookarounds : printf "this is (test.com)\n" | grep -Po '(?<=\().*(?=\))' That regex might need some explaining: (?<=\() : This is a positive lookbehind , the general format is (?<=foo)bar and that will match all cases of bar found right after foo . In this case, we are looking for an opening parenthesis, so we use \( to escape it. (?=\)) : This is a positive lookahead and simply matches the closing parenthesis. The -o option to grep causes it to only print the matched part of any line, so we look for whatever is in parentheses and then delete them with sed : printf "this is (test.com)\n" | grep -o '(.*)' | sed 's/[()]//g' Parse the whole thing with Perl: printf "this is (test.com)\n" | perl -pe 's/.*\((.+?)\)/$1/' Parse the whole thing with sed : printf "this is (test.com)\n" | sed 's/.*(\(.*\))/\1/'
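If the text is already in a shell variable, a sketch using only POSIX parameter expansion (no external tools) also works for this simple single-pair case:

```shell
s='this is (test.com)'
t=${s#*\(}      # strip everything up to and including the first '('
t=${t%%\)*}     # strip from the first ')' onwards
echo "$t"       # -> test.com
```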
{ "source": [ "https://unix.stackexchange.com/questions/103004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28434/" ] }
103,027
Can anyone explain the difference between a binary file and a .exe file?
A binary file is pretty much anything that is not plain text , that is, it contains data encoded in some way other than a text encoding (ASCII, UTF-8, or any other text encoding, e.g. ISO-8859-2). A text file may be a plaintext document, like a story or a letter, it can be a config file, or a data file - anyway, if you use a plain text editor to open it, the contents are readable. A binary is any file that is not a text file (nor "special" like a fifo, directory, device etc.) That may be mp3 music. That may be a jpg image. That may be a compressed archive, or even a word processor document - while for practical purposes it's text, it is encoded (written on disk) as binary. You need a specific program to open it, to make sense of it - for a text editor the contents are a jumbled mess. Now, in Linux you'll often hear "binaries" when referring to "binary executable files" - programs. This is because while the sources of most programs (written in high-level languages) are plain text, compiled executables are binary. Since there are quite a few compiled formats (a.out, ELF, bytecode...) they are commonly called binaries instead of dwelling on what internal structure they have - from the user's point of view they are pretty much the same. Now, .exe is just another of these compiled formats - one common to MS Windows. It's just one kind of binary, compiled and linked against the Windows API.
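The text/binary distinction can be sketched with a crude heuristic similar in spirit to what grep and file apply: if the first block of a file contains a NUL byte, treat it as binary. This is an approximation, not how file actually classifies things, and the sample file names are made up:

```shell
#!/bin/sh
is_binary() {
    # Count NUL bytes in the first 1024 bytes; any NUL => likely binary.
    nuls=$(head -c 1024 "$1" | tr -dc '\000' | wc -c)
    [ "$nuls" -gt 0 ]
}

printf 'just some readable text\n' > doc.txt
printf 'ELF\000\000fake-binary-header' > blob.bin

is_binary doc.txt  && echo "doc.txt: binary"  || echo "doc.txt: text"   # -> doc.txt: text
is_binary blob.bin && echo "blob.bin: binary" || echo "blob.bin: text"  # -> blob.bin: binary
```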
{ "source": [ "https://unix.stackexchange.com/questions/103027", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47732/" ] }
103,031
How can I do a cp to copy all files including all sub-directories in the current directory. I looked at the man page, but I can't figure out how to do it. EDIT: (my *nix is Linux)
What *nix do you have? Under Linux you use normally: cp -r <source> <target> and if you want to copy all the same attributes (aka owner, etc.): cp -a <source> <target>
{ "source": [ "https://unix.stackexchange.com/questions/103031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52350/" ] }
103,034
I have an obligation to provide a document which just contains every source code file, with the file's name before each file. Example: ============================= src/com/example/Factory.java ============================= public class Factory { ... } ============================= src/com/example/Worker.java ============================= public class Worker { ... } It must contain all *.java files in the current folder, recursively. How to easily generate such a file, using commonly available commands or a small script?
You can do this with find and a small shell loop, e.g.: find . -name '*.java' -exec sh -c 'for f; do printf "=============================\n%s\n=============================\n" "$f"; cat "$f"; done' sh {} + > sources.txt This recursively finds every *.java file under the current directory, prints the separator lines and the file's path, then the file's contents, and collects everything in sources.txt .
{ "source": [ "https://unix.stackexchange.com/questions/103034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2305/" ] }
103,037
I'm looking for a command line tool that can intercept HTTP/HTTPS requests, extract information such as: (content, destination, etc.), perform various analysis tasks, and finally determine if the request should be dropped or not. Legal requests must then be forwarded to the application. A tool that is similar in nature to tcpdump , Wireshark , or Snort , but operates at the HTTP level. References Intercept HTTP requests on Linux
Try mitmproxy . mitmproxy is an SSL-capable man-in-the-middle proxy for HTTP. It provides a console interface that allows traffic flows to be inspected and edited on the fly. mitmdump is the command-line version of mitmproxy, with the same functionality but without the user interface. Think tcpdump for HTTP. Features Intercept HTTP requests and responses and modify them on the fly. Save complete HTTP conversations for later replay and analysis. Replay the client side of an HTTP conversation. Replay HTTP responses of a previously recorded server. Reverse proxy mode to forward traffic to a specified server. Make scripted changes to HTTP traffic using Python. SSL certificates for interception are generated on the fly. Screenshot Example I set up an example Jekyll Bootstrap app which is listening on port 4000 on my localhost. To intercept its traffic I'd do the following: % mitmproxy --mode reverse:http://localhost:4000 -p 4001 Then connect to my mitmproxy on port 4001 from my web browser ( http://localhost:4001 ), resulting in this in mitmproxy: You can then select any of the GET results to see the header info associated with that GET : References mitmproxy documentation How mitmproxy works & Modes of Operation
{ "source": [ "https://unix.stackexchange.com/questions/103037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
103,114
The ls -al command shows the following output; -rwxrw-r-- 10 root root 2048 Jan 13 07:11 afile.exe What are all the fields in the preceding display?
In the order of output: -rwxrw-r-- 1 root root 2048 Jan 13 07:11 afile.exe file permissions ( -rwxrw-r-- ), number of (hard) links ( 1 ), owner name ( root ), owner group ( root ), file size in bytes ( 2048 ), time of last modification ( Jan 13 07:11 ), and file/directory name ( afile.exe ) File permissions are displayed as follows: the first character is most often - , l or d . A d indicates a directory, a - represents a regular file, l is a symlink (or soft link) and other letters are used for other types of special files three sets of characters, three times, indicating permissions for owner, group and other: r = readable w = writable x = executable (for files) or accessible (for directories) this may be followed by some other character if there are extended permissions, e.g. Linux ACLs, which are marked with a + . In your example -rwxrw-r-- , this means the line displayed is: a regular file (displayed as - ) readable, writable and executable by owner ( rwx ) readable, writable, but not executable by group ( rw- ) readable but not writable or executable by other ( r-- ) The number of hard links means the number of names the inode has, i.e. links created with ln without the -s option.
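The same fields can be picked out individually with GNU stat (Linux coreutils; BSD/macOS stat uses different flags), sketched here on a throwaway file:

```shell
touch afile.exe
chmod 764 afile.exe      # rwx rw- r-- as in the example above

ls -l afile.exe
# %A = permission string, %h = hard links, %U/%G = owner/group,
# %s = size in bytes, %y = last modification time, %n = name
stat -c '%A %h %U %G %s %y %n' afile.exe
```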
{ "source": [ "https://unix.stackexchange.com/questions/103114", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53437/" ] }
103,213
I have an application that I start only from the command line. How can I add the command (and preferably a nice logo) to Gnome's application menu?
In GNOME and other freedesktop.org -compliant desktop environments, such as KDE and Unity , applications are added to the desktop's menus or desktop shell via desktop entries , defined in text files with the .desktop extension (referred to as desktop files ). The desktop environments construct menus for a user from the combined information extracted from available desktop entries. Desktop files may be created in either of two places: /usr/share/applications/ for desktop entries available to every user in the system ~/.local/share/applications/ for desktop entries available to a single user You might need to restart GNOME for the new added applications to work. Per convention, desktop files should not include spaces or international characters in their name. Each desktop file is split into groups , each starting with the group header in square brackets ( [] ). Each section contains a number of key , value pairs, separated by an equal sign ( = ). Below is a sample of desktop file: [Desktop Entry] Type=Application Encoding=UTF-8 Name=Application Name Comment=Application description Icon=/path/to/icon.xpm Exec=/path/to/application/executable Terminal=false Categories=Tags;Describing;Application Explanation [Desktop Entry] the Desktop Entry group header identifies the file as a desktop entry Type the type of the entry, valid values are Application , Link and Directory Encoding the character encoding of the desktop file Name the application name visible in menus or launchers Comment a description of the application used in tooltips Icon the icon shown for the application in menus or launchers Exec the command that is used to start the application from a shell. Terminal whether the application should be run in a terminal, valid values are true or false Categories semi-colon ( ; ) separated list of menu categories in which the entry should be shown Command line arguments in the Exec key can be signified with the following variables: %f a single filename. 
%F multiple filenames. %u a single URL. %U multiple URLs. %d a single directory. Used in conjunction with %f to locate a file. %D multiple directories. Used in conjunction with %F to locate files. %n a single filename without a path. %N multiple filenames without paths. %k a URI or local filename of the location of the desktop file. %v the name of the Device entry. Note that ~ or environmental variables like $HOME are not expanded within desktop files, so any executables referenced must either be in the $PATH or referenced via their absolute path. A full Desktop Entry Specification is available at the GNOME Dev Center . Launch Scripts If the application to be launched requires certain steps to be done prior to being invoked, you can create a shell script which launches the application, and point the desktop entry to the shell script. Suppose that an application needs to be run from a certain working directory. Create a launch script in a suitable location ( ~/bin/ for instance). The script might look something like the following: #!/bin/bash pushd "/path/to/application/directory" ./application "$@" popd Set the executable bit for the script: $ chmod +x ~/bin/launch-application Then point the Exec key in the desktop entry to the launch script: Exec=/home/user/bin/launch-application
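A quick way to see what a launcher will read from such a file is to pull individual keys out of the [Desktop Entry] group; here is a sed sketch over a sample entry (for proper validation there is the desktop-file-validate tool from desktop-file-utils):

```shell
cat > sample.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Application Name
Comment=Application description
Exec=/path/to/application/executable
Terminal=false
EOF

# Print the value of the Name key
sed -n 's/^Name=//p' sample.desktop    # -> Application Name
# Print the command that would be launched
sed -n 's/^Exec=//p' sample.desktop    # -> /path/to/application/executable
```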
{ "source": [ "https://unix.stackexchange.com/questions/103213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42948/" ] }
103,241
By default ifconfig will show me all available interfaces , but what if I just want to display active ones? Like, en0 only in below. en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 14:10:9f:e0:eb:c9 inet6 fe80::1610:9fff:fee0:ebc9%en0 prefixlen 64 scopeid 0x4 inet X.X.X.X netmask 0xffffff00 broadcast 101.6.69.255 nd6 options=1<PERFORMNUD> media: autoselect **status: active** en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 options=60<TSO4,TSO6> ether 32:00:14:e7:4f:80 media: autoselect <full-duplex> **status: inactive** Notice ifconfig en0 will not satisfy, en0 is not always the active one ;) I'm running Mac OS X.
To get a complete description of all the active interfaces, try: ifconfig | pcregrep -M -o '^[^\t:]+:([^\n]|\n\t)*status: active' This simple regex should filter out only active interfaces and all their information. I suggest you put an alias for this in your ~/.profile or ~/.bash_profile file (maybe ifactive?) To just get the interface name (useful for scripts), use: ifconfig | pcregrep -M -o '^[^\t:]+:([^\n]|\n\t)*status: active' | egrep -o -m 1 '^[^\t:]+' You have to install pcregrep for this to work. It's on macports in the pcre package. Alternatively, this should work with GNU grep using grep -Pzo instead of pcregrep -M -o but with the rest the same, but I haven't tested this.
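Without pcregrep, an awk sketch over the same kind of output handles the multi-line matching too; here it runs on a captured sample (the interface data is made up, mimicking the question) rather than live ifconfig output:

```shell
# Sample ifconfig output; blocks start at a non-indented "name:" line
cat > ifconfig.out <<'EOF'
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 14:10:9f:e0:eb:c9
	status: active
en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	ether 32:00:14:e7:4f:80
	status: inactive
EOF

# Remember the current interface name; print it when its block says active
awk '/^[^ \t]/ { iface = $1; sub(/:$/, "", iface) }
     /status: active/ { print iface }' ifconfig.out     # -> en0
```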
{ "source": [ "https://unix.stackexchange.com/questions/103241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53494/" ] }
103,252
I want a command line program that prints the title of a website. For e.g.: Alan:~ titlefetcher http://www.youtube.com/watch?v=Dd7dQh8u4Hc should give: Why Are Bad Words Bad? You give it the url and it prints out the Title.
wget -qO- 'http://www.youtube.com/watch?v=Dd7dQh8u4Hc' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)\s*<\/title/si' You can pipe it to GNU recode if there are things like &lt; in it: wget -qO- 'http://www.youtube.com/watch?v=Dd7dQh8u4Hc' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)\s*<\/title/si' | recode html.. To remove the - youtube part: wget -qO- 'http://www.youtube.com/watch?v=Dd7dQh8u4Hc' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)(?: - youtube)?\s*<\/title/si' To point out some of the limitations: portability There is no standard/portable command to do HTTP queries. A few decades ago, I would have recommended lynx -source instead here. But nowadays, wget is more portable as it can be found by default on most GNU systems (including most Linux-based desktop/laptop operating systems). Other fairly portable ones include the GET command that comes with perl 's libwww that is often installed, lynx -source , and to a lesser extent curl . Other common ones include links -source , elinks -source , w3m -dump_source , lftp -c cat ... HTTP protocol and redirection handling wget may not get the same page as the one that for instance firefox would display. The reason being that HTTP servers may choose to send a different page based on the information provided in the request sent by the client. The request sent by wget/w3m/GET... is going to be different from the one sent by firefox. If that's an issue, you can alter wget 's behaviour with options to change the way it sends the request. The most important ones here in this regard are: Accept and Accept-language : that tells the server in which language and charset the client would like to get the response in. wget doesn't send any by default so the server will typically send with its default settings. firefox on the other hand is likely configured to request your language. User-Agent : that identifies the client application to the server. 
Some sites send different content based on the client (though that's mostly for differences between javascript language interpretations) and may refuse to serve you if you're using a robot -type user agent like wget . Cookie : if you've visited this site before, your browser may have permanent cookies for it. wget will not. wget will follow the redirections when they are done at the HTTP protocol level, but since it doesn't look at the content of the page, not the ones done by javascript or things like <meta http-equiv="refresh" content="0; url=http://example.com/"> . Performance/Efficiency Here, out of laziness, we have perl read the whole content in memory before starting to look for the <title> tag. Given that the title is found in the <head> section that is in the first few bytes of the file, that's not optimal. A better approach, if GNU awk is available on your system could be: wget -qO- 'http://www.youtube.com/watch?v=Dd7dQh8u4Hc' | gawk -v IGNORECASE=1 -v RS='</title' 'RT{gsub(/.*<title[^>]*>/,"");print;exit}' That way, awk stops reading after the first </title , and by exiting, causes wget to stop downloading. Parsing of the HTML Here, wget writes the page as it downloads it. At the same time, perl , slurps its output ( -0777 -n ) whole in memory and then prints the HTML code that is found between the first occurrences of <title...> and </title . That will work for most HTML pages that have a <title> tag, but there are cases where it won't work. By contrast coffeeMug's solution will parse the HTML page as XML and return the corresponding value for title . It is more correct if the page is guaranteed to be valid XML . However, HTML is not required to be valid XML (older versions of the language were not), and because most browsers out there are lenient and will accept incorrect HTML code, there's even a lot of incorrect HTML code out there. Both my solution and coffeeMug's will fail for a variety of corner cases, sometimes the same, sometimes not. 
For instance, mine will fail on: <html><head foo="<title>"><title>blah</title></head></html> or: <!-- <title>old</title> --><title>new</title> While his will fail on: <TITLE>foo</TITLE> (valid html, not xml) or: <title>...</title> ... <script>a='<title>'; b='</title>';</script> (again, valid html , missing <![CDATA[ parts to make it valid XML). <title>foo <<<bar>>> baz</title> (incorrect html, but still found out there and supported by most browsers) interpretation of the code inside the tags. That solution outputs the raw text between <title> and </title> . Normally, there should not be any HTML tags in there, there may possibly be comments (though not handled by some browsers like firefox so very unlikely). There may still be some HTML encoding: $ wget -qO- 'http://www.youtube.com/watch?v=CJDhmlMQT60' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)\s*<\/title/si' Wallace &amp; Gromit - The Cheesesnatcher Part 1 (claymation) - YouTube Which is taken care of by GNU recode : $ wget -qO- 'http://www.youtube.com/watch?v=CJDhmlMQT60' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)\s*<\/title/si' | recode html.. Wallace & Gromit - The Cheesesnatcher Part 1 (claymation) - YouTube But a web client is also meant to do more transformations on that code when displaying the title (like condense some of the blanks, remove the leading and trailing ones). However it's unlikely that there'd be a need for that. So, as in the other cases, it's up to you to decide whether it's worth the effort. Character set Before UTF-8, iso8859-1 used to be the preferred charset on the web for non-ASCII characters though strictly speaking they had to be written as &eacute; . More recent versions of HTTP and the HTML language have added the possibility to specify the character set in the HTTP headers or in the HTML headers, and a client can specify the charsets it accepts. UTF-8 tends to be the default charset nowadays. 
So, that means that out there, you'll find é written as &eacute; , as &#233; , as UTF-8 é (0xc3 0xa9), or as iso-8859-1 (0xe9), with, for the last two, sometimes the information on the charset in the HTTP headers or the HTML headers (in different formats), sometimes not. wget only gets the raw bytes, it doesn't care about their meaning as characters, and it doesn't tell the web server about the preferred charset. recode html.. will take care to convert the &eacute; or &#233; into the proper sequence of bytes for the character set used on your system, but for the rest, that's trickier. If your system charset is utf-8, chances are it's going to be alright most of the time as that tends to be the default charset used out there nowadays. $ wget -qO- 'http://www.youtube.com/watch?v=if82MGPJEEQ' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)\s*<\/title/si' Noir Désir - L&#39;appartement - YouTube That é above was a UTF-8 é . But if you want to cover other charsets, once again, it would have to be taken care of. It should also be noted that this solution won't work at all for UTF-16 or UTF-32 encoded pages. To sum up Ideally, what you need here is a real web browser to give you the information. That is, you need something to do the HTTP request with the proper parameters, interpret the HTTP response correctly, fully interpret the HTML code as a browser would, and return the title. As I don't think that can be done on the command line with the browsers I know (though see now this trick with lynx ), you have to resort to heuristics and approximations, and the one above is as good as any. You may also want to take into consideration performance, security... 
For instance, to cover all the cases (for instance, a web page that has some javascript pulled from a 3rd party site that sets the title or redirect to another page in an onload hook), you may have to implement a real life browser with its dom and javascript engines that may have to do hundreds of queries for a single HTML page, some of which trying to exploit vulnerabilities... While using regexps to parse HTML is often frowned upon , here is a typical case where it's good enough for the task (IMO).
{ "source": [ "https://unix.stackexchange.com/questions/103252", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53505/" ] }
103,347
How do you pronounce /usr ? I found on the net that some people read it as "user"... but, from what I know, this directory is not related to the user. The meaning of the acronym is "Unix specific (or system) resources". How can we better read it, making it easy to immediately understand the sense and scope of such a folder?
In the original Unix implementations, /usr used to contain the user home directories , e.g. instead of /home/user , you would have /usr/user . The original intention was for the directory to be called 'user' with the connotation "everything user related". Since then, the role of /usr has narrowed. In current Unix-like operating systems, /usr still tends to contain user-land programs and data (as opposed to 'system' programs and data), although in many cases the distinction between for instance /usr/bin and /bin isn't perhaps as strong as it used to be. Perhaps the pronunciation 'user' is more understandable given this background. A backronym some people prefer is 'User System Resources', but 'user' is still more common.
{ "source": [ "https://unix.stackexchange.com/questions/103347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2920/" ] }
103,398
When I run the history command on my ubuntu server, I get output as follows: history ... 25 cd ~ 26 ls -a 27 vim /etc/gitconfig 28 vim ~/.gitconfig I want to view the datetime of a particular user's commands. However when I switch to them: su otheruser export HISTTIMEFORMAT='%F %T ' history ... 25 cd ~ 26 ls -a 27 vim /etc/gitconfig 28 vim ~/.gitconfig It still doesn't show the datetime. I am using the zsh shell.
I believe the HISTTIMEFORMAT is for Bash shells. If you're using zsh then you could use these switches to the history command: Examples $ history -E 1 2.12.2013 14:19 history -E Alternatively : \history -E $ history -i 1 2013-12-02 14:19 history -E Alternatively : \history -i $ history -D 1 0:00 history -E 2 0:00 history -i If you do a man zshoptions or man zshbuiltins you can find out more information about these switches as well as other info related to history . excerpt from zshbuiltins man page Also when listing, -d prints timestamps for each command -f prints full time-date stamps in the US `MM/DD/YY hh:mm' format -E prints full time-date stamps in the European `dd.mm.yyyy hh:mm' format -i prints full time-date stamps in ISO8601 `yyyy-mm-dd hh:mm' format -t fmt prints time and date stamps in the given format; fmt is formatted with the strftime function with the zsh extensions described for the %D{string} prompt format in the section EXPANSION OF PROMPT SEQUENCES in zshmisc(1). The resulting formatted string must be no more than 256 characters or will not be printed. -D prints elapsed times; may be combined with one of the options above. Debugging invocation You can use the following 2 methods to debug zsh when you invoke it. Method #1 $ zsh -xv Method #2 $ zsh $ setopt XTRACE VERBOSE In either case you should see something like this when it starts up: $ zsh -xv # # /etc/zshenv is sourced on all invocations of the # shell, unless the -f option is set. It should # contain commands to set the command search path, # plus other important environment variables. # .zshenv should not contain commands that produce # output or assume the shell is attached to a tty. # # # /etc/zshrc is sourced in interactive shells. It # should contain commands to set up aliases, functions, # options, key bindings, etc. # ## shell functions ... ... unset -f pathmunge _src_etc_profile_d +/etc/zshrc:49> unset -f pathmunge _src_etc_profile_d # Created by newuser for 4.3.10
{ "source": [ "https://unix.stackexchange.com/questions/103398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15378/" ] }
103,413
I saw a code change at work, where the mode values were changed from 777 to 0777 to make nfs setattr work. What is the difference in the 2 values?
If you're passing them to chmod (the command-line program), there is no difference. But in a C program or similar, 0777 is octal (three sets of three 1 bits, which is what you intend), while 777 is decimal, and it's quite a different bit pattern. ( chmod will interpret any numeric argument as octal, hence no leading zero is necessary.) 0777 (octal)    == binary 0b 111 111 111 == permissions rwxrwxrwx (== decimal 511 ) 777 (decimal) == binary 0b 1 100 001 001 == octal 1411 == permissions r----x--t (sticky bit set)
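A quick sketch of the difference, using printf (which, like C, treats a leading zero as marking an octal constant) and a throwaway file:

```shell
printf '%d\n' 0777     # -> 511   (octal 0777 read as a number)
printf '%o\n' 777      # -> 1411  (decimal 777 printed back in octal)

touch f
chmod 777 f            # chmod treats any numeric mode as octal,
stat -c '%a %A' f      # -> 777 -rwxrwxrwx   (so no leading zero is needed)
```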
{ "source": [ "https://unix.stackexchange.com/questions/103413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32727/" ] }
103,461
I have a SSL CRT file in PEM format. Is there a way that I can extract the common name (CN) from the certificate from the command line?
If you have openssl installed you can run: openssl x509 -noout -subject -in server.pem
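To get just the CN value on its own, one further sed step usually suffices. This assumes the CN is the last component of the subject line, which holds for typical single-CN certificates; the exact subject formatting varies a little between OpenSSL versions, so treat it as a sketch:

```shell
openssl x509 -noout -subject -in server.pem | sed 's/^.*CN[ ]*=[ ]*//'
```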
{ "source": [ "https://unix.stackexchange.com/questions/103461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
103,467
What is the command env ls -al doing? I had a Linux test and there was a question: "How to run a command directly, but not its alias?" I knew that there exists a solution like prefixing the command with some special symbol, but I forgot it. Now I know that it is \ . (read from this post ). But I also remember that somewhere I read that to get rid of an alias we can prefix a command with env . I did it and it seems to work, but my answer was marked as wrong. I read the info and man pages on env , but didn't understand much. What exactly is env doing in env <command> , with no arguments for env itself?
This command env name=value name2=value2 program and args runs the command program and args with an environment formed by extending the current environment with the environment variables and values designated by name=value and name2=value2 . If you do not include any arguments like name=value , then the current environment is passed along unmodified. The key thing that happens with respect to aliases is that env is an external command, so it has no “knowledge” of aliases: aliases are a shell construct that are not part of the normal process model and have no impact on programs that are directly run by non-shell programs (like env ). env simply passes the program and arguments to an exec call (like execvp , which will search the PATH for program ). Basically, using env like this is a (mostly) shell-independent way of avoiding aliases, shell functions, shell builtin commands, and any other bits of shell functionality that might replace or override command-position arguments (i.e. program names)—unless, of course, env is an alias, or shell function! If you are worried about env being an alias, you could spell out the full path (e.g. /usr/bin/env , though it may vary).
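A quick way to see both behaviours — the variable is visible in the command that env launches, while the calling shell's environment is untouched:

```shell
# Run a command with one extra environment variable:
env GREETING=hello sh -c 'echo "child sees: $GREETING"'

# The calling shell's environment is unchanged:
echo "caller sees: ${GREETING:-unset}"
```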
{ "source": [ "https://unix.stackexchange.com/questions/103467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52093/" ] }
103,471
I have done some research about this on Google, but the results were cloudy. Why is the / sign used to denote the root directory? Are there any solid reasons behind it?
The forward slash / is the delimiting character which separates directories in paths in Unix-like operating systems. This character seems to have been chosen sometime in the 1970's, and according to anecdotal sources , the reasons might be related to that the predecessor to Unix, the Multics operating system, used the > character as path separator, but the designers of Unix had already reserved the characters > and < to signify I/O redirection on the shell command line well before they had a multi-level file system. So when the time came to design the filesystem, they had to find another character to signify pathname element separation. A thing to note here is that in the Lear-Siegler ADM-3A terminal in common use during the 1970's, from which amongst other things the practice of using the ~ character to represent the home directory originates , the / key is next to the > key: As for why the root directory is denoted by a single / , it is a convention most likely influenced by the fact that the root directory is the top-level directory of the directory hierarchy, and while other directories may be beneath it, there usually isn't a reason to refer to anything outside the root directory. Similarly the directory entry itself has no name, because it's the boundary of the visible directory tree.
{ "source": [ "https://unix.stackexchange.com/questions/103471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53243/" ] }
103,528
Let's say I have a top level directory called /dir and many sub directories. How do I search the subdirectories of /dir to find the one called x/x/dir/x/x/x/target ? This question is similar to, but not exactly what I am looking for: find command for certain subdirectories . I am not looking for files, just directories with a particular name.
Try find /dir -type d -name "your_dir_name" . Replace /dir with your directory name, and replace "your_dir_name" with the name you're looking for. -type d will tell find to search for directories only.
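A self-contained way to try it, building a throwaway tree first; the 2>/dev/null is a common addition to hide permission-denied noise when the search starts at a system directory:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/target" "$tmp/c/target" "$tmp/c/other"

# Directories only, matched by name, errors silenced:
find "$tmp" -type d -name target 2>/dev/null | sort

rm -rf "$tmp"
```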
{ "source": [ "https://unix.stackexchange.com/questions/103528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18366/" ] }
103,531
What are these file formats and how do they differ from the .msi format in Windows? Also what are the pros and cons of these package management schemes?
Files such as .deb and .rpm are more akin to a .zip file. They're a directory tree of files and sub-directories that contain files related to a particular application and/or library of files. Distros The .deb files are meant for distributions of Linux that derive from Debian (Ubuntu, Linux Mint, etc.). The .rpm files are used primarily by distributions that derive from Redhat based distros (Fedora, CentOS, RHEL) as well as by the openSuSE distro. What's special about them? These files have one other special trait that sets them apart from .zip files, in that they can include a specification that contains rules that tell the package manager software running on a system that's installing one of these files to do additional tasks. These tasks would include things such as: creating user accounts on the system creating/modifying configuration files that aren't actually contained in the .deb or .rpm file set ownership/permissions on the files after installation run commands as root on the system that's installing the package dependencies, both formats can include names or packages and/or service names that they require to be present on a system, prior to installation. What about .msi files? .msi files are similar to .deb & .rpm files but likely even more sophisticated. The .msi files are utilized by the Windows Installer and offer additional features such as: GUI Framework generation of uninstall sequences A framework within itself - for use by 3rd party installers Rollbacks Advertisement User Interface etc. I'd suggest taking a look at the various Wikipedia pages on these subjects if you want a more in-depth explanation. References Windows Installer - .msi RPM file format DEB file format
{ "source": [ "https://unix.stackexchange.com/questions/103531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19241/" ] }
103,532
I had removed the skype panel icon with this method : https://askubuntu.com/questions/7479/how-can-i-remove-the-skype-panel-icon-in-ubuntu-12-04-and-earlier/118979#118979 It worked fine, and I was using skype-wrapper to have skype in my messaging menu. Then skype-wrapper stopped working and I wanted my skype icon back. I have reinstalled sni-qt (I never removed it, but I reinstalled it anyway) and I have reinstalled skype too (following this link : https://askubuntu.com/questions/68616/i-installed-sni-qt-and-there-is-no-indicator-for-skype-how-do-i-fix-this ) but it didn't do anything. (I will add that I don't need to whitelist it in dconf editor because I'm using elementaryOS Luna 2.0, which doesn't have a whitelist but a blacklist, which I have checked.) What package should I add or update? I tried to update the Qt libraries, unsuccessfully. Cheers! ElementaryOS 2.0 Luna, based on Ubuntu 12.04
{ "source": [ "https://unix.stackexchange.com/questions/103532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53692/" ] }
103,555
I have tried the following command to set Proxy on yaourt : export ALL_PROXY=http://proxy.example.com:8080 The question is how to unset the proxy on yaourt ? In general, how can I unset the value of a variable in the current shell?
To remove an environment variable, run unset ALL_PROXY Note that an environment variable only takes effect in a program and the program it launches. If you set an environment variable in one shell window, it doesn't affect other shell windows. If you've added export ALL_PROXY=… to an initialization file, remove it from there. You can run export with no arguments to see what environment variables are set in the current shell. Remember that to make a shell variable available to the programs started by that shell, you need to export it, either by running export VAR after the assignment VAR=VALUE or by combining the two ( export VAR=VALUE ).
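The effect is easy to verify in a throwaway shell:

```shell
export ALL_PROXY=http://proxy.example.com:8080
echo "before: ${ALL_PROXY:-not set}"

unset ALL_PROXY
echo "after:  ${ALL_PROXY:-not set}"
```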
{ "source": [ "https://unix.stackexchange.com/questions/103555", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52995/" ] }
103,648
I'm in /sbin and I see that shutdown has permissions rwxr-xr-x . Doesn't this mean that anyone can execute it?
Anyone can execute shutdown , but triggering a system shutdown requires root privileges. But shutdown is not setuid, and so only root can successfully execute it. The shutdown program is nice enough to check your privileges and let you know if there is a problem, but even if it naively tried a system shutdown, nothing would happen. GLENDOWER: I can call spirits from the vasty deep. HOTSPUR: Why, so can I, or so can any man; But will they come when you do call for them? (from Henry IV) shutdown is no different from /bin/rm . Everyone can execute it, but a regular user cannot remove /etc , or another user's home directory. Specifically: Only a process running with root privileges (effective UID 0) can direct the init system to stop system services, terminate all user processes, and issue the system call that actually stops the machine. (If shutdown was setuid, it would run as root no matter who invokes it; but it is not.) What about calling shutdown from a GUI, e.g. with control-alt-del? It's important to realize that in that case, shutdown is started directly by init and it runs with root privileges. So everyone who walks up to the console could potentially shut it down. If this is not desirable, control-alt-delete will actually run shutdown -a . (See the documentation that @some1 quoted in their answer). That tells shutdown to check whether the currently logged in user is authorized to run it. But this is only relevant because shutdown is running as root in this scenario.
{ "source": [ "https://unix.stackexchange.com/questions/103648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23295/" ] }
103,666
Is it possible to redefine the home directory? e.g. to /ext1/username instead of /home/username , i.e. expanding the ~ to another directory (as opposed to changing the actual home directory where users' home files are located). (This question is mostly academic, as it seems like bad practice to do so. I also have no choice in the matter of using csh , despite having read the Top 10.)
The tilde ~ is interpreted by your shell. Your shell will interpret ~ as a short form of $HOME . Try (echo ~; HOME=foo; echo ~) . This should first print your real home directory and afterwards "foo", as you set $HOME to that. The default value of $HOME comes from you system configuration. Use getent passwd to list all known users and their home directories. Depending on your system configuration those entries might come from /etc/passwd or any remote directory service. If you only want to temporarily redefine your home directory, just set another $HOME . If you permanently want to change it you have to change the passwd entry, e.g. by manually editing /etc/passwd .
{ "source": [ "https://unix.stackexchange.com/questions/103666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49062/" ] }
103,731
On the CLI, sometimes a command I type takes a while to complete, and sometimes I know when that's about to happen. I'm a bit confused about "backgrounding" and such in Linux. What is the most common (or user-friendly) way of telling the CLI that I don't want to wait, please give me back my prompt immediately? And if it could give me a progress bar or just a busy-spinner, that would be great!
Before running the command, you can append & to the command line to run in the background: long-running-command & After starting a command, you can press Ctrl Z to suspend it, and then bg to put it in the background: long-running-command [Ctrl+Z] bg
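The question also asks for a progress indicator, which neither form above provides. A minimal busy-spinner sketch — polling the background job's PID with kill -0 until it exits ( sleep 3 stands in for the real long-running command):

```shell
sleep 3 &              # stand-in for the long-running command
pid=$!

# Spin until the background job is gone:
while kill -0 "$pid" 2>/dev/null; do
    for c in '-' '\' '|' '/'; do
        printf '\r%s' "$c"
        sleep 0.2
    done
done
printf '\rdone\n'
```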
{ "source": [ "https://unix.stackexchange.com/questions/103731", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11258/" ] }
103,819
In Gnome 3, moving windows with the keyboard shortcuts Meta + ← and Meta + → can be convenient. Now, I have two displays installed and would like to move windows across the displays without touching the mouse. More precisely, I would like to see what is the default behavior in Windows 7, namely four locations, left half of first screen, right half of first screen, left half of second screen, right half of second screen. Any solution involving a sequence of multiple shortcuts is also appreciated. Note that I am using only one desktop but multiple displays.
In Fedora 24 (also Ubuntu 20.04, and probably many more distros which use Gnome) the key combination Super Shift ← or Super Shift → moves windows between monitors by default. If you play with Super + Cursor Keys and then use Super Shift + Cursor Keys you should be able to move/minimise/maximise windows with ease.
{ "source": [ "https://unix.stackexchange.com/questions/103819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53848/" ] }
103,885
I have an executable that starts a user-interactive shell. I would like to, upon launch of the shell, inject a few commands first, then allow the user to have their interactive session. I can do this easily using echo : echo "command 1\ncommand 2\ncommand3" | ./shell_executable This almost works. The problem is that the echo command that is feeding the process's stdin hits EOF once it's done echoing my commands. This EOF causes the shell to terminate immediately (as if you'd pressed Ctrl+D in the shell). Is there a way to inject these commands into stdin without causing an EOF afterwards?
Found this clever answer in a similar question at stackoverflow: (echo -e "cmd 1\ncmd 2" && cat) | ./shell_executable This does the trick. echo feeds the initial commands into the input stream of shell_executable ; cat then takes over and forwards your terminal input to it until EOF, so the pipe stays open for the interactive session.
{ "source": [ "https://unix.stackexchange.com/questions/103885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14439/" ] }
103,898
If I use tmux attach I can attach to a running session but if there is no session running, I only get the error no sessions How can I automatically start a new session if there is none running? something like tmux attach-or-create-new-session
If naming your session is okay, then it's easy to do with the new-session command: tmux new-session -A -s main where main is the session name that will be attached to or created if needed. From man tmux : The -A flag makes new-session behave like attach-session if session-name already exists; in this case, -D behaves like -d to attach-session . This can be shortened to rely on the default session name (which is 0 ): tmux new -As0 Please also note that the -A option was introduced in tmux version 1.8 on 26 March 2013 . For earlier versions, use: tmux attach || tmux
{ "source": [ "https://unix.stackexchange.com/questions/103898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
103,920
I have been trying to parallelize the following script, specifically each of the three FOR loop instances, using GNU Parallel but haven't been able to. The 4 commands contained within the FOR loop run in series, each loop taking around 10 minutes. #!/bin/bash kar='KAR5' runList='run2 run3 run4' mkdir normFunc for run in $runList do fsl5.0-flirt -in $kar"deformed.nii.gz" -ref normtemp.nii.gz -omat $run".norm1.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-flirt -in $run".poststats.nii.gz" -ref $kar"deformed.nii.gz" -omat $run".norm2.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-convert_xfm -concat $run".norm1.mat" -omat $run".norm.mat" $run".norm2.mat" fsl5.0-flirt -in $run".poststats.nii.gz" -ref normtemp.nii.gz -out $PWD/normFunc/$run".norm.nii.gz" -applyxfm -init $run".norm.mat" -interp trilinear rm -f *.mat done
Why don't you just fork (aka. background) them? foo () { local run=$1 fsl5.0-flirt -in $kar"deformed.nii.gz" -ref normtemp.nii.gz -omat $run".norm1.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-flirt -in $run".poststats.nii.gz" -ref $kar"deformed.nii.gz" -omat $run".norm2.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12 fsl5.0-convert_xfm -concat $run".norm1.mat" -omat $run".norm.mat" $run".norm2.mat" fsl5.0-flirt -in $run".poststats.nii.gz" -ref normtemp.nii.gz -out $PWD/normFunc/$run".norm.nii.gz" -applyxfm -init $run".norm.mat" -interp trilinear } for run in $runList; do foo "$run" & done In case that's not clear, the significant part is here: for run in $runList; do foo "$run" & done ^ Causing the function to be executed in a forked shell in the background. That's parallel.
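One thing the loop above does not do is block until the forked jobs have finished — the rest of the script (such as the rm -f *.mat cleanup in the question) would race ahead of them. Adding a wait after the loop fixes that; a runnable sketch with a dummy foo standing in for the real pipeline:

```shell
foo() {
    sleep 0.2               # stand-in for the fsl5.0-* pipeline
    echo "finished $1"
}

runList='run2 run3 run4'
for run in $runList; do foo "$run" & done

wait                        # block until every background job exits
echo "all jobs done"        # safe point for cleanup like rm -f *.mat
```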
{ "source": [ "https://unix.stackexchange.com/questions/103920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53906/" ] }
104,088
The current time in Los Angeles is 18:05. But when I run TZ=UTC-8 date --iso=ns , I get: 2013-12-07T10:05:37,788173835+0800 The date utility tells me that the time is 10:05, and even says that it's reporting it as UTC+8. Why?
The reason is that TZ=UTC-8 is interpreted as a POSIX time zone . In the POSIX timezone format, the 3 letters are the timezone abbreviation (which is arbitrary) and the number is the number of hours the timezone is behind UTC. So UTC-8 means a timezone abbreviated "UTC" that is −8 hours behind the real UTC, or UTC + 8 hours. (It works that way because Unix was developed in the US, which is behind UTC. This format allows the US timezones to be represented as EST5, CST6, etc.) You can see that's what's happening by these examples: $ TZ=UTC-8 date +'%Z %z' UTC +0800 $ TZ=UTC8 date +'%Z %z' UTC -0800 $ TZ=FOO-8 date +'%Z %z' FOO +0800 The ISO -0800 timezone format takes the opposite approach, with - indicating the zone is behind UTC, and + indicating the zone is ahead of UTC.
{ "source": [ "https://unix.stackexchange.com/questions/104088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54016/" ] }
104,094
When you press Ctrl + L in bash default mode the screen is cleared. But when I run set -o vi and press Ctrl + L the keystroke is printed ( ^L ). Is there any way to keep this behavior?
Ctrl + L is also bound in vi command mode but not in insert mode. There's no default binding for clear-screen in insert mode. Readline bindings should be specified in ~/.inputrc , like so: set editing-mode vi $if mode=vi set keymap vi-command # these are for vi-command mode Control-l: clear-screen set keymap vi-insert # these are for vi-insert mode Control-l: clear-screen $endif This will bind Ctrl + L to clear the screen in both normal and insert mode. Naturally, if you prefer to only use it in one mode, just remove the relevant option. If you prefer to set this just for bash use the following equivalents in ~/.bashrc : set -o vi bind -m vi-command 'Control-l: clear-screen' bind -m vi-insert 'Control-l: clear-screen' There is an extensive list of readline commands that you can use to customize your bash shell with.
{ "source": [ "https://unix.stackexchange.com/questions/104094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47577/" ] }
104,171
I want to silently, non-interactively, create an SSL certificate, i.e., without being prompted for any data. The normal way I create the certificate would be: openssl req -x509 -nodes -days 7300 -newkey rsa:2048 \ -keyout /etc/ssl/private/pure-ftpd.pem -out /etc/ssl/private/pure-ftpd.pem I tried the following: openssl genrsa -out server.key 2048 touch openssl.cnf cat >> openssl.cnf <<EOF [ req ] prompt = no distinguished_name = req_distinguished_name [ req_distinguished_name ] C = GB ST = Test State L = Test Locality O = Org Name OU = Org Unit Name CN = Common Name emailAddress = [email protected] EOF openssl req -x509 -config openssl.cnf -nodes -days 7300 \ -signkey server.key -out /etc/ssl/private/pure-ftpd.pem But I still get a prompt for data.
The thing you're missing is to include the certificate subject in the -subj flag. I prefer this to creating a config file because it's easier to integrate into a workflow and doesn't require cleaning up afterward. One step key and csr generation: openssl req -new -newkey rsa:4096 -nodes \ -keyout www.example.com.key -out www.example.com.csr \ -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" One step self signed passwordless certificate generation: openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \ -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" \ -keyout www.example.com.key -out www.example.com.cert Neither of these commands will prompt for any data. See my answer to this nearly identical question on Super User. After many years, and by popular demand, here's how to do it with ECDSA. This is necessarily two steps because EC keys require generating parameters, which (at the time of this writing) must be done separately from signing request*. openssl ecparam -out www.example.com.key -name prime256v1 -genkey openssl req -new -days 365 -nodes -x509 \ -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" \ -key www.example.com.key -out www.example.com.cert * You can either just generate ec parameters and use req -newkey ec:<file with ec params> , or do it like I did above. There isn't really a significant difference.
{ "source": [ "https://unix.stackexchange.com/questions/104171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53700/" ] }
104,325
In Emacs I can run a shell using the following commands - M-x term M-x shell M-x eshell What is the difference between these three?
shell is the oldest of these 3 choices. It uses Emacs's comint-mode to run a subshell (e.g. bash ). In this mode, you're using Emacs to edit a command line. The subprocess doesn't see any input until you press Enter. Emacs is acting like a dumb terminal. It does support color codes, but not things like moving the cursor around, so you can't run curses-based applications. term is a terminal emulator written in Emacs Lisp. In this mode, the keys you press are sent directly to the subprocess; you're using whatever line editing capabilities the shell presents, not Emacs's. It also allows you to run programs that use advanced terminal capabilities like cursor movement (e.g. you could run nano or less inside Emacs). eshell is a shell implemented directly in Emacs Lisp. You're not running bash or any other shell as a subprocess. As a result, the syntax is not quite the same as bash or sh . It allows things like redirecting the output of a process directly to an Emacs buffer (try echo hello >#<buffer results> ).
{ "source": [ "https://unix.stackexchange.com/questions/104325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19506/" ] }
104,329
I have Ubuntu 13.04 on my laptop. I have to upgrade it to the next release, or else I would install a fresh release after deleting my current system. I have already installed and configured a lot on my current system, and I can't take the burden of doing it all over again if I install the next release. Now I would like to know what backups are required beforehand so that I can recover all my files/installations/configurations etc. In other words, I should have everything on my new system (simply by restoring from backups) that I possess currently. My current system has /, /boot, /home & swap partitions which were created by me when I freshly installed the O.S.
{ "source": [ "https://unix.stackexchange.com/questions/104329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46723/" ] }
104,374
After executing the aptitude search pattern command, I need to see only installed packages in the search result. Is there any way to do that?
Add ~i (short for ?installed ) to match the installed packages whose name contains bash : aptitude search '~i bash' To match whose description contains bash . aptitude search '~i ~d bash' To limit to the ones that are not installed: aptitude search '!~i bash'
{ "source": [ "https://unix.stackexchange.com/questions/104374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
104,499
I have just installed Cygwin, and I have displayed a file's contents using the less command. Now I am unable to exit that in order to type other commands: I want to exit this mode to type some other commands. How can I do that?
To quit less , type q . Also, check out man less , or type h from within less for some more, useful bits of information. In general, assuming man has been properly installed, man xyz will tell you how to use the xyz tool. On GNU systems like Cygwin or what you call Linux at least, man will usually display through less as well, so to exit from man , again you would type q (which also works in more or most which are other pagers used by man or other commands that need paging on other systems).
{ "source": [ "https://unix.stackexchange.com/questions/104499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54205/" ] }
104,503
If I have the following $STRING aaa.bbb.ccc.[ddd].eee.fff.[ggg].hhh is there any way, using bash parameter expansion, to echo the following aaa.bbb..ccc.eee.fff..hhh That is, remove all occurrences of square brackets and everything inside those brackets? Everything I've tried ends up either removing everything in the string after the first left bracket or removing the brackets but leaving behind everything inside the brackets.
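One way to do this purely with parameter expansion is a sketch relying on bash's extglob option (note: with each bracketed field deleted, the result is aaa.bbb.ccc..eee.fff..hhh — the doubled dots land where each [xxx] used to be):

```shell
shopt -s extglob    # enables extended patterns like +( ... )

STRING='aaa.bbb.ccc.[ddd].eee.fff.[ggg].hhh'

# Delete every "[...]" group: a literal [, one or more non-] chars, a literal ]
echo "${STRING//\[+([!]])\]/}"
```

A plain `${STRING//\[*\]/}` would not work here, because the unanchored `*` greedily swallows everything from the first `[` to the last `]` ; the extglob negation `+([!]])` keeps each match inside a single bracket pair.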
{ "source": [ "https://unix.stackexchange.com/questions/104503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53895/" ] }
104,525
I'm facing a huge 4-column file. I'd like to display the file on stdout, sorted by its 3rd column: cat myFile | sort -u -k3 Is that enough to perform the trick?
sort -k 3,3 myFile would display the file sorted by the 3 rd column assuming the columns are separated by sequences of blanks (ASCII SPC and TAB characters in the POSIX/C locale), according to the sort order defined by the current locale. Note that the leading blanks are included in the column (the default separator is the transition from a non-blank to a blank), that can make a difference in locales where spaces are not ignored for the purpose of comparison, use the -b option to ignore the leading blanks. Note that it's completely independent from the shell (all the shells would parse that command line the same, shells generally don't have the sort command built in). -k 3 is to sort on the portion of the lines starting with the 3 rd column (including the leading blanks). In the C locale, because the space and tab characters ranks before all the printable characters, that will generally give you the same result as -k 3,3 (except for lines that have an identical third field), -u is to retain only one of the lines if there are several that sort identically (that is where the sort key sorts the same (that's not necessarily the same as being equal )). cat is the command to con cat enate. You don't need it here. If the columns are separated by something else, you need the -t option to specify the separator. Given example file a $ cat a a c c c a b ca d a b c e a b c d With -u -k 3 : $ echo $LANG en_GB.UTF-8 $ sort -u -k 3 a a b ca d a c c c a b c d a b c e Line 2 and 3 have the same third column, but here the sort key is from the third column to the end of line, so -u retains both. ␠ca␠d sorts before ␠c␠c because spaces are ignored in the first pass in my locale, cad sorts before cc . $ sort -u -k 3,3 a a b c d a b c e a b ca d Above only one is retained for those where the 3rd column is ␠c . Note how the one with ␠␠c (2 leading spaces) is retained. 
$ sort -k 3 a a b ca d a c c c a b c d a b c e $ sort -k 3,3 a a b c d a c c c a b c e a b ca d See how the order of a b c d and a c c c are reversed. In the first case, because ␠c␠c sorts before ␠c␠d , in the second case because the sort key is the same ( ␠c ), the last resort comparison that compares the lines in full puts a b c d before a c c c . $ sort -b -k 3,3 a a b c d a b c e a c c c a b ca d Once we ignore the blanks, the sort key for the first 3 lines is the same ( c ), so they are sorted by the last resort comparison. $ LC_ALL=C sort -k 3 a a b c e a c c c a b c d a b ca d $ LC_ALL=C sort -k 3,3 a a b c e a b c d a c c c a b ca d In the C locale, ␠␠c sorts before ␠c as there is only one pass there where characters (then single bytes) sort based on their code point value (where space has a lower code point than c ).
{ "source": [ "https://unix.stackexchange.com/questions/104525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52350/" ] }
104,536
I have a CLI app that produces lots of debug output, and I really need to search in that output. I'm using less for that, but it freezes when I reach the last line with j or G , and it comes back to life only after Ctrl + c — but that way I kill my app. The problem can be easily reproduced when paging output from find ; just use G right after running it. find / | less Is it a bug in less ?
{ "source": [ "https://unix.stackexchange.com/questions/104536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54228/" ] }
104,618
I am trying to rsync from one server to another. The servers have the same directory structure but I am having trouble getting the remote server to properly recognize the path name on the remote when there is a space in it. Here are the details The local directory is mnt/xlses/split/v2/name with space The remote directory is mnt/xlses/split/v2/name with space I have tried everything I can find; the latest attempt was rsync --size-only -avzPe ssh /mnt/xlses/split/v2/name\ with\ space/ [email protected]:/mnt/xlses/split/v2/"name with space" when that runs the first thing it reports is that it is creating a new directory I interrupt it and see that there is a new directory mnt/xlses/split/v2/name all of my files are in that directory I was expecting them to be in mnt/xlses/split/v2/name with space
Try rsync --protect-args --size-only -avzPe ssh "/mnt/xlses/split/v2/name with space/" "[email protected]:/mnt/xlses/split/v2/name with space" From man rsync : -s, --protect-args This option sends all filenames and most options to the remote rsync without allowing the remote shell to interpret them. This means that spaces are not split in names, and any non-wildcard special characters are not translated (such as ~, $, ;, &, etc.). Wildcards are expanded on the remote host by rsync (instead of the shell doing it). [...]
{ "source": [ "https://unix.stackexchange.com/questions/104618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50622/" ] }
104,641
I have the grep command. I'm searching for a keyword from a file, but I don't want to display the match. I just want to know the exit status of the grep .
Any POSIX compliant version of grep has the switch -q for quiet: -q Quiet. Nothing shall be written to the standard output, regardless of matching lines. Exit with zero status if an input line is selected. In GNU grep (and possibly others) you can use long-option synonyms as well: -q, --quiet, --silent suppress all normal output Example String exists: $ echo "here" | grep -q "here" $ echo $? 0 String doesn't exist: $ echo "here" | grep -q "not here" $ echo $? 1
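Since the asker only wants the exit status, the usual pattern is to put grep -q straight into a conditional. A minimal sketch (the log file name and keyword are made-up examples):

```shell
# grep -q prints nothing; its exit status alone tells you whether
# the keyword matched (0 = found, 1 = not found).
logfile=$(mktemp)
printf 'INFO starting up\nERROR disk full\n' > "$logfile"

if grep -q "ERROR" "$logfile"; then
    echo "keyword found"
else
    echo "keyword not found"
fi

rm -f "$logfile"
```

This prints "keyword found" without ever showing the matching line itself.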
{ "source": [ "https://unix.stackexchange.com/questions/104641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54194/" ] }
104,714
What is the very fundamental difference between Unix, Linux, BSD and GNU? Unix was the earliest OS, so the term 'Unix like' is understandable, since they have kernel, file system structure, most of the commands, users etc are same as Unix. Still why are they different? What made them set apart? Is it the kernel? https://www.gnu.org/home.en.html GNU is an operating system that is free software—that is, it respects users' freedom.
That is a difficult question to answer. First "Unix Like" or "*nix" usually means POSIX. All the systems you listed are POSIX systems. POSIX is a set of standards to implement. Now for the harder questions. GNU isn't really an OS. It's more of a set of rules or philosophies that govern free software, that at the same time gave birth to a bunch of tools while trying to create an OS. So GNU tools are basically open versions of tools that already existed but were redone to conform to principles of open software. GNU/Linux is a mesh of those tools and the Linux kernel to form a complete OS, but there are other "GNU"s. GNU/Hurd for example. Unix and BSD are "older" implementations of POSIX that are at various levels of "closed source". Unix is usually totally closed source, but there are as many flavors of Unix as there are of Linux, if not more. BSD is not considered "open" by some people, but in truth it is a lot more open than anything else that existed. Its licensing also allowed for commercial use with far fewer restrictions than the more "open" licenses allowed. Linux is the newcomer. Strictly speaking it's "just a kernel"; however, in general it's thought of as a full OS when combined with GNU tools and a bunch of other things. The main governing difference is ideals. Unix, Linux, and BSD have different ideals that they implement. They are all POSIX, and are all basically interchangeable. They do solve some of the same problems in different ways. So other than ideals and how they choose to implement POSIX standards, there is little difference. For more info, I suggest you read a brief article on the creation of GNU, OSS, Linux, BSD, and UNIX. They will be slanted towards their individual ideas, but when you read through, you will get a good idea of the differences. A Unix genealogy diagram (available from Wikimedia) clearly shows the history of Unix, BSD, GNU and Linux.
{ "source": [ "https://unix.stackexchange.com/questions/104714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54308/" ] }
104,727
Do we have anyway to add a path globally so that each user gets it in $PATH. I want to add path of ANT so that each user doesn't need to add it in his $PATH variable.
Global paths should be set in /etc/profile or /etc/environment , just add this line to /etc/profile : PATH=$PATH:/path/to/ANT/bin
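On many distributions, /etc/profile sources every *.sh file in /etc/profile.d/, so a drop-in file there is tidier than editing /etc/profile itself and survives package upgrades. A sketch, assuming ANT lives in /opt/ant (adjust the path to your actual install location):

```shell
# Hypothetical drop-in file: /etc/profile.d/ant.sh
# /opt/ant/bin is an assumed install location; adjust to where ANT lives.
# The case guard keeps nested login shells from appending the directory twice.
case ":$PATH:" in
    *":/opt/ant/bin:"*) ;;                  # already present, nothing to do
    *) PATH="$PATH:/opt/ant/bin" ;;
esac
export PATH
```

The `:$PATH:` trick lets the pattern match the directory whether it sits at the start, middle, or end of PATH.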
{ "source": [ "https://unix.stackexchange.com/questions/104727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47247/" ] }
104,755
My .bashrc had some code that was repetitive so I used a function to simplify it do_stuff() { local version=$1 export FOO_${version}_X="17" export FOO_${version}_Y="42" } do_stuff '5.1' do_stuff '5.2' However, now when I use my shell the "do_stuff" name is in scope so I can tab-complete and run that function (potentially messing up my environment variables). Is there a way to make "do_stuff" visible only inside the .bashrc?
Use unset as last line in your .bashrc : unset -f do_stuff will delete/unset the function do_stuff . To delete/unset the variables invoke it as follows: unset variablename
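A self-contained sketch of the whole pattern. One caveat worth noting: shell variable names cannot contain dots, so the version tag here is dot-free (with the 5.1-style arguments from the question, the dot would have to be stripped or replaced first):

```shell
# Define the helper, use it, then remove it so it never leaks into the
# interactive shell. Dot-free tags (5_1 rather than 5.1) are used because
# dots are not valid in variable names.
do_stuff() {
    local version=$1
    export "FOO_${version}_X=17"
    export "FOO_${version}_Y=42"
}

do_stuff 5_1
do_stuff 5_2

unset -f do_stuff               # the function is gone...
echo "$FOO_5_1_X" "$FOO_5_2_Y"  # 17 42  ...but the variables it exported remain
```

After the unset, tab completion no longer offers do_stuff, while all the FOO_* variables stay in the environment.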
{ "source": [ "https://unix.stackexchange.com/questions/104755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
104,800
I'd like to use find to list all files and directories recursively in a given root for a cpio operation. However, I don't want the root directory itself to appear in the paths. For example, I currently get: $ find diskimg diskimg diskimg/file1 diskimg/dir1 diskimg/dir1/file2 But, I'd like to get file1 dir1 dir1/file2 (note the root is also not in my desired output, but that's easy to get rid of with tail ). I'm on OS X, and I'd prefer not to install any extra tools (e.g. GNU find) if possible, since I'd like to share the script I'm writing with other OS X users. I'm aware this can be done with cut to cut the root listing off, but that seems like a suboptimal solution. Is there a better solution available?
If what you are trying to do is not too complex, you could accomplish this with sed: find diskimg | sed -n 's|^diskimg/||p' Or cut : find diskimg | cut -sd / -f 2-
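Another option is to let find skip the root entry itself with -mindepth 1 (supported by both GNU find and the BSD find shipped with OS X), so only the prefix needs stripping. A self-contained sketch using a temporary directory in place of diskimg:

```shell
root=$(mktemp -d)                  # stand-in for the "diskimg" root
mkdir -p "$root/dir1"
touch "$root/file1" "$root/dir1/file2"

# -mindepth 1 drops the root directory itself from the results;
# sed then strips the leading "$root/" from every remaining path.
find "$root" -mindepth 1 | sed "s|^$root/||"

rm -r "$root"
```

This prints dir1, dir1/file2 and file1 (in filesystem order) with no root prefix and no root entry.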
{ "source": [ "https://unix.stackexchange.com/questions/104800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42359/" ] }
104,821
I have started a wget on remote machine in background using & . Suddenly it stops downloading. I want to terminate its process, then re-run the command. How can I terminate it? I haven't closed its shell window. But as you know it doesn't stop using Ctrl + C and Ctrl + Z .
There are many ways to go about this. Method #1 - ps You can use the ps command to find the process ID for this process and then use the PID to kill the process. Example $ ps -eaf | grep [w]get saml 1713 1709 0 Dec10 pts/0 00:00:00 wget ... $ kill 1713 Method #2 - pgrep You can also find the process ID using pgrep . Example $ pgrep wget 1234 $ kill 1234 Method #3 - pkill If you're sure it's the only wget you've run you can use the command pkill to kill the job by name. Example $ pkill wget Method #4 - jobs If you're in the same shell from where you ran the job that's now backgrounded, you can check whether it's still running with the jobs command, and also kill it by its job number. Example My fake job, sleep . $ sleep 100 & [1] 4542 Find its job number. NOTE: the number 4542 is the process ID. $ jobs [1]+ Running sleep 100 & $ kill %1 [1]+ Terminated sleep 100 Method #5 - fg You can bring a backgrounded job back to the foreground using the fg command. Example Fake job, sleep . $ sleep 100 & [1] 4650 Get the job's number. $ jobs [1]+ Running sleep 100 & Bring job #1 back to the foreground, and then use Ctrl + C . $ fg 1 sleep 100 ^C $
{ "source": [ "https://unix.stackexchange.com/questions/104821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40417/" ] }
104,881
I want to parse a variable (in my case it's development kit version) to make it dot( . ) free. If version='2.3.3' , desired output is 233 . I tried as below, but it requires . to be replaced with another character giving me 2_3_3 . It would have been fine if tr . '' would have worked. 1 VERSION='2.3.3' 2 echo "2.3.3" | tr . _
There is no need to execute an external program. bash 's string manipulation can handle it (also available in ksh93 (where it comes from), zsh and recent versions of mksh , yash and busybox sh (at least)): $ VERSION='2.3.3' $ echo "${VERSION//.}" 233 (In those shells' manuals you can generally find this in the parameter expansion section.)
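Both forms side by side, so the difference is concrete: the parameter expansion deletes every match of the pattern without spawning a process, while tr -d is the external-tool equivalent for strictly POSIX shells:

```shell
VERSION='2.3.3'

# ${var//pattern/replacement}: with the replacement omitted, every match
# of the pattern is simply deleted (bash/ksh93/zsh).
echo "${VERSION//.}"          # 233

# POSIX-portable equivalent: tr -d deletes characters instead of
# translating them, so no replacement set is needed.
echo "$VERSION" | tr -d .     # 233
```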
{ "source": [ "https://unix.stackexchange.com/questions/104881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17781/" ] }
104,901
This may sound trivial but, on more than one occasion, I have found myself having forgotten which file in vim I have open (e.g. when I am looking through different log files and such) and the only way I knew how to find out was to close the file and look in the command history for the most recent command. Is there a command within vim to tell you which file you currently have opened without exiting the program or the file you have opened (e.g. :<which_file_cmd> ?)
In addition to uprego's answer , you can press Ctrl + G (in normal mode) to get the current buffer's name as well as the total number of lines in it and your current position within it. Update As per rxdazn's comment , you can press 1 before Ctrl + G to get the full file path. If you press 2 , you get the full file path and the buffer number you currently have open (useful when you have opened multiple files with vim ).
{ "source": [ "https://unix.stackexchange.com/questions/104901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23944/" ] }
104,982
I've seen in many places used install -d to create directories and install -c to copy a file. Why not use mkdir and cp ? Is there an advantage in using install ?
It depends on what you're doing. The install command is normally used in installation scripts that come with packages and source code for installing a binary to your system. It can also be used to install any other file or directory. In addition to the -d and -c options you have -m for specifying the new permissions of the file to be installed, so you don't have to do a cp and a chmod to get the same result. For instance: install -m644 "$srcdir/$pkgname-$pkgver-linux64" "$pkgdir/opt/$pkgname" You also have options -g and -o for setting the target group and owner, respectively. This avoids separate calls to chown . In general, using install shortens your script and makes it more concise by doing file creation, copying, mode setting and related stuff in one command instead of many. For reference, see man install . For usage, just take a look at any installation script shipped with some package source code .
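A self-contained sketch (the package paths are made up) of how one install call replaces a mkdir -p plus cp plus chmod sequence:

```shell
tmp=$(mktemp -d)                                   # sandbox for the demo
printf '#!/bin/sh\necho hello\n' > "$tmp/myscript"

install -d -m 755 "$tmp/opt/mypkg/bin"             # mkdir -p + chmod in one step
install -m 755 "$tmp/myscript" "$tmp/opt/mypkg/bin/myscript"   # cp + chmod in one step

ls -l "$tmp/opt/mypkg/bin/myscript"
rm -r "$tmp"
```

With GNU install, copying is the default behavior, so the -c flag is accepted for compatibility but ignored.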
{ "source": [ "https://unix.stackexchange.com/questions/104982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54297/" ] }
105,026
I got a warning of my /boot partition is almost full(85%). What should I do? Can I remove one of the backup kernel? How to do it safely? My partition right now Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda2 10321208 719856 9077064 8% / tmpfs 4015460 0 4015460 0% /dev/shm /dev/sda1 101133 80781 15130 85% /boot /dev/sda8 253782660 47668764 193222404 20% /home /dev/sda7 1032088 535840 443820 55% /tmp /dev/sda3 10321208 4823740 4973180 50% /usr /dev/sda5 10321208 1807284 7989636 19% /var The Kernel I have root@server1 [/boot]# rpm -q kernel kernel-2.6.32-358.el6.x86_64 kernel-2.6.32-358.18.1.el6.x86_64 kernel-2.6.32-358.23.2.el6.x86_64 The /Boot directory root@server1 [/boot]# ls -la /boot total 78741 dr-xr-xr-x. 5 root root 2048 Dec 3 05:33 ./ drwxr-xr-x. 23 root root 4096 Dec 4 05:46 ../ -rw-r--r-- 1 root root 104112 Aug 28 12:43 config-2.6.32-358.18.1.el6.x86_64 -rw-r--r-- 1 root root 104112 Oct 16 14:01 config-2.6.32-358.23.2.el6.x86_64 -rw-r--r--. 1 root root 104081 Feb 21 2013 config-2.6.32-358.el6.x86_64 drwxr-xr-x. 3 root root 1024 Sep 20 20:15 efi/ drwxr-xr-x. 2 root root 1024 Oct 21 15:06 grub/ -rw-r--r-- 1 root root 16191847 Sep 20 20:21 initramfs-2.6.32-358.18.1.el6.x86_64.img -rw-r--r-- 1 root root 16261655 Oct 21 15:06 initramfs-2.6.32-358.23.2.el6.x86_64.img -rw-r--r--. 1 root root 16187335 Sep 20 20:16 initramfs-2.6.32-358.el6.x86_64.img -rw------- 1 root root 3698835 Sep 20 20:27 initrd-2.6.32-358.18.1.el6.x86_64kdump.img -rw------- 1 root root 3983771 Dec 3 05:33 initrd-2.6.32-358.23.2.el6.x86_64kdump.img -rw------- 1 root root 3695290 Sep 20 20:21 initrd-2.6.32-358.el6.x86_64kdump.img drwx------. 2 root root 12288 Sep 20 20:13 lost+found/ -rw-r--r-- 1 root root 185949 Aug 28 12:44 symvers-2.6.32-358.18.1.el6.x86_64.gz -rw-r--r-- 1 root root 185978 Oct 16 14:02 symvers-2.6.32-358.23.2.el6.x86_64.gz -rw-r--r--. 
1 root root 185734 Feb 21 2013 symvers-2.6.32-358.el6.x86_64.gz -rw-r--r-- 1 root root 2408641 Aug 28 12:43 System.map-2.6.32-358.18.1.el6.x86_64 -rw-r--r-- 1 root root 2408974 Oct 16 14:01 System.map-2.6.32-358.23.2.el6.x86_64 -rw-r--r--. 1 root root 2407466 Feb 21 2013 System.map-2.6.32-358.el6.x86_64 -rwxr-xr-x 1 root root 4046224 Aug 28 12:43 vmlinuz-2.6.32-358.18.1.el6.x86_64* -rw-r--r-- 1 root root 171 Aug 28 12:43 .vmlinuz-2.6.32-358.18.1.el6.x86_64.hmac -rwxr-xr-x 1 root root 4047152 Oct 16 14:01 vmlinuz-2.6.32-358.23.2.el6.x86_64* -rw-r--r-- 1 root root 171 Oct 16 14:01 .vmlinuz-2.6.32-358.23.2.el6.x86_64.hmac -rwxr-xr-x. 1 root root 4043888 Feb 21 2013 vmlinuz-2.6.32-358.el6.x86_64* -rw-r--r--. 1 root root 166 Feb 21 2013 .vmlinuz-2.6.32-358.el6.x86_64.hmac The Kernel I'm using root@server1 [/boot]# uname -a Linux server1 2.6.32-358.23.2.el6.x86_64 #1 SMP Wed Oct 16 18:37:12 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Do the following to keep just the last 2 kernels on your system, to keep /boot clean: 1 - Edit /etc/yum.conf and set the following parameter installonly_limit=2 This will make your package manager keep just the last 2 kernels on your system (including the one that is running) 2 - Install yum-utils : yum install yum-utils 3 - Run an old-kernel cleanup: package-cleanup --oldkernels --count=2 Done. This will cleanly remove the old kernels and keep just the last 2 of them for the next upgrades. For special cases where you have vmlinuz-0-rescue-* and initramfs-0-rescue-* files using too much disk space, please take a look at this question on U&L: Removing the rescue image from /boot on fedora
{ "source": [ "https://unix.stackexchange.com/questions/105026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48887/" ] }
105,040
I have this huge folder with thousands of unordered files. Is it feasible to move the first 5000 files to a subfolder via the mv command? For now I move files with mv *some_pattern* ./subfolder1/ As for now, I move images quite randomly; it's not really important if there aren't exactly 5000 files in each subfolder. Is there a better way to do it?
mv `ls | head -5000` ./subfolder1/
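Worth noting: the ls-in-backticks approach breaks on filenames containing spaces and can exceed the argument-length limit with thousands of files. A glob-based loop avoids both problems; the demo below runs in a throwaway directory and moves only 3 entries, but the cutoff is just a number (use 5000 for the real task):

```shell
tmp=$(mktemp -d)                          # sandbox for the demo
mkdir "$tmp/subfolder1"
touch "$tmp/a file" "$tmp/b" "$tmp/c" "$tmp/d"

(
  cd "$tmp" || exit
  n=0
  for f in *; do                          # glob order is alphabetical, like ls
      [ "$f" = subfolder1 ] && continue   # don't move the target into itself
      [ "$n" -ge 3 ] && break             # cutoff: 3 for the demo, 5000 for real
      mv -- "$f" subfolder1/
      n=$((n + 1))
  done
)

ls "$tmp/subfolder1"                      # a file, b, c
rm -r "$tmp"
```

The quoted "$f" keeps names with spaces intact, and `--` protects names that start with a dash.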
{ "source": [ "https://unix.stackexchange.com/questions/105040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54521/" ] }
105,140
I was copying hundreds of files to another computer using the scp command that I got the stalled error . Now I am going to copy the files again. Is there any way to avoid copying the already copied files?
You can use rsync for it. rsync is really designed for this type of operation. Syntax: rsync -avh /source/path/ host:/destination/path or rsync -a --ignore-existing /local/directory/ host:/remote/directory/ When you run it the first time it will copy all content; after that it will copy only new files. If you need to tunnel the traffic through an SSH connection (for example, for confidentiality purposes), as indicated by you originally asking for an SCP-based solution, simply add -e ssh to the parameters to rsync . For example: rsync -avh -e ssh /source/path/ host:/destination/path
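The effect of --ignore-existing can be demonstrated locally, with no remote host involved; the directories below are throwaway temp dirs:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo one > "$src/a"
rsync -a "$src/" "$dst/"               # first pass: everything is copied

echo two > "$src/b"                    # a new file...
echo CHANGED > "$src/a"                # ...and a change to an existing file
rsync -a --ignore-existing "$src/" "$dst/"

cat "$dst/a"   # one  (existing file was skipped, not overwritten)
cat "$dst/b"   # two  (only the new file was transferred)
rm -r "$src" "$dst"
```

This is exactly the behavior you want when resuming an interrupted scp-style copy: files that already made it across are left alone.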
{ "source": [ "https://unix.stackexchange.com/questions/105140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
105,346
We are having a little problem on a server. We want that some users should be able to do e.g. sudo and become root, but with the restriction that the user can't change root password. That is, a guarantee that we still can login to that server and become root no matter of what the other users will do. Is that possible?
This is practically impossible. First of all, if you grant them the power of becoming root , then there's nothing you can do to prevent them from doing anything. In your use case, sudo should be used to grant your users some root powers while restricting others without allowing them to become root . In your scenario, you would need to restrict access to the su and passwd commands and open access to pretty much everything else. The problem is, there's nothing you can do to prevent your users from editing /etc/shadow (or /etc/sudoers for that matter) directly and dropping in a replacement root password to hijack root. And this is just the most straightforward "attack" scenario possible. Sudoers with unrestricted power except for one or two commands can work around the restrictions to hijack full root access. The only solution, as suggested by SHW in the comments is to use sudo to grant your users access to a restricted set of commands instead. Update There might be a way to accomplish this if you use Kerberos tickets for authentication. Read this document explaining the use of the .k5login file. I quote the relevant parts: Suppose the user alice had a .k5login file in her home directory containing the following line: [email protected] This would allow bob to use Kerberos network applications, such as ssh(1), to access alice‘s account, using bob‘s Kerberos tickets. ... Note that because bob retains the Kerberos tickets for his own principal, [email protected] , he would not have any of the privileges that require alice‘s tickets, such as root access to any of the site’s hosts, or the ability to change alice‘s password. I might be mistaken, though. I'm still wading through the documentation and have yet to try Kerberos out for myself.
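The whitelisting approach recommended above looks like this in /etc/sudoers (always edit it with visudo); the username, alias name, and commands here are invented examples:

```
# Hypothetical /etc/sudoers fragment -- edit only with visudo.
# Grant alice a few specific root-level commands instead of full root:
Cmnd_Alias  SVCCTL = /usr/bin/systemctl restart httpd, /usr/bin/systemctl status httpd

alice   ALL=(root) SVCCTL
```

Keep the allowed list to commands that cannot spawn a shell or edit arbitrary files; otherwise the restrictions can be escaped exactly as described above.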
{ "source": [ "https://unix.stackexchange.com/questions/105346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31769/" ] }
105,568
I'm trying to add some color to my git configuration and I want to know what color names are available on the terminal. I only want to use colors by name so it's easier for others to understand I don't want to add any new colors - I just want to select from the predefined names I would like a solution that works for all distros, but primarily Debian It would be nice to see the color that the name indicates Many online references often talk about color names that are not defined on my system, so I just need a way to see what my default options are.
Many online references often talk about color names that are not defined on my system Those probably are defined, but they are X11 colors; once upon a time you could find them in /lib[64]/X11/rgb.txt . In any case, this is a mapping of strings (e.g., dimgray ) to 24-bit RGB colors (e.g. 0xff8800 or #ff8800 , which would be orange). A 24-bit space is ~16 million colors; obviously X11 does not give them all names (CSS 3 uses X11 names, BTW). The 24-bit space is used by your GUI; transparency is implemented by increasing this to a 32-bit space. However, git isn't a GUI (G = graphical) tool, it's a TUI (T = terminal) tool, and it is limited to the colors available on a normal terminal. I would like a solution that works for all distros, but primarily Debian If you want this to be properly portable, you should rely only on the eight standard ANSI colors : black blue green yellow cyan white magenta red A little disappointing next to the X11 list, but better than nothing at all! These also have a "bold" or "bright" version that is standard, making 16 colors, which you may be able to specify as, e.g., "brightyellow" (or conversely, "darkyellow"). Most GUI terminals 1 have 256 color support and some terminal apps can exploit this. To test, you first need to set the TERM variable appropriately: export TERM=xterm-256color Your terminal emulator may also have a configuration option for this. Colors under the xterm 256 color protocol are indexed (the original answer shows a chart of all 256 swatches, with the index number in the bottom left corner of each). Notice the set at the bottom of that chart (0-15) is the 16 basic (bright and dark) ANSI colors. To reference one of these colors under the standard, you use color + the index number, e.g. color40 .
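You can also print the indexed palette directly in your own terminal instead of consulting a chart; a small sketch using the xterm 256-color escape sequences:

```shell
# Print all 256 indexed colors so you can see what "color40" etc. look like.
# \033[48;5;NUMm selects background color NUM under the xterm-256color protocol;
# \033[0m resets the attributes afterwards.
i=0
while [ "$i" -le 255 ]; do
    printf '\033[48;5;%dm %3d \033[0m' "$i" "$i"
    i=$((i + 1))
    [ $((i % 16)) -eq 0 ] && printf '\n'    # 16 swatches per row
done
```

The first row (0-15) shows the basic bright and dark ANSI colors; the rest are the extended indexed colors you can reference as color16 through color255.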
{ "source": [ "https://unix.stackexchange.com/questions/105568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54796/" ] }
105,569
How can I replace spaces with new lines on an input like: /path/to/file /path/to/file2 /path/to/file3 /path/to/file4 /path/to/file5 etc... To obtain the following: /path/to/file /path/to/file2 /path/to/file3 /path/to/file4 /path/to/file5 Note I'm posting this question to help other users, it was not easy to find a useful answer on UNIX SE until I started to type this question. After that I found the following: Related question How can I find and replace with a new line?
Use the tr command echo "/path/to/file /path/to/file2 /path/to/file3 /path/to/file4 /path/to/file5" \ | tr " " "\n" Found on http://www.unix.com/shell-programming-scripting/67831-replace-space-new-line.html
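One wrinkle worth knowing: if the input contains runs of consecutive spaces, plain tr " " "\n" emits one newline per space and you get blank lines in the output. The -s (squeeze) flag collapses the repeats:

```shell
# Note the double space after the first path: without -s it would
# produce an empty line; -s squeezes the repeated newlines away.
printf '/path/to/file  /path/to/file2 /path/to/file3\n' | tr -s ' ' '\n'
```

This prints each path on its own line with no blank lines in between.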
{ "source": [ "https://unix.stackexchange.com/questions/105569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49721/" ] }
105,747
Consider the following output from df . Filesystem Size Used Avail Use% Mounted on /dev/sda1 23G 6.1G 16G 29% / udev 10M 0 10M 0% /dev tmpfs 397M 420K 397M 1% /run tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 1.8G 904K 1.8G 1% /run/shm /dev/sda6 890G 324G 521G 39% /home /dev/sdb1 459G 267G 169G 62% /home/user/mnt none 4.0K 0 4.0K 0% /sys/fs/cgroup How can I only show lines that begin with "/dev" and keep the heading, but filter out everything else. I'd also like to not have to resort to using using temporary files or variables? Note: the heading is locale dependent, therefore you can't catch it with a regexp.
I would use a slightly more sophisticated approach than simple grep : awk df -h | awk 'NR==1 || /^\/dev/' NR is the current line number so the awk scriptlet above will print if this is the first line or if the current line begins with /dev . And after posting this I see it is the same as @1_CR's answer. Oh well... Perl df -h | perl -ne 'print if (/^\/dev/ || $.==1)' The same idea here, in Perl the special variable $. is the current line number. An alternative way would be df -h | perl -pe '$_="" unless /^\/dev/ || $.==1' The -p switch will print all lines of the input file. The value of the current line is held in $_ so we set $_ to empty unless we want the current line. sed df -h | sed -n '1p; /^\/dev/p' The -n suppresses normal output so no lines are printed. The 1p means print the first line and the /^\/dev/p means print any line that starts with /dev . As pointed out in the comments below, in the unlikely case where the locale on your current system causes the header line to start with /dev , the command above will print it twice. Stephane Chazelas points out that this one will not have that problem: df -h | sed -e 1b -e '/^\/dev/!d' grep df -h | grep -E '^(/dev|File)' This might not be portable because of LOCALE problems as you said. However, I am reasonably certain that no locale or df version will give a path in the first line, so searching for lines that contain no / should also work: df -h | grep -E '^[^/]*$|^/dev'
{ "source": [ "https://unix.stackexchange.com/questions/105747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53904/" ] }
105,800
Unable to ssh to another computer but can ping it? Not sure what I am missing? Using a Netgear router bash-3.2$ ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280 stf0: flags=0<> mtu 1280 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether xx:xx:xx:xx:xx:xx media: autoselect (none) status: inactive en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether xx:xx:xx:xx:xx:xx inet6 xxxx::xxxx:xxxx:xxxx:xxxxxx prefixlen 64 scopeid 0x5 inet 10.0.0.3 netmask 0xffffff00 broadcast 10.0.0.255 media: autoselect status: active fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078 lladdr xx:xx:xx:xx:xx:xx:xx:xx media: autoselect <full-duplex> status: inactive bash-3.2$ ssh [email protected] ssh: connect to host 10.0.0.4 port 22: Connection refused bash-3.2$ ssh -p 5900 [email protected] ssh: connect to host 10.0.0.4 port 5900: Connection refused bash-3.2$ ping 10.0.0.3 PING 10.0.0.3 (10.0.0.3): 56 data bytes 64 bytes from 10.0.0.3: icmp_seq=0 ttl=64 time=0.046 ms 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.078 ms 64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.077 ms 64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.079 ms 64 bytes from 10.0.0.3: icmp_seq=5 ttl=64 time=0.081 ms 64 bytes from 10.0.0.3: icmp_seq=6 ttl=64 time=0.078 ms ^C --- 10.0.0.3 ping statistics --- 7 packets transmitted, 7 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 0.046/0.074/0.081/0.011 ms bash-3.2$ ping 10.0.0.4 PING 10.0.0.4 (10.0.0.4): 56 data bytes 64 bytes from 10.0.0.4: icmp_seq=0 ttl=64 time=2.667 ms 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=2.675 ms 64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=2.969 ms 64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=2.663 ms 64 bytes from 10.0.0.4: 
icmp_seq=4 ttl=64 time=2.723 ms ^C --- 10.0.0.4 ping statistics --- 5 packets transmitted, 5 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 2.663/2.739/2.969/0.117 ms bash-3.2$
The server is either not running sshd (and hence not listening on port 22) or has a firewall blocking port 22 (the default ssh port), or in incredibly rare cases running ssh on some other port (which is almost certainly not the case). First check to make sure sshd is installed (using debian examples) sudo apt-get install openssh-server And if so, is it running: ps -ef | grep sshd then check to see if it is listening to port 22 sudo netstat -nlp | grep :22 tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 946/sshd tcp6 0 0 :::22 :::* LISTEN 946/sshd then check your firewall rules (this varies significantly, so I'll show a debian/ubuntu/etc example): sudo ufw status sudo ufw show listening tcp: 22 * (sshd) 24224 * (ruby) tcp6: 22 * (sshd) 8080 * (java) udp: 123 10.X.Y.Z (ntpd) 123 * (ntpd) 18649 * (dhclient) 24224 * (ruby) 34131 * (ruby) 60001 10.87.43.24 (mosh-server) 68 * (dhclient) udp6: 123 fe80::1031:AAAA:BBBB:CCCC (ntpd) 123 * (ntpd) 48573 * (dhclient) If ufw shows it as closed then run (again a debian/ubuntu example) sudo ufw allow 22
{ "source": [ "https://unix.stackexchange.com/questions/105800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41742/" ] }
105,804
I have two machines. On machineA I have a jar file with these permissions - -rwxr-xr-x 1 cronus app 16758150 2013-03-19 13:35 exhibitor-1.5.1-jar-with-dependencies.jar On my other machine, machineB, I have the same jar file but with different permissions - -rw-r--r-- 1 root messagebus 19340260 Nov 25 14:28 exhibitor-1.5.1-jar-with-dependencies.jar How do I make the permissions of the machineB jar file the same as the machineA jar file? In short, how do I get the permissions -rwxr-xr-x so I can apply them to the machineB jar file? And apart from this, can anybody explain to me how these permissions work and what they mean? UPDATE:- Thanks Jordan for the link, I am able to understand its meaning now..
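For reference, the target mode -rwxr-xr-x corresponds to octal 755: the owner gets rwx (4+2+1 = 7), while group and others each get r-x (4+1 = 5). A sketch on a throwaway file:

```shell
tmp=$(mktemp -d)
touch "$tmp/example.jar"          # stand-in for the real jar file

chmod 755 "$tmp/example.jar"      # numeric form: yields -rwxr-xr-x
# the same mode written symbolically:
chmod u=rwx,g=rx,o=rx "$tmp/example.jar"

ls -l "$tmp/example.jar"
rm -r "$tmp"
```

Changing the owner/group shown in the listing (root/messagebus vs cronus/app) is a separate step done with chown, which requires root.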
{ "source": [ "https://unix.stackexchange.com/questions/105804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48833/" ] }
105,840
I am starting my application in the background using nohup as mentioned below - root@phx5qa01c:/bezook# nohup java -jar ./exhibitor-1.5.1/lib/exhibitor-1.5.1-jar-with-dependencies.jar -c file --fsconfigdir /opt/exhibitor/conf --hostname phx5qa01c.phx.qa.host.com > exhibitor.out & [1] 30781 root@phx5qa01c:/bezook# nohup: ignoring input and redirecting stderr to stdout But every time I see this message - nohup: ignoring input and redirecting stderr to stdout Will there be any problem if I see this message? What does it mean and how can I avoid it?
To make sure that your application is disassociated from its terminal - so that it will not interfere with foreground commands and will continue to run after you log out - nohup ensures that neither stdin nor stdout nor stderr is a terminal-like device. The documentation describes what actions it takes: If the standard output is a terminal, all output written by the named utility to its standard output shall be appended to the end of the file nohup.out in the current directory. If nohup.out cannot be created or opened for appending, the output shall be appended to the end of the file nohup.out in the directory specified by the HOME environment variable. If neither file can be created or opened for appending, utility shall not be invoked. If the standard error is a terminal, all output written by the named utility to its standard error shall be redirected to the same file descriptor as the standard output. You redirected stdout to a file when you typed > exhibitor.out in your command line. If you're OK with having your application's stderr be directed to the same file as its stdout, you don't need to do anything else. Or you can redirect stderr to a different file by adding an argument such as 2> exhibitor.err . (Thanks to an unknown user - my notifications didn't show a name - for suggesting inclusion of this alternative.)
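A minimal sketch of both redirection styles, using a stand-in script in place of the real java command (the /tmp paths and the worker.sh name are placeholders, not from the original setup):

```shell
# Stand-in for the long-running java process: writes to both streams.
cat > /tmp/worker.sh <<'EOF'
#!/bin/sh
echo "normal output"          # goes to stdout
echo "error output" >&2       # goes to stderr
EOF
chmod +x /tmp/worker.sh

# stdout and stderr sent to separate files: nohup has nothing left to
# redirect itself, so its "redirecting stderr to stdout" message goes away.
nohup /tmp/worker.sh > /tmp/worker.out 2> /tmp/worker.err &
wait

# Or send both streams to the same file:
nohup /tmp/worker.sh > /tmp/worker.log 2>&1 &
wait
```

After this runs, /tmp/worker.out holds only the normal line, /tmp/worker.err only the error line, and /tmp/worker.log both.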
{ "source": [ "https://unix.stackexchange.com/questions/105840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22434/" ] }
105,876
I'm going through http://mywiki.wooledge.org/BashGuide/CommandsAndArguments and came across this: $ type rm rm is hashed (/bin/rm) $ type cd cd is a shell builtin Just a little earlier, the guide listed the various types of commands understood by Bash: aliases, functions, builtins, keywords and executables. But there wasn't mention of "hashed". So, in this context, what does "hashed" mean?
It's a performance thing; instead of searching the whole path for the binary every time it is called, it's put into a hash table for quicker lookup. So any binary that's already in this hash table is hashed. If you move binaries around when they're already hashed, bash will still try to call them in their old location. See also help hash , or man bash and search for hash under builtin commands there.
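A short illustration of the hash table in action. The options below are bash builtin features, so the commands are wrapped in bash -c to make the sketch runnable from any shell:

```shell
bash -c '
  hash ls          # look ls up on $PATH once and remember its location
  hash             # list the remembered commands and their hit counts
  hash -t ls       # print the remembered path, e.g. /bin/ls
  hash -r          # forget all remembered locations
'
```

hash -r is the fix for the "moved binary" situation described above: it empties the table so the next call searches $PATH again.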
{ "source": [ "https://unix.stackexchange.com/questions/105876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
105,908
Sometimes it is not comfortable to see meminfo in kilobytes when you have several gigs of RAM. In Linux, it looks like: And here is how it looks in Mac OS X: Is there a way to display meminfo in Linux top in terabytes, gigabytes and megabytes?
When in top, typing capital "E" cycles through different memory units (KiB, MiB, GiB, etc., which are different from kB, MB and GB) in the total memory info, while lower-case "e" does the same for the individual process lines. From the manpage: 2c. MEMORY Usage This portion consists of two lines which may express values in kibibytes (KiB) through exbibytes (EiB) depending on the scaling factor enforced with the 'E' interactive command. Version Information: top -version : procps-ng version 3.3.9 System: CentOS 7
{ "source": [ "https://unix.stackexchange.com/questions/105908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16253/" ] }
105,935
phone_missing=false echo "missing $phone_missing" if [ ! $phone_missing ] then echo "Lost phone at $readabletime" $phone_missing=true fi I just can't understand this. The line echo "missing $phone_missing" echos missing false , I would expect the statement if [ ! $phone_missing ] to be true and enter the if clause, but it doesn't? What am I missing here!?
The variable $phone_missing is a string that happens to contain false . And a non-empty string evaluates to true . See also http://www.linuxintro.org/wiki/Babe#empty_strings
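A minimal sketch of one way to make the flag behave as intended: compare the string value explicitly, and assign without the leading $ :

```shell
phone_missing=false

# [ ! $phone_missing ] only tests whether the string is empty;
# compare against the literal word instead:
if [ "$phone_missing" = false ]; then
    echo "phone is still here"
fi

# Assignment must not have a '$' on the left-hand side:
phone_missing=true

if [ "$phone_missing" = true ]; then
    echo "Lost phone"
fi
```

This prints "phone is still here" followed by "Lost phone". The original `$phone_missing=true` expands to `false=true`, which the shell tries to run as a command.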
{ "source": [ "https://unix.stackexchange.com/questions/105935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54910/" ] }
105,958
I have an issue where if I type in very long commands in bash the terminal will not render what I'm typing correctly. I'd expect that if I had a command like the following: username@someserver ~/somepath $ ssh -i /path/to/private/key [email protected] The command should render on two lines. Instead it will often wrap around and start writing over the top of my prompt, somewhat like this: [email protected] -i /path/to/private/key If I decide to go back and change some argument there's no telling where the cursor will show up, sometimes in the middle of the prompt, but usually on the line above where I'm typing. Additional fun happens when when I Up to a previous command. I've tried this in both gnome-terminal and terminator and on i3 and Cinnamon. Someone suggested it was my prompt, so here that is: \[\033[01;32m\]\u:\[\033[01;34m\] \W\033[01;34m \$\[\033[00m\] Ctrl l , reset , and clear all do what they say, but when I type the command back in or Up the same things happens. I checked and checkwinsize is enabled in bash. This happens on 80x24 and other window sizes. Is this just something I learn to live with? Is there some piece of magic which I should know? I've settled for just using a really short prompt, but that doesn't fix the issue.
Non-printable sequences should be enclosed in \[ and \] . Looking at your PS1, it has an unenclosed sequence after \W . That second entry is also redundant, as it repeats the preceding "1;34" :

\[\033[01;32m\]\u:\[\033[01;34m\] \W\033[01;34m \$\[\033[00m\]
                  |_____________|   |_________|
                         |               |
                         +--- Let this apply to this as well.

As such this should have the intended coloring:

\[\033[1;32m\]\u:\[\033[1;34m\] \W \$\[\033[0m\]
                               |_____|
                                  |
                                  +---- Bold blue.

Keeping the "original" this should also work:

\[\033[1;32m\]\u:\[\033[1;34m\] \W\[\033[1;34m\] \$\[\033[0m\]
                                  |_|        |_|
                                   |          |
                                   +----------+-- Enclose in \[ \]

Edit: The reason for the behavior is that bash believes the prompt is longer than it actually is. As a simple example, if one uses:

PS1="\033[0;34m$"
                1
      2345678

the prompt is believed to be 8 characters wide and not 1, because the escape sequence contributes 7 invisible characters that bash still counts. As such, if the terminal window is 20 columns wide, then after typing 12 characters the line is believed to be 20 columns long and wraps around. This is also evident if one then tries to backspace or press Ctrl+u : it stops at column 9. However, bash does not start a new line unless one is on the last column, and as a result the first line is overwritten. If one keeps typing, the line should wrap to the next line after 32 characters.
{ "source": [ "https://unix.stackexchange.com/questions/105958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54985/" ] }
105,969
OS: CentOS-6.5-x86_64-minimal I downloaded the latest version of OpenSSL Extracted it with tar -xvzf openssl-1.0.1e.tar.gz cd openssl-1.0.1e ./config --prefix=/usr/local make it gives me the following error: making all in crypto... make[1]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto' making all in crypto/objects... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/objects' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/objects' making all in crypto/md4... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/md4' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/md4' making all in crypto/md5... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/md5' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/md5' making all in crypto/sha... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/sha' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/sha' making all in crypto/mdc2... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/mdc2' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/mdc2' making all in crypto/hmac... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/hmac' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/hmac' making all in crypto/ripemd... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ripemd' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ripemd' making all in crypto/whrlpool... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/whrlpool' make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/whrlpool' making all in crypto/des... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/des' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/des' making all in crypto/aes... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/aes' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/aes' making all in crypto/rc2... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/rc2' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/rc2' making all in crypto/rc4... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/rc4' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/rc4' making all in crypto/idea... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/idea' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/idea' making all in crypto/bf... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/bf' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/bf' making all in crypto/cast... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/cast' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/cast' making all in crypto/camellia... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/camellia' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/camellia' making all in crypto/seed... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/seed' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/seed' making all in crypto/modes... 
make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/modes' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/modes' making all in crypto/bn... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/bn' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/bn' making all in crypto/ec... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ec' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ec' making all in crypto/rsa... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/rsa' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/rsa' making all in crypto/dsa... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/dsa' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/dsa' making all in crypto/ecdsa... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ecdsa' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ecdsa' making all in crypto/dh... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/dh' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/dh' making all in crypto/ecdh... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ecdh' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ecdh' making all in crypto/dso... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/dso' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/dso' making all in crypto/engine... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/engine' make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/engine' making all in crypto/buffer... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/buffer' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/buffer' making all in crypto/bio... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/bio' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/bio' making all in crypto/stack... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/stack' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/stack' making all in crypto/lhash... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/lhash' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/lhash' making all in crypto/rand... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/rand' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/rand' making all in crypto/err... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/err' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/err' making all in crypto/evp... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/evp' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/evp' making all in crypto/asn1... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/asn1' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/asn1' making all in crypto/pem... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/pem' make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/pem' making all in crypto/x509... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/x509' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/x509' making all in crypto/x509v3... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/x509v3' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/x509v3' making all in crypto/conf... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/conf' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/conf' making all in crypto/txt_db... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/txt_db' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/txt_db' making all in crypto/pkcs7... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/pkcs7' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/pkcs7' making all in crypto/pkcs12... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/pkcs12' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/pkcs12' making all in crypto/comp... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/comp' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/comp' making all in crypto/ocsp... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ocsp' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ocsp' making all in crypto/ui... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ui' make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ui' making all in crypto/krb5... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/krb5' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/krb5' making all in crypto/cms... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/cms' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/cms' making all in crypto/pqueue... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/pqueue' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/pqueue' making all in crypto/ts... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/ts' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/ts' making all in crypto/srp... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/srp' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/srp' making all in crypto/cmac... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/crypto/cmac' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto/cmac' if [ -n "" ]; then \ (cd ..; make libcrypto.so.1.0.0); \ fi make[1]: Leaving directory `/usr/local/src/openssl-1.0.1e/crypto' making all in ssl... make[1]: Entering directory `/usr/local/src/openssl-1.0.1e/ssl' if [ -n "" ]; then \ (cd ..; make libssl.so.1.0.0); \ fi make[1]: Leaving directory `/usr/local/src/openssl-1.0.1e/ssl' making all in engines... make[1]: Entering directory `/usr/local/src/openssl-1.0.1e/engines' echo making all in engines/ccgost... make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/engines/ccgost' make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/engines/ccgost' make[1]: Leaving directory `/usr/local/src/openssl-1.0.1e/engines' making all in apps... make[1]: Entering directory `/usr/local/src/openssl-1.0.1e/apps' rm -f openssl shlib_target=; if [ -n "" ]; then \ shlib_target="linux-shared"; \ elif [ -n "" ]; then \ FIPSLD_CC="gcc"; CC=/usr/local/ssl/fips-2.0/bin/fipsld; export CC FIPSLD_CC; \ fi; \ LIBRARIES="-L.. -lssl -L.. -lcrypto" ; \ make -f ../Makefile.shared -e \ APPNAME=openssl OBJECTS="openssl.o verify.o asn1pars.o req.o dgst.o dh.o dhparam.o enc.o passwd.o gendh.o errstr.o ca.o pkcs7.o crl2p7.o crl.o rsa.o rsautl.o dsa.o dsaparam.o ec.o ecparam.o x509.o genrsa.o gendsa.o genpkey.o s_server.o s_client.o speed.o s_time.o apps.o s_cb.o s_socket.o app_rand.o version.o sess_id.o ciphers.o nseq.o pkcs12.o pkcs8.o pkey.o pkeyparam.o pkeyutl.o spkac.o smime.o cms.o rand.o engine.o ocsp.o prime.o ts.o srp.o" \ LIBDEPS=" $LIBRARIES -ldl" \ link_app.${shlib_target} make[2]: Entering directory `/usr/local/src/openssl-1.0.1e/apps' ( :; LIBDEPS="${LIBDEPS:--L.. -lssl -L.. 
-lcrypto -ldl}"; LDCMD="${LDCMD:-gcc}"; LDFLAGS="${LDFLAGS:--DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -Wa,--noexecstack -m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM}"; LIBPATH=`for x in $LIBDEPS; do echo $x; done | sed -e 's/^ *-L//;t' -e d | uniq`; LIBPATH=`echo $LIBPATH | sed -e 's/ /:/g'`; LD_LIBRARY_PATH=$LIBPATH:$LD_LIBRARY_PATH ${LDCMD} ${LDFLAGS} -o ${APPNAME:=openssl} openssl.o verify.o asn1pars.o req.o dgst.o dh.o dhparam.o enc.o passwd.o gendh.o errstr.o ca.o pkcs7.o crl2p7.o crl.o rsa.o rsautl.o dsa.o dsaparam.o ec.o ecparam.o x509.o genrsa.o gendsa.o genpkey.o s_server.o s_client.o speed.o s_time.o apps.o s_cb.o s_socket.o app_rand.o version.o sess_id.o ciphers.o nseq.o pkcs12.o pkcs8.o pkey.o pkeyparam.o pkeyutl.o spkac.o smime.o cms.o rand.o engine.o ocsp.o prime.o ts.o srp.o ${LIBDEPS} ) ../libcrypto.a(x86_64cpuid.o): In function `OPENSSL_cleanse': (.text+0x1a0): multiple definition of `OPENSSL_cleanse' ../libcrypto.a(mem_clr.o):mem_clr.c:(.text+0x0): first defined here ../libcrypto.a(cmll-x86_64.o): In function `Camellia_cbc_encrypt': (.text+0x1f00): multiple definition of `Camellia_cbc_encrypt' ../libcrypto.a(cmll_cbc.o):cmll_cbc.c:(.text+0x0): first defined here ../libcrypto.a(aes-x86_64.o): In function `AES_encrypt': (.text+0x460): multiple definition of `AES_encrypt' ../libcrypto.a(aes_core.o):aes_core.c:(.text+0x5cf): first defined here ../libcrypto.a(aes-x86_64.o): In function `AES_decrypt': (.text+0x9f0): multiple definition of `AES_decrypt' ../libcrypto.a(aes_core.o):aes_core.c:(.text+0xa4b): first defined here ../libcrypto.a(aes-x86_64.o): In function `private_AES_set_encrypt_key': (.text+0xab0): multiple definition of `private_AES_set_encrypt_key' ../libcrypto.a(aes_core.o):aes_core.c:(.text+0x0): first defined here 
../libcrypto.a(aes-x86_64.o): In function `private_AES_set_decrypt_key': (.text+0xd80): multiple definition of `private_AES_set_decrypt_key' ../libcrypto.a(aes_core.o):aes_core.c:(.text+0x3e5): first defined here ../libcrypto.a(aes-x86_64.o): In function `AES_cbc_encrypt': (.text+0xfa0): multiple definition of `AES_cbc_encrypt' ../libcrypto.a(aes_cbc.o):aes_cbc.c:(.text+0x0): first defined here collect2: ld returned 1 exit status make[2]: *** [link_app.] Error 1 make[2]: Leaving directory `/usr/local/src/openssl-1.0.1e/apps' make[1]: *** [openssl] Error 2 make[1]: Leaving directory `/usr/local/src/openssl-1.0.1e/apps' make: *** [build_apps] Error 1 I tried yum -y install openssl . I want to install OpenSSL to be able to use the HTTPS protocol in CURL, and different applications. openssl (which is the binary) is installed, but OpenSSL (which is required for the HTTPS protocol) is not installed. Any solutions to this problem?
I wanted to compile tomcat with OpenSSL support and OpenSSL source code alone wasn't enough. Try installing the OpenSSL development libraries: yum install -y openssl-devel
{ "source": [ "https://unix.stackexchange.com/questions/105969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54963/" ] }
106,008
Day-of-week: Allowed range 0 – 7. Sunday is either 0 or 7. I found this after Googling, my question is why should both values (0,7) correspond to Sunday?
This is a matter of portability. In early Unices, some versions of cron accepted 0 as Sunday, and some accepted 7 as Sunday -- this format is an attempt to be portable with both. From man 5 crontab in vixie-cron (emphasis my own): When specifying day of week, both day 0 and day 7 will be considered Sunday. BSD and AT&T seem to disagree about this.
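As a concrete illustration, these two crontab entries describe the identical schedule (the backup.sh path is a made-up example):

```
# m  h  dom mon dow  command
  0  3   *   *   0   /usr/local/bin/backup.sh   # 03:00 every Sunday
  0  3   *   *   7   /usr/local/bin/backup.sh   # identical: 7 also means Sunday
```

Using 0 is the safer choice if the crontab might ever be moved to a cron implementation that predates this compatibility shim.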
{ "source": [ "https://unix.stackexchange.com/questions/106008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53243/" ] }
106,039
I heard about "useless use of cat" and found some suggestions, but the following outputs nothing in my bash shell. < filename Using cat works as expected though. cat filename I'm using Fedora Core 18 and GNU bash, version 4.2.45(1). EDIT: Using it in front of a pipe doesn't work either. < filename | grep pattern Whereas using cat works as expected. cat filename | grep pattern EDIT2: To clarify, I know that I can use this grep pattern < filename but I read here https://stackoverflow.com/questions/11710552/useless-use-of-cat that I can also use it in front of the command. It does not work in front of the command though.
The less-than symbol ( < ) opens the file and attaches it to the standard input device handle of some application/program. But you haven't given the shell any application to attach the input to. Example These 2 examples do essentially the same thing but get their input in 2 slightly different manners. opens file $ cat blah.txt hi opens STDIN $ cat < blah.txt hi Peeking behind the curtain You can use strace to see what's going on. When we read from a file open("blah.txt", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0664, st_size=3, ...}) = 0 fadvise64(3, 0, 0, POSIX_FADV_SEQUENTIAL) = 0 read(3, "hi\n", 65536) = 3 write(1, "hi\n", 3hi ) = 3 read(3, "", 65536) = 0 close(3) = 0 close(1) = 0 When we read from STDIN (identified as 0) read(0, "hi\n", 65536) = 3 write(1, "hi\n", 3hi ) = 3 read(0, "", 65536) = 0 close(0) = 0 close(1) = 0 In the first example we can see that cat opened the file and read from it, blah.txt . In the second we can see that cat reads the contents of the file blah.txt via the STDIN file descriptor, identified as descriptor number 0. read(0, "hi\n", 65536) = 3
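A runnable sketch: the redirection needs a command to feed, but it may appear either before or after the command word (the file path below is a throwaway example):

```shell
printf 'alpha\nbeta\ngamma\n' > /tmp/demo.txt

grep beta < /tmp/demo.txt     # usual order: command first, redirection after
< /tmp/demo.txt grep beta     # also valid: a redirection may precede the command

# By contrast, '< filename | grep pattern' fails: the left-hand side of the
# pipe contains a redirection but no command, so there is nothing to run.
```

Both grep invocations print "beta"; this is the form the "useless use of cat" advice is pointing at.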
{ "source": [ "https://unix.stackexchange.com/questions/106039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22310/" ] }
106,122
I have three machines in production - machineA 10.66.136.129 machineB 10.66.138.181 machineC 10.66.138.183 and all those machines have Ubuntu 12.04 installed in it and I have root access to all those three machines. Now I am supposed to do below things in my above machines - Create mount point /opt/exhibitor/conf Mount the directory in all servers. sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/ I have already created /opt/exhibitor/conf directory in all those three machines as mentioned above. Now I am trying to create a Mount Point. So I followed the below process - Install NFS support files and NFS kernel server in all the above three machines $ sudo apt-get install nfs-common nfs-kernel-server Create the shared directory in all the above three machines $ mkdir /opt/exhibitor/conf/ Edited the /etc/exports and added the entry like this in all the above three machines - # /etc/exports: the access control list for filesystems which may be exported # to NFS clients. See exports(5). # # Example for NFSv2 and NFSv3: # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check) # # Example for NFSv4: # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check) # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check) # /opt/exhibitor/conf/ 10.66.136.129(rw) /opt/exhibitor/conf/ 10.66.138.181(rw) /opt/exhibitor/conf/ 10.66.138.183(rw) I have tried mounting on machineA like below from machineB and machineC and it gives me this error- root@machineB:/# sudo mount -t nfs 10.66.136.129:/opt/exhibitor/conf /opt/exhibitor/conf/ mount.nfs: access denied by server while mounting 10.66.136.129:/opt/exhibitor/conf root@machineC:/# sudo mount -t nfs 10.66.136.129:/opt/exhibitor/conf /opt/exhibitor/conf/ mount.nfs: access denied by server while mounting 10.66.136.129:/opt/exhibitor/conf Did my /etc/exports file looks good? I am pretty sure, I have messed up my exports file. As I have the same content in all the three machines in exports file. 
Any idea what wrong I am doing here? And what will be the correct /exports file here?
exportfs When you create a /etc/exports file on a server you need to make sure that you export it. Typically you'll want to run this command: $ exportfs -a This will export all the entries in the exports file. showmount The other thing I'll often do is from other machines I'll check any machine that's exporting NFS shares to the network using the showmount command. $ showmount -e <NFS server name> Example Say for example I'm logged into scully. $ showmount -e mulder Export list for mulder: /export/raid1/isos 192.168.1.0/24 /export/raid1/proj 192.168.1.0/24 /export/raid1/data 192.168.1.0/24 /export/raid1/home 192.168.1.0/24 /export/raid1/packages 192.168.1.0/24 fstab To mount these upon boots you'd add this line to your client machines that want to consume the NFS mounts. server:/shared/dir /opt/mounted/dir nfs rsize=8192,wsize=8192,timeo=14,intr automounting If you're going to be rebooting these servers then I highly suggest you look into setting up automounting ( autofs ) instead of adding these entries to /etc/fstab . It's a bit more work but is well worth the effort. Doing so will allow you to reboot the servers more independently from one another and also will only create the NFS mount when it's actually needed and/or being used. When it goes idle it will get unmounted. References 18.2. NFS Client Configuration - CentOS 5 Deployment Guide
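For illustration, a server-side /etc/exports matching the machines in the question could look like the fragment below. The option choices are a common sketch, not the only valid ones:

```
# /etc/exports on the NFS server (10.66.136.129)
/opt/exhibitor/conf  10.66.138.181(rw,sync,no_subtree_check)  10.66.138.183(rw,sync,no_subtree_check)
```

After editing the file, run exportfs -ra on the server to re-export everything, then verify from a client with showmount -e 10.66.136.129 .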
{ "source": [ "https://unix.stackexchange.com/questions/106122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22434/" ] }
106,131
I'm studying PAM, and I'm a bit clueless about the meaning of some combination of control flags. From the Red Hat documentation we have: required failure of such a PAM will ultimately lead to the PAM-API returning failure but only after the remaining stacked modules (for this service and type) have been invoked requisite like required, however, in the case that such a module returns a failure, control is directly returned to the application. sufficient success of such a module is enough to satisfy the authentication requirements of the stack of modules (if a prior required module has failed the success of this one is ignored). A failure of this module is not deemed as fatal to satisfying the application that this type has succeeded. If the module succeeds the PAM framework returns success to the application immediately without trying any other modules. So, in my understanding, if a module requisite fails, the entire stack of modules will not be parsed, and the control will be back to the application immediately. If a module sufficient succeeds, the rest of modules stack will not be parsed and the control will be back to the application immediately. If a module required fails, the entire stack will be parsed. Now, I cannot understand what will be the behavior when a certain module required fails and another module sufficient succeeds.
PAM proceeds through the items on the stack in sequence. It only keeps the memory of what state it's in (success or denied, with success meaning success so far), not of how it reached that state. If an item marked sufficient succeeds, the PAM library stops processing that stack. This happens whether there were previous required items or not. At this point, PAM returns the current state: success if no previous required item failed, otherwise denied. Similarly, if an item marked requisite fails, the PAM library stops processing and returns a failure. At that point, it's irrelevant whether a previous required item failed. In other words, required doesn't necessarily cause the whole stack to be processed. It only means to keep going.
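An illustrative auth stack, modeled loosely on a typical Red Hat system-auth file (the module arguments are examples, not a recommended configuration):

```
# /etc/pam.d/example -- processed top to bottom
auth  required    pam_env.so                        # on failure: remember it, keep going
auth  sufficient  pam_unix.so try_first_pass        # on success: stop here; return failure
                                                    #   anyway if pam_env (required) failed
auth  requisite   pam_succeed_if.so uid >= 1000     # on failure: stop immediately, return
                                                    #   failure, regardless of history
auth  required    pam_deny.so                       # reached only if pam_unix did not succeed
```

So a user whose password satisfies pam_unix never reaches pam_deny, while a failure in the first required line turns even a later sufficient success into an overall denial.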
{ "source": [ "https://unix.stackexchange.com/questions/106131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12479/" ] }
106,133
I have been trying to use rsnapshot for making backups, but I'm finding it unusable. While it is able to diff a directory (50gb) and duplicate it (hardlinking every file) in a few minutes, and I can cp the whole directory in about half an hour, it takes well over an hour to delete it. Even directly using rm -rfv , I find it can take up to half a second to rm a single file, whereas the cp and link commands complete instantly. Why is rm so slow? Is there any faster way to recursively remove hardlinks? It doesn't make sense to me that copying a file should take less time than removing it. The filesystem I am working on is an external storage drive, connected via usb and type fuseblk (which I think means it's ntfs). My computer is running ubuntu linux. Output from top: Cpu(s): 3.0%us, 1.5%sy, 0.0%ni, 54.8%id, 40.6%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 8063700k total, 3602416k used, 4461284k free, 557604k buffers
Ultimately, no matter what you do, rm has to run unlink on every single file that you want to remove (even if you call rm -r on the parent directory). If there are a lot of files to remove, this can take a long time. There are two particularly time-consuming steps when you run rm -r : readdir , followed by a number of calls to unlink . Finding all the files, and then going through every single file to remove it, can take a really, really long time. If you find this "unusable" because it renders the directory unusable for some time, consider moving the parent directory before removing it. This will free up that name for the program to use again, without the time being too much of an inconvenience. Assuming that the file system really is NTFS (it's unclear from your question), NTFS is generally quite slow at deleting large swathes of files. You might consider using a more suitable filesystem for your purposes (the more recent ext filesystems have pretty good delete performance, if you don't have any other particular needs). FUSE itself is also not particularly fast, in general. You might consider seeing if you can do this in some way that does not use FUSE.
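The "move the parent directory before removing it" suggestion can be sketched like this (the directory names are placeholders):

```shell
#!/bin/sh
# Sketch of the "rename first, delete later" trick: the name becomes
# available again immediately, while the slow unlink()s run in the background.
work=$(mktemp -d)
cd "$work"

mkdir snapshot                     # stand-in for the rsnapshot directory
touch snapshot/a snapshot/b snapshot/c

mv snapshot snapshot.deleting      # frees the name instantly (same filesystem)
mkdir snapshot                     # the backup tool can reuse it right away
rm -rf snapshot.deleting &         # deletion proceeds out of the critical path
wait
```

The rename is a single metadata operation on the same filesystem, so it completes immediately no matter how many files the directory holds; only the background rm pays the per-file unlink cost.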
{ "source": [ "https://unix.stackexchange.com/questions/106133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5032/" ] }
106,234
Given a 2.6.x or newer Linux kernel and existing userland that is capable of running both ELF32 and ELF64 binaries (i.e. well past How do I know that my CPU supports 64bit operating systems under Linux? ) how can I determine if a given process (by PID) is running in 32- or 64-bit mode? The naive solution would be to run: file -L /proc/pid/exe | grep -o 'ELF ..-bit [LM]SB' but is that information exposed directly in /proc without relying on libmagic ?
If you want to limit yourself to ELF detection, you can read the ELF header of /proc/$PID/exe yourself. It's quite trivial: if the 5th byte in the file is 1, it's a 32-bit binary. If it's 2, it's 64-bit. For added sanity checking: If the first 5 bytes are 0x7f, "ELF", 1 : it's a 32 bit ELF binary. If the first 5 bytes are 0x7f, "ELF", 2 : it's a 64 bit ELF binary. Otherwise: it's inconclusive. You could also use objdump , but that takes away your libmagic dependency and replaces it with a libelf one. Another way : you can also parse the /proc/$PID/auxv file. According to proc(5) : This contains the contents of the ELF interpreter information passed to the process at exec time. The format is one unsigned long ID plus one unsigned long value for each entry. The last entry contains two zeros. The meanings of the unsigned long keys are in /usr/include/linux/auxvec.h . You want AT_PLATFORM , which is 0x00000f . Don't quote me on that, but it appears the value should be interpreted as a char * to get the string description of the platform. You may find this StackOverflow question useful. Yet another way : you can instruct the dynamic linker ( man ld ) to dump information about the executable. It prints out to standard output the decoded AUXV structure. Warning: this is a hack, but it works. LD_SHOW_AUXV=1 ldd /proc/$SOME_PID/exe | grep AT_PLATFORM | tail -1 This will show something like: AT_PLATFORM: x86_64 I tried it on a 32-bit binary and got i686 instead. How this works: LD_SHOW_AUXV=1 instructs the Dynamic Linker to dump the decoded AUXV structure before running the executable. Unless you really like to make your life interesting, you want to avoid actually running said executable. One way to load and dynamically link it without actually calling its main() function is to run ldd(1) on it. The downside: LD_SHOW_AUXV is enabled by the shell, so you'll get dumps of the AUXV structures for: the subshell, ldd , and your target binary. 
So we grep for AT_PLATFORM, but only keep the last line. Parsing auxv : if you parse the auxv structure yourself (not relying on the dynamic loader), then there's a bit of a conundrum: the auxv structure follows the rule of the process it describes, so sizeof(unsigned long) will be 4 for 32-bit processes and 8 for 64-bit processes. We can make this work for us. In order for this to work on 32-bit systems, all key codes must be 0xffffffff or less. On a 64-bit system, the most significant 32 bits will be zero. Intel machines are little endians, so these 32 bits follow the least significant ones in memory. As such, all you need to do is: 1. Read 16 bytes from the `auxv` file. 2. Is this the end of the file? 3. Then it's a 64-bit process. 4. Done. 5. Is buf[4], buf[5], buf[6] or buf[7] non-zero? 6. Then it's a 32-bit process. 7. Done. 8. Go to 1. Parsing the maps file : this was suggested by Gilles, but didn't quite work. Here's a modified version that does. It relies on reading the /proc/$PID/maps file. If the file lists 64-bit addresses, the process is 64 bits. Otherwise, it's 32 bits. The problem lies in that the kernel will simplify the output by stripping leading zeroes from hex addresses in groups of 4, so the length hack can't quite work. awk to the rescue: if ! [ -e /proc/$pid/maps ]; then   echo "No such process" else case $(awk </proc/$pid/maps -- 'END { print substr($1, 0, 9); }') in *-) echo "32 bit process";; *[0-9A-Fa-f]) echo "64 bit process";; *) echo "Insufficient permissions.";; esac fi This works by checking the starting address of the last memory map of the process. They're listed like 12345678-deadbeef . So, if the process is a 32-bit one, that address will be eight hex digits long, and the ninth will be a hyphen. If it's a 64-bit one, the highest address will be longer than that. The ninth character will be a hex digit. Be aware: all but the first and last methods need Linux kernel 2.6.0 or newer, since the auxv file wasn't there before.
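The header check from the first method can be scripted directly; this helper (the function name is ours) verifies the ELF magic in bytes 0–3 and then reads the EI_CLASS byte at offset 4:

```shell
#!/bin/sh
# elf_class: classify a binary as 32- or 64-bit ELF by reading its header,
# as described above. No libmagic needed -- just od from coreutils.
elf_class() {
    magic=$(od -An -tx1 -N4 -- "$1" | tr -d ' ')
    if [ "$magic" != "7f454c46" ]; then       # 0x7f "ELF"
        echo "not ELF"
        return 1
    fi
    case $(od -An -tu1 -j4 -N1 -- "$1" | tr -d ' ') in
        1) echo "32-bit" ;;
        2) echo "64-bit" ;;
        *) echo "unknown ELF class" ;;
    esac
}

elf_class /bin/sh
```

For a running process you would point it at /proc/$PID/exe instead of /bin/sh; as the answer notes, you need permission to read that link for processes you don't own.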
{ "source": [ "https://unix.stackexchange.com/questions/106234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45719/" ] }
106,251
Selecting lines in nano can be achieved using Esc+A . With multiple lines selected, how do I then indent all those lines at once?
Once you have selected the block, you can indent it using Alt + } (that is, Alt plus whatever key combination is necessary to produce a closing curly bracket on your keyboard layout).
{ "source": [ "https://unix.stackexchange.com/questions/106251", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55169/" ] }
106,275
I am a graduate student of computational chemistry with access to a Linux cluster. The cluster consists of a very large (25 TB) fileserver, to which several dozen compute nodes are connected. Each compute node consists of 8 to 24 Intel Xeon cores. Each compute node also contains a local disk of about 365 TB. Since the fileserver is routinely accessed by a dozen or so users in the research group, the fileserver is mainly used for long term file storage (it is backed up nightly, whereas the compute nodes' local disks are never backed up). Thus, the system administrator has instructed us to run simulations on the local disks -- which have faster I/O than the fileserver -- so as to not slow down the fileserver for the other users. So, I run simulations on the local disks and then, after they are finished, I copy the trajectory files -- I am running molecular dynamics (MD) simulations -- to the fileserver for storage. Suppose I have a trajectory file called traj.trr in a directory on the local disk of a node, /home/myusername/mysimulation1/traj.trr . For long term storage, I always copy traj.trr to a directory in the fileserver, ~/mysimulation1/traj.trr , where ~ represents my directory in the fileserver, /export/home/myusername . After copying it, then I habitually use du -h to verify that /home/myusername/mysimulation1/traj.trr has the same file size as ~/mysimulation1/traj.trr . This way, I can be at least reasonably sure that the transfer to the fileserver was successful. For example: cd /home/myusername/mysimulation1/ cp -v traj.trr ~/mysimulation1/ du /home/myusername/mysimulation1/traj.trr -h du ~/mysimulation1/traj.trr -h If the two calls to du -h give the same human-readable file size, then I can be reasonably sure that the transfer/copy was successful. (My typical traj.trr files range in size from about 15 to 20 GB, depending on the exact simulation I have run.) 
If I run du (i.e., without the -h switch) on the two traj.trr files, their sizes in bytes are usually very, very similar -- usually within just a few bytes. I have been using this overall method for the past year and a half, with no problems. However, recently I have run into the following problem: sometimes du -h reports that the two traj.trr files are different in size by several GB. Here is an example: cd /home/myusername/mysimulation1/ # this is the local disk cp -v traj.trr ~/mysimulation1/ du traj.trr -h cd ~/mysimulation1/ # this is the fileserver du traj.trr -h The output from the two calls to du -h is as follows, respectively: 20G traj.trr 28G traj.trr I believe that the former (i.e., the traj.trr in the local disk, /home/myusername/mysimulation1/ ) is the correct file size, since my simulation trajectories are expected to be about 15 to 20 GB each. But then how could the file on the fileserver actually be larger ? I could see how it could be smaller, if somehow the cp transfer failed. But I don't see how it could actually be larger . I get similar output when I execute the same commands as above, but without the -h switch given to du : 20717480 traj.trr 28666688 traj.trr Can you think of any reason for the difference? If, by some unlikely chance, du is somehow malfunctioning, I can be okay with that. But I just really need to make sure that the copy of traj.trr on the fileserver is complete and identical to its source version on the local disk. I need to delete the local file so that I have enough local disk space to run new simulations, but I can't afford to have the version of traj.trr on the fileserver be corrupted. The .trr file format (from the Gromacs molecular dynamics package) is a binary format, not text. Thus, I am not sure if the files can be reliably compared by a program such as diff .
You really should use something like md5sum or sha1sum to check integrity. If you really want to use the size use ls -l or du -b . The du utility normally only shows the disk usage of the file, i.e. how much of the file system is used by it. This value totally depends on the backing file system and other factors like sparse files. Example: $ truncate -s 512M foo $ cat foo >bar $ ls -l foo bar -rw-r--r-- 1 michas users 536870912 23. Dez 00:06 bar -rw-r--r-- 1 michas users 536870912 23. Dez 00:03 foo $ du foo bar 0 foo 524288 bar $ du -b foo bar 536870912 foo 536870912 bar We have two files both containing 512MB of zeros. The first one is stored sparse and does not use any disk space, while the second stores each byte explicitly on disk. -- Same file, but completely different disk usage. The -b option might be good for you: -b, --bytes equivalent to '--apparent-size --block-size=1' --apparent-size print apparent sizes, rather than disk usage; although the apparent size is usually smaller, it may be larger due to holes in ('sparse') files, internal fragmentation, indirect blocks, and the like
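The checksum comparison the answer recommends looks like this in practice (the file names stand in for traj.trr):

```shell
#!/bin/sh
# Verify a copy by content hash rather than by size: identical files always
# produce identical digests, regardless of how the filesystem stores them.
cd "$(mktemp -d)"
printf 'pretend trajectory data\n' > traj.trr
cp traj.trr traj.copy.trr

src=$(md5sum traj.trr | cut -d' ' -f1)
dst=$(md5sum traj.copy.trr | cut -d' ' -f1)

if [ "$src" = "$dst" ]; then
    echo "copy verified"
else
    echo "MISMATCH -- do not delete the source"
fi
```

For multi-gigabyte trajectory files this costs one full read of each copy, but unlike comparing du output it actually proves the bytes match; sha1sum or sha256sum can be substituted for md5sum the same way.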
{ "source": [ "https://unix.stackexchange.com/questions/106275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
106,330
I'd like to sort all the directories/files in a specific directory based on their size (using du -sh "name" ). I need to apply this command to all directories in my location, then sort them based on this result. How can I do that ?
With GNU sort and GNU du (which it appears you have, since you state you are using du 's -h option): du -sh -- * | sort -rh # Files and directories, or du -sh -- */ | sort -rh # Directories only The output looks something like this: 22G foo/ 21G bar/ 5.4G baz/ 2.1G qux/ 1021M wibble/ 4.0K wobble/
{ "source": [ "https://unix.stackexchange.com/questions/106330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52350/" ] }
106,425
I am looking at the vim help quickref commands for moving cursor. Following lines confuse me because I find both of them working same. Could some one elaborate me difference with an example. N w N words forward N W N blank-separated WORDS forward Same question is true for e and E as well.
According to this cheat sheet it would seem to come down to punctuation. w jump by start of words (punctuation considered words) W jump by words (spaces separate words) e jump to end of words (punctuation considered words) E jump to end of words (no punctuation) Example demo using w demo using W Notice in the first demo how when w is continuously hit that the cursor lands on the punctuation, while in the 2nd demo the punctuation is skipped. This is the difference between the lowercase commands and their upper case equivalents.
{ "source": [ "https://unix.stackexchange.com/questions/106425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33254/" ] }
106,480
I'm using a Linux (CentOS) machine, and I have already connected to another system using ssh . Now my question is: how can I copy files from one system to the other? Suppose, in my environment, I have two systems, System A and System B . I'm using the System A machine and someone else is using the System B machine. How can I copy a file from System B to System A ? And how can I copy a file from System A to System B ?
Syntax: scp <source> <destination> To copy a file from B to A while logged into B : scp /path/to/file username@a:/path/to/destination To copy a file from B to A while logged into A : scp username@b:/path/to/file /path/to/destination
{ "source": [ "https://unix.stackexchange.com/questions/106480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53846/" ] }
106,502
I need run Apache2 on my Debian 7 server. But it run only on tcpv6 port, not on tcpv4 port. I installed it with apt-get install. If I go to localhost or 127.0.0.1 or my server IPv4 address it does not display any website. My /etc/apache2/ports.conf : # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> netstat -plntu : tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3558/sshd tcp 0 0 0.0.0.0:3466 0.0.0.0:* LISTEN 2820/mysqld tcp6 0 0 :::80 :::* LISTEN 2097/apache2 tcp6 0 0 :::22 :::* LISTEN 3558/sshd
The fact that netstat shows only tcp6 here is not the problem. If you don't specify an address to listen on, apache will listen on all supported address families using a single socket (for design reasons, sshd uses a unique socket per address & address family, hence showing up twice in your netstat output). Here's one of my systems, showing apache having only tcp6 sockets, and yet still working fine via both IPv4 and IPv6. woodpecker ~ # netstat -anp |grep apache tcp6 0 0 :::80 :::* LISTEN 1637/apache2 tcp6 0 0 :::443 :::* LISTEN 1637/apache2 woodpecker ~ # wget http://127.0.0.1/ -O /dev/null --2013-12-25 08:52:38-- http://127.0.0.1/ Connecting to 127.0.0.1:80... connected. HTTP request sent, awaiting response... 200 OK Length: 45 [text/html] ... # wget http://[::1]/ -O /dev/null --2013-12-25 08:53:00-- http://[::1]/ Connecting to [::1]:80... connected. HTTP request sent, awaiting response... 200 OK Length: 45 [text/html] ... If you run wget http://127.0.0.1/ -O - on the server what happens? Does it successfully connect? Does it return the raw HTML for your website as expected?
{ "source": [ "https://unix.stackexchange.com/questions/106502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52636/" ] }
106,561
I am installing hadoop on my Ubuntu system. When I start it, it reports that port 9000 is busy. I used: netstat -nlp|grep 9000 to see if such a port exists and I got this: tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN But how can I get the PID of the process which is holding it?
Your existing command doesn't work because Linux requires you to either be root or the owner of the process to get the information you desire. On modern systems, ss is the appropriate tool to use to get this information: $ sudo ss -lptn 'sport = :80' State Local Address:Port Peer Address:Port LISTEN 127.0.0.1:80 *:* users:(("nginx",pid=125004,fd=12)) LISTEN ::1:80 :::* users:(("nginx",pid=125004,fd=11)) You can also use the same invocation you're currently using, but you must first elevate with sudo : $ sudo netstat -nlp | grep :80 tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 125004/nginx You can also use lsof: $ sudo lsof -n -i :80 | grep LISTEN nginx 125004 nginx 3u IPv4 6645 0t0 TCP 0.0.0.0:80 (LISTEN)
{ "source": [ "https://unix.stackexchange.com/questions/106561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49308/" ] }
106,565
I can do the following to see if some word is available in the output of "cat": cat filename | grep word This filters the output and shows only those lines which contain "word". Now, is it possible to only highlight the "word" in the output, without dropping other lines?
You can grep for an EOL along with your real query (if you already have an alias for grep to use --color , as is default in many distributions, you can omit it in the following examples): grep --color=auto 'word\|$' file Since the EOL is not a real character, it won't highlight anything, but it will match all lines. If you would prefer not to have to escape the pipe character, you can use extended regular expressions: grep -E --color=auto 'word|$' file
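A quick way to convince yourself that no lines are dropped (sample file and contents are made up for the demo):

```shell
#!/bin/sh
# The end-of-line alternative matches every line, so nothing is filtered out;
# only the literal "word" gets colored when color output is enabled.
cd "$(mktemp -d)"
printf 'a line with word in it\na line without a match\n' > sample.txt

grep --color=always 'word\|$' sample.txt   # prints both lines
kept=$(grep -c 'word\|$' sample.txt)       # count of matching lines
echo "$kept"                               # 2 -- every line matched
```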
{ "source": [ "https://unix.stackexchange.com/questions/106565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
106,601
I have a function which converts epoch time to date. Here is the definition date1(){ date -d @$1 } I'd like to be able to write: $ date1 xxxyyy Where xxxyyy is the parameter I pass into my function so I can get the corresponding date. I understand I have to add it in either .bash_profile , .profile , or .bashrc and then source it: $ source file But, I'm not sure which file to put it in. Currently, I have it in .profile . But to run it, I have to do source .profile every time. Ideally, it should make it available, when the computer starts up like the environment variable.
From man bash : When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. In other words, you can put it in any one of ~/.bash_profile , ~/.bash_login or ~/.profile , or any files source d by either of those . Typically ~/.profile will source ~/.bashrc , which is the "personal initialization file, executed for login shells." To enable it, either start a new shell, run exec $SHELL or run source ~/.bashrc .
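Concretely, the question's function would go into ~/.bashrc (or a file sourced from it); defining it inline here just demonstrates the behaviour. Note the -d @epoch syntax is a GNU date extension:

```shell
#!/bin/sh
# This is what would live in ~/.bashrc; once sourced, the function is
# available in every new interactive shell without re-sourcing by hand.
date1() {
    date -d "@$1"
}

date1 1000000000    # prints the local date/time for epoch 1000000000
date1 1234567890    # a date in February 2009 (exact output depends on your TZ)
```

After adding it to ~/.bashrc, run source ~/.bashrc (or open a new terminal) once; from then on every shell will have date1 defined, with no per-session sourcing needed.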
{ "source": [ "https://unix.stackexchange.com/questions/106601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50943/" ] }
106,644
I followed this tutorial and now I'm able to connect to the serial line. Now I want to change the width of the terminal. How can I do this using screen , minicom , or something else?
Serial connections don't have a standard way of setting terminal geometry. The assumed geometry is often 80x23 or 80x24 (terminals with zero to two status lines). Once you're logged in, you can set your preferred geometry via the shell, using something like stty rows 50 cols 132 This will last for the duration of your terminal session, but is not persistent across terminal sessions (e.g. logging out and logging in again). Unfortunately, resizing the GUI window the terminal emulator runs in won't update this unless some cunning magic is taking place I'm entirely unaware of.
{ "source": [ "https://unix.stackexchange.com/questions/106644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13050/" ] }
106,656
In Windows I have the services manager, where I see all system services, that can be started through Windows itself, I set up the user it uses, the rights management is in there, and I can pass variables and some other information to the services, I can name them, and I can create duplicates services of one program and so on. So I have a main management tool in Windows. How can I do the same in Linux? How can I trigger to run "svnserve" at startup, or how can I configure services to be running in a special context. How can I view all "programmed" services?
There are currently 3 main init systems used by Linux. A few years ago, there was just one, SysVinit. But SysVinit was seriously lacking in capabilities such as service dependency graphing, so it's been deprecated in most distros by now. Currently most distros are switching to systemd . Though there is also upstart . But here's the answer to your question for each of the 3 init systems: SysVinit SysVinit is currently used by Debian and RedHat. Though the next version of RedHat (7) will be using systemd. The universal way of enabling SysVinit services on boot is to symlink them in /etc/rc3.d (or /etc/rc2.d ). All services can be found in /etc/init.d . Note however that distros will often have their own tool for managing these files, and that tool should be used instead. (Fedora/RedHat has service and chkconfig , Ubuntu has update-rc.d ) List services: ls /etc/init.d/ Start service: /etc/init.d/{SERVICENAME} start or service {SERVICENAME} start Stop service: /etc/init.d/{SERVICENAME} stop or service {SERVICENAME} stop Enable service: cd /etc/rc3.d ln -s ../init.d/{SERVICENAME} S95{SERVICENAME} (the S95 is used to specify the order. S01 will start before S02, etc) Disable service: rm /etc/rc3.d/*{SERVICENAME} Systemd The most notable distribution using systemd is Fedora. Though it is used by many others. Additionally, with Debian having chosen to go with systemd over upstart, it will become the de facto init system for most distributions (Ubuntu has already announced they will be dropping upstart for systemd). List services: systemctl list-unit-files Start service: systemctl start {SERVICENAME} Stop service: systemctl stop {SERVICENAME} Enable service: systemctl enable {SERVICENAME} Disable service: systemctl disable {SERVICENAME} Upstart Upstart was developed by the Ubuntu folks. But after Debian decided to go with systemd , Ubuntu announced they would drop upstart . Upstart was also briefly used by RedHat, as it is present in RHEL-6, but it is not commonly used. 
List services: initctl list Start service: initctl start {SERVICENAME} Stop service: initctl stop {SERVICENAME} Enable service: 2 ways unfortunately: There will be a file /etc/default/{SERVICENAME} which contains a line ENABLED=... . Change this line to ENABLED=1 . There will be a file /etc/init/{SERVICENAME}.override . Make sure it contains start (or is absent entirely), not manual . Disable service: echo manual > /etc/init/{SERVICENAME}.override Note: There is also the 'OpenRC' init system which is used by Gentoo. Currently Gentoo is the only distro which uses it, and it is not being considered for use, nor supported by any other distro. So I am not covering its usage (though if opinion is that I should, I can add it).
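To tie this back to the svnserve example in the question: on a systemd distribution, a service is described by a unit file that the systemctl commands above then manage. A minimal sketch — the path, user, repository root, and svnserve options here are assumptions, so check your distribution's packaged unit first:

```ini
# /etc/systemd/system/svnserve.service -- illustrative only
[Unit]
Description=Subversion repository server
After=network.target

[Service]
ExecStart=/usr/bin/svnserve --daemon --foreground -r /srv/svn
User=svn

[Install]
WantedBy=multi-user.target
```

With that file in place, systemctl enable svnserve registers it for boot (via the WantedBy target) and systemctl start svnserve runs it immediately, exactly as in the command table above.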
{ "source": [ "https://unix.stackexchange.com/questions/106656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55349/" ] }
106,663
I am trying to follow what I assume is best practices of using sudo instead of root account. I am running a simple concat file operation such as: sudo echo 'clock_hctosys="YES"' >> /etc/conf.d/hwclock This fails as to the right of the ">>" it is running as the normal user. Adding extra sudos also fails (expected behaviour since piping to the sudo command and not to the file). Example is just that but it has been verified and tested under the root account.
You can invoke a new shell as root: sudo sh -c 'echo clock_hctosys=\"YES\" >> /etc/conf.d/hwclock' You could also just elevate a process to write to the file: sudo tee -a /etc/conf.d/hwclock > /dev/null << EOF clock_hctosys="YES" EOF
{ "source": [ "https://unix.stackexchange.com/questions/106663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32340/" ] }
106,678
We all know that SSDs have a limited predetermined life span. How do I check in Linux what the current health status of an SSD is? Most Google search results would ask you to look up S.M.A.R.T. information for a percentage field called Media_Wearout_Indicator, or other jargons indicators like Longterm Data Endurance -- which don't exist -- Yes I did check two SSDs, both lack these fields. I could go on to find a third SSD, but I feel the fields are not standardized. To demonstrate the problem here are the two examples. With the first SSD, it is not clear which field indicates wearout level. However there is only one Unknown_Attribute whose RAW VALUE is between 1 and 100, thus I can only assume that is what we are looking for: $ sudo smartctl -A /dev/sda smartctl 6.2 2013-04-20 r3812 [x86_64-linux-3.11.0-14-generic] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 1 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 5 Reallocated_Sector_Ct 0x0002 100 100 000 Old_age Always - 0 9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 6568 12 Power_Cycle_Count 0x0002 100 100 000 Old_age Always - 1555 171 Unknown_Attribute 0x0002 100 100 000 Old_age Always - 0 172 Unknown_Attribute 0x0002 100 100 000 Old_age Always - 0 173 Unknown_Attribute 0x0002 100 100 000 Old_age Always - 57 174 Unknown_Attribute 0x0002 100 100 000 Old_age Always - 296 187 Reported_Uncorrect 0x0002 100 100 000 Old_age Always - 0 230 Unknown_SSD_Attribute 0x0002 100 100 000 Old_age Always - 190 232 Available_Reservd_Space 0x0003 100 100 005 Pre-fail Always - 0 234 Unknown_Attribute 0x0002 100 100 000 Old_age Always - 350 241 Total_LBAs_Written 0x0002 100 100 000 Old_age Always - 742687258 242 Total_LBAs_Read 0x0002 100 100 000 Old_age Always - 1240775277 So this SSD has used 57% of its rewrite 
life-span, is it correct? With the other disk, the SSD_Life_Left ATTRIBUTE stands out, but its Raw value of 0, indicating 0% life left, is unlikely for an apparently-healthy SSD unless it happen to be in peril (we will see in a few days), and if it reads "0% life has been used", also impossible for a worn hard disk (worn = used for more than a year). > sudo /usr/sbin/smartctl -A /dev/sda smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.11.6-4-desktop] (SUSE RPM) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 104 100 050 Pre-fail Always - 0/8415644 5 Retired_Block_Count 0x0033 100 100 003 Pre-fail Always - 0 9 Power_On_Hours_and_Msec 0x0032 100 100 000 Old_age Always - 4757h+02m+17.130s 12 Power_Cycle_Count 0x0032 099 099 000 Old_age Always - 1371 171 Program_Fail_Count 0x0032 000 000 000 Old_age Always - 0 172 Erase_Fail_Count 0x0032 000 000 000 Old_age Always - 0 174 Unexpect_Power_Loss_Ct 0x0030 000 000 000 Old_age Offline - 52 177 Wear_Range_Delta 0x0000 000 000 000 Old_age Offline - 2 181 Program_Fail_Count 0x0032 000 000 000 Old_age Always - 0 182 Erase_Fail_Count 0x0032 000 000 000 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 194 Temperature_Celsius 0x0022 030 030 000 Old_age Always - 30 (Min/Max 30/30) 195 ECC_Uncorr_Error_Count 0x001c 104 100 000 Old_age Offline - 0/8415644 196 Reallocated_Event_Count 0x0033 100 100 000 Pre-fail Always - 0 231 SSD_Life_Left 0x0013 100 100 010 Pre-fail Always - 0 233 SandForce_Internal 0x0000 000 000 000 Old_age Offline - 3712 234 SandForce_Internal 0x0032 000 000 000 Old_age Always - 1152 241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age Always - 1152 242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age Always - 3072
In your first example, what I think you are referring to is the "Media Wearout Indicator" on Intel drives, which is attribute 233. Yes, it has a range of 0-100, with 100 being a brand new, unused drive, and 0 being completely worn out. According to your output, this field doesn't seem to exist. In your second example, please read the official docs about SSD_Life_Left. Per that page: The RAW value of this attribute is always 0 and has no meaning. Check the normalized VALUE instead. It starts at 100 and indicates the approximate percentage of SSD life left. It typically decreases when Flash blocks are marked as bad, see the RAW value of Retired_Block_Count . It's really important that you fully understand what smartctl(8) is saying, and not make assumptions. Unfortunately, the S.M.A.R.T. tools aren't always up to date with the latest SSDs and their attributes. As such, there isn't always a clean way to tell how many times the chips have been written to. The best you can do is look at the "Power_On_Hours", which in your case is "6568", determine your average disk utilization, and average it out. You should be able to look up your drive specs, and determine the process used to make the chips. 32nm process chips will have a longer write endurance than 24nm process chips. However, it seems that "on average", you could probably expect about 3,000 to 4,000 writes, with a minimum of 1,000 and a max of 6,000. So, if you have a 64GB SSD, then you should expect somewhere in the neighborhood of a total of 192TB to 256TB written to the SSD, assuming wear leveling. As an example, if you're sustaining a utilization of say 11 KBps to your drive, then you could expect to see about 40 MB written per hour. At 6568 powered on hours, you've written roughly 260 GB to disk. Knowing that you could probably sustain about 200 TB of total writes, before failure, you have about 600 years before failure due to wearing out the chips. 
Your disk will likely fail due to worn out capacitors or voltage regulation.
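The back-of-the-envelope arithmetic above can be reproduced with a quick shell calculation (all inputs are the answer's assumed figures, not measurements from a real drive):

```shell
#!/bin/sh
# Rough SSD wear estimate from the answer's numbers (integer arithmetic,
# so the results are approximate by construction).
rate_kb_s=11        # assumed sustained write rate, KB/s
hours_on=6568       # Power_On_Hours from the SMART output
endurance_tb=200    # assumed total write endurance of the drive

written_gib=$(( rate_kb_s * 3600 * hours_on / 1024 / 1024 ))
echo "written so far: ~${written_gib} GiB"

mib_per_hour=$(( rate_kb_s * 3600 / 1024 ))
wearout_years=$(( endurance_tb * 1024 * 1024 / mib_per_hour / 8760 ))
echo "estimated time to wear-out: ~${wearout_years} years"
```

Rounding in the integer divisions lands this within a few percent of the answer's "roughly 260 GB" and "about 600 years" figures, which is all a wear estimate of this kind can honestly claim.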
{ "source": [ "https://unix.stackexchange.com/questions/106678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54687/" ] }
106,847
ps aux seems to conveniently list all processes and their status and resource usage (Linux/BSD/MacOS), however I cannot comprehend the meaning of parameter aux using man ps . What does aux mean?
a = show processes for all users u = display the process's user/owner x = also show processes not attached to a terminal By the way, man ps is a good resource. Historically, BSD and AT&T developed incompatible versions of ps . The options without a leading dash (as per the question) are the BSD style while those with a leading dash are AT&T Unix style. On top of this, Linux developed a version which supports both styles and then adds to it a third style with options that begin with double dashes. All (or nearly all) non-embedded Linux distributions use a variant of the procps suite. The above options are as defined in the procps ps man page . In the comments, you say you are using Apple MacOS (OSX, I presume). The OSX man page for ps is here and it shows support only for AT&T style.
{ "source": [ "https://unix.stackexchange.com/questions/106847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22534/" ] }
106,980
This command works fine in bash: bash-3.2$ scp luna4:/u/paige/maye/src/diviner/notebooks/plots/hk_* . hk_az.png 100% 126KB 126.0KB/s 00:00 hk_baffle.png 100% 166KB 166.3KB/s 00:01 hk_bb.png 100% 144KB 143.8KB/s 00:00 hk_el.png 100% 115KB 115.3KB/s 00:00 hk_fpa.png 100% 123KB 123.2KB/s 00:00 hk_fpb.png 100% 126KB 125.7KB/s 00:00 hk_hybrid.png 100% 99KB 98.7KB/s 00:00 hk_oba.png 100% 140KB 139.7KB/s 00:00 hk_solar.png 100% 206KB 205.6KB/s 00:00 hk_temp.png 100% 62KB 61.8KB/s 00:00 hk_yoke.png 100% 122KB 121.7KB/s 00:00 bash-3.2$ exit but in zsh it fails, no files are found: $ scp luna4:/u/paige/maye/src/diviner/notebooks/plots/hk_* . zsh: no matches found: luna4:/u/paige/maye/src/diviner/notebooks/plots/hk_* What is going wrong?
The problem is that zsh is globbing the remote path. You can verify this by scp luna4:"/u/paige/maye/src/diviner/notebooks/plots/hk_*" . To turn globbing off for scp remote paths, but otherwise leave globbing the same (from here ) add this to your .zshrc - # Disable globbing on the remote path. alias scp='noglob scp_wrap' function scp_wrap { local -a args local i for i in "$@"; do case $i in (*:*) args+=($i) ;; (*) args+=(${~i}) ;; esac; done command scp "${(@)args}" }
{ "source": [ "https://unix.stackexchange.com/questions/106980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17172/" ] }
107,182
I installed CentOS on my laptop about 8 months ago, and I want to know when exactly I installed it. How can I do that in CentOS 6.4?
tune2fs You can use the command tune2fs to find out when the filesystem was created. $ tune2fs -l /dev/main/partition |grep 'Filesystem created' Example $ sudo tune2fs -l /dev/dm-1 |grep 'Filesystem created' Filesystem created: Sat Dec 7 20:42:03 2013 which disk to use? If you don't have /dev/dm-1 you can use the command blkid to determine your HDD topology. $ blkid /dev/sda1: UUID="XXXX" TYPE="ext4" /dev/sda2: UUID="XXXX" TYPE="LVM2_member" /dev/mapper/fedora_greeneggs-swap: UUID="XXXX" TYPE="swap" /dev/mapper/fedora_greeneggs-root: UUID="XXXX" TYPE="ext4" /dev/mapper/fedora_greeneggs-home: UUID="XXXX" TYPE="ext4" You can also find out what filesystem a directory is coming from using the df -h . command. $ df -h . Filesystem Size Used Avail Use% Mounted on /dev/mapper/fedora_greeneggs-root 50G 9.3G 38G 20% / From kickstart .cfg file You can also look at the date this file was created, assuming it wasn't deleted. $ sudo ls -lah ~root/anaconda-ks.cfg -rw-------. 1 root root 1.3K Dec 7 21:10 /root/anaconda-ks.cfg From RPM Another method would be to find out when the package setup was installed. This package is rarely updated, only from version to version of the distro, so it should be fairly safe to query it in this manner. Example $ rpm -qi setup | grep Install Install Date: Sat 07 Dec 2013 08:46:32 PM EST Another package that has similar qualities to setup is basesystem . $ rpm -qi basesystem | grep Install Install Date: Sat 07 Dec 2013 08:46:47 PM EST Lastly, you could just take the full list of installed packages and get the last few to see what their install dates were.
$ rpm -qa --last | tail nhn-nanum-fonts-common-3.020-8.fc19.noarch Sat 07 Dec 2013 08:46:47 PM EST basesystem-10.0-8.fc19.noarch Sat 07 Dec 2013 08:46:47 PM EST m17n-db-1.6.4-2.fc19.noarch Sat 07 Dec 2013 08:46:46 PM EST gnome-user-docs-3.8.2-1.fc19.noarch Sat 07 Dec 2013 08:46:45 PM EST foomatic-db-filesystem-4.0-38.20130604.fc19.noarch Sat 07 Dec 2013 08:46:45 PM EST mozilla-filesystem-1.9-9.fc19.x86_64 Sat 07 Dec 2013 08:46:35 PM EST dejavu-fonts-common-2.33-5.fc19.noarch Sat 07 Dec 2013 08:46:34 PM EST telepathy-filesystem-0.0.2-5.fc19.noarch Sat 07 Dec 2013 08:46:33 PM EST setup-2.8.71-1.fc19.noarch Sat 07 Dec 2013 08:46:32 PM EST fontpackages-filesystem-1.44-7.fc19.noarch Sat 07 Dec 2013 08:46:31 PM EST
{ "source": [ "https://unix.stackexchange.com/questions/107182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46819/" ] }
107,192
I created a shell script that I will execute in the morning to open up all the apps that I want startup.sh #!/bin/sh open /Applications/Firefox.app & open /Applications/Messages.app & open /Applications/iTerm.app & open /Applications/Screenhero.app & open /Desktop/LimeChat.app & exit 0 The other apps work. But not the LimeChat application. I get this error: The file /Desktop/LimeChat.app does not exist. It does not show up in my Applications folder, it only shows up on the Desktop. And here it is very clearly on my desktop.
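For what it's worth, the error message is consistent with the path itself being the problem: a leading / means "from the filesystem root", while the Finder desktop lives under the home directory. A sketch of the likely fix (assuming the app really sits on your desktop):

```shell
# the tilde supplies the home-directory prefix that /Desktop/LimeChat.app lacks
echo ~/Desktop
# so the startup.sh line would presumably become:
#   open ~/Desktop/LimeChat.app &
```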
{ "source": [ "https://unix.stackexchange.com/questions/107192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44653/" ] }
107,194
I have about 7 Debian servers I manage, and I would like to set them to automatically update themselves. So, I created a script as such: #!/bin/sh apt-get update apt-get upgrade and placed it on the root 's crontab list. Unfortunately, it always hangs on the Upgrade section, asking if I'm sure I want to upgrade. Because it's a cron job, I don't see the output until it emails me saying it's failed. Is there a way to have it skip that prompt, and just do the upgrade automatically?
Use the -y option to apt-get to have it not ask. From man apt-get : -y, --yes, --assume-yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. If an undesirable situation, such as changing a held package, trying to install a unauthenticated package or removing an essential package occurs then apt-get will abort. Configuration Item: APT::Get::Assume-Yes. You can also set the DEBIAN_FRONTEND env variable DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
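Putting this together, the cron script could look like the sketch below. The Dpkg::Options flags are a common extra that also stops dpkg from prompting about changed conffiles; the filename is just an example, and the script is generated in the current directory here rather than installed:

```shell
# generate the unattended-upgrade script; install it as e.g. /etc/cron.daily/autoupgrade
cat > autoupgrade.sh <<'EOF'
#!/bin/sh
export DEBIAN_FRONTEND=noninteractive
apt-get update -q
apt-get -y \
    -o Dpkg::Options::="--force-confdef" \
    -o Dpkg::Options::="--force-confold" \
    upgrade
EOF
chmod +x autoupgrade.sh
```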
{ "source": [ "https://unix.stackexchange.com/questions/107194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43525/" ] }
107,371
It is normally nice to have color output from ls , grep , etc. But when you don't want it (such as in a script where you're piping the results to another command) is there a switch that can turn it off? ls -G turns it on (with some BSD-derived versions of ls ) if it's not the default, but ls +G does not turn it off. Is there anything else that will?
Color output for ls is typically enabled through an alias in most distros nowadays. $ alias ls alias ls='ls --color=auto' You can always disable an alias temporarily by prefixing it with a backslash. $ \ls Doing the above will short circuit the alias just for this one invocation. You can use it any time you want to disable any alias.
{ "source": [ "https://unix.stackexchange.com/questions/107371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
107,633
I'm running QEMU/KVM on Debian Testing x64 with this command: kvm -m 1024 -hda win7.img -cdrom win7x86.iso -boot d -net user But when I click inside the virtual machine, QEMU captures my mouse and won't let it go. I thought the key combination to free the mouse was Right Ctrl , but nothing happens when I press it. I also tried appending the -usbdevice tablet or -usbdevice mouse options: kvm -m 1024 -hda win7.img -cdrom win7x86.iso -boot d -net user -usbdevice tablet or kvm -m 1024 -hda win7.img -cdrom win7x86.iso -boot d -net user -usbdevice mouse but the situation is the same. I'm using QEMU emulator version 1.7.0 (Debian 1.7.0+dfsg-2).
Keyboard methods If using the SDL frontend of QEMU: You can release focus using the Left Ctrl + Left Alt . Notice you have to use the left keys! If using the GTK frontend of QEMU (default since QEMU 1.5): Press Ctrl + Alt + G Focus free method See my question I posted on this exact thing on ServerFault. The Q&A is titled: Any way to release focus on a KVM guest in virt-manager without having to click Ctrl_L + Alt_L? . This will allow you to no longer have to use the keyboard to release focus between the host and the guest. There are 2 methods discussed in answers to the question. The first involves adding another mouse, the other makes use of Spice which allows for smooth focus transitions between the host and the guest.
{ "source": [ "https://unix.stackexchange.com/questions/107633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49592/" ] }
107,638
Using below test.sh , I am getting result in dump.txt as below: test.sh: #!/bin/bash #"Maxl-script" zone starts..... essmsh -l admin password -s localhost -i << EOF spool on to 'dump.txt'; display session all; spool off; EOF #"Maxl-script" zone ends..... dump.txt: user session login_time application database db_connect_time request request_time connection_source connection_ip request_state +-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+------------------- admin 0 9 0 none 0 Not Requested* a00:bf32:: admin 989855740 1335 DRRDEVMH DRRPRODB 1201 none 0 Not Requested a00:8a45:: admin 1768947706 932 test test 916 none 0 Not Requested a00:94b6:: WARNING - 1241024 - Possible string truncation in column 1. WARNING - 1241028 - Output column defined with warnings. WARNING - 1241024 - Possible string truncation in column 9. WARNING - 1241028 - Output column defined with warnings. WARNING - 1241024 - Possible string truncation in column 10. WARNING - 1241028 - Output column defined with warnings. OK/INFO - 1241044 - Records returned: [3]. As we can see in the last line of dump.txt there is a string Records returned: [3] . That digit 3 is my target, I want to send a email notification to users having following Lines as Email body: Total Number of Sessions running = 3 NOTE: This is an automatically generated email, please don't reply to it. After Googling, I have managed to get below method to get it done. Can anybody please help me out for how to use sed or any other method to meet my objective in above script? $ var=Records returned : echo "Total Number of Sessions running = $var | sed 's/.*://' " > file.tmp echo -e "\n" >> file.tmp echo "NOTE: This is an automatically generated email, please don't reply to it." >> file.tmp fi mailx -s "Subject" emailaddresses < file.tmp rm file.tmp NOTE: Records Returned... 
line will be always the last line in dump.txt .
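A sketch of one way to finish this: sed captures the digits between the square brackets on the last matching line, and the mailx call is left commented out (the subject and recipient are placeholders to adapt):

```shell
# sample of the line the Maxl spool produces (normally already at the end of dump.txt)
echo 'OK/INFO - 1241044 - Records returned: [3].' > dump.txt

# keep only the number inside the square brackets of the last matching line
count=$(sed -n 's/.*Records returned: \[\([0-9]*\)\].*/\1/p' dump.txt | tail -n 1)

{
  echo "Total Number of Sessions running = $count"
  echo
  echo "NOTE: This is an automatically generated email, please don't reply to it."
} > file.tmp
# mailx -s "Subject" recipient@example.com < file.tmp && rm file.tmp
```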
{ "source": [ "https://unix.stackexchange.com/questions/107638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50725/" ] }
107,648
I have two different files as shown below. Content of a.txt : HDR|1|||||||||| DTL|@||||||||||| TLR||||||||||||| HDR|1|||||||||||| DTL||||||||||||| TLR||||||||||||| Content of b.txt : HDR|2|||||||||| DTL||||||||||||| TLR||||||||||||| HDR|2|||||||||||| DTL|last|||||||||||| TLR||||||||||||| Here I have to take out all the lines till 1st "TLR" in a.txt and b.txt and merge into 1.txt ,same way take out all the lines After 1st "TLR" in a.txt and b.txt and merge into 2.txt output should be: Content of 1.txt : HDR|1|||||||||| DTL|@||||||||||| TLR||||||||||||| HDR|2|||||||||| DTL|||||||||||| TLR||||||||||||| Content of 2.txt : HDR|1|||||||||| DTL|||||||||||| TLR||||||||||||| HDR|2|||||||||| DTL|last||||||||||| TLR||||||||||||| How can we accomplish this using UNIX Script?
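One hedged sketch with awk: send everything up to and including the first TLR line of each input to one file, the remainder to another, then concatenate the halves. The sample data below is abbreviated from the question:

```shell
# abbreviated stand-ins for the question's a.txt and b.txt
printf 'HDR|1|\nDTL|@|\nTLR|\nHDR|1|\nDTL|\nTLR|\n' > a.txt
printf 'HDR|2|\nDTL|\nTLR|\nHDR|2|\nDTL|last|\nTLR|\n' > b.txt

# split_at_tlr input first_half_file second_half_file
split_at_tlr() {
    awk -v first="$2" -v rest="$3" '
        !done { print > first; if ($0 ~ /^TLR/) done = 1; next }
        { print > rest }
    ' "$1"
}

split_at_tlr a.txt a.1 a.2
split_at_tlr b.txt b.1 b.2
cat a.1 b.1 > 1.txt   # everything up to the first TLR of each file
cat a.2 b.2 > 2.txt   # everything after it
```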
{ "source": [ "https://unix.stackexchange.com/questions/107648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44820/" ] }
107,703
I have a really strange situation here. My PC works fine, at least in most cases, but there's one thing that I can't deal with. When I try to copy a file from my pendrive, everything is ok -- I got 16-19M/s , it works pretty well. But when I try to copy something to the same pendrive, my PC freezes. The mouse pointer stops moving for a sec or two, then it moves a little bit and it stops again. When something is playing, for example, in Amarok, the sound acts like a machine gun. The speed jumps from 500K/s to 15M/s, average 8M/s. This occurs only when I'm copying something to a pendrive. When the process of copying is done, everything backs to normal. I tried everything -- other pendrive, a different USB port on front panel or those ports from back, I even changed the USB pins on motherboard (front panel), but no matter where I put my USB stick, it's always the same. I tried different filesystem -- fat32 , ext4 . I have no problem with the device on Windows, on my laptop. It has to be my PC or something in my system. I have no idea what to look for. I'm using Debian testing with standalone Openbox. My PC is kind of old -- Pentium D 3GHz, 1GiB of RAM, 1,5TB WD Green disk. If you have something that would help me to solve this issue, I'd be glad to hear that. I don't know what else info I should provide, but if you need something, just ask, I'll update this post as soon as possible. I tried to reproduce this problem on ubuntu 13.04 live cd. I mounted my encrypted partition + encrypted swap and connected my pendrive to a usb port. Next I tried to start some apps, and now I have ~820MiB in RAM and about 400MiB in SWAP. There's no problem with copying, no freezing at all, everything is as it should be. So, it looks like it's a fault of the system, but where exactly? What would cause such a weird behavior?
Are you using a 64-bit version of Linux with a lot of memory? In that case the problem could be that Linux can lock for minutes on big writes on slow devices like for example SD cards or USB sticks. It's a known bug that should be fixed in newer kernels. See http://lwn.net/Articles/572911/ Workaround: as root issue: echo $((16*1024*1024)) > /proc/sys/vm/dirty_background_bytes echo $((48*1024*1024)) > /proc/sys/vm/dirty_bytes I have added it to my /etc/rc.local file in my 64bit machines. TANSTAAFL ; this change can (and probably will) reduce your throughput to these devices --- it's a compromise between latency and speed. To get back to the previous behavior you can echo 0 > /proc/sys/vm/dirty_background_bytes echo 0 > /proc/sys/vm/dirty_bytes ...which are the default values, meaning that the writeback behavior will be controlled by the parameters dirty_ratio and dirty_background_ratio . Note for the not-so-expert-with-linux people: the files in /proc are pseudofiles --- just communication channels between the kernel and user space. Never use an editor to change or look at them; get instead a shell prompt --- for example, with sudo -i (Ubuntu flavors) or su root and use echo and cat ). Update 2016/04/18 it seems that, after all, the problem is still here. You can look at it at LWN.net , in this article about writeback queues .
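To keep the limits across reboots, the same two numbers can go into a sysctl configuration fragment. A sketch (the filename is just a convention, and the file is generated locally here rather than installed):

```shell
# generate the fragment with the same 16 MiB / 48 MiB limits as the echo commands
printf 'vm.dirty_background_bytes = %s\nvm.dirty_bytes = %s\n' \
    "$((16 * 1024 * 1024))" "$((48 * 1024 * 1024))" > 90-dirty-bytes.conf
cat 90-dirty-bytes.conf
# install as /etc/sysctl.d/90-dirty-bytes.conf, then load with: sysctl --system
```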
{ "source": [ "https://unix.stackexchange.com/questions/107703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
107,739
How do I configure my system to destroy all personal data when a certain password is entered? The motivation behind this being NSA stuff. I imagine there being three primary usage cases. At login, the entering of a predetermined password triggers destruction of user data. At system wake up. entering of a predetermined password triggers destruction of personal data. Entering any privileged command with a predetermined password triggers destruction of personal data. I know that something like dd if=/dev/urandom of=/dev/$HOME Should be adequate for data destruction. I don't know how to have that triggered by a certain password, however. Bonus points if it then permits a login while the data is being deleted.
Idea #1 - Hidden OS As an alternative method you could make use of TrueCrypt's "Hidden Operating System" . This allows you to access a fake alternative OS when a certain password is used, rather than the primary OS. excerpt If your system partition or system drive is encrypted using TrueCrypt, you need to enter your pre-boot authentication password in the TrueCrypt Boot Loader screen after you turn on or restart your computer. It may happen that you are forced by somebody to decrypt the operating system or to reveal the pre-boot authentication password. There are many situations where you cannot refuse to do so (for example, due to extortion). TrueCrypt allows you to create a hidden operating system whose existence should be impossible to prove (provided that certain guidelines are followed — see below). Thus, you will not have to decrypt or reveal the password for the hidden operating system. Bruce Schneier covers the efficacy of using these ( Deniable File Systems ), so you might want to investigate it further before diving in. The whole idea of Deniable Encryption is a bit of a can of worms, so caution around using it in certain situations needs to be well thought out ahead of time. Idea #2 - Add a script to /etc/passwd You can insert alternative scripts to a user's entry in the /etc/passwd file. Example # /etc/passwd tla:TcHypr3FOlhAg:237:20:Ted L. Abel:/u/tla:/usr/local/etc/sdshell You could set up a user's account so that it runs a script such as /usr/local/etc/sdshell which will check to see what password was provided. If it's the magical password that triggers the wipe, it could begin this process (backgrounded even) and either drop to a shell or do something else. If the password provided is not this magical password, then continue on running a normal shell, /bin/bash , for example. Source: 19.6.1 Integrating One-Time Passwords with Unix
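A minimal sketch of what such a gate shell could look like. Everything here is hypothetical: "PANIC" is a placeholder token, the wipe is deliberately reduced to an echo, and the final exec of a real shell is only hinted at in a comment:

```shell
#!/bin/sh
# Hypothetical gate-shell sketch in the spirit of /usr/local/etc/sdshell.
DURESS_TOKEN="PANIC"    # placeholder -- choose your own in-band trigger

start_wipe() {
    # placeholder: a real script would background the destruction here,
    # e.g. nohup dd if=/dev/urandom of=/dev/sdXn & (sdXn is illustrative)
    echo "wiping"
}

gate() {
    if [ "$1" = "$DURESS_TOKEN" ]; then
        start_wipe
    fi
    echo "login"        # a real script would instead: exec /bin/bash
}

gate "ordinary-password"    # prints: login
gate "PANIC"                # prints: wiping, then login
```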
{ "source": [ "https://unix.stackexchange.com/questions/107739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55873/" ] }
107,800
I have a file servers.txt , with list of servers: server1.mydomain.com server2.mydomain.com server3.mydomain.com when I read the file line by line with while and echo each line, all works as expected. All lines are printed. $ while read HOST ; do echo $HOST ; done < servers.txt server1.mydomain.com server2.mydomain.com server3.mydomain.com However, when I want to ssh to all servers and execute a command, suddenly my while loop stops working: $ while read HOST ; do ssh $HOST "uname -a" ; done < servers.txt Linux server1 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/Linux This only connects to the first server in the list, not to all of them. I don't understand what is happening here. Can somebody please explain? This is even stranger, since using for loop works fine: $ for HOST in $(cat servers.txt ) ; do ssh $HOST "uname -a" ; done Linux server1 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/Linux Linux server2 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/Linux Linux server3 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/Linux It must be something specific to ssh , because other commands work fine, such as ping : $ while read HOST ; do ping -c 1 $HOST ; done < servers.txt
ssh is reading the rest of your standard input. while read HOST ; do … ; done < servers.txt read reads from stdin. The < redirects stdin from a file. Unfortunately, the command you're trying to run also reads stdin, so it winds up eating the rest of your file. You can see it clearly with: $ while read HOST ; do echo start $HOST end; cat; done < servers.txt start server1.mydomain.com end server2.mydomain.com server3.mydomain.com Notice how cat ate (and echoed) the remaining two lines. (Had read done it as expected, each line would have the "start" and "end" around the host.) Why does for work? Your for line doesn't redirect to stdin. (In fact, it reads the entire contents of the servers.txt file into memory before the first iteration). So ssh continues to read its stdin from the terminal (or possibly nothing, depending on how your script is called). Solution At least in bash, you can have read use a different file descriptor. while read -u10 HOST ; do ssh $HOST "uname -a" ; done 10< servers.txt # ^^^^ ^^ ought to work. 10 is just an arbitrary file number I picked. 0, 1, and 2 have defined meanings, and typically opening files will start from the first available number (so 3 is next to be used). 10 is thus high enough to stay out of the way, but low enough to be under the limit in some shells. Plus it's a nice round number... Alternative Solution 1: -n As McNisse points out in his/her answer , the OpenSSH client has an -n option that'll prevent it from reading stdin. This works well in the particular case of ssh , but of course other commands may lack this—the other solutions work regardless of which command is eating your stdin. Alternative Solution 2: second redirect You can apparently (as in, I tried it, it works in my version of Bash at least...)
do a second redirect, which looks something like this: while read HOST ; do ssh $HOST "uname -a" < /dev/null; done < servers.txt You can use this with any command, but it'll be difficult if you actually want terminal input going to the command.
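The stdin-eating behavior is easy to reproduce locally without ssh at all; in the sketch below, cat plays the role of ssh:

```shell
printf 'a\nb\nc\n' > servers.txt

# broken: the inner command inherits the loop's stdin and eats the rest of it
count=0
while read -r HOST; do
    count=$((count + 1))
    cat > /dev/null          # swallows "b" and "c", just like ssh would
done < servers.txt
echo "without redirect: $count iteration(s)"   # 1

# fixed: give the inner command its own (empty) stdin
count=0
while read -r HOST; do
    count=$((count + 1))
    cat < /dev/null
done < servers.txt
echo "with < /dev/null: $count iteration(s)"   # 3
```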
{ "source": [ "https://unix.stackexchange.com/questions/107800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
107,807
Connecting to a remote Solaris 10 system over X11 I observe inconsistent behavior regarding the used fonts. I am connecting from a Cygwin/X system. When I connect using ssh forwarding like this $ ssh -Y mymachine.example.org fonts work as expected, i.e. the rendering is very nice and programs seem to find all kinds of different fonts (e.g. gvim or emacs). When I connect to the same machine via XDMCP (to the stock blue Solaris 10 login manager screen) and log in, there seems to be only one fixed-size font available. An Emacs from OpenCSW even fails to execute because it can't find the fonts it needs. It prints that it can't find a font using the following specification: -dt-interface user-medium-r-normal-m I establish the XDMCP connection like this: $ XWin -query mymachine.example.org -from mywindowsclient.example.org My objective is now to get proper fonts also for the XDMCP use case. How can I investigate this issue? Can I duplicate some configuration which is implicitly used with ssh -Y for the XDMCP case? How is the font-thing usually set up during ssh-X11-forwarding?
{ "source": [ "https://unix.stackexchange.com/questions/107807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
107,812
I am wanting to move (not just copy) a group of files/directories to a different directory some of which have name clashes with files/directories in the target directory. My main objective is to move the files, so I can tolerate the non-empty directory being overwritten. I am currently using mv ... destination however, occasionally I get mv: cannot move `target' to /destination/target': Directory not empty I tried mv -f ... destination with no success and since I want the files to be gone from their original location, rsync doesn't seem to be appropriate. As a bonus, is there a good solution for preserving the files intended to be overwritten somehow maybe by renaming?
If you use mv --backup=numbered (or one of the other options for the --backup switch), then mv will complete the merge and preserve the files intended to be overwritten.
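For a pair of clashing plain files this plays out as below (GNU mv; numbered backups get a .~N~ suffix):

```shell
mkdir -p src dst
echo "new contents" > src/report
echo "old contents" > dst/report

mv --backup=numbered src/report dst/
cat dst/report        # new contents
cat dst/report.~1~    # old contents -- the file that would have been overwritten
```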
{ "source": [ "https://unix.stackexchange.com/questions/107812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30423/" ] }
107,828
I was making some changes to /etc/fstab , when this chicken and egg question occurred to me - if /etc/fstab contains the instructions for mounting the file systems, including the root partition, then how does the OS read that file in the first place?
When the boot loader calls the kernel it passes it a parameter called root . So once the kernel finished initializing it will continue by mounting the given root partition to / and then calling /sbin/init (unless this has been overriden by other parameters). Then the init process starts the rest of the system by loading all services that are defined to be started in your default runlevel. Depending on your configuration and on the init system that you use, there can be multiple other steps between the ones that I mentioned. Currently the most popular init systems on Linux are SysVInit (the traditional one), Upstart and Systemd. You can find more details about the boot process in this wikipedia article . Here is a simplified example of my Grub config. The important part to answer your question is on the second to last line, there is a root=/dev/sda3 : menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-40864544-2d0f-471a-ab67-edd7e4754dae' { set root='hd0,msdos1' echo 'Loading Linux 3.12.6-gentoo-c2 ...' linux /kernel-3.12.6-gentoo-c2 root=/dev/sda3 ro } In many configurations the kernel mounts / in read-only mode and all the rest of the options are set to the defaults. In /etc/fstab you might specify file system parameters which would then be applied once init remounts it.
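On a running system you can see exactly which parameters the boot loader handed over, since the kernel exposes its command line verbatim (the example value below matches the Grub entry above):

```shell
cat /proc/cmdline
# e.g.: BOOT_IMAGE=/kernel-3.12.6-gentoo-c2 root=/dev/sda3 ro
```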
{ "source": [ "https://unix.stackexchange.com/questions/107828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9108/" ] }
107,851
I have noticed in my .bashrc that some lines have export in front of them, such as export HISTTIMEFORMAT="%b-%d %H:%M " ... export MYSQL_HISTFILE="/root/.mysql_history" whereas others don't, such as HISTSIZE=100000 I am wondering if, first, this is correct, and second what the rule is for using export in .bashrc .
You only need export for variables that should be "seen" by other programs which you launch in the shell, while the ones that are only used inside the shell itself don't need to be export ed. This is what the man page says: The supplied names are marked for automatic export to the environ‐ ment of subsequently executed commands. If the -f option is given, the names refer to functions. If no names are given, or if the -p option is supplied, a list of all names that are exported in this shell is printed. The -n option causes the export property to be removed from each name. If a variable name is followed by =word, the value of the variable is set to word. export returns an exit status of 0 unless an invalid option is encountered, one of the names is not a valid shell variable name, or -f is supplied with a name that is not a function. This can be demonstrated with the following: $ MYVAR="value" $ echo ${MYVAR} value $ echo 'echo ${MYVAR}' > echo.sh $ chmod +x echo.sh $ ./echo.sh $ export MYVAR="value-exported" $ ./echo.sh value-exported Explanation: I first set ${MYVAR} to be a Shell variable with MYVAR="value" . Using echo I can echo the value of it because echo is part of the shell. Then I create echo.sh . That's a little script that basically does the same, it just echoes ${MYVAR} , but the difference is that it will run in a different process because it's a separate script. When calling echo.sh it outputs nothing, because the new process does not inherit ${MYVAR} Then I export ${MYVAR} into my environment with the export keyword When I now run the same echo.sh again, it echoes the content of ${MYVAR} because it gets it from the environment So to answer your question: It depends where a variable is going to be used, whether you have to export it or not.
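The same demonstration works without a script file, by launching a child shell inline:

```shell
FOO="shell-only"        # plain shell variable, not exported
export BAR="exported"   # marked for the environment of child processes
# the child process sees only the exported variable
sh -c 'echo "FOO=${FOO:-unset} BAR=${BAR:-unset}"'   # prints: FOO=unset BAR=exported
```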
{ "source": [ "https://unix.stackexchange.com/questions/107851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }