Dataset columns: source_id (int64), question (string), response (string), metadata (dict).
3,892
Is there functionality in Unix that allows for the following: echo "Some Text" | copy-to-clipboard
There are a couple of tools capable of writing to the clipboard; I use xsel. It takes flags to write to the primary X selection (-p), the secondary selection (-s), or the clipboard (-b). Passing it -i tells it to read from stdin, so you want:

$ echo "Some Text" | xsel -i -b
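To go the other way (paste whatever is on the clipboard back to stdout), xsel takes -o; xclip is a common alternative if xsel isn't packaged for your system. A minimal sketch, assuming an X session:

$ xsel -o -b                                       # print the clipboard contents
$ echo "Some Text" | xclip -selection clipboard    # same idea as above, using xclip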
{ "source": [ "https://unix.stackexchange.com/questions/3892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
3,934
Is there a way to generate a full process listing in Solaris, without truncated lines? I've tried the ps command, with the following arguments:

-f   Generates a full listing. (See below for significance of columns in a full listing.)
-l   Generates a long listing. (See below.)

So, those both seem to do what I want; however, further down in the ps man page, I find this:

args   The command with all its arguments as a string. The implementation may truncate this value to the field width; it is implementation-dependent whether any further truncation occurs. It is unspecified whether the string represented is a version of the argument list as it was passed to the command when it started, or is a version of the arguments as they may have been modified by the application. Applications cannot depend on being able to modify their argument list and having that modification be reflected in the output of ps. The Solaris implementation limits the string to 80 bytes; the string is the version of the argument list as it was passed to the command when it started.

Which basically says the output is going to be truncated and there is nothing I can do about it. So, I'm coming here. Surely other people have run into this problem and maybe even have a way around it. I'm guessing ps can't do it and so I need to use other tools to do this. Is that accurate?
You could try pargs <PID>; this gives you a list of all the arguments. Or else use another ps: if run as root (or any user with enough privileges, for that matter), /usr/ucb/ps auxww will give you all the arguments. It's part of SUNWscpu, "Source Compatibility, (Usr)".
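For reference, pargs reads the argument vector of a live process, so it needs a PID. A small sketch; the process name and PID below are placeholders, and pgrep availability depends on your Solaris release:

$ pgrep -f myservice
4242
$ pargs 4242        # prints argv[0], argv[1], ... without the 80-byte limit
$ pargs -e 4242     # the process environment, if you need that too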
{ "source": [ "https://unix.stackexchange.com/questions/3934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298/" ] }
3,961
I'm looking for a Linux alternative to WinDirStat. I would like to know what is taking up space on my hard drives. A program that works in the console and doesn't require a GUI is preferred.
If you want a command-line tool, I prefer ncdu , an ncurses version of du . It scans the disk (or a given folder) and then shows the top-level space usages; you can select a given directory to get the corresponding summary for that directory, and go back without needing to reanalyze: If you're ok with a GUI program, Filelight is the closest thing to WinDirStat I've found; it shows a graphical view of space consumption: Like ncdu , Filelight lets you select a given directory to get the breakdown for that directory
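If neither tool is installed, plain du plus sort gets you most of the way there. A sketch; du --max-depth and sort -h assume GNU coreutils:

$ du -xh --max-depth=1 /home | sort -h    # per-directory totals, smallest to largest, one filesystem only
$ ncdu -x /home                           # same idea, interactively (stay on one filesystem)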
{ "source": [ "https://unix.stackexchange.com/questions/3961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
4,004
Sometimes I want to start a process and forget about it. If I start it from the command line, like this: redshift I can't close the terminal, or it will kill the process. Can I run a command in such a way that I can close the terminal without killing the process?
One of the following two should work:

$ nohup redshift &

or

$ redshift &
$ disown

See the following for a bit more information on how this works:

- man nohup
- help disown
- Difference between nohup, disown and & (be sure to read the comments too)
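In practice it's often useful to also detach the program's output from the terminal, so the full incantation looks something like this (a sketch):

$ nohup redshift > /dev/null 2>&1 &   # discard stdout/stderr instead of writing nohup.out
$ disown -h %1                        # alternative for an already-started job: only stop SIGHUP from being sent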
{ "source": [ "https://unix.stackexchange.com/questions/4004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115/" ] }
4,029
I know many directories with .d in their name: init.d yum.repos.d conf.d Does it mean directory? If yes, from what does this disambiguate? UPDATE: I've had many interesting answers about what the .d means, but the title of my question was not well chosen. I changed "mean" to "stand for".
The .d suffix here means directory. Of course, this would be unnecessary as Unix doesn't require a suffix to denote a file type but in that specific case, something was necessary to disambiguate the commands ( /etc/init , /etc/rc0 , /etc/rc1 and so on) and the directories they use ( /etc/init.d , /etc/rc0.d , /etc/rc1.d , ...) This convention was introduced at least with Unix System V but possibly earlier. The init command used to be located in /etc but is generally now in /sbin on modern System V OSes. Note that this convention has been adopted by many applications moving from a single file configuration file to multiple configuration files located in a single directory, eg: /etc/sudoers.d Here again, the goal is to avoid name clashing, not between the executable and the configuration file but between the former monolithic configuration file and the directory containing them.
{ "source": [ "https://unix.stackexchange.com/questions/4029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/653/" ] }
4,034
I have a running program on a SSH shell. I want to pause it and be able to unpause its execution when I come back. One way I thought of doing that was to transfer its ownership to a screen shell, thus keeping it running in there. Is there a different way to proceed?
You can revoke “ownership” of the program from the shell with the disown built-in:

# press Ctrl+Z to suspend the program
bg
disown

However this only tells the shell not to send a SIGHUP signal to the program when the shell exits. The program will retain any connection it has with the terminal, usually as standard input, output and error streams. There is no way to reattach those to another terminal. (Screen works by emulating a terminal for each window, so the programs are attached to the screen window.) It is possible to reattach the file descriptors to a different file by attaching the program in a debugger (i.e. using ptrace) and making it call open, dup and close. There are a few tools that do this; this is a tricky process, and sometimes they will crash the process instead. The possibilities include (links collected from answers to How can I disown a running process and associate it to a new screen shell? and Can I nohup/screen an already-started process?):

- grab (and the more ambitious cryopid)
- neercs
- reredirect
- reptyr
- retty
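If the goal is specifically to move an already-running program into a screen or tmux session, reptyr is usually the most practical of those tools. A sketch; 12345 is a placeholder PID, and on some systems ptrace has to be permitted first (e.g. via /proc/sys/kernel/yama/ptrace_scope):

$ screen            # or tmux, on the terminal you want to move the program to
$ reptyr 12345      # grab the process and attach it to the current terminal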
{ "source": [ "https://unix.stackexchange.com/questions/4034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2651/" ] }
4,079
I use !n, where n is the line number, for executing a line from the history file at the command prompt, which I find via history|less. But there is a command-line history event I wish to manually modify. How can I insert a history event's contents into the command line without it actually executing, so I can modify it and then press return?
To request that the command be printed rather than executed after history substitution, add the :p modifier, e.g. !42:p. The resulting command will also be entered in the history, so you can press Up to edit it. If you have the histverify option set (shopt -s histverify), you will always have the opportunity to edit the result of history substitutions.

The fc builtin gives limited access to history expansion (no word designators), and lets you edit a previous command in an external editor.

You can use !prefix to refer to the last command beginning with prefix, and !?substring to refer to the last command containing substring. When you know what you're looking for, this can save a lot of time over history | less.

Another way to search through previous history is incremental search: press Ctrl + R and start entering a substring of what you're looking for. Press Ctrl + R to go to the previous occurrence of the search string so far and Ctrl + S if you've gone too far. Most keys other than Ctrl + R, Ctrl + S, Backspace and ordinary characters terminate the incremental search and have their usual effect (e.g. arrow keys to move the cursor in the line you've reached, Enter to run the command).
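A quick illustration of the fc builtin mentioned above (a sketch):

$ fc -l -10         # list the last ten history entries with their numbers
$ fc 42             # open command number 42 in $FCEDIT/$EDITOR; it runs when you save and quit
$ fc -s foo=bar 42  # re-run command 42 with the first 'foo' replaced by 'bar', without opening an editor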
{ "source": [ "https://unix.stackexchange.com/questions/4079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1325/" ] }
4,091
So, there are lots of different versions of Unix out there: HP-UX, AIX, BSD, etc. Linux is considered a Unix clone rather than an implementation of Unix. Are all the "real" Unices actual descendants of the original? If not, what separates Linux from Unix?
That depends on what you mean by “Unix”, and by “Linux”. UNIX is a registered trade mark of The Open Group . The trade mark has had an eventful history, and it's not completely clear that it's not genericized due to the widespread usage of “Unix” refering to Unix-like systems (see below). Currently the Open Group grants use of the trade mark to any system that passes a Single UNIX certification . See also Why is there a * When There is Mention of Unix Throughout the Internet? . Unix is an operating system that was born in 1969 at Bell Labs . Various companies sold, and still sell, code derived from this original system, for example AIX , HP-UX , Solaris . See also Evolution of Operating systems from Unix . There are many systems that are Unix-like, in that they offer similar interfaces to programmers, users and administrators. The oldest production system is the Berkeley Software Distribution , which gradually evolved from Unix-based (i.e. containing code derived from the original implementation) to Unix-like (i.e. having a similar interface). There are many BSD-based or BSD-derived operating systems: FreeBSD , NetBSD , OpenBSD , Mac OS X , etc. Other examples include OSF/1 (now discontinued, it was a commercial Unix-like non-Unix-based system), Minix (originally a toy Unix-like operating system used as a teaching tool, now a production embedded Unix-like system), and most famously Linux . Strictly speaking, Linux is an operating system kernel that is designed like Unix's kernel. Linux is most commonly used as a name of Unix-like operating systems that use Linux as their kernel. As many of the tools outside the kernel are part of the GNU project , such systems are often known as GNU/Linux . All major Linux distributions consist of GNU/Linux and other software. There are Linux-based Unix-like systems that don't use many GNU tools, especially in the embedded world, but I don't think any of them does away with GNU development tools, in particular GCC . There are operating systems that have Linux as their kernel but are not Unix-like. The most well-known is Android , which doesn't have a Unix-like user experience (though you can install a Unix-like command line) or administrator experience or (mostly) programmer experience (“native” Android programs use an API that is completely different from Unix).
{ "source": [ "https://unix.stackexchange.com/questions/4091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2180/" ] }
4,105
I want to see how many files are in subdirectories to find out where all the inode usage is on the system. Kind of like I would do this for space usage du -sh /* which will give me the space used in the directories off of root, but in this case I want the number of files, not the size.
find . -maxdepth 1 -type d | while read -r dir
do printf "%s:\t" "$dir"; find "$dir" -type f | wc -l; done

Thanks to Gilles and xenoterracide for safety/compatibility fixes.

The first part: find . -maxdepth 1 -type d will return a list of all directories in the current working directory. (Warning: -maxdepth is a GNU extension and might not be present in non-GNU versions of find.) This is piped to...

The second part: while read -r dir; do (shown above as while read -r dir (newline) do) begins a while loop – as long as the pipe coming into the while is open (which is until the entire list of directories is sent), the read command will place the next line into the variable dir. Then it continues...

The third part: printf "%s:\t" "$dir" will print the string in $dir (which is holding one of the directory names) followed by a colon and a tab (but not a newline).

The fourth part: find "$dir" -type f makes a list of all the files inside the directory whose name is held in $dir. This list is sent to...

The fifth part: wc -l counts the number of lines that are sent into its standard input.

The final part: done simply ends the while loop.

So we get a list of all the directories in the current directory. For each of those directories, we generate a list of all the files in it so that we can count them all using wc -l. The result will look like:

./dir1: 234
./dir2: 11
./dir3: 2199
...
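If you would rather avoid piping find into a loop, a rough equivalent using a shell glob looks like this (a sketch; it only considers directories in the current directory that don't start with a dot):

for dir in */
do
    printf '%s:\t' "$dir"
    find "$dir" -type f | wc -l
done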
{ "source": [ "https://unix.stackexchange.com/questions/4105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,109
When I do ls -l I get this:

calico@A000505:~/Documentos$ ls -l
total 2020
-rwxr-xr-x 1 calico calico    8559 2010-11-16 11:12 a.out
-rwxrw-rw- 1 smt    smt    2050138 2010-10-14 10:40 Java2.pdf
-rwxrw-rw- 1 ocv    ocv        234 2010-11-16 11:11 test.c

But what does the "total 2020" mean? I only have 3 files so it's not the number of files or directories, and I guess it's not the size either. So what is it?
The number of 1kB blocks used by the files in the directory, non-recursively. Use ls -lh to have some more meaningful output.
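You can see where that number comes from with the -s flag, which prefixes each entry with its allocated block count; the per-file values sum to the "total" line. A sketch with illustrative numbers — the unit shown depends on the ls implementation and on BLOCK_SIZE/POSIXLY_CORRECT:

$ ls -ls
total 2020
  12 -rwxr-xr-x 1 calico calico    8559 2010-11-16 11:12 a.out
2004 -rwxrw-rw- 1 smt    smt    2050138 2010-10-14 10:40 Java2.pdf
   4 -rwxrw-rw- 1 ocv    ocv        234 2010-11-16 11:11 test.c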
{ "source": [ "https://unix.stackexchange.com/questions/4109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2200/" ] }
4,116
When I need to open different processes or terminals that I need to check, I just open a new tab in my terminal and use different workspaces in my machine to keep everything organized. I do some web development, using a Linux machine. I've seen that a lot of people use screen to accomplish what I'm doing, but I can't see any advantage. In fact, I thought it would be worse since now I have to remember all states in screen instead of having some terminals in a workspace named "terminals". What am I missing? How do you actually use screen?
I use screen both locally and remotely. I find that I use screen because it gives me the ability to:

- Run multiple tasks without making multiple ssh connections to a remote server.
- Run a long-running task in screen, detach, disconnect. The job will still be running in screen and I can come back later, reattach, and check its progress.
- Have a more or less persistent workspace on a server, which is nice when I am doing something that involves multiple steps over the course of a day.
- Receive important system information in a non-intrusive way using the screen profile customizations provided by byobu.
- Use "Named Tabs": in screen I can give each "tab" a name, allowing me to instantly know where to switch to.
- Use more keyboard shortcuts. If you do most of your work at the computer, not having to use the mouse is a real plus. I find that screen's keyboard shortcuts provide a bit more power, but this may just be because I've never invested in truly learning all of the GTK shortcuts.

Here is a screenshot of a recently started screen session using byobu and other customizations.
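For reference, a few of the day-to-day commands behind the workflow described above (a sketch; these assume screen's default Ctrl+a prefix):

$ screen -S work      # start a named session
  Ctrl+a c            # open a new window ("tab") inside the session
  Ctrl+a A            # rename the current window
  Ctrl+a d            # detach, leaving everything running
$ screen -r work      # reattach later, possibly from a different ssh connection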
{ "source": [ "https://unix.stackexchange.com/questions/4116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2689/" ] }
4,126
I think these terms almost refer to the same thing, when used loosely: terminal shell tty console What exactly does each of these terms refer to?
A terminal is at the end of an electric wire, a shell is the home of a turtle, tty is a strange abbreviation and a console is a kind of cabinet. Well, etymologically speaking, anyway. In unix terminology, the short answer is that terminal = tty = text input/output environment console = physical terminal shell = command line interpreter Console, terminal and tty are closely related. Originally, they meant a piece of equipment through which you could interact with a computer: in the early days of unix, that meant a teleprinter -style device resembling a typewriter, sometimes called a teletypewriter, or “tty” in shorthand. The name “terminal” came from the electronic point of view, and the name “console” from the furniture point of view. Very early in unix history, electronic keyboards and displays became the norm for terminals. In unix terminology, a tty is a particular kind of device file which implements a number of additional commands ( ioctls ) beyond read and write. In its most common meaning, terminal is synonymous with tty. Some ttys are provided by the kernel on behalf of a hardware device, for example with the input coming from the keyboard and the output going to a text mode screen, or with the input and output transmitted over a serial line. Other ttys, sometimes called pseudo-ttys , are provided (through a thin kernel layer) by programs called terminal emulators , such as Xterm (running in the X Window System ), Screen (which provides a layer of isolation between a program and another terminal), Ssh (which connects a terminal on one machine with programs on another machine), Expect (for scripting terminal interactions), etc. The word terminal can also have a more traditional meaning of a device through which one interacts with a computer, typically with a keyboard and display. For example an X terminal is a kind of thin client , a special-purpose computer whose only purpose is to drive a keyboard, display, mouse and occasionally other human interaction peripherals, with the actual applications running on another, more powerful computer. A console is generally a terminal in the physical sense that is by some definition the primary terminal directly connected to a machine. The console appears to the operating system as a (kernel-implemented) tty. On some systems, such as Linux and FreeBSD, the console appears as several ttys (special key combinations switch between these ttys); just to confuse matters, the name given to each particular tty can be “console”, ”virtual console”, ”virtual terminal”, and other variations. See also Why is a Virtual Terminal “virtual”, and what/why/where is the “real” Terminal? . A shell is the primary interface that users see when they log in, whose primary purpose is to start other programs. (I don't know whether the original metaphor is that the shell is the home environment for the user, or that the shell is what other programs are running in.) In unix circles, shell has specialized to mean a command-line shell , centered around entering the name of the application one wants to start, followed by the names of files or other objects that the application should act on, and pressing the Enter key. Other types of environments don't use the word “shell”; for example, window systems involve “ window managers ” and “ desktop environments ”, not a “shell”. There are many different unix shells. 
Popular shells for interactive use include Bash (the default on most Linux installations), zsh (which emphasizes power and customizability) and fish (which emphasizes simplicity). Command-line shells include flow control constructs to combine commands. In addition to typing commands at an interactive prompt, users can write scripts. The most common shells have a common syntax based on the Bourne_shell . When discussing “ shell programming ”, the shell is almost always implied to be a Bourne-style shell. Some shells that are often used for scripting but lack advanced interactive features include the Korn shell (ksh) and many ash variants. Pretty much any Unix-like system has a Bourne-style shell installed as /bin/sh , usually ash, ksh or bash. In unix system administration, a user's shell is the program that is invoked when they log in. Normal user accounts have a command-line shell, but users with restricted access may have a restricted shell or some other specific command (e.g. for file-transfer-only accounts). The division of labor between the terminal and the shell is not completely obvious. Here are their main tasks. Input: the terminal converts keys into control sequences (e.g. Left → \e[D ). The shell converts control sequences into commands (e.g. \e[D → backward-char ). Line editing, input history and completion are provided by the shell. The terminal may provide its own line editing, history and completion instead, and only send a line to the shell when it's ready to be executed. The only common terminal that operates in this way is M-x shell in Emacs. Output: the shell emits instructions such as “display foo ”, “switch the foreground color to green”, “move the cursor to the next line”, etc. The terminal acts on these instructions. The prompt is purely a shell concept. The shell never sees the output of the commands it runs (unless redirected). Output history (scrollback) is purely a terminal concept. Inter-application copy-paste is provided by the terminal (usually with the mouse or key sequences such as Ctrl + Shift + V or Shift + Insert ). The shell may have its own internal copy-paste mechanism as well (e.g. Meta + W and Ctrl + Y ). Job control (launching programs in the background and managing them) is mostly performed by the shell. However, it's the terminal that handles key combinations like Ctrl + C to kill the foreground job and Ctrl + Z to suspend it.
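A quick way to see the tty/shell distinction on a live system (a sketch; the device name and output will differ on your machine):

$ tty                      # the terminal device this shell is attached to
/dev/pts/3
$ echo "$SHELL"            # the login shell recorded for your account
/bin/bash
$ ps -o pid,tty,comm       # processes in this session with their controlling tty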
{ "source": [ "https://unix.stackexchange.com/questions/4126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
4,140
I found a file formatted with Markdown. Could you guys suggest what viewer that I could use to view this type of files? Hopefully one without gui (if it's possible) Update I was actually looking for a viewer that could parse markdown file format that does not need any conversion. But something close to that should be ok.
The following website provides a tool that will translate markdown into HTML: http://daringfireball.net/projects/markdown/

Once you convert the file to HTML, there are a number of command-line tools to use to view the file. Using a test file that contains markdown-formatted text, I found the following worked nicely:

$ wget http://daringfireball.net/projects/downloads/Markdown_1.0.1.zip
$ unzip Markdown_1.0.1.zip
$ cd Markdown_1.0.1/
$ ./Markdown.pl ~/testfile.markdown | html2text

html2text is one of many tools you can use to view HTML-formatted text from the command line. Another option, if you want slightly nicer output, would be to use lynx:

$ ./Markdown.pl ~/testfile.markdown | lynx -stdin

If you are an emacs user, someone has written a mode for markdown which is available here: http://jblevins.org/projects/markdown-mode/ . This provides nice syntax highlighting, as can be seen in the screenshot on that website. All of these tools should be available for Slackware.
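If it happens to be available on your system, pandoc can replace the Markdown.pl step, so the conversion and viewing become a single pipeline. A sketch; package availability varies by distribution:

$ pandoc ~/testfile.markdown | lynx -stdin     # render in a console browser
$ pandoc ~/testfile.markdown | html2text       # or dump as plain text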
{ "source": [ "https://unix.stackexchange.com/questions/4140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1026/" ] }
4,146
I know some filesystems present themselves through Fuse and I was wondering about the pros and cons to this approach.
I'm not positive if you mean real, on-disk filesystems or any filesystem. I've never seen a normal filesystem use FUSE, although I suppose it's possible; the main benefit of FUSE is it lets you present something to applications (or the user) that looks like a filesystem, but really just calls functions within your application when the user tries to do things like list the files in a directory or create a new file. Plan9 is well known for trying to make everything accessible through the filesystem, and the /proc pseudo-filesystem comes from them; FUSE is a way for applications to easily follow that pattern.

For example, here's a screenshot of a (very featureless) FUSE filesystem that gives access to SE site data. Naturally none of those files actually exist; when ls asked for the list of files in the directory, FUSE called a function in my program which did an API request to this site to load information about user 73 (me); cat trying to read from display_name and website_url called more functions that returned the cached data from memory, without anything actually existing on-disk.
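A concrete everyday example of a FUSE filesystem is sshfs, which exposes a remote directory over SSH as if it were local. A sketch, assuming the sshfs package is installed; user@host is a placeholder:

$ mkdir -p ~/remote
$ sshfs user@host:/var/log ~/remote    # mount the remote directory
$ ls ~/remote                          # directory listings become SFTP requests behind the scenes
$ fusermount -u ~/remote               # unmount when done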
{ "source": [ "https://unix.stackexchange.com/questions/4146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80/" ] }
4,180
I'd like to take this command find -maxdepth 1 -type d | while read -r dir; do printf "%s:\t" "$dir"; find "$dir" | wc -l; done ( from here ). which has an output of basically ./kennel: 11062 ./shadow: 15449 ./ccc: 9765 ./journeyo: 14200 ./norths: 10710 and sort it by the numbers largest to smallest. but I'm not sure how to make sort , or whatever operate on a different column.
Pipe the lines through sort -n -r -k2 . Edited to sort from largest to smallest.
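Putting it together with the loop from the question (a sketch):

find -maxdepth 1 -type d | while read -r dir; do printf "%s:\t" "$dir"; find "$dir" | wc -l; done | sort -n -r -k2

Here -k2 tells sort to compare the second whitespace-separated field (the count), -n compares numerically, and -r reverses the order so the largest counts come first.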
{ "source": [ "https://unix.stackexchange.com/questions/4180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,186
Before today, I've used the terminal to a limited extent of moving in and out of directories and changing the dates of files using the touch command. I had realised the full extent of the terminal after installing a fun script on Mac and having to chmod 755 the file to make it executable afterwards. I'd like to know what /usr/local/bin is, though. /usr/ , I assume, is the user of the computer. I'm not sure why /local/ is there, though. It obviously stands for the local computer, but since it's on the computer (or a server), would it really be necessary? Wouldn't /usr/bin be fine? And what is /bin ? Why is this area usually used for installing scripts onto the terminal?
/usr/local/bin is for programs that a normal user may run. The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr . Locally installed software must be placed within /usr/local rather than /usr unless it is being installed to replace or upgrade software in /usr . This source helps explain the filesystem hierarchy standard on a deeper level. You might find this article on the use and abuse of /usr/local/bin interesting as well.
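A typical workflow for dropping your own script there looks something like this (a sketch; myscript is a placeholder name):

$ sudo install -m 755 myscript /usr/local/bin/myscript   # copy it in place with executable permissions
$ type myscript                                          # confirm the shell now finds it via $PATH

Because /usr/local/bin is already on the default $PATH on most systems, no further configuration is needed.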
{ "source": [ "https://unix.stackexchange.com/questions/4186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/900/" ] }
4,214
What is the difference between a "job" and a "process"?
A process is any running program with its own address space. A job is a concept used by the shell - any program you interactively start that doesn't detach (ie, not a daemon) is a job. If you're running an interactive program, you can press Ctrl Z to suspend it. Then you can start it back in the foreground (using fg ) or in the background (using bg ). While the program is suspended or running in the background, you can start another program - you would then have two jobs running. You can also start a program running in the background by appending an "&" like this: program & . That program would become a background job. To list all the jobs you are running, you can use jobs . For more information on jobs, see this section of the bash man page.
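A quick interactive demonstration (a sketch; the job number and PID shown are illustrative):

$ sleep 300 &          # start a background job
[1] 12345
$ jobs                 # list the shell's jobs
[1]+  Running    sleep 300 &
$ fg %1                # bring it to the foreground
# press Ctrl+Z to suspend it, then:
$ bg %1                # resume it in the background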
{ "source": [ "https://unix.stackexchange.com/questions/4214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2683/" ] }
4,219
I am looking to get tab-completion on my command line aliases, for example, say I defined the following alias : alias apt-inst='sudo aptitude install' Is there a way to get the completions provided by aptitude when I hit the tab key? i.e. when I write 'sudo aptitude install gnumer' and hit tab, aptitude completes this to gnumeric, or if there was uncertainty lists all the available packages starting with gnumer. If I do it using my alias, nothing - no completion.
There is a great thread about this on the Ubuntu forums . Ole J proposes the following alias completion definition function:

function make-completion-wrapper () {
    local function_name="$2"
    local arg_count=$(($#-3))
    local comp_function_name="$1"
    shift 2
    local function="
function $function_name {
    ((COMP_CWORD+=$arg_count))
    COMP_WORDS=( "$@" \${COMP_WORDS[@]:1} )
    "$comp_function_name"
    return 0
}"
    eval "$function"
    echo $function_name
    echo "$function"
}

Use it to define a completion function for your alias, then specify that function as a completer for the alias:

make-completion-wrapper _apt_get _apt_get_install apt-get install
complete -F _apt_get_install apt-inst

I prefer to use aliases for adding always-used arguments to existing programs. For instance, with grep, I always want to skip devices and binary files, so I make an alias for grep. For adding new commands such as grepbin, I use a shell script in my ~/bin folder. If that folder is in your path, it will get autocompleted.
{ "source": [ "https://unix.stackexchange.com/questions/4219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2651/" ] }
4,261
My internet connection used to be a direct LAN connection to my provider. Back then, everything would load fine on both Windows and Ubuntu (dual boot). However, a while ago they started needing me to dial (PPPoE) using a username and password. Gateway, subnet mask, IP, DNS servers all stayed the same. But since then, I haven't been able to browse certain websites on Ubuntu, even though there have been no such issues on Windows. Some example websites are - Ovi's sign in page (although share.ovi.com loads fine, and nokia.com loads fine), Live Mail (works on Chrome(ium) and Opera but not on Firefox (both 3.6 and 4)) Mozilla Addons website and other random websites. Some of the websites that don't load show timeout messages and for some websites (like the moz addons one), the browser will keep trying to load without an end (I've left it like that even for hours but not noticed anything different happen). I have tried changing the DNS servers to public ones. I have even tried booting from a Fedora LiveCD and then changing the DNS to those (and even to the ones of OpenDNS), but the exact same thing happens. What could be inherently wrong with some config within Linux itself that is causing this problem? Does anyone know why this is happening and how it can be fixed? Note: This question has been cross-posted on SU, but not gotten any responses. Update: Just saw here that someone else was having similar problem and solved it by putting a NetworkManager.conf file in /etc/NetworkManager . What needs to be in that file?
You have the symptoms of an MTU problem: some TCP connections freeze, more or less reproducibly for a given command or URL but with no easily discernible overall pattern. A telltale symptom is that interactive ssh sessions work well but file transfers almost always fail. Furthermore pppoe is the number one bringer of MTU problem for home users. So I prescribe an MTU check. What is it? The m aximum t ransmission u nit is the maximum size of a packet over a network link. The MTU varies from transport medium to transport medium, e.g. wired Ethernet and wifi (802.11) have different MTUs, and ATM links (which make up most of the long-distance infrastructure) each have their own MTU. PPPOE is an encapsulated protocol, which means that every packet consists of a few bytes of header followed by the underlying packet — so it lowers the maximum packet size by the size of the header. IP allows routers to fragment packets if they detect that they're too big for the next hop, but this doesn't always work. In theory the proper MTU should be discovered automatically , but this also doesn't always work either. In particular googling suggests that Network Manager doesn't always properly act on MTU information obtained from MTU discovery, but I don't know what versions are affected or what the problematic use cases are. How to measure it. If you have tracepath from the Linux iputils , run tracepath 8.8.8.8 to see the MTU over the path to Google's DNS server. If your version of traceroute has a --mtu option, run traceroute -n --mtu 8.8.8.8 . See Discover MTU between me and destination IP for more options. Lacking automated tools, you can measure manually. Try sending ping packets of a given size to an outside hosts that responds to them, e.g. ping -c 1 -s 42 8.8.8.8 (on Linux; on other systems, look up the documentation of your ping command). Your packets should get through for small enough values of 42 (if 42 doesn't work, something is blocking pings.). For larger values, the packet won't get through. 1464 is a typical maximum value if the limiting piece of infrastructure is your local Ethernet network. If you're lucky, when you send a too large packet, you'll see a message like Frag needed and DF set (mtu = 1492) . If you're not lucky, just keep experimenting with the value until you find what the maximum is, then add 28 ( -s specifies the payload size, and there are 28 bytes of headers in addition to that). See also How to Optimize your Internet Connection using MTU and RWIN on the Ubuntu forums. How to set it (replace 1454 by the MTU you have determined, and eth0 by the name of your network interface) As a once-off (Linux): run ifconfig eth0 mtu 1454 Permanently (Debian and derivatives such as Ubuntu, if not using Network Manager): Edit /etc/network/interfaces . Just after the entry for your network interface (after the iface eth0 … directive), add a line with pre-up ifconfig $IFACE mtu 1454 . Alternatively, if your IP address is static, you can add the mtu 1454 parameter to the iface eth0 inet static directive. Permanently (Debian and derivatives such as Ubuntu, with or without Network Manager): Create a script called /etc/network/if-pre-up.d/mtu with the following contents and make it world-executable ( chmod a+rx ): #!/bin/sh ifconfig $IFACE mtu 1454 Further resources How to diagnose a reliably unreliable connection? (particularly Mike Pennington's answer ) may be of assistance if the simple measure-and-limit approach described here doesn't work.
{ "source": [ "https://unix.stackexchange.com/questions/4261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1452/" ] }
4,262
Is it possible to make bash change directory in command line simply by typing that directory without any commands like cd ? For example is it possible instead of writing this: $ cd /tmp I just want to write only this: $ /tmp In other words if I call directory as an application then I want to set that directory as a working one.
In bash there is also the autocd option. You can enable it by using shopt -s autocd:

pbm@tauri ~ $ shopt -s autocd
pbm@tauri ~ $ django          # Now just type this
cd ./django                   <- it's done automatically
pbm@tauri ~/django $
{ "source": [ "https://unix.stackexchange.com/questions/4262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2763/" ] }
4,290
Is it a good idea to use the following alias: cd() { pushd $1; } in bash? I think this would be very useful, since I can then use a series of popd s instead of just a cd - once. Is there any case where this might be a problem?
Personally, I have these in my bashrc and use them all the time:

pushd()
{
  if [ $# -eq 0 ]; then
    DIR="${HOME}"
  else
    DIR="$1"
  fi

  builtin pushd "${DIR}" > /dev/null
  echo -n "DIRSTACK: "
  dirs
}

pushd_builtin()
{
  builtin pushd > /dev/null
  echo -n "DIRSTACK: "
  dirs
}

popd()
{
  builtin popd > /dev/null
  echo -n "DIRSTACK: "
  dirs
}

alias cd='pushd'
alias back='popd'
alias flip='pushd_builtin'

You can then navigate around on the command-line a bit like a browser. cd changes the directory. back goes to the previous directory that you cd'ed from. And flip will move between the current and previous directories without popping them from the directory stack. Overall, it works great.

The only real problem that I'm aware of is the fact that it's then a set of commands that I'm completely used to but don't exist on anyone else's machine. So, if I have to use someone else's machine, it can be a bit frustrating. If you're used to just using pushd and popd directly, you don't have that problem. And while if you just alias cd but not popd, you won't have the issue of back not existing, you'll still have the problem that cd doesn't do quite what you expect on other machines.

I would note, however, that your particular implementation of cd doesn't quite work like cd in that the normal cd by itself will go to your home directory, but yours doesn't. The version that I have here doesn't have that problem. Mine also appends DIRSTACK onto the front of the dirs print out, but that's more a matter of personal taste than anything.

So, as I said, I use these aliases all the time and have no problem with them. It's just that it can be a bit frustrating to have to use another machine and then find them not there (which shouldn't be surprising, but they're one of those things that you use so often that you don't think about them, so having them not work like you're used to can still be surprising).
{ "source": [ "https://unix.stackexchange.com/questions/4290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
4,335
Possible Duplicate: Redirecting stdout to a file you don't have write permission on

Running a command like sudo echo 'text' >> /file.txt fails with:

bash: /file.txt: Permission denied
This doesn't work because the redirection is executed by the shell, not by the command it applies to. But your shell is not running as root, only echo 'text' is. A common trick when you need to have root permissions to write to a file, but not to generate the data, is to use tee : echo 'text' | sudo tee -a /file.txt tee prints the text to stdout, too. In order to mute it so it behaves more similar to shell appending ( >> ), route the stdout to /dev/null : echo 'text' | sudo tee -a /file.txt > /dev/null If you do need root permissions to generate the data, you can run two separate sudo commands, or run a shell inside sudo and do the redirection there (careful with the quoting). sudo echo 'text' | sudo tee -a /file.txt sudo sh -c 'echo "text" >>/file.txt' When overwriting rather than appending, if you're used to your shell refusing to truncate an existing file with the > operator ( set -o noclobber ), remember that this protection will not apply. sudo sh -c 'echo >/etc/passwd' and sudo tee /etc/passwd will overwrite /etc/passwd , you'd need sudo sh -o noclobber -c 'echo >/etc/passwd' for that noclobber setting to also be applied to the sh started by sudo .
{ "source": [ "https://unix.stackexchange.com/questions/4335", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
4,339
I accidentally typed l instead of ls today and found that the command still printed a list of the files in my current directory. Trying l --help brings up the help file for ls, suggesting that l is just an alias of ls. However, each file was suffixed by a *. Why is this and what does it mean? In case it makes a difference, this is when running the latest stable version of Ubuntu.
SHORT ANSWER: To understand what exactly this alias does, you can check out the ~/.bashrc file and search for the term "alias l=". It is nothing but ls -CF.

LONG ANSWER: A good way to inspect what a command is: type l

If it's a program or a script, it will give you its location; if it is an alias, it will tell you what it's aliased to; if it's a function, it will print the function; otherwise, it will tell you if it is a built-in or a keyword. Examples:

$ type l
l is aliased to `ls -CF'
$ type find
find is /usr/bin/find
$ type connecthome
connecthome is hashed (/usr/local/bin/connecthome)
$ type grep
grep is aliased to `grep --color=auto --binary-files=without-match --devices=skip'
$ type hello_se
hello_se is a function
hello_se ()
{
    echo 'Hello, Stack Exchangers!'
}
$ type type
type is a shell builtin
$ type for
for is a shell keyword
$ type nosuchthing
-bash: type: nosuchthing: not found
{ "source": [ "https://unix.stackexchange.com/questions/4339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2819/" ] }
4,342
It seems when sudo ing down that using sudo -u $user that the environment of root is still being used. How can I make sudo use the users environment? as a special note not all users that I will be using this on have login shells.
Try sudo -i -u $user

gerald@book:~$ env |grep HOME
HOME=/home/gerald
gerald@book:~$ sudo -u ubuntu env |grep HOME
HOME=/home/gerald
gerald@book:~$ sudo -i -u ubuntu env |grep HOME
HOME=/home/ubuntu
{ "source": [ "https://unix.stackexchange.com/questions/4342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,351
I like the watch command, but it has its limitations . I'm curious to know whether I could mimic the functionality of watch with less . I'm mainly looking for the ability to scroll through my directory as it dynamically gets modified via a running script.
In less , you can type F ( Shift + F ) to keep reading at the end of a file (like tail -f ); you can type :e and a file name to view a different file, but unfortunately, if you enter the name of the current file, less doesn't reload the file. However there's a trick to make it re-read the current file, suggested by sabgenton : type :e and enter the name of a non-existent file ; that causes less to display an error message and then reload the current file. If you're looking for an alternative to watch ls , here are a few: Modern file managers (e.g. Nautilus, Thunar, Konqueror, Dolphin, Finder) refresh views in real time. Emacs doesn't have real-time refresh, but with auto-revert-mode , it will reload the file or directory every 5 seconds (the delay is configurable). Although w3m is primarily a web browser, it makes a passable directory and text file viewer. Press R to reload the (local) URL.
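A related trick: less can also be started directly in follow mode, which covers the tail -f style use case (a sketch; the log path is just an example):

$ less +F /var/log/syslog    # follow the file as it grows; Ctrl+C stops following, F resumes, q quits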
{ "source": [ "https://unix.stackexchange.com/questions/4351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446/" ] }
4,377
I'm trying to learn UNIX programming and came across a question regarding fork(). I understand that fork() creates an identical process of the currently running process, but where does it start? For example, if I have code int main (int argc, char **argv) { int retval; printf ("This is most definitely the parent process\n"); fflush (stdout); retval = fork (); printf ("Which process printed this?\n"); return (EXIT_SUCCESS); } The output is: This is most definitely the parent process Which process printed this? Which process printed this? I thought that fork() creates a same process, so I initially that that in that program, the fork() call would be recursively called forever. I guess that new process created from fork() starts after the fork() call? If I add the following code, to differentiate between a parent and child process, if (child_pid = fork ()) printf ("This is the parent, child pid is %d\n", child_pid); else printf ("This is the child, pid is %d\n",getpid ()); after the fork() call, where does the child process begin its execution?
The new process will be created within the fork() call, and will start by returning from it just like the parent. The return value (which you stored in retval ) from fork() will be: 0 in the child process The PID of the child in the parent process -1 in the parent if there was a failure (there is no child, naturally) Your testing code works correctly; it stores the return value from fork() in child_pid and uses if to check if it's 0 or not (although it doesn't check for an error)
{ "source": [ "https://unix.stackexchange.com/questions/4377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
4,390
Here are some instructions on how to disable your sudo password. These carry the following warning:

If you disable the sudo password for your account, you will seriously compromise the security of your computer. Anyone sitting at your unattended, logged in account will have complete root access, and remote exploits become much easier for malicious crackers.

I'm not worried about people gaining physical access to my machine. What remote exploits are made possible or easier if I were to ignore this warning and disable the password?
If you allow passwordless sudo, anyone who manages to run code on your machine as your user can trivially run code as root. This could be someone who uses your console while you're logged in but not in front of your computer, which you're not worried about (anyway, someone with physical access can do pretty much what they want). This could also be someone who accesses your account on another machine where you've ssh'ed to your own machine. But it could also be someone exploiting a remote security hole — for example a web site that exploits a browser bug to inject code into your browser instance. How big a deal is it? Not that much, for several reasons: An attacker who's found a remote hole can probably find a local root hole as well. A number of attackers don't care about root privileges. All they want is to send spam and infect other machines, and they can do it as your user. An attacker who has access to your account can drop a trojan that captures your keystrokes (including your password) or that piggybacks onto whatever means you next use to gain root to execute command of its own. If you're the only user on your machine, there isn't much to protect that isn't accessible as your user. On the other hand: If you're up-to-date with the security updates, the attacker may not find a local hole to exploit. A non-root attacker can't erase his tracks very well. Having to type a password now and then isn't much of a burden. Having to type a password reminds you that you're doing something dangerous (as in: may lose data or make your computer unusable). (As a non-root user, the only real danger is erasing data by mistake, and it's usually obvious when you're erasing something and should be extra careful.)
{ "source": [ "https://unix.stackexchange.com/questions/4390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2819/" ] }
4,402
From the article Anatomy of the Linux file system by M. Tim Jones, I read that Linux views all the file systems from the perspective of a common set of objects and these objects are superblock , inode , dentry and file . Even though the rest of the paragraph explains the above, I was not that comfortable with that explanation. Could somebody explain to me these terms?
First and foremost, and I realize that it was not one of the terms from your question, you must understand metadata . Succinctly, and stolen from Wikipedia, metadata is data about data. That is to say that metadata contains information about a piece of data. For example, if I own a car then I have a set of information about the car but which is not part of the car itself. Information such as the registration number, make, model, year of manufacture, insurance information, and so on. All of that information is collectively referred to as the metadata. In Linux and UNIX file systems metadata exists at multiple levels of organization as you will see. The superblock is essentially file system metadata and defines the file system type, size, status, and information about other metadata structures (metadata of metadata). The superblock is very critical to the file system and therefore is stored in multiple redundant copies for each file system. The superblock is a very "high level" metadata structure for the file system. For example, if the superblock of a partition, /var, becomes corrupt then the file system in question (/var) cannot be mounted by the operating system. Commonly in this event, you need to run fsck which will automatically select an alternate, backup copy of the superblock and attempt to recover the file system. The backup copies themselves are stored in block groups spread through the file system with the first stored at a 1 block offset from the start of the partition. This is important in the event that a manual recovery is necessary. You may view information about ext2/ext3/ext4 superblock backups with the command dumpe2fs /dev/foo | grep -i superblock which is useful in the event of a manual recovery attempt. Let us suppose that the dumpe2fs command outputs the line Backup superblock at 163840, Group descriptors at 163841-163841 . We can use this information, and additional knowledge about the file system structure, to attempt to use this superblock backup: /sbin/fsck.ext3 -b 163840 -B 1024 /dev/foo . Please note that I have assumed a block size of 1024 bytes for this example. An inode exists in, or on, a file system and represents metadata about a file. For clarity, all objects in a Linux or UNIX system are files; actual files, directories, devices, and so on. Please note that, among the metadata contained in an inode, there is no file name as humans think of it, this will be important later. An inode contains essentially information about ownership (user, group), access mode (read, write, execute permissions), file type, and the data blocks with the file's content. A dentry is the glue that holds inodes and files together by relating inode numbers to file names. Dentries also play a role in directory caching which, ideally, keeps the most frequently used files on-hand for faster access. File system traversal is another aspect of the dentry as it maintains a relationship between directories and their files. A file , in addition to being what humans typically think of when presented with the word, is really just a block of logically related arbitrary data. Comparatively very dull considering all of the work done (above) to keep track of them. I fully realize that a few sentences do not provide a full explanation of any of these concepts so please feel free to ask for additional details when and where necessary.
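You can poke at some of this metadata directly from the command line. A sketch; the file name and inode number are illustrative:

$ ls -i notes.txt          # print the inode number alongside the file name
1835019 notes.txt
$ stat notes.txt           # dump the inode metadata: size, blocks, mode, owner, timestamps, link count
$ df -i /var               # inode usage and availability for the filesystem holding /var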
{ "source": [ "https://unix.stackexchange.com/questions/4402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
4,405
I want to give a Fedora user sudo privileges. How do I do that?
As root (or with sudo from an account which already has the power), add the user to the wheel group: gpasswd wheel -a username I use gpasswd because not all versions of usermod have an easy way to add the user to a group without changing all the users' groups. However, on any recent Fedora, usermod username -a -G wheel should have the same effect. Group membership does not take effect immediately — the details are complicated, but the easy answer is that the user whose account has been modified should log out and in again. If you are using Fedora Workstation with the GNOME desktop environment, you can use the GNOME settings panel to add a user as an "administrator": You will need to unlock the panel either by already being an administrative user (useful for adding the power to another account) or by providing the root password. The switch is grayed out in my screenshot because I'm the only administrative user on this system and it won't let you shoot yourself in the foot in that way. (The command line tools, of course, will.) The GUI switch has the underlying effect of adding you the user to wheel , so it's exactly the same as the command line option. If you are using Fedora Linux 14 or earlier, use visudo to edit the sudoers file, removing the # from this line: %wheel ALL=(ALL) ALL This is the default in the sudoers file on Fedora Linux 15 and newer, so adding the user to wheel is all you need to do. Note that (as above) this won't take effect immediately; the easiest thing to do is log out and in again. See also this question and answer over on Server Fault for information on granting sudo-like "auth as self" behavior to wheel group members for graphical apps which use consolehelper or PackageKit.
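Whichever route you take, you can verify the result before relying on it (a sketch; username is a placeholder):

$ id username              # wheel should now appear in the group list (after the user logs in again)
$ sudo -l -U username      # run as root: lists which sudo rules apply to that user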
{ "source": [ "https://unix.stackexchange.com/questions/4405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
4,408
Fedora 14 uses vi by default when one runs visudo. Is there a way to change this to something else?
Adding Defaults editor=/path/to/editor in the sudoers file will cause visudo to use the specified editor for changes. Additionally, if your sudo package has been built with --with-env-editor, as is the default on some Linux distributions, you can also set the EDITOR environment variable by executing export EDITOR=/path/to/editor . Performed on the command line this will revert as soon as that shell session is terminated, setting the variable in a ~/.bashrc or /etc/profile will cause the change to persist.
{ "source": [ "https://unix.stackexchange.com/questions/4408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
4,415
I know I've probably looked over this a million times in all the vi documents I've read, but I can't seem to find the delete from cursor to end of line command.
D (uppercase letter D) The command dw will delete from the current cursor position to the beginning of the next word character. The command d$ (note, that's a dollar sign, not an 'S') will delete from the current cursor position to the end of the current line. D (uppercase D) is a synonym for d$ (lowercase D + dollar sign).
{ "source": [ "https://unix.stackexchange.com/questions/4415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1571/" ] }
4,420
I have a system with two interfaces. Both interfaces are connected to the internet. One of them is set as the default route; a side effect of this is that if a packet comes in on the non-default-route interface, the reply is sent back through the default route interface. Is there a way to use iptables (or something else) to track the connection and send the reply back through the interface it came from?
echo 200 isp2 >> /etc/iproute2/rt_tables ip rule add from <interface_IP> table isp2 prio 1 ip route add default via <gateway_IP> dev <interface> table isp2 The above doesn't require any packet marking with ipfilter. It works because the outgoing (reply) packets will have the IP address that was originally used to connect to the 2nd interface as the source (from) address on the outgoing packet.
{ "source": [ "https://unix.stackexchange.com/questions/4420", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2180/" ] }
4,426
On my headless NAS I have sdf1 (a flash-card) mounted as / while /home is mounted from lv00 (an LVM volume backed by a software RAID). To be able to access the machine when the RAID fails, I have a copy of my ssh public key, etc. in /home/foo/.ssh on the file-system from sdf1 . To update the files that are hidden by the mounted /home I normally remount lv00 in /mnt/home , do what I have to do, and then move lv00 back in place. Is there a way to achieve this without unmounting /home ?
mkdir /mnt/root mount --bind / /mnt/root ls /mnt/root/home/foo/.ssh As long as you use --bind (as opposed to --rbind ), you get a clone of the mount without the stuff mounted on top of it.
{ "source": [ "https://unix.stackexchange.com/questions/4426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2881/" ] }
4,430
Why do we use ./filename to execute a file in Linux? Why not just enter it like other commands, such as gcc, ls, etc.?
In Linux, UNIX and related operating systems, . denotes the current directory. Since you want to run a file in your current directory and that directory is not in your $PATH , you need the ./ bit to tell the shell where the executable is. So, ./foo means run the executable called foo that is in this directory. You can use type or which to get the full path of any commands found in your $PATH .
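If you find yourself running a personal script often, the usual fix is to put it somewhere on $PATH rather than typing ./ every time. A sketch, assuming a Bourne-style shell and that foo is your script:

$ mkdir -p ~/bin
$ mv foo ~/bin/
$ echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc   # then start a new shell
$ type foo                                            # should now resolve without ./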
{ "source": [ "https://unix.stackexchange.com/questions/4430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2269/" ] }
4,460
It appears to be Unix tradition that a wheel group is created automatically, but Debian (and children, naturally) doesn't do so. Is there a rationale somewhere? Where else have you seen this tradition discarded?
Some unix systems allow only members of the wheel group to use su . Others allow anyone to use su if they know the password of the target user. There are even systems where being in the wheel group grants passwordless root access; Ubuntu does this, except that the group is called sudo (and doesn't have id 0). I think wheel is mostly a BSD thing. Linux is a mix of BSD and System V, and the various distributions have different default policies with respect to granting root access. Debian happens not to implement a wheel group by default; if you want to enable it, uncomment the auth required pam_wheel.so line in /etc/pam.d/su .
{ "source": [ "https://unix.stackexchange.com/questions/4460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
4,478
I'm sure there are many ways to do this: how can I count the number of lines in a text file? $ <cmd> file.txt 1020 lines
The standard way is with wc , which takes arguments to specify what it should count (bytes, chars, words, etc.); -l is for lines: $ wc -l file.txt 1020 file.txt
{ "source": [ "https://unix.stackexchange.com/questions/4478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2301/" ] }
4,484
I've been using public key authentication on a remote server for some time now, for remote shell use as well as for sshfs mounts. After forcing a umount of my sshfs directory, I noticed that ssh began to prompt me for a password. I tried purging the remote .ssh/authorized_keys of any mention of the local machine, and I removed references to the remote machine from the local machine. I then repeated my ssh-copy-id; it prompted me for a password and returned normally. But lo and behold, when I ssh to the remote server I am still prompted for a password. I'm a little confused as to what the issue could be; any suggestions?
sshd gets weird about permissions on $HOME, $HOME/.ssh (both directories) and on $HOME/.ssh/authorized_keys. One of my linux boxes ended up with drwxrwxrwx permissions on my $HOME directory. An Arch linux box absolutely would not log in using public keys until I removed 'w' permission for group, other on my $HOME directory. Try making $HOME and $HOME/.ssh/ have more restrictive permissions for group and other. See if that doesn't let sshd do its stuff.
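A minimal set of commands to tighten the relevant permissions, run on the remote server (adjust the paths to taste):
chmod go-w ~                       # remove group/other write from the home directory
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Then watch the server's auth log (typically /var/log/auth.log or /var/log/secure , depending on the distribution) while you try to log in; sshd normally says explicitly when it refuses a key because of bad ownership or modes.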
{ "source": [ "https://unix.stackexchange.com/questions/4484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2925/" ] }
4,489
is there a tool which enables one to: remember current RandR configuration (position, orientation, resolution etc) on per-monitor basis, automatically apply last known good configuration as soon as the display is plugged in, with no need to muck around with applets or xrandr(1)? The configurations would have to be applied on a per-user, per-display basis. If there is no such tool in the wild, I'd like to throw together one myself, but as far as I can see, there's no way to tell that a monitor has been plugged in. Do I have to poll with xrandr -q once in a while to figure out that an output was connected or disconnected, or is there a more efficient way to do it? Can udev be tuned to do just that?
I'm using this simple (homemade) script that keeps polling RandR and switches between LVDS1 and VGA1 when VGA gets connected/disconnected. (For HDMI outputs, in the following script file, change all the VGA1 to HDMI1.) It's a dirty solution, yet it's working just fine. It's customized for my setup: you'll most likely need to change the RandR output names ( LVDS1 and VGA1 ) and, unlike me, you'll probably be fine with your RandR default mode for VGA.
#!/bin/bash

# setting up new mode for my VGA
xrandr --newmode "1920x1080" 148.5 1920 2008 2052 2200 1080 1089 1095 1125 +hsync +vsync
xrandr --addmode VGA1 1920x1080

# default monitor is LVDS1
MONITOR=LVDS1

# functions to switch from LVDS1 to VGA and vice versa
function ActivateVGA {
    echo "Switching to VGA1"
    xrandr --output VGA1 --mode 1920x1080 --dpi 160 --output LVDS1 --off
    MONITOR=VGA1
}
function DeactivateVGA {
    echo "Switching to LVDS1"
    xrandr --output VGA1 --off --output LVDS1 --auto
    MONITOR=LVDS1
}

# functions to check if VGA is connected and in use
function VGAActive {
    [ $MONITOR = "VGA1" ]
}
function VGAConnected {
    ! xrandr | grep "^VGA1" | grep disconnected
}

# actual script
while true
do
    if ! VGAActive && VGAConnected
    then
        ActivateVGA
    fi
    if VGAActive && ! VGAConnected
    then
        DeactivateVGA
    fi
    sleep 1s
done
Full Steps:
1. Put the above script ( homemadeMonitor.sh ) into your preferred directory
2. Make the .sh file executable by typing the following command in the terminal: chmod +x homemadeMonitor.sh
3. Run the .sh file: ./homemadeMonitor.sh
{ "source": [ "https://unix.stackexchange.com/questions/4489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2929/" ] }
4,495
Preface: I love bash and have no intention of starting any sort of argument or holy-war, and hopefully this is not an extremely naive question. This question is somewhat related to this post on superuser, but I don't think the OP really knew what he was asking for. I use bash on FreeBSD, linux, OS X, and cygwin on Windows. I've also had extensive experience recently with PowerShell on Windows. Is there a shell for *nix, already available or in the works, that is compatible with bash but adds a layer of object-oriented scripting into the mix? The only thing I know of that comes close is the python console, but as far as I can tell it doesn't provide access to the standard shell environment. For example, I can't just cd ~ and ls , then chmod +x file inside the python console. I would have to use python to perform those tasks rather than the standard unix binaries, or call the binaries using python code. Does such a shell exist?
I can think of three desirable features in a shell: Interactive usability: common commands should be quick to type; completion; ... Programming: data structures; concurrency (jobs, pipe, ...); ... System access: working with files, processes, windows, databases, system configuration, ... Unix shells tend to concentrate on the interactive aspect and subcontract most of the system access and some of the programming to external tools, such as: bc for simple math openssl for cryptography sed , awk and others for text processing nc for basic TCP/IP networking ftp for FTP mail , Mail , mailx , etc. for basic e-mail cron for scheduled tasks wmctrl for basic X window manipulation dcop for KDE ≤3.x libraries dbus tools ( dbus-* or qdbus ) for various system information and configuration tasks (including modern desktop environments such as KDE ≥4) Many, many things can be done by invoking a command with the right arguments or piped input. This is a very powerful approach — better have one tool per task that does it well, than a single program that does everything but badly — but it does have its limitations. A major limitation of unix shells, and I suspect this is what you're after with your “object-oriented scripting” requirement, is that they are not good at retaining information from one command to the next, or combining commands in ways fancier than a pipeline. In particular, inter-program communication is text-based, so applications can only be combined if they serialize their data in a compatible way. This is both a blessing and a curse: the everything-is-text approach makes it easy to accomplish simple tasks quickly, but raises the barrier for more complex tasks. Interactive usability also runs rather against program maintainability. Interactive programs should be short, require little quoting, not bother you with variable declarations or typing, etc. Maintainable programs should be readable (so not have many abbreviations), should be readable (so you don't have to wonder whether a bare word is a string, a function name, a variable name, etc.), should have consistency checks such as variable declarations and typing, etc. In summary, a shell is a difficult compromise to reach. Ok, this ends the rant section, on to the examples. The Perl Shell (psh) “combines the interactive nature of a Unix shell with the power of Perl”. Simple commands (even pipelines) can be entered in shell syntax; everything else is Perl. The project hasn't been in development for a long time. It's usable, but hasn't reached the point where I'd consider using it over pure Perl (for scripting) or pure shell (interactively or for scripting). IPython is an improved interactive Python console, particularly targetted at numerical and parallel computing. This is a relatively young project. irb (interactive ruby) is the Ruby equivalent of the Python console. scsh is a scheme implementation (i.e. a decent programming language) with the kind of system bindings traditionally found in unix shells (strings, processes, files). It doesn't aim to be usable as an interactive shell however. zsh is an improved interactive shell. Its strong point is interactivity (command line edition, completion, common tasks accomplished with terse but cryptic syntax). Its programming features aren't that great (on par with ksh), but it comes with a number of libraries for terminal control, regexps, networking, etc. fish is a clean start at a unix-style shell. It doesn't have better programming or system access features. 
Because it breaks compatibility with sh, it has more room to evolve better features, but that hasn't happened. Addendum: another part of the unix toolbox is treating many things as files: Most hardware devices are accessible as files. Under Linux, /sys provides more hardware and system control. On many unix variants, process control can be done through the /proc filesystem. FUSE makes it easy to write new filesystems. There are already existing filesystems for converting file formats on the fly, accessing files over various network protocols, looking inside archives, etc. Maybe the future of unix shells is not better system access through commands (and better control structures to combine commands) but better system access through filesystems (which combine somewhat differently — I don't think we've worked out what the key idioms (like the shell pipe) are yet).
{ "source": [ "https://unix.stackexchange.com/questions/4495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/479/" ] }
4,527
I have a command that produces output in color, and I would like to pipe it into a file with the color codes stripped out. Is there a command that works like cat except that it strips color codes? I plan to do something like this: $ command-that-produces-colored-output | stripcolorcodes > outfile
You'd think there'd be a utility for that, but I couldn't find it. However, this Perl one-liner should do the trick:
perl -pe 's/\e\[?.*?[\@-~]//g'
Example:
$ command-that-produces-colored-output | perl -pe 's/\e\[?.*?[\@-~]//g' > outfile
Or, if you want a script you can save as stripcolorcodes :
#! /usr/bin/perl
use strict;
use warnings;
while (<>) {
    s/\e\[?.*?[\@-~]//g;   # Strip ANSI escape codes
    print;
}
If you want to strip only color codes, and leave any other ANSI codes (like cursor movement) alone, use s/\e\[[\d;]*m//g; instead of the substitution I used above (which removes all ANSI escape codes).
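If Perl isn't handy, GNU sed can do the colour-only variant; this is a sketch and assumes GNU sed, since the \x1b escape isn't portable to every sed implementation:
$ command-that-produces-colored-output | sed 's/\x1b\[[0-9;]*m//g' > outfile
It strips SGR (colour/attribute) sequences only, not cursor-movement codes.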
{ "source": [ "https://unix.stackexchange.com/questions/4527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1010/" ] }
4,535
I am mounting an ISO file, and looking at this tutorial . They use the command: $ mount -o loop disk1.iso /mnt/disk I'm trying to understand the use of -o loop . I have two questions: When I look at the long man page for mount, it takes time to find that -o option. If I do man mount | grep "-o" I get an error, and when I look in the file I do not find any info that "loop" is a command text for option -o . Where is that documented? Also, what is the "loop device" concept for mounting?
A loop device is a pseudo ("fake") device (actually just a file) that acts as a block-based device . You want to mount a file disk1.iso that will act as an entire filesystem, so you use loop. The -o is short for --options . And the last thing, if you want to search for "-o" you need to escape the '-'. Try: man mount | grep "\-o"
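To make the loop-device idea more concrete, you can watch the pseudo-device appear and disappear; the device number and exact output will vary from system to system:
$ sudo mount -o loop disk1.iso /mnt/disk
$ losetup -a                 # list loop devices in use; disk1.iso should be attached to one
$ mount | grep /mnt/disk     # the ISO now looks like an ordinary mounted block device
$ sudo umount /mnt/disk      # unmounting also releases the loop device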
{ "source": [ "https://unix.stackexchange.com/questions/4535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1325/" ] }
4,561
I need to know what hard disks are available, including ones that aren't mounted and possibly aren't formatted. I can't find them in dmesg or /var/log/messages (too much to scroll through). I'm hoping there's a way to use /dev or /proc to find out this information, but I don't know how. I am using Linux.
This is highly platform-dependent. Also different methods may treat edge cases differently (“fake” disks of various kinds, RAID volumes, …). On modern udev installations, there are symbolic links to storage media in subdirectories of /dev/disk , that let you look up a disk or a partition by serial number ( /dev/disk/by-id/ ), by UUID ( /dev/disk/by-uuid ), by filesystem label ( /dev/disk/by-label/ ) or by hardware connectivity ( /dev/disk/by-path/ ). Under Linux 2.6, each disk and disk-like device has an entry in /sys/block . Under Linux since the dawn of time, disks and partitions are listed in /proc/partitions . Alternatively, you can use lshw : lshw -class disk . Linux also provides the lsblk utility which displays a nice tree view of the storage volumes (since util-linux 2.19, not present on embedded devices with BusyBox). If you have an fdisk or disklabel utility, it might be able to tell you what devices it's able to work on. You will find utility names for many unix variants on the Rosetta Stone for Unix , in particular the “list hardware configuration” and “read a disk label” lines.
{ "source": [ "https://unix.stackexchange.com/questions/4561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,569
I have an operation using cut that I would like to assign result to a variable var4=echo ztemp.xml |cut -f1 -d '.' I get the error: ztemp.xml is not a command The value of var4 never gets assigned; I'm trying to assign it the output of: echo ztemp.xml | cut -f1 -d '.' How can I do that?
You'll want to modify your assignment to read:
var4="$(echo ztemp.xml | cut -f1 -d '.')"
The $(…) construct is known as command substitution .
{ "source": [ "https://unix.stackexchange.com/questions/4569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1325/" ] }
4,573
I've hardly ever used anything other than Meld . Can you recommend anything else? It would be extra nice if you give a reason for your recommendation (as a Comment). [ note ] I want an alternative because Meld has recently lost the feature to copy entire contents from one file to another. I'm referring to the functionality available via the Copy To Left/Right right-click menu item. [ update ] I just checked, and the problem was introduced by 1.3.2 . 1.3.1 works well, and the latest I've checked is 1.4.0 , and it doesn't work.
There are a number of tools that are usable:
meld
kompare -- diff file viewer
kdiff3 -- file difference viewer
Diffuse -- file difference viewer
Do you have two files and want to view their differences? Use a "file difference viewer". Do you have a diff file and want to look at it in an easy-to-read display? Use a "diff file viewer".
{ "source": [ "https://unix.stackexchange.com/questions/4573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
4,618
I'm a Linux/Windows/Mac user. I like all systems, *nixes more than Windows, but I like them all nonetheless. I started using a Mac this year, and a difference from Linux and Windows that I can't understand is: why do applications never get closed when I hit the "x" button, since closing is the way Linux and Windows behave? I need to hit cmd+q or quit via the menu at the top. I mean, is that only to be different from all of them, or is there a reason for this behavior? I can't see any advantage. If I want to close, I want to close. Period. Does anyone know the reason for that?
In some sense, it is a UI convention with history that goes back all the way to 1984. Since Windows and X11 both post date the original Mac GUI, one might say that Windows does it the Windows way "just to be different" rather than suggesting that the Mac is the oddball. Back in the earliest days of the Macintosh, you could only run one application at a time. It was perfectly reasonable for an application to open with no windows because the application always had a visible menu bar at the top of the screen. When you closed all the windows of an application, it made sense to keep the application open because you could always use the menu bar to create a new document, or open an existing one. Exiting the process just because a window was closed didn't make any sense at the time, because there would have been no other process to yield focus to. A few years on, the Macintosh of the late 80's advanced to the point where there was enough memory to have multiple applications open at once. Since the tools for doing this had to retain backwards compatibility with existing applications, they naturally weren't going to change the basic UI conventions and go killing applications without any windows open. The result was a clean distinction in the UI between a visual GUI element (a window), and an abstract running process (the application). Meanwhile, Microsoft had been developing Windows. By the early 90's, Microsoft had Windows 3.X working well, and Motif on X11 had been heavily inspired by Microsoft's work. While the Macintosh was built around presenting a UI of Applications, Windows (as the name would suggest) was built around the philosophy that the Window itself should be the fundamental unit of the UI, with the only concept of an application being in the form of MDI style container windows. X11 also considered an application largely unimportant from a UI standpoint. A single process could even open up windows on multiple displays connected to several machines across a (very new-fangled) local area network. The trouble with the Windows style approach was that you couldn't do some forms of user interaction, such as opening with just a menu bar, and the user had no real guarantee that a process had actually exited when the windows were gone. A Macintosh user could easily switch to an application that was running without windows to quit it, or to use it, but Windows provided absolutely no way for the user to interact with such a process. (Except to notice it in the task manager, and kill it.) Also, a user couldn't choose to leave a process running so that they could get back to it without relaunching it, except to keep some visible UI from the process cluttering up the screen, and consuming (at the time, very limited) resources. While the Macintosh had an "Applications" menu for switching, Windows popularised a "task bar," which displayed all top level windows without any regard for the process that had opened them. For heavy multitaskers, the "task bar soup" proved unweildy. For more basic users, the upredictability about what exactly qualified as a "top level window" was sometimes confusing as there was no learnable rule about exactly which windows would actually show up on the bar. By the late 90's, Microsoft's GUI was the most commonly used. Most users has a Windows PC rather than a Macintosh or a UNIX X11 workstation. Consequently, as Linux grew in popularity over time, many developers were coming from a background of using Windows UI conventions rather than UNIX UI conventions. 
That combined with the history of early work on things like Motif drawing from Windows UI conventions, to result in modern Linux desktop environments behaving much more like Windows than classic X11 things like twm or the Macintosh. At this point, "classic" Mac OS had run its course with Mac OS 9, and the Macintosh became a Unix powered machine with very different guts in the form of Mac OS X. Thus, it inherited the NeXT UI concept of a Dock. On the original NeXT machines, X11 was used, but with a fairly unique set of widgets and UI conventions. Probably the most distinctive of them was the Dock, which was a sort of combination program launcher and task switcher. (The "multicolumn" open file dialog box that is known in OS-X also came from NeXT, as well as some other visible things. The most significant changes in the OS-X transition were all the invisible ones, though.) The Dock worked well with the Macintosh's concept of "Application as the fundamental UI element." So, a user could see that an application is open by a mark on the dock icon, and switch to it or launch it by clicking on it. Since modern OS-X now supported multitasking so much better than the classic Mac OS had, it suddenly made sense that a user might want to have all sorts of things running in the background, such as some video conversion software that cranks away in the background, a screen recorder, VOIP software, Internet Radio, a web server, something that speaks in response to a spoken command, etc. None of that stuff necessarily requires a visible window to be open to still have a sensible user experience, and the menu bar was still separate from the windows at the top of the screen, and you could have a menu directly on the dock icon, so a user could always interact with a program that had no open UI. consequently, ditching the existing convention of keeping an application open, just to be more like Windows, would have been seen by most Mac users as a horrible step in the wrong direction. It makes several modes of interaction impossible, with no real benefit. Obviously, some users prefer the Windows convention, and neither is "provably correct." But, migrating away from something useful like that, without any good reason would just make no sense. Hopefully, this tour through some of the history gives you a bit of context that you find useful.
{ "source": [ "https://unix.stackexchange.com/questions/4618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2689/" ] }
4,621
I'm using zsh and gdm to run gnome. Some time ago I discovered that variables are not set correctly. For example LANG/LC_ALL are incorrect ( "" instead of en_GB.UTF-8 ). I split the .zshrc into .zshrc and .profile . In the latter I set the environment variables, but how can I set the variables before the session starts? I tried a few choices ( .xinitrc , .xsessionrc ) but none seemed to work. Edit To clarify - I used .profile and manually sourced it in .zshrc . It does not change question anyway.
The simple way is to invent a time machine, visit the various people who devised shell startup files and tell them to cleanly distinguish between three things: session setup, e.g. environment variables; session launching, i.e., e.g. starting a command-line shell or a window manager or running startx ; shell initialization, e.g. aliases, prompt, key bindings. It's not too hard to get session vs. shell right in a portable way: login-time initialization goes into .profile (or .zprofile , or .login ), shell initialization goes in .bashrc or .zshrc . I've previously written about .bash_profile , zsh vs. other shells , more about portability (mostly about bash) , more about who reads .profile . A remaining problem is distinguishing between session setup and session launching. In most cases, ~/.profile is executed when you log in and can double as both, but there are exceptions: If your login shell is (t)csh or zsh, ~/.login and ~/.zprofile is sourced instead of ~/.profile . Ditto for bash and ~/.bash_profile , but this is easily solved by sourcing ~/.profile from ~/.bash_profile . If you log in under a display manager (xdm, gdm, kdm, …), whether your ~/.profile is read depends on the version of the program, on your distribution (Linux or otherwise), and on what session type you choose. If you count on the display manager to start a session for you, your .profile must set environment variables but not start a session (e.g. a window manager). The traditional configuration file for X sessions is ~/.xsession , doing both session setup and session launching. So the file can be essentially . ~/.xsession; . ~/.xinitrc . Some distributions source ~/.profile before ~/.xsession . Modern distributions only source ~/.xsession when you select a “custom” session from the display manager, and such a session is not always available. Your session manager may have its own way of setting environment variables. (That's an optional part of your desktop environment, chosen by you through a configuration file or by selecting a session type when logging in; don't confuse it with the session startup scripts provided by the display manager, which are executed under your user but chosen on a system-wide basis. Yes, it's a mess.) In summary, ~/.profile is the right place for environment variables. If it's not read, try sourcing it from ~/.xsession (and start your X programs from there), or look for a system-specific method (which may depend on your distribution, display manager if any, session type if display manager, and desktop environment or session manager).
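To make that concrete, a fairly typical arrangement looks like the following; the variable values and the window manager name are only examples:
# ~/.profile -- session-wide environment, read at login
export LANG=en_GB.UTF-8
export PATH="$HOME/bin:$PATH"

# ~/.bash_profile -- bash reads this instead of ~/.profile, so chain them
. ~/.profile
[ -f ~/.bashrc ] && . ~/.bashrc

# ~/.zprofile -- same idea for zsh login shells
[ -f ~/.profile ] && emulate sh -c '. ~/.profile'

# ~/.xsession -- used for a "custom" session started by the display manager
. ~/.profile
exec my-window-manager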
{ "source": [ "https://unix.stackexchange.com/questions/4621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/305/" ] }
4,650
Is there a way for a sourced shell script to find the path to itself? I'm mainly concerned with bash , though I have some coworkers who use tcsh . I'm guessing I may not have much luck here since sourcing causes commands to be executed in the current shell, so $0 is still the current shell's invocation, not the sourced script. My best thought is to do source $script $script so that the first positional parameter contains the necessary information. Does anyone have a better way? To be clear, I am sourcing the script, not running it: source foo.bash
In tcsh , $_ at the beginning of the script will contain the location if the file was sourced and $0 contains it if it was run.
#!/bin/tcsh
set sourced=($_)
if ("$sourced" != "") then
    echo "sourced $sourced[2]"
endif
if ("$0" != "tcsh") then
    echo "run $0"
endif
In Bash:
#!/bin/bash
[[ $0 != $BASH_SOURCE ]] && echo "Script is being sourced" || echo "Script is being run"
{ "source": [ "https://unix.stackexchange.com/questions/4650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2376/" ] }
4,664
I am on a MacBook Pro running Apple Leopard (Mac OS X 10.5.8). I would like to unpackage a RPM and view the files contained within the wget-1.11.4-2.el5_4.1.src.rpm . I don't need to install the files to a particular location or run any %postinstall scripts or anything. I just want to unpackage this RPM so that I can view the source files underneath. Is it possible to unpackage a RPM file on a non-RedHat/CentOS system?
On modern systems, the built-in tar utility supports several other archive formats including rpm. So you can extract the files from the rpm with tar -xf foo.rpm Note that if you've installed GNU tools, tar may invoke GNU tar instead of the one that ships with macOS, depending on which set of GNU tools and on your $PATH . You need to use /usr/bin/tar , not GNU tar. You can install rpm through Darwin Ports or Fink or Mac Ports or even a Darwin port, rpm4darwin . To extract files from an rpm package without installing it, you can use the companion utility rpm2cpio , e.g. rpm2cpio foo.rpm | cpio -i -d There's also a portable rpm2cpio script if you don't want or can't get the version that's bundled with the rpm utility (the script may not work with older or newer versions of the rpm format though).
{ "source": [ "https://unix.stackexchange.com/questions/4664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4/" ] }
4,681
How do you sort du -sh /dir/* by size? I read one site that said use | sort -n but that's obviously not right. Here's an example that is wrong. [~]# du -sh /var/* | sort -n 0 /var/mail 1.2M /var/www 1.8M /var/tmp 1.9G /var/named 2.9M /var/run 4.1G /var/log 8.0K /var/account 8.0K /var/crash 8.0K /var/cvs 8.0K /var/games 8.0K /var/local 8.0K /var/nis 8.0K /var/opt 8.0K /var/preserve 8.0K /var/racoon 12K /var/aquota.user 12K /var/portsentry 16K /var/ftp 16K /var/quota.user 20K /var/yp 24K /var/db 28K /var/empty 32K /var/lock 84K /var/profiles 224M /var/netenberg 235M /var/cpanel 245M /var/cache 620M /var/lib 748K /var/spool
If you have GNU coreutils (common in most Linux distributions), you can use
du -sh -- * | sort -h
The -h option tells sort that the input is in human-readable format (number with unit; 1024-based, so that 1023 is considered less than 1K, which happens to match what GNU du -h does). This feature was added to GNU Core Utilities 7.5 in Aug 2009 . Note: If you are using an older version of Mac OSX, you need to install coreutils with brew install coreutils , then use gsort as a drop-in replacement for sort . Newer versions of macOS (verified on Mojave) support sort -h natively.
{ "source": [ "https://unix.stackexchange.com/questions/4681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,711
Going through the linux 2.6.36 source code at lxr.linux.no , I could not find the ioctl() method in file_operations . Instead I found two new calls: unlocked_ioctl() and compat_ioctl() . What is the difference between ioctl() , unlocked_ioctl() , and compat_ioctl() ?
Meta-answer: All the raw stuff happening to the Linux kernel goes through lkml (the Linux kernel mailing list) . For explicative summaries, read or search lwn (Linux weekly news) . Answer: From The new way of ioctl() by Jonathan Corbet : ioctl() is one of the remaining parts of the kernel which runs under the Big Kernel Lock (BKL). In the past, the usage of the BKL has made it possible for long-running ioctl() methods to create long latencies for unrelated processes. Follows an explanation of the patch that introduced unlocked_ioctl and compat_ioctl into 2.6.11. The removal of the ioctl field happened a lot later, in 2.6.36. Explanation: When ioctl was executed, it took the Big Kernel Lock (BKL), so nothing else could execute at the same time. This is very bad on a multiprocessor machine, so there was a big effort to get rid of the BKL. First, unlocked_ioctl was introduced. It lets each driver writer choose what lock to use instead. This can be difficult, so there was a period of transition during which old drivers still worked (using ioctl ) but new drivers could use the improved interface ( unlocked_ioctl ). Eventually all drivers were converted and ioctl could be removed. compat_ioctl is actually unrelated, even though it was added at the same time. Its purpose is to allow 32-bit userland programs to make ioctl calls on a 64-bit kernel. The meaning of the last argument to ioctl depends on the driver, so there is no way to do a driver-independent conversion.
{ "source": [ "https://unix.stackexchange.com/questions/4711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
4,712
How to write those into one line, also without repeat the same path? rsync -a root@somewhere:/folder/remote/*.txt . rsync -a root@somewhere:/folder/remote/*.jpg .
I'd write it like this: rsync -a root@somewhere:/folder/remote/*.{txt,jpg} .
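One caveat: the {txt,jpg} brace expansion is performed by your local shell before rsync runs, so it simply turns into two source arguments. If you would rather have rsync itself do the filtering (for instance to recurse into subdirectories as well), include/exclude filters are an alternative; a sketch:
rsync -a --include='*.txt' --include='*.jpg' --exclude='*' root@somewhere:/folder/remote/ .
Add --include='*/' before the exclude if you also want it to descend into subdirectories.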
{ "source": [ "https://unix.stackexchange.com/questions/4712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1875/" ] }
4,738
I want to learn how to write bash completion scripts. Which tutorial would you recommend?
There aren't that many bash completion tutorials around, but this one is pretty good: Introduction to Bash Completion
Part 1 is for general knowledge
Part 2 covers creating scripts in /etc/bash_completion.d/
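To give a flavour of what those tutorials cover, here is a minimal completion sketch for a hypothetical command mytool with subcommands start, stop and status; put it in /etc/bash_completion.d/ or source it from ~/.bashrc:
_mytool() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    if [ "$COMP_CWORD" -eq 1 ]; then
        # first argument: offer the subcommands
        COMPREPLY=( $(compgen -W "start stop status" -- "$cur") )
    else
        # afterwards: fall back to filename completion
        COMPREPLY=( $(compgen -f -- "$cur") )
    fi
}
complete -F _mytool mytool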
{ "source": [ "https://unix.stackexchange.com/questions/4738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
4,770
I often end up issuing complex commands over ssh; these commands involve piping to awk or perl one-lines, and as a result contain single quotes and $'s. I have neither been able to figure out a hard and fast rule to do the quoting properly, nor found a good reference for it. For instance, consider the following: # what I'd run locally: CMD='pgrep -fl java | grep -i datanode | awk '{print $1}' # this works with ssh $host "$CMD": CMD='pgrep -fl java | grep -i datanode | awk '"'"'{print $1}'"'" (Note the extra quotes in the awk statement.) But how do I get this to work with, e.g. ssh $host "sudo su user -c '$CMD'" ? Is there a general recipe for managing quotes in such scenarios?..
Dealing with multiple levels of quoting (really, multiple levels of parsing/interpretation) can get complicated. It helps to keep a few things in mind: Each “level of quoting” can potentially involve a different language. Quoting rules vary by language. When dealing with more than one or two nested levels, it is usually easiest to work “from the bottom, up” (i.e. innermost to outermost). Levels of Quoting Let us look at your example commands. pgrep -fl java | grep -i datanode | awk '{print $1}' Your first example command (above) uses four languages: your shell, the regex in pgrep , the regex in grep (which might be different from the regex language in pgrep ), and awk . There are two levels of interpretation involved: the shell and one level after the shell for each of the involved commands. There is only one explicit level of quoting (shell quoting into awk ). ssh host … Next you added a level of ssh on top. This is effectively another shell level: ssh does not interpret the command itself, it hands it to a shell on the remote end (via (e.g.) sh -c … ) and that shell interprets the string. ssh host "sudo su user -c …" Then you asked about adding another shell level in the middle by using su (via sudo , which does not interpret its command arguments, so we can ignore it). At this point, you have three levels of nesting going on ( awk → shell, shell → shell ( ssh ), shell → shell ( su user -c ), so I advise using the “bottom, up” approach. I will assume that your shells are Bourne compatible (e.g. sh , ash , dash , ksh , bash , zsh , etc.). Some other kind of shell ( fish , rc , etc.) might require different syntax, but the method still applies. Bottom, Up Formulate the string you want to represent at the innermost level. Select a quoting mechanism from the quoting repertoire of the next-highest language. Quote the desired string according to your selected quoting mechanism. There are often many variations how to apply which quoting mechanism. Doing it by hand is usually a matter of practice and experience. When doing it programatically, it is usually best to pick the easiest to get right (usually the “most literal” (fewest escapes)). Optionally, use the resulting quoted string with additional code. If you have not yet reached your desired level of quoting/interpretation, take the resulting quoted string (plus any added code) and use it as the starting string in step 2. Quoting Semantics Vary The thing to keep in mind here is that each language (quoting level) may give slightly different semantics (or even drastically different semantics) to the same quoting character. Most languages have a “literal” quoting mechanism, but they vary in exactly how literal they are. The single quote of Bourne-like shells is actually literal (which means you can not use it to quote a single quote character itself). Other languages (Perl, Ruby) are less literal in that they interpret some backslash sequences inside single quoted regions non-literally (specifically, \\ and \' result in \ and ' , but other backslash sequences are actually literal). You will have to read the documentation for each of your languages to understand its quoting rules and the overall syntax. Your Example The innermost level of your example is an awk program. {print $1} You are going to embed this in a shell command line: pgrep -fl java | grep -i datanode | awk … We need to protect (at a minimum) the space and the $ in the awk program. The obvious choice is to use single quote in the shell around the whole program. 
'{print $1}' There are other choices though: {print\ \$1} directly escape the space and $ {print' $'1} single quote only the space and $ "{print \$1}" double quote the whole and escape the $ {print" $"1} double quote only the space and $ This may be bending the rules a bit (unescaped $ at the end of a double quoted string is literal), but it seems to work in most shells. If the program used a comma between the open and close curly braces we would also need to quote or escape either the comma or the curly braces to avoid “brace expansion” in some shells. We pick '{print $1}' and embed it in the rest of the shell “code”: pgrep -fl java | grep -i datanode | awk '{print $1}' Next, you wanted to run this via su and sudo . sudo su user -c … su user -c … is just like some-shell -c … (except running under some other UID), so su just adds another shell level. sudo does not interpret its arguments, so it does not add any quoting levels. We need another shell level for our command string. We can pick single quoting again, but we have to give special handling to the existing single quotes. The usual way looks like this: 'pgrep -fl java | grep -i datanode | awk '\''{print $1}'\' There are four strings here that the shell will interpret and concatenate: the first single quoted string ( pgrep … awk ), an escaped single quote, the single-quoted awk program, another escaped single quote. There are, of course many alternatives: pgrep\ -fl\ java\ \|\ grep\ -i\ datanode\ \|\ awk\ \'{print\ \$1} escape everything important pgrep\ -fl\ java\|grep\ -i\ datanode\|awk\ \'{print\$1} the same, but without superfluous whitespace (even in the awk program!) "pgrep -fl java | grep -i datanode | awk '{print \$1}'" double quote the whole thing, escape the $ 'pgrep -fl java | grep -i datanode | awk '"'"'{print \$1}'"'" your variation; a bit longer than the usual way due to using double quotes (two characters) instead of escapes (one character) Using different quoting in the first level allows for other variations at this level: 'pgrep -fl java | grep -i datanode | awk "{print \$1}"' 'pgrep -fl java | grep -i datanode | awk {print\ \$1}' Embedding the first variation in the sudo /*su* command line give this: sudo su user -c 'pgrep -fl java | grep -i datanode | awk '\''{print $1}'\' You could use the same string in any other single shell level contexts (e.g. ssh host … ). Next, you added a level of ssh on top. This is effectively another shell level: ssh does not interpret the command itself, but it hands it to a shell on the remote end (via (e.g.) sh -c … ) and that shell interprets the string. ssh host … The process is the same: take the string, pick a quoting method, use it, embed it. Using single quotes again: 'sudo su user -c '\''pgrep -fl java | grep -i datanode | awk '\'\\\'\''{print $1}'\'\\\' Now there are eleven strings that are interpreted and concatenated: 'sudo su user -c ' , escaped single quote, 'pgrep … awk ' , escaped single quote, escaped backslash, two escaped single quotes, the single quoted awk program, an escaped single quote, an escaped backslash, and a final escaped single quote. The final form looks like this: ssh host 'sudo su user -c '\''pgrep -fl java | grep -i datanode | awk '\'\\\'\''{print $1}'\'\\\' This is a bit unwieldy to type by hand, but the literal nature of the shell’s single quoting makes it easy to automate a slight variation: #!/bin/sh sq() { # single quote for Bourne shell evaluation # Change ' to '\'' and wrap in single quotes. 
# If original starts/ends with a single quote, creates useless # (but harmless) '' at beginning/end of result. printf '%s\n' "$*" | sed -e "s/'/'\\\\''/g" -e 1s/^/\'/ -e \$s/\$/\'/ } # Some shells (ksh, bash, zsh) can do something similar with %q, but # the result may not be compatible with other shells (ksh uses $'...', # but dash does not recognize it). # # sq() { printf %q "$*"; } ap='{print $1}' s1="pgrep -fl java | grep -i datanode | awk $(sq "$ap")" s2="sudo su user -c $(sq "$s1")" ssh host "$(sq "$s2")"
{ "source": [ "https://unix.stackexchange.com/questions/4770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2879/" ] }
4,782
So I have a script that, when I give it two addresses, will search two HTML links:
echo "http://maps.google.be/maps?saddr\=$1\&daddr\=$2" | sed 's/ /%/g'
I want to send this to wget and then save the output in a file called temp.html . I tried this, but it doesn't work. Can someone explain why and/or give me a solution please?
#!/bin/bash
url = echo "http://maps.google.be/maps?saddr\=$1\&daddr\=$2" | sed 's/ /%/g'
wget $url
You can use backticks (`) to evaluate a command and substitute in the command's output, like: echo "Number of files in this directory: `ls | wc -l`" In your case: wget `echo http://maps.google.be/maps?saddr\=$1\&daddr\=$2 | sed 's/ /%/g'`
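Two notes on the original script, plus a sketch of a corrected version: the assignment fails because the shell does not allow spaces around = and because the command's output has to be captured with command substitution. The modern $( ... ) form nests better than backticks, and quoting the result keeps the URL in one piece (spaces in URLs are normally encoded as %20 rather than a bare %):
#!/bin/bash
url="$(echo "http://maps.google.be/maps?saddr=$1&daddr=$2" | sed 's/ /%20/g')"
wget -O temp.html "$url"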
{ "source": [ "https://unix.stackexchange.com/questions/4782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
4,795
I'm trying to put in an email the temperature outside in degrees. On my Mac, the degree symbol ( ° ) is Option + Shift + 8 . But I'm writing the email in Thunderbird on an Ubuntu 10.10 with the default US English keyboard layout. What key combination do I use to get the degree symbol under X11? EDIT: Gert successfully answered the question... but, bonus points for any easier to use keystroke than what's in his answer!
Set up a Compose key . On Ubuntu, this is easily done in the keyboard preferences, “Layout” tab, “Options” subdialog. Caps Lock is a good choice as it's pretty much useless (all remotely serious editors have a command to make the selection uppercase for the rare times it's needed). Press Compose followed by two characters (occasionally three) to enter a character you don't have on your keyboard. Usually the resulting character combines the two characters you type, for example Compose ' a enters á and Compose s s enters ß . The degree symbol ° is one of the less memorable combinations, it's on Compose o o .
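If you prefer to set this up from a terminal, or want an even shorter keystroke, two sketches (they assume a reasonably current X.org/GTK desktop):
$ setxkbmap -option compose:caps     # make Caps Lock the Compose key for the current session
With that in place, Compose o o produces ° . In GTK applications such as Thunderbird you can also enter the code point directly: press Ctrl + Shift + U , then 00b0 , then Space or Enter .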
{ "source": [ "https://unix.stackexchange.com/questions/4795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1974/" ] }
4,822
When I visited the kernel.org website to download the latest Linux kernel, I noticed a package named 2.6.37-rc5 in the repository. What is the meaning of the "rc5" at the end?
Release Candidate. By convention, whenever an update for a program is almost ready, the test version is given an rc number. If critical bugs are found that require fixes, the program is updated and reissued with a higher rc number. When no critical bugs remain, or no additional critical bugs are found, the rc designation is dropped.
{ "source": [ "https://unix.stackexchange.com/questions/4822", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2269/" ] }
4,840
I run the following command: grep -o "[0-9] errors" verification_report_3.txt | awk '{print $1}' and I get the following result: 1 4 0 8 I'd like to add each of the numbers up to a running count variable. Is there a magic one liner someone can help me build?
grep -o "[0-9] errors" verification_report_3.txt | awk '{ SUM += $1} END { print SUM }' That doesn't print the list but does print the sum. If you want both the list and the sum, you can do: grep -o "[0-9] errors" verification_report_3.txt | awk '{ SUM += $1; print $1} END { print SUM }'
{ "source": [ "https://unix.stackexchange.com/questions/4840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3115/" ] }
4,845
What are the (best) solutions for remote desktop in linux? Ideally I'd like to be able to be able to log in to a remote X (KDE) session without even logging into my local machine. Maybe if I could have a remote X session forwarded to a different virtual terminal session so I can switch back and forth between local and remote with Ctrl + alt + n? This is going to be over the internet via a VPN, so data-light solutions would be best =]
There are a few common approaches, depending on how much bandwidth you have and whether you need a full login session. VNC is the classic option: run a VNC server on the remote machine (vncserver/TigerVNC gives you a separate virtual desktop session, x11vnc or KDE's krfb exports an already-running X session) and connect with any VNC viewer, ideally tunnelled through SSH. xrdp provides an RDP server on Linux, so any RDP client can connect. NX-based solutions (NoMachine, FreeNX, X2Go) compress and cache the X protocol and are by far the most comfortable over a slow VPN link, so they are worth a look for your data-light requirement. For logging straight into a remote KDE session without a local one, you can also enable XDMCP in the display manager and point a local X server at it, but plain XDMCP/X11 is chatty and unencrypted, so over the internet the VNC or NX route is usually a better fit. For single applications rather than a whole desktop, plain ssh -X forwarding is often enough.
{ "source": [ "https://unix.stackexchange.com/questions/4845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1571/" ] }
4,859
I generally set both VISUAL and EDITOR environment variables to the same thing, but what's the difference? Why would I set them differently? When developing apps, why should I choose to look at VISUAL before EDITOR or vice versa?
The EDITOR editor should be able to work without the use of "advanced" terminal functionality (like old ed or the ex mode of vi ). It was used on teletype terminals. A VISUAL editor can be a full-screen editor such as vi or emacs . E.g. if you invoke an editor through bash (using C-x C-e ), bash will first try the VISUAL editor and then, if VISUAL fails (because the terminal does not support a full-screen editor), it tries EDITOR . Nowadays, you can leave EDITOR unset or set it to vi -e .
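In practice that means exporting both from your shell startup files, for example in ~/.profile (the editor choices here are only examples):
export EDITOR='vi -e'   # line-oriented fallback for dumb terminals
export VISUAL='vim'     # full-screen editor for capable terminals
Many programs ( crontab -e , git , less 's v command, bash's C-x C-e ) check VISUAL first and fall back to EDITOR .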
{ "source": [ "https://unix.stackexchange.com/questions/4859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,884
What is the difference between procfs and sysfs? Why are they made as file systems? As I understand it, proc is just something to store the immediate info regarding the processes running in the system.
What is the difference between procfs and sysfs? proc is the old one; it is more or less without rules and structure. At some point it was decided that proc was a little too chaotic and a new way was needed. Then sysfs was created, and new stuff that was added, like device information, was put into sysfs. So in some sense they do the same thing, but sysfs is a little more structured. Why are they made as file systems? UNIX philosophy tells us that everything is a "file", so both were created to behave as files. As I understand it, proc is just something to store the immediate info regarding the processes running in the system. Those parts have always been there and they will probably never move into sysfs . But there is more old stuff that you can find in proc that has not been moved.
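A quick way to see the difference in structure for yourself (the paths are typical; the network interface name eth0 is just an example):
$ ls /proc/                          # a mix of per-process directories and flat, ad hoc files
$ cat /proc/cpuinfo                  # free-form, human-readable text
$ cat /proc/$$/cmdline | tr '\0' ' ' # per-process info: command line of the current shell
$ ls /sys/class/net/                 # sysfs: one directory per object, organised by subsystem
$ cat /sys/class/net/eth0/address    # one value per file, easy for scripts to consume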
{ "source": [ "https://unix.stackexchange.com/questions/4884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
4,899
I can write VAR=$VAR1 VAR=${VAR1} VAR="$VAR1" VAR="${VAR1}" the end result to me all seems about the same. Why should I write one or the other? are any of these not portable/POSIX?
VAR=$VAR1 is a simplified version of VAR=${VAR1} . There are things the second can do that the first can't, for instance reference an array index (not portable) or remove a substring (POSIX-portable). See the More on variables section of the Bash Guide for Beginners and Parameter Expansion in the POSIX spec. Using quotes around a variable as in rm -- "$VAR1" or rm -- "${VAR}" is a good idea. This makes the contents of the variable an atomic unit. If the variable value contains blanks (well, characters in the $IFS special variable, blanks by default) or globbing characters and you don't quote it, then each word is considered for filename generation (globbing) whose expansion makes as many arguments to whatever you're doing. $ find . . ./*r* ./-rf ./another ./filename ./spaced filename ./another spaced filename ./another spaced filename/x $ var='spaced filename' # usually, 'spaced filename' would come from the output of some command and you weren't expecting it $ rm $var rm: cannot remove 'spaced': No such file or directory # oops! I just ran 'rm spaced filename' $ var='*r*' $ rm $var # expands to: 'rm' '-rf' '*r*' 'another spaced filename' $ find . . ./another ./spaced filename ./another spaced filename $ var='another spaced filename' $ rm -- "$var" $ find . . ./another ./spaced filename On portability: According to POSIX.1-2008 section 2.6.2 , the curly braces are optional.
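A couple of quick illustrations of what the braced form can do that the bare form can't (the variable contents are made up):
$ file=report.txt
$ echo "${file%.txt}"      # strip a suffix (POSIX parameter expansion)
report
$ echo "${file:-default}"  # fall back to a default if the variable is unset or empty
report.txt
$ arr=(one two three)
$ echo "${arr[1]}"         # array indexing needs the braces (bash/ksh/zsh, not POSIX sh)
two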
{ "source": [ "https://unix.stackexchange.com/questions/4899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,929
I'm interested in the difference between Highmem and Lowmem: Why is there such a differentiation? What do we gain by doing so? What features does each have?
On a 32-bit architecture, the address space range for addressing RAM is 0x00000000 - 0xffffffff, i.e. 4'294'967'296 bytes (4 GB). The Linux kernel splits that up 3/1 (it could also be 2/2 or 1/3 1 ) into user space (high memory) and kernel space (low memory) respectively.
The user space range: 0x00000000 - 0xbfffffff
Every newly spawned user process gets an address (range) inside this area. User processes are generally untrusted and are therefore forbidden to access kernel space. Further, they are considered non-urgent; as a general rule, the kernel tries to defer the allocation of memory to those processes.
The kernel space range: 0xc0000000 - 0xffffffff
A kernel process gets its address (range) here. The kernel can directly access this 1 GB of addresses (well, not the full 1 GB: 128 MB are reserved for high memory access). Processes spawned in kernel space are trusted, urgent and assumed error-free; their memory requests get processed instantaneously. Every kernel process can also access the user space range if it wishes to. To achieve this, the kernel maps an address from user space (the high memory) into its kernel space (the low memory); the 128 MB mentioned above are reserved especially for this. 1 Whether the split is 3/1, 2/2, or 1/3 is controlled by the CONFIG_VMSPLIT_... option; you can probably check under /boot/config* to see which option was selected for your kernel.
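On a 32-bit machine you can see how much of your RAM ends up in each zone; free's -l flag and /proc/zoneinfo both expose the split (on 64-bit systems the HighMem line is simply zero or absent, since the distinction isn't needed there):
$ free -lm                    # shows Low and High memory totals in MiB
$ grep 'zone' /proc/zoneinfo  # lists the zones present (DMA, Normal, HighMem, ...)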
{ "source": [ "https://unix.stackexchange.com/questions/4929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
4,961
Which application would you recommend for Linux to tag MP3s? Under Windows I used to use Tag&Rename and liked it a lot; it works well under Wine, but I want something that runs natively.
There are various options:
easytag has a lot of options
kid3 if you're on a Qt/KDE environment
id3v2 or eyeD3 for the command line
Generally, music players can also edit common tags, e.g. banshee , rhythmbox or amarok and a lot of others; try searching your distribution's repository and test some of them.
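Since id3v2 and eyeD3 are mentioned for the command line, here is roughly what tagging a file with them looks like (check the man pages, as option names can differ between versions):
$ id3v2 -a "Artist Name" -A "Album Name" -t "Track Title" -T 1 song.mp3
$ eyeD3 --artist "Artist Name" --album "Album Name" --title "Track Title" song.mp3
$ id3v2 -l song.mp3        # list the tags that are now set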
{ "source": [ "https://unix.stackexchange.com/questions/4961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/665/" ] }
4,988
I'm trying to write all of my sh startup/env scripts to work with as much DRY and as much: "works on every *nix I clone it to", as possible. This means making sure that if I try to run code that's not there, that the code fails gracefully. To that end I need to be able to test if programs exist. I know how to test if a file exists, but I'm not sure how to test to see if an application is executable within the path. I'd rather use the $PATH, as some of these need to work on arch, ubuntu, and centos. Some might be installed in my homedir, on systems where I don't have root, others might not be installed, and others yet my be installed in system paths.
Use type commandname . This returns true if commandname is anything executable: alias, function, built-in or external command (looked up in $PATH ). Alternatively, use command commandname which returns true if commandname is a built-in or external command (looked up in $PATH ).
exists () {
    type "$1" >/dev/null 2>/dev/null
}
There are a few sh variants (definitely pre-POSIX; I know of /bin/sh under OSF1 ≤3.x and some versions of the Almquist shell found in early NetBSD versions and a few 20th-century Linux distributions) where type always returns 0 or doesn't exist. I don't think any systems shipped with these this millennium. If you ever encounter them, here's a function you can use to search in $PATH manually:
exists () {
    (
        IFS=:
        for d in $PATH; do
            if test -x "$d/$1"; then return 0; fi
        done
        return 1
    )
}
This function is generally useful if you want to exclude built-ins and functions and look up the name in $PATH . Most shells have a built-in for this, command -v , though it's a relatively recent addition to POSIX (still optional as of POSIX:2004). It's basically a programmer-friendly version of type : it prints the full path for an executable in $PATH , the bare name for a built-in or function, and an alias definition for an alias.
exists_in_path () {
    case $(command -v -- "$1") in
        /*) return 0;;
        alias\ *) return 1;;  # alias
        *) return 1;;         # built-in or function
    esac
}
Ksh, bash and zsh also have type -p to look up only executables in $PATH . Note that in bash, the return status of type -p foo is 0 if foo is a built-in or function; if you want to test for an executable in $PATH , you need to check that the output is not empty. type -p is not in POSIX; for instance Debian's ash (which is /bin/sh on Ubuntu) doesn't have it.
{ "source": [ "https://unix.stackexchange.com/questions/4988", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
4,999
I'm looking for somthing like top is to CPU usage. Is there a command line argument for top that does this? Currently, my memory is so full that even 'man top' fails with out of memory :)
From inside top you can try the following:
Press SHIFT + f
Press the Letter corresponding to %MEM
Press ENTER
You might also try:
$ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
This will give the top 5 processes by memory usage.
{ "source": [ "https://unix.stackexchange.com/questions/4999", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
5,010
I need a program that outputs the count of each of the different characters in a file. Example:
> stats testfile
' ': 207
'e': 186
'n': 102
Does any tool exist that does this?
The following should work: $ sed 's/\(.\)/\1\n/g' text.txt | sort | uniq -c First, we insert a newline after every character, putting each character on its own line. Then we sort it. Then we use the uniq command to remove the duplicates, prefixing each line with the number of occurrences of that character. To sort the list by frequency, pipe this all into sort -nr .
{ "source": [ "https://unix.stackexchange.com/questions/5010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2655/" ] }
5,017
When I installed Ubuntu 10.04 and, now, 10.10, I was offered the option of enabling "encrypted LVM" for my hard drive. After choosing that option, I am prompted for my password during boot to decrypt the LVM. Now, I am thinking about setting up a headless server that runs Linux (not necessarily Ubuntu), but I am worried that since the server is headless I won't be able to decrypt it during startup. Would I be able to SSH in during boot to enter my password for the encrypted LVM? If so how do I set it up? Or is there another solution? Again this question is NOT specific to Ubuntu. Thanks.
For newer versions of Ubuntu, for example 14.04, I found a combination of @dragly's and this blog post's answers very helpful. To paraphrase:
(On server) Install Dropbear:
sudo apt-get install dropbear
(On server) Copy and assign permissions for root public/private key login:
sudo cp /etc/initramfs-tools/root/.ssh/id_rsa ~/.
sudo chown user:user ~/id_rsa
remember to change user to your username on the server
(On client) Fetch the private key from the server:
scp [email protected]:~/id_rsa ~/.ssh/id_rsa_dropbear
(On client) Add an entry to ssh config:
Host parkia
    Hostname 192.168.11.111
    User root
    UserKnownHostsFile ~/.ssh/know_hosts.initramfs
    IdentityFile ~/.ssh/id_rsa_dropbear
Remember to change _parkia_ to whatever you'd like to type `ssh my-box` to be.
(On server) Create this file at /etc/initramfs-tools/hooks/crypt_unlock.sh
(On server) Make that file executable:
sudo chmod +x /etc/initramfs-tools/hooks/crypt_unlock.sh
Update the initramfs:
sudo update-initramfs -u
Disable the dropbear service on boot so openssh is used after the partition is decrypted:
sudo update-rc.d dropbear disable
You're done. Try it out. Check the blog post linked to above for instructions about how to configure the server with a static IP address if that is something you'd need to do.
{ "source": [ "https://unix.stackexchange.com/questions/5017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1375/" ] }
5,024
Suppose I have in main.sh: $NAME="a string" if [ -f $HOME/install.sh ] . $HOME/install.sh $NAME fi and in install.sh: echo $1 This is supposed to echo "a string" , but it echoes nothing. Why?
Michael Mrozek covers most of the issues and his fixes will work since you are using Bash. You may be interested in the fact that the ability to source a script with arguments is a bashism. In sh or dash your main.sh will not echo anything because the arguments to the sourced script are ignored and $1 will refer to the argument to main.sh. When you source the script in sh , it is as if you just copied and pasted the text of the sourced script into the file from which it was sourced. Consider the following (note, I've made the correction Michael recommended):
$ bash ./test.sh
A String
$ sh ./test.sh
$ sh ./test.sh "HELLO WORLD"
HELLO WORLD
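For reference, the corrected scripts presumably look something like the following (the fix being to drop the $ from the assignment and add the missing then ; treat this as a sketch of the correction rather than the exact scripts from the other answer):
# main.sh
NAME="a string"
if [ -f "$HOME/install.sh" ]; then
    . "$HOME/install.sh" "$NAME"
fi

# install.sh
echo "$1"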
{ "source": [ "https://unix.stackexchange.com/questions/5024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2689/" ] }
5,085
With sfdisk -s I can see the disk capacity as follows: $ sfdisk -s /dev/cciss/c0d0: 143338560 total: 143338560 blocks How do I see disk details like disk manufacturer? I tried hdparm , but got an error: $ hdparm -i /dev/cciss/c0d0 /dev/cciss/c0d0: HDIO_GET_IDENTITY failed: Inappropriate ioctl for device
Try these commands:
lshw -class disk
hwinfo --disk
You may have to install hwinfo . Concerning hdparm : hdparm(8) says: Although this utility is intended primarily for use with SATA/IDE hard disk devices, several of the options are also valid (and permitted) for use with SCSI hard disk devices and MFM/RLL hard disks with XT interfaces. and: Some options (eg. -r for SCSI) may not work with old kernels as necessary ioctl()'s were not supported.
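If the drive (or its RAID controller) supports SMART, smartctl from the smartmontools package will also report the vendor, model and serial number. Note that drives behind a hardware RAID controller, such as the HP cciss one in the question, often need an extra -d option telling smartctl how to reach the physical disk; check smartctl's man page for the exact syntax for your controller:
$ sudo smartctl -i /dev/sda                       # plain SATA/SCSI disk
$ sudo smartctl -i -d cciss,0 /dev/cciss/c0d0     # first physical drive behind an HP Smart Array controller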
{ "source": [ "https://unix.stackexchange.com/questions/5085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
5,088
I would like to know about Linux spinlocks in detail; could someone explain them to me?
A spin lock is a way to protect a shared resource from being modified by two or more processes simultaneously. The first process that tries to modify the resource "acquires" the lock and continues on its way, doing what it needed to with the resource. Any other processes that subsequently try to acquire the lock get stopped; they are said to "spin in place" waiting on the lock to be released by the first process, thus the name spin lock. The Linux kernel uses spin locks for many things, such as when sending data to a particular peripheral. Most hardware peripherals aren't designed to handle multiple simultaneous state updates. If two different modifications have to happen, one has to strictly follow the other; they can't overlap. A spin lock provides the necessary protection, ensuring that the modifications happen one at a time. Spin locks are a problem because spinning blocks that thread's CPU core from doing any other work. While the Linux kernel does provide multitasking services to user space programs running under it, that general-purpose multitasking facility doesn't extend to kernel code. This situation is changing, and has been for most of Linux's existence. Up through Linux 2.0, the kernel was almost purely a single-tasking program: whenever the CPU was running kernel code, only one CPU core was used, because there was a single spin lock protecting all shared resources, called the Big Kernel Lock (BKL). Beginning with Linux 2.2, the BKL is slowly being broken up into many independent locks that each protect a more focused class of resource. Today, with kernel 2.6, the BKL still exists, but it's only used by really old code that can't be readily moved to some more granular lock. It is now quite possible for a multicore box to have every CPU running useful kernel code. There's a limit to the utility of breaking up the BKL because the Linux kernel lacks general multitasking. If a CPU core gets blocked spinning on a kernel spin lock, it can't be retasked to go do something else until the lock is released. It just sits and spins until the lock is released. Spin locks can effectively turn a monster 16-core box into a single-core box, if the workload is such that every core is always waiting for a single spin lock. This is the main limit to the scalability of the Linux kernel: doubling CPU cores from 2 to 4 probably will nearly double the speed of a Linux box, but doubling it from 16 to 32 probably won't, with most workloads.
{ "source": [ "https://unix.stackexchange.com/questions/5088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
5,093
I am trying to (as closely as possible) atomically change a symlink. I've tried: ln -sf other_dir existing_symlink That just put the new symlink in the directory that existing_symlink pointed to. ln -sf other_dir new_symlink mv -f new_symlink existing_symlink That did the same thing: it moved the symlink into the directory. cp -s other_dir existing_symlink It refuses because it's a directory. I've read that mv -T was made for this, but busybox doesn't have the -T flag.
This can indeed be done atomically with rename(2) , by first creating the new symlink under a temporary name and then cleanly overwriting the old symlink in one go. As the man page states: If newpath refers to a symbolic link the link will be overwritten. In the shell, you would do this with mv -T as follows: $ mkdir a b $ ln -s a z $ ln -s b z.new $ mv -T z.new z You can strace that last command to make sure it is indeed using rename(2) under the hood: $ strace mv -T z.new z lstat64("z.new", {st_mode=S_IFLNK|0777, st_size=1, ...}) = 0 lstat64("z", {st_mode=S_IFLNK|0777, st_size=1, ...}) = 0 rename("z.new", "z") = 0 Note that in the above, both mv -T and strace are Linux-specific. On FreeBSD, use mv -h instead.
{ "source": [ "https://unix.stackexchange.com/questions/5093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2180/" ] }
5,117
Imagine there's a company A that releases a new graphics adapter. Who manages the process that results in this new graphics adapter being supported by the Linux kernel in the future? How does that proceed? I'm curious how kernel support for any new hardware is handled; on Windows companies develop drivers on their own, but how does Linux get specific hardware support?
Driver support works the same way as with all of open source: someone decides to scratch their own itch. Sometimes the driver is supplied by the company providing the hardware, just as on Windows. Intel does this for their network chips, 3ware does this for their RAID controllers, etc. These companies have decided that it is in their best interest to provide the driver: their "itch" is to sell product to Linux users, and that means ensuring that there is a driver. In the best case, the company works hard to get their driver into the appropriate source base that ships with Linux distros. For most drivers, that means the Linux kernel. For graphics drivers, it means X.org . There's also CUPS for printer drivers, NUT for UPS drivers, SANE for scanner drivers, etc. The obvious benefit of doing this is that Linux distros made after the driver gets accepted will have support for the hardware out of the box. The biggest downside is that it's more work for the company to coordinate with the open source project to get their driver in, for the same basic reasons it's difficult for two separate groups to coordinate anything. Then there are those companies that choose to offer their driver source code directly, only. You typically have to download the driver source code from their web site, build it on your system, and install it by hand. Such companies are usually smaller or specialty manufacturers without enough employees that they can spare the effort to coordinate with the appropriate open source project to get their driver into that project's source base. A rare few companies provide binary-only drivers instead of source code. An example are the more advanced 3D drivers from companies like NVIDIA. Typically the reason for this is that the company doesn't want to give away information they feel proprietary about. Such drivers often don't work with as many Linux distros as with the previous cases, because the company providing the hardware doesn't bother to rebuild their driver to track API and ABI changes. It's possible for the end user or the Linux distro provider to tweak a driver provided as source code to track such changes, so in the previous two cases, the driver can usually be made to work with more systems than a binary driver will. When the company doesn't provide Linux drivers, someone in the community simply decides to do it. There are some large classes of hardware where this is common, like with UPSes and printers. It takes a rare user who a) has the hardware; b) has the time; c) has the skill; and d) has the inclination to spend the time to develop the driver. For popular hardware, this usually isn't a problem because with millions of Linux users, these few people do exist. You get into trouble with uncommon hardware.
{ "source": [ "https://unix.stackexchange.com/questions/5117", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3086/" ] }
5,120
This is Ubuntu server 10.04 64 and samba 3.4.7. I have a shared directory /home/mit/share and another one /home/temp that I link into the shared one: ln -s /home/temp /home/mit/share/temp But on Windows, after using internet, I cannot open S:/temp , but on Linux it is possible to access /home/mit/share/temp as expected. This works if I link directories inside /home/mit/share/temp , so I guess samba is preventing links from pointing outside/above the shared directory. EDIT: See also this question titled Ubuntu + latest samba version, symlinks no longer work on share mounted in Windows . It seems best to put unix extensions = no into the global section and follow symlinks = yes and wide links = yes only into the share sections where you really need them. The unix extension flag has to live in the global section and not in the individual shares sections. But for security reasons it is better to use the other options only where you need them, and not globally.
Edit smb.conf [global] unix extensions = no [share] follow symlinks = yes wide links = yes Note: If you're using a newer version of samba the following may work for you instead: [global] allow insecure wide links = yes [share] follow symlinks = yes wide links = yes documentation on follow symlinks and wide links flags: https://www.samba.org/samba/docs/using_samba/ch08.html#samba2-CHP-8-TABLE-1
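After editing smb.conf it's worth checking that the file still parses and then reloading Samba; on Ubuntu that is roughly (the service name varies between releases: smbd on newer ones, samba on older ones):

    testparm -s                   # parse smb.conf and report any problems
    sudo service smbd restart     # or: sudo /etc/init.d/samba restart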
{ "source": [ "https://unix.stackexchange.com/questions/5120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2418/" ] }
5,257
When I install a port, I am often presented with a menu screen to select configuration options. If I'm going to install a really big package with lots of dependencies, that will be extremely inconvenient. Is there a make flag for accepting the default answers for all such prompts?
Probably BATCH , described in ports(7) , is what you're looking for: # cd /usr/ports/sysutils/screen # export BATCH=yes # make rmconfig # make install clean (no configuration menu is displayed) make rmconfig removes OPTIONS config for this port, and you can use it to remove OPTIONS which were previously saved when you configured and installed screen(1) the first time. OPTIONS are stored in the directory specified via PORT_DB_DIR (defaults to /var/db/ports ). If you use bash, BATCH can be set automatically every time you log in: # echo 'export BATCH=yes' >> ~/.bash_profile
{ "source": [ "https://unix.stackexchange.com/questions/5257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2339/" ] }
5,260
When I was first introduced to Linux, working at Cisco Systems in 2000, I was taught the merits of the sync command, used to flush buffers to disk to prevent filesystem corruption / data loss. I was told not only by coworkers there, but by friends in college to always run sync "a few" or "a bunch" of times, that is, maybe 5 - 10 times, instead of just once. I've continued this habit ever since, but, is there any merit to this? Has anyone else ever heard this? And most importantly, can anyone provide good rationale / empirical evidence for/against the idea that you need to run sync more than once for it to be effective?
I heard it (sorry, I forget where) as typing the sync command three times (as in: S Y N C Return , wait for the prompt, repeat, repeat). I also read that the origin was a particular system where it would take a couple of seconds for the disk to finish flushing its buffers, even after it had told the operating system everything was fine. Typing the command twice more gave the disk enough time to settle. It seems that over the years, the purpose was forgotten, and the advice was abbreviated as sync; sync; sync which wouldn't have had the desired effect (since the disk had reported the “all clear”, the second and third syncs would complete instantly and the prompt would come back too early). I have never heard of a system where multiple sync operations have any use, and I am highly skeptical any exist. I consider this an urban legend. On the other hand, I find it highly believable that there would be systems where you should wait a couple of seconds after sync'ing and before powering down. Googling leads to a few independent concurring analyses, e.g. The Legend of sync . See also Is execution of sync(8) still required before shutting down linux? .
{ "source": [ "https://unix.stackexchange.com/questions/5260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1974/" ] }
5,277
I need a command that will wait for a process to start accepting requests on a specific port. Is there something in linux that does that? while (checkAlive -host localhost -port 13000 == false) do some waiting ...
The best test to see if a server is accepting connections is to actually try connecting. Use a regular client for whatever protocol your server speaks and try a no-op command. If you want a lightweight TCP or UDP client you can drive simply from the shell, use netcat . How to program a conversation depends on the protocol; many protocols have the server close the connection on a certain input, and netcat will then exit. while ! echo exit | nc localhost 13000; do sleep 10; done You can also tell netcat to exit after establishing the connection. It returns 1 if there's no connection and 0 if there is, so we negate its output. Depending on your version of netcat, it may support one or both of the following commands: while ! nc -z localhost 13000 </dev/null; do sleep 10; done while ! nc -q 1 localhost 13000 </dev/null; do sleep 10; done An alternative approach is to wait for the server process to open a listening socket. while netstat -lnt | awk '$4 ~ /:13000$/ {exit 1}'; do sleep 10; done If you are on Mac OS, netstat uses a slightly different output format, so you would want the following instead: while netstat -lnt | awk '$4 ~ /\.13000$/ {exit 1}'; do sleep 10; done Or you might want to target a specific process ID: while ! lsof -n -Fn -p $pid | grep -q '^n.*:13000$'; do sleep 10; done I can't think of any way to react to the process starting to listen to the socket (which would avoid a polling approach) short of using ptrace .
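If netcat isn't installed at all, bash (not plain sh) can attempt the TCP connection itself through its /dev/tcp pseudo-device, which avoids any external dependency:

    # bash-only: succeeds once something accepts connections on the port
    while ! (exec 3<>/dev/tcp/localhost/13000) 2>/dev/null; do
        sleep 10
    done
    echo "port 13000 is now accepting connections"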
{ "source": [ "https://unix.stackexchange.com/questions/5277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3390/" ] }
5,297
Does anybody have a suggestion for how to move the root partition to a new drive and set up grub2 to boot on that drive? I seem to have no luck instructing grub-mkconfig what it is I want to do (e.g. chroot'ing into my new root just confuses all the scripts). Background I am running Debian Squeeze on a headless low-power NAS. My current setup is / on sda0 and /boot on sde0 (a CF card): I needed the separate /boot because sd[a-d] need to do a delayed spin-up. Now I've found an old 2.5" IDE disk to use as / including /boot to allow me to spin all the big disks down. What I've tried Basically I went mount -o rw /dev/sdf5 /mnt/newroot cp -ax / /mnt/newroot cp -ax /boot /mnt/newroot/boot Then I tried chroot /mnt/newroot update-grub But that failed with grub asking if root was mounted. Then I did a half-hearted attempt at setting up /mnt/newroot/grub/grub.cfg to find the kernel image on sdf5 , followed by a grub-install --root-directory=/mnt/newroot /dev/sdf . But this just landed me a grub rescue prompt when I tried booting from sdf . My backup plan is to just reinstall, so a bonus question (no checkmarks for this one): What do I have to do to get my lvm2 and mdadm config across? Is it all stored in the filesystems (and will it be automatically discovered), or do I need to take care of it myself? Solution (thanks to Maciej Piechotka): As Maciej points out, I need to do a proper chroot for all the grub tools to work. For reference, this is how I did it: janus@nasguld:/mnt/newroot$ sudo cp -ax / /mnt/newroot janus@nasguld:/mnt/newroot$ sudo cp -ax /boot /mnt/newroot All the files are now copied (see here for a discussion of copy strategies). Fix the new etc/fstab to point to the new root: janus@nasguld:/mnt/newroot$ diff -u etc/fstab.old etc/fstab -UUID=399b6a6d-c067-4caf-bb3e-85317d66cf46 / ext3 errors=remount-ro 0 1 -UUID=b394b614-a977-4860-bbd5-7862d2b7e02a /boot ext3 defaults 0 2 +UUID=b9d62595-e95c-45b1-8a46-2c0b37fcf153 / ext3 noatime,errors=remount-ro 0 1 Finally, mount dev , sys and proc to the new root and chroot: janus@nasguld:/mnt/newroot$ sudo mount -o bind /dev /mnt/newroot/dev janus@nasguld:/mnt/newroot$ sudo mount -t proc none /mnt/newroot/proc janus@nasguld:/mnt/newroot$ sudo mount -t sysfs none /mnt/newroot/sys janus@nasguld:/mnt/newroot$ sudo parted /dev/sdb set 5 boot on janus@nasguld:/mnt/newroot$ sudo chroot . We are now chrooted to the future root exactly as it will look. According to Maciej, it should be ok to just call grub-install , but I did an update-grub first to get a look at the generated /boot/grub/grub.cfg before installing the bootloader. I am not sure it will be automatically updated? root@nasguld:/# update-grub root@nasguld:/# grub-install /dev/sdb
Mount basic filesystems and copy/modify files while chrooting like: /dev ( mount -o bind /dev/ /path/to/chroot/dev ) /proc ( mount -t proc none /path/to/chroot/proc ) /sys ( mount -t sysfs none /path/to/chroot/sys ) IIRC that worked for me while installing Grub 2 in Arch and numerous times on Gentoo. Then after chrooting to /path/to/chroot the command was simply: grub-install /dev/<boot_disk> As for lvm2 (and I believe mdadm, but I haven't used it), the configuration is stored on disk. There is configuration describing what should be read to discover devices. Assuming your devices are in standard locations ( /dev/sd* or /dev/hd* ) there should be no problem. PS. I would not trust a simple cp of a live system as there are several places where it can go wrong: Forgetting to change /etc/fstab and other useful files Files changed during access Copying garbage ( /tmp etc.)
{ "source": [ "https://unix.stackexchange.com/questions/5297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2881/" ] }
5,337
I would like to understand in detail the difference between fork() and vfork(). I was not able to digest the man page completely. I would also like to clarify a comment from one of my colleagues: "In current Linux, there is no vfork(); even if you call it, it will internally call fork()."
Man pages are usually terse reference documents. Wikipedia is a better place to turn to for conceptual explanations. Fork duplicates a process: it creates a child process which is almost identical to the parent process (the most obvious difference is that the new process has a different process ID). In particular, fork (conceptually) must copy all the parent process's memory. As this is rather costly, vfork was invented to handle a common special case where the copy is not necessary. Often, the first thing the child process does is to load a new program image, so this is what happens: if (fork()) { # parent process … } else { # child process (with a new copy of the process memory) execve("/bin/sh", …); # discard the process memory } The execve call loads a new executable program, and this replaces the process's code and data memory by the code of the new executable and a fresh data memory. So the whole memory copy created by fork was all for nothing. Thus the vfork call was invented. It does not make a copy of the memory. Therefore vfork is cheap, but it's hard to use since you have to make sure you don't access any of the process's stack or heap space in the child process. Note that even reading could be a problem, because the parent process keeps executing. For example, this code is broken (it may or may not work depending on whether the child or the parent gets a time slice first): if (vfork()) { # parent process cmd = NULL; # modify the only copy of cmd } else { # child process execve("/bin/sh", "sh", "-c", cmd, (char*)NULL); # read the only copy of cmd } Since the invention of vfork, better optimizations have been invented. Most modern systems, including Linux, use a form of copy-on-write , where the pages in the process memory are not copied at the time of the fork call, but later when the parent or child first writes to the page. That is, each page starts out as shared, and remains shared until either process writes to that page; the process that writes gets a new physical page (with the same virtual address). Copy-on-write makes vfork mostly useless, since fork won't make any copy in the cases where vfork would be usable. Linux does retain vfork. The fork system call must still make a copy of the process's virtual memory table, even if it doesn't copy the actual memory; vfork doesn't even need to do this. The performance improvement is negligible in most applications.
{ "source": [ "https://unix.stackexchange.com/questions/5337", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
5,366
So, I've looked at history and at Ctrl + R , but they are not what I thought I knew. Is there a way that I can type in the beginning of a command, and cycle through the matches in my history with some bash shortcut? # mysq(some shortcut key) Gives me: # mysqldump --add-drop-table -e -q -n -C -u (some shortcut key) # mysql -u ben.dauphinee -p
Pressing Ctrl + R will open the reverse history search. Now start typing your command, this will give the first match. By pressing Ctrl + R again (and again) you can cycle through the history. mysq(Ctrl+R) Would give: mysqldump --add-drop-table -e -q -n -C -u Ctrl + R again: mysql -u ben.dauphinee -p
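If what you actually want is to type the start of a command and then cycle through only the matching history entries with the arrow keys, readline's history-search functions do that; a typical setup (put these in ~/.bashrc, or the equivalent lines without the bind wrapper in ~/.inputrc):

    bind '"\e[A": history-search-backward'   # Up: previous command matching the typed prefix
    bind '"\e[B": history-search-forward'    # Down: next match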
{ "source": [ "https://unix.stackexchange.com/questions/5366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3429/" ] }
5,389
How can I concatenate a shell variable to other parameters in my command lines? For example, #!/bin/sh WEBSITE="danydiop" /usr/bin/mysqldump --opt -u root --ppassword $WEBSITE > $WEBSITE.sql I need to concatenate .sql to $WEBSITE
Use ${ } to enclose a variable. Without curly brackets: VAR="foo" echo $VAR echo $VARbar would give foo and nothing, because the variable $VARbar doesn't exist. With curly brackets: VAR="foo" echo ${VAR} echo ${VAR}bar would give foo foobar Enclosing the first $VAR is not necessary, but a good practice. For your example: #!/bin/sh WEBSITE="danydiop" /usr/bin/mysqldump --opt -u root --ppassword ${WEBSITE} > ${WEBSITE}.sql This works for bash , zsh , ksh , maybe others too.
{ "source": [ "https://unix.stackexchange.com/questions/5389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2504/" ] }
5,394
I use Ctrl-P very frequently to scroll backward in the command history, but I often mistype it as Mod4-P , which is bound to the switch display function. I've searched around Keyboard shortcuts and CompizConfig, etc., but I couldn't find where Mod4-P is bound. What controls that?
{ "source": [ "https://unix.stackexchange.com/questions/5394", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3267/" ] }
5,414
On our servers, typing sar shows the system load statistics for today, starting at midnight. Is it possible to show yesterday's statistics?
Usually, sysstat , which provides a sar command, keeps logs in /var/log/sysstat/ or /var/log/sa/ with filenames such as /var/log/sysstat/sadd where dd is a numeric value for the day of the month (starting at 01). By default, the file from the current day is used; however, you can change the file that is used with the -f command line switch. Thus for the 3rd of the month you would do something like: sar -f /var/log/sysstat/sa03 If you want to restrict the time range, you can use the -s and -e parameters. If you want to routinely get yesterday's file and can never remember the date and have GNU date you could try sar -f /var/log/sysstat/sa$(date +%d -d yesterday) I highly recommend reading the manual page for sar .
{ "source": [ "https://unix.stackexchange.com/questions/5414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
5,421
Where did the terminology "tty" come from in Linux?
As Danny has stated, tty stands for teletype terminal. The fact is that most of us have used it many times, but few of us have gone so far as to understand it. Here is a very good article which gives us a basic understanding of TTYs in Linux. The TTY Demystified
{ "source": [ "https://unix.stackexchange.com/questions/5421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2269/" ] }
5,478
Given an X11 window ID, is there a way to find the ID of the process that created it? Of course this isn't always possible, for example if the window came over a TCP connection. For that case I'd like the IP and port associated with the remote end. The question was asked before on Stack Overflow , and a proposed method was to use the _NET_WM_PID property. But that's set by the application. Is there a way to do it if the application doesn't play nice?
Unless your X-server supports XResQueryClientIds from X-Resource v1.2 extension I know no easy way to reliably request process ID. There're other ways however. If you just have a window in front of you and don't know its ID yet — it's easy to find it out. Just open a terminal next to the window in question, run xwininfo there and click on that window. xwininfo will show you the window-id. So let's assume you know a window-id, e.g. 0x1600045, and want to find, what's the process owning it. The easiest way to check who that window belongs to is to run XKillClient for it i.e.: xkill -id 0x1600045 and see which process just died. But only if you don't mind killing it of course! Another easy but unreliable way is to check its _NET_WM_PID and WM_CLIENT_MACHINE properties: xprop -id 0x1600045 That's what tools like xlsclients and xrestop do. Unfortunately this information may be incorrect not only because the process was evil and changed those, but also because it was buggy. For example after some firefox crash/restart I've seen orphaned windows (from flash plugin, I guess) with _NET_WM_PID pointing to a process, that died long time ago. Alternative way is to run xwininfo -root -tree and check properties of parents of the window in question. That may also give you some hints about window origins. But! While you may not find what process have created that window, there's still a way to find where that process have connected to X-server from. And that way is for real hackers. :) The window-id 0x1600045 that you know with lower bits zeroed (i.e. 0x1600000) is a "client base". And all resource IDs, allocated for that client are "based" on it (0x1600001, 0x1600002, 0x1600003, etc). X-server stores information about its clients in clients[] array, and for each client its "base" is stored in clients[i]->clientAsMask variable. To find X-socket, corresponding to that client, you need to attach to X-server with gdb , walk over clients[] array, find client with that clientAsMask and print its socket descriptor, stored in ((OsCommPtr)(clients[i]->osPrivate))->fd. There may be many X-clients connected, so in order to not check them all manually, let's use a gdb function: define findclient set $ii = 0 while ($ii < currentMaxClients) if (clients[$ii] != 0 && clients[$ii]->clientAsMask == $arg0 && clients[$ii]->osPrivate != 0) print ((OsCommPtr)(clients[$ii]->osPrivate))->fd end set $ii = $ii + 1 end end When you find the socket, you can check, who's connected to it, and finally find the process. WARNING : Do NOT attach gdb to X-server from INSIDE the X-server. gdb suspends the process it attaches to, so if you attach to it from inside X-session, you'll freeze your X-server and won't be able to interact with gdb. You must either switch to text terminal ( Ctrl+Alt+F2 ) or connect to your machine over ssh. Example: Find the PID of your X-server: $ ps ax | grep X 1237 tty1 Ssl+ 11:36 /usr/bin/X :0 vt1 -nr -nolisten tcp -auth /var/run/kdm/A:0-h6syCa Window id is 0x1600045, so client base is 0x1600000. Attach to X-server and find client socket descriptor for that client base. You'll need debug information installed for X-server (-debuginfo package for rpm-distributions or -dbg package for deb's). $ sudo gdb (gdb) define findclient Type commands for definition of "findclient". End with a line saying just "end". 
> set $ii = 0 > while ($ii < currentMaxClients) > if (clients[$ii] != 0 && clients[$ii]->clientAsMask == $arg0 && clients[$ii]->osPrivate != 0) > print ((OsCommPtr)(clients[$ii]->osPrivate))->fd > end > set $ii = $ii + 1 > end > end (gdb) attach 1237 (gdb) findclient 0x1600000 $1 = 31 (gdb) detach (gdb) quit Now you know that client is connected to a server socket 31. Use lsof to find what that socket is: $ sudo lsof -n | grep 1237 | grep 31 X 1237 root 31u unix 0xffff810008339340 8512422 socket (here "X" is the process name, "1237" is its pid, "root" is the user it's running from, "31u" is a socket descriptor) There you may see that the client is connected over TCP, then you can go to the machine it's connected from and check netstat -nap there to find the process. But most probably you'll see a unix socket there, as shown above, which means it's a local client. To find a pair for that unix socket you can use the MvG's technique (you'll also need debug information for your kernel installed): $ sudo gdb -c /proc/kcore (gdb) print ((struct unix_sock*)0xffff810008339340)->peer $1 = (struct sock *) 0xffff810008339600 (gdb) quit Now that you know client socket, use lsof to find PID holding it: $ sudo lsof -n | grep 0xffff810008339600 firefox 7725 username 146u unix 0xffff810008339600 8512421 socket That's it. The process keeping that window is "firefox" with process-id 7725 2017 Edit : There are more options now as seen at Who's got the other end of this unix socketpair? . With Linux 3.3 or above and with lsof 4.89 or above, you can replace points 3 to 5 above with: lsof +E -a -p 1237 -d 31 to find out who's at the other end of the socket on fd 31 of the X-server process with ID 1237.
{ "source": [ "https://unix.stackexchange.com/questions/5478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
5,490
Say I have a file which contains: A A A B CC I want to have the output like this: A 3 B 1 CC 1
I figured it out; one of uniq 's options is -c , for "prefix lines by the number of occurrences": $ uniq -c
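Note that uniq only collapses adjacent duplicate lines, so for input that isn't already grouped like the example you would normally sort first:

    sort file.txt | uniq -c
    sort file.txt | uniq -c | sort -rn   # optionally order by count, highest first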
{ "source": [ "https://unix.stackexchange.com/questions/5490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1875/" ] }
5,515
I want a script to sleep unless a certain file is modified/deleted (or a file is created in a certain directory, or ...). Can this be achieved in some elegant way? The simplest thing that comes to my mind is a loop that sleeps for some time before checking the status again, but maybe there is a more elegant way?
On linux, you can use the kernel's inotify feature. Tools for scripting can be found there: inotify-tools . Example use from wiki: #!/bin/sh EVENT=$(inotifywait --format '%e' ~/file1) # blocking without looping [ $? != 0 ] && exit [ "$EVENT" = "MODIFY" ] && echo 'file modified!' [ "$EVENT" = "DELETE_SELF" ] && echo 'file deleted!' # etc...
{ "source": [ "https://unix.stackexchange.com/questions/5515", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/863/" ] }
5,518
While browsing through the Kernel Makefiles, I found these terms. So I would like to know what is the difference between vmlinux , vmlinuz , vmlinux.bin , zimage & bzimage ?
vmlinux This is the Linux kernel in a statically linked executable file format. Generally, you don't have to worry about this file, it's just an intermediate step in the boot procedure. The raw vmlinux file may be useful for debugging purposes. vmlinux.bin The same as vmlinux, but in a bootable raw binary file format. All symbols and relocation information are discarded. Generated from vmlinux by objcopy -O binary vmlinux vmlinux.bin . vmlinuz The vmlinux file usually gets compressed with zlib . Since 2.6.30 LZMA and bzip2 are also available. By adding further boot and decompression capabilities to vmlinuz, the image can be used to boot a system with the vmlinux kernel. The compression of vmlinux can occur with zImage or bzImage. The function decompress_kernel() handles the decompression of vmlinuz at bootup; a message indicates this: Decompressing Linux... done Booting the kernel. zImage ( make zImage ) This is the old format for small kernels (compressed, below 512KB). At boot, this image gets loaded low in memory (the first 640KB of the RAM). bzImage ( make bzImage ) The big zImage (this has nothing to do with bzip2 ) was created while the kernel grew and handles bigger images (compressed, over 512KB). The image gets loaded high in memory (above 1MB RAM). As today's kernels are way over 512KB, this is usually the preferred way. An inspection on Ubuntu 10.10 shows: ls -lh /boot/vmlinuz-$(uname -r) -rw-r--r-- 1 root root 4.1M 2010-11-24 12:21 /boot/vmlinuz-2.6.35-23-generic file /boot/vmlinuz-$(uname -r) /boot/vmlinuz-2.6.35-23-generic: Linux kernel x86 boot executable bzImage, version 2.6.35-23-generic (buildd@rosea, RO-rootFS, root_dev 0x6801, swap_dev 0x4, Normal VGA
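If you ever need the uncompressed vmlinux back from a distribution's bzImage (for debugging or disassembly), recent kernel source trees ship a helper script for exactly that; a sketch, assuming the kernel source is unpacked under /usr/src/linux:

    /usr/src/linux/scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > /tmp/vmlinux
    file /tmp/vmlinux    # should now report an ELF executable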
{ "source": [ "https://unix.stackexchange.com/questions/5518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
5,535
From my research, I seem to notice that all package managers insist on being used as a privileged user and must be installed into / . Typically, what I like to do is create a throwaway account, compile some software, and install to $HOME for that account. I can try a variety of setups and then when I'm done, just destroy the account. However, compiling software becomes tedious. My experience is really just limited to yum , but I don't understand why I wouldn't be able to drop a repo file into ~/etc/yum.repos.d and have yum install everything into a home account. Is there any reason why package managers must be used as a privileged user to install software?
Binary packages are compiled with the assumption that they will be installed to specific locations in / . This is not always easily changed, and it would take additional QA effort (which is difficult enough in the first place!) to determine whether specific binaries are or aren't relocatable. To an extent, you can use things like fakechroot to create an entire system in a subdirectory as a non-root user, but this is tedious and fragile. You will have better luck with source packages. Gentoo Prefix and Rootless GoboLinux are both package managers that can install to non- / locations and may be usable by non- root users.
{ "source": [ "https://unix.stackexchange.com/questions/5535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1517/" ] }
5,609
When I execute a program without specifying the full path to the executable, and Bash must search the directories in $PATH to find the binary, it seems that Bash remembers the path in some sort of cache. For example, I installed a build of Subversion from source to /usr/local , then typed svnsync help at the Bash prompt. Bash located the binary /usr/local/bin/svnsync for "svnsync" and executed it. Then when I deleted the installation of Subversion in /usr/local and re-ran svnsync help , Bash responds: bash: /usr/local/bin/svnsync: No such file or directory But, when I start a new instance of Bash, it finds and executes /usr/bin/svnsync . How do I clear the cache of paths to executables?
bash does cache the full path to a command. You can verify that the command you are trying to execute is hashed with the type command: $ type svnsync svnsync is hashed (/usr/local/bin/svnsync) To clear the entire cache: $ hash -r Or just one entry: $ hash -d svnsync For additional information, consult help hash and man bash .
{ "source": [ "https://unix.stackexchange.com/questions/5609", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3535/" ] }
5,642
I have a process I can't kill with kill -9 <pid> . What's the problem in such a case, especially since I am the owner of that process. I thought nothing could evade that kill option.
kill -9 ( SIGKILL ) always works, provided you have the permission to kill the process. Basically either the process must be started by you and not be setuid or setgid, or you must be root. There is one exception: even root cannot send a fatal signal to PID 1 (the init process). However kill -9 is not guaranteed to work immediately . All signals, including SIGKILL, are delivered asynchronously: the kernel may take its time to deliver them. Usually, delivering a signal takes at most a few microseconds, just the time it takes for the target to get a time slice. However, if the target has blocked the signal , the signal will be queued until the target unblocks it. Normally, processes cannot block SIGKILL. But kernel code can, and processes execute kernel code when they call system calls . Kernel code blocks all signals when interrupting the system call would result in a badly formed data structure somewhere in the kernel, or more generally in some kernel invariant being violated. So if (due to a bug or misdesign) a system call blocks indefinitely, there may effectively be no way to kill the process. (But the process will be killed if it ever completes the system call.) A process blocked in a system call is in uninterruptible sleep . The ps or top command will (on most unices) show it in state D (originally for “ d isk”, I think). A classical case of long uninterruptible sleep is processes accessing files over NFS when the server is not responding; modern implementations tend not to impose uninterruptible sleep (e.g. under Linux, since kernel 2.6.25, SIGKILL does interrupt processes blocked on an NFS access). If a process remains in uninterruptible sleep for a long time, you can get information about what it's doing by attaching a debugger to it, by running a diagnostic tool such as strace or dtrace (or similar tools, depending on your unix flavor), or with other diagnostic mechanisms such as /proc/ PID /syscall under Linux. See Can't kill wget process with `kill -9` for more discussion of how to investigate a process in uninterruptible sleep. You may sometimes see entries marked Z (or H under Linux, I don't know what the distinction is) in the ps or top output. These are technically not processes, they are zombie processes, which are nothing more than an entry in the process table, kept around so that the parent process can be notified of the death of its child. They will go away when the parent process pays attention (or dies).
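On Linux you can check whether a kill-resistant process is in uninterruptible sleep, and get a hint of where it is stuck, with something like the following (replace 1234 with the PID; the /proc files need root and aren't present on every kernel):

    ps -o pid,stat,wchan:32,cmd -p 1234   # a STAT containing "D" means uninterruptible sleep
    sudo cat /proc/1234/stack             # kernel stack of the blocked task, if available
    sudo cat /proc/1234/syscall           # syscall number and arguments it is blocked in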
{ "source": [ "https://unix.stackexchange.com/questions/5642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
5,644
I am using winFF in Ubuntu 10.04. WinFF is a graphical frontend to ffmpeg. Typically, after selecting a file in WinFF, setting my conversion settings and pressing the "Convert" button, a console appears with output from the conversion process and prompts requesting permission to continue. However, now, when I press convert, I only see a blank console with a command prompt such as: oneat@oneat-desktop:~$ I thought I had misconfigured something, but I reinstalled everything but problem continues. Could you help me ? In general console output still works, since I see output when I run the following script: #!/bin/sh echo -n "\033]0; Converting _quot;LIKE A G6_quot; (OFFICIAL) FAR EAST MOVEMENT (FM) feat (1/1)\007" /usr/bin/ffmpeg -i "/home/oneat/dwhelper/_quot;LIKE A G6_quot; (OFFICIAL) FAR EAST MOVEMENT (FM) feat.flv" -acodec libmp3lame -vcodec msmpeg4 -ab 192kb -b 1000kb -s 640x480 -ar 44100 "/home/oneat/_quot;LIKE A G6_quot; (OFFICIAL) FAR EAST MOVEMENT (FM) feat.avi" read -p "Press Enter to Continue" dumbyvar rm "/home/oneat/.winff/ff110111193250.sh"
{ "source": [ "https://unix.stackexchange.com/questions/5644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1924/" ] }
5,656
In order to save disk space, I want to have two OS installations share a single swap partition (a dual-boot). Is this a good idea?
It's possible. In fact, you can share the swap space between completely different operating systems, as long as you initialize the swap space when you boot. It used to be relatively common to share swap space between Linux and Windows , back when it represented a significant portion of your hard disk. Two restrictions come to mind: The OSes cannot be running concurrently (which you might want to do with virtual machines). You can't hibernate one of the OSes while you run another.
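Setting it up is just a matter of pointing both installations' /etc/fstab at the same partition; the UUID below is a placeholder, so look up the real one with blkid:

    sudo blkid /dev/sda5                  # note the swap partition's UUID
    # identical line in /etc/fstab of both installs:
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0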
{ "source": [ "https://unix.stackexchange.com/questions/5656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
5,665
What does the "etc" folder in the root directory stand for? I think knowing this will help me remember where certain files are located. Update : Might be useful for others, the folder is used for "Host specific configuration files" - reference .
Define - /etc? has some good history. You can find references to "et cetera" in old Bell Labs UNIX manuals and so on – nowadays it's used only for system configuration, but it used to be where all the stuff that didn't fit into other directories went.
{ "source": [ "https://unix.stackexchange.com/questions/5665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3572/" ] }
5,711
This has always puzzled me. Why does the root directory contain a reference to a parent directory? bob@bob:/$ ls -a . build home lib32 mnt .rpmdb sys vmlinuz .. cdrom initrd.img lib64 opt sbin tmp vmlinuz.old bin dev initrd.img.old lost+found proc selinux usr boot etc lib media root srv var I understand how directories are managed in the filesystem - each directory has n+2 pointers to itself (n = number of subdirectories inside the directory). One for each immediate subdirectory, one for its parent, and one for itself. But what is / 's parent?
/.. points to / : $ ls -id / 2 / $ ls -id /.. 2 /.. Both have the same inode number, which happens to be 2 on this system. (The exact value doesn't matter.) It's done for consistency. This way, there doesn't have to be code in the kernel to check where it currently is when it processes a .. in a path. You can say cd .. forever, and never go deeper than the root.
{ "source": [ "https://unix.stackexchange.com/questions/5711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1049/" ] }
5,715
Is there an application to simply preview a font from a TTF file without installing it?
I use character maps heavily and decided to make one which you can access from anywhere using a web interface and which requires no installation. Works best on Chrome. Features Select your own font file Provides font and character information Character copy-able Supports TTF/OTF Supports Icon fonts Simple interface No installation necessary No server upload necessary Screenshot
{ "source": [ "https://unix.stackexchange.com/questions/5715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
5,717
I want a script which kills the instance(s) of ssh which are run with the -D argument (setting up a local proxy). Manually, I do ps -A | grep -i ssh , look for the instance(s) with -D, and kill -9 {id} each one. But what does that look like in bash script form? (I am on Mac OS X but will install any necessary commands via port )
Run pgrep -f "ssh.*-D" and see if that returns the correct process ID. If it does, simply change pgrep to pkill and keep the same options and pattern. Also, you shouldn't use kill -9 (aka SIGKILL) unless absolutely necessary, because programs can't trap SIGKILL to clean up after themselves before they exit. I only use kill -9 after first trying -1, -2, and -3.
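If pgrep/pkill aren't available (they aren't part of the base system everywhere, including older OS X releases), the manual ps/grep/awk approach from the question can be scripted directly; the bracketed [s] keeps grep from matching its own process:

    #!/bin/sh
    pids=$(ps ax | grep '[s]sh .*-D' | awk '{print $1}')
    [ -n "$pids" ] && kill $pids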
{ "source": [ "https://unix.stackexchange.com/questions/5717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3194/" ] }
5,776
In case of one program having multiple instances, running pidof program gives: `1 2 3` top -p accepts comma-delimited arguments: 1, 2, 3 . This means that top -p `pidof program` won't work: top: unknown argument '1' usage: top -hv | -bcisSH -d delay -n iterations [-u user | -U user] -p pid [,pid ...] Can you show me how to do this. I'm not familiar with awk, sed, etc...
An alternative to sed for simple things like this is tr : top -p $(pidof program | tr ' ' ',') tr can also easily handle a variable number of spaces: tr -s ' ' ',' Additionally, if you have it available, pgrep can work well here: top -p $(pgrep -d , program) Make sure that you leave a space between -d and , as the comma is the argument (the delimiter). Also, note that pgrep will return every result matching "program", so if you have a process called "program-foo," then this will also be returned (hence the name pgrep).
{ "source": [ "https://unix.stackexchange.com/questions/5776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
5,778
There are two syntaxes for command substitution: with dollar-parentheses and with backticks. Running top -p $(pidof init) and top -p `pidof init` gives the same output. Are these two ways of doing the same thing, or are there differences?
The old-style backquotes ` ` do treat backslashes and nesting a bit differently. The new-style $() interprets everything in between ( ) as a command. echo $(uname | $(echo cat)) Linux echo `uname | `echo cat`` bash: command substitution: line 2: syntax error: unexpected end of file echo cat works if the nested backquotes are escaped: echo `uname | \`echo cat\`` Linux backslash fun: echo $(echo '\\') \\ echo `echo '\\'` \ The new-style $() applies to all POSIX -conformant shells. As mouviciel pointed out, old-style ` ` might be necessary for older shells. Apart from the technical point of view, the old-style ` ` also has a visual disadvantage: Hard to notice: I like $(program) better than `program` Easily confused with a single quote: '`'`''`''`'`''`' Not so easy to type (maybe not even on the standard layout of the keyboard) (and SE uses ` ` for its own purposes; it was a pain writing this answer :)
{ "source": [ "https://unix.stackexchange.com/questions/5778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
5,788
I just know that an interrupt is a hardware signal assertion on a processor pin. But I would like to know how the Linux OS handles it. What are all the things that happen when an interrupt occurs?
Here's a high-level view of the low-level processing. I'm describing a simple typical architecture, real architectures can be more complex or differ in ways that don't matter at this level of detail. When an interrupt occurs, the processor looks if interrupts are masked. If they are, nothing happens until they are unmasked. When interrupts become unmasked, if there are any pending interrupts, the processor picks one. Then the processor executes the interrupt by branching to a particular address in memory. The code at that address is called the interrupt handler . When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers). The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts. The interrupt handler must run quickly, because it's preventing any other interrupt from running. In the Linux kernel, interrupt processing is divided in two parts: The “top half” is the interrupt handler. It does the minimum necessary, typically communicate with the hardware and set a flag somewhere in kernel memory. The “bottom half” does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. It can take its time and even block waiting for some other part of the system since it runs with interrupts enabled. As usual on this topic, for more information, read Linux Device Drivers ; chapter 10 is about interrupts.
{ "source": [ "https://unix.stackexchange.com/questions/5788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2871/" ] }
5,832
I make heavy use of screen's "log" command to log the output of a session to a file, when I am making changes in a given environment. I searched through tmux's man page, but couldn't find an equivalent. Is anyone aware of a similar feature in tmux, or do I have to write my own wrapper scripts to do this? EDIT: I'm aware of 'script' and other utilities that allow me to log a session. The reason that screen's functionality is so useful is the ability to define a logfile variable which uses string escapes to uniquely identify each session. e.g. I have a shell function which, given a hostname, will SSH to that host in a new screen window and set the window title to the hostname. When I start a log of that session, it is prefixed with the window title. If this functionality doesn't exist in tmux, I'll have to create a new set of shell functions to set up 'scripts' of sessions I want to log. This isn't hugely difficult, but it may not be worth the effort given that screen does exactly what I need already.
Let me see if I have deciphered your screen configuration correctly: You use something like logfile "%t-screen.log" (probably in a .screenrc file) to configure the name of the log file that will be started later. You use the title <hostname> (C-a A) screen command to set the title of a new window, or you do screen -t <hostname> ssh0 <hostname> to start a new screen session. You use the C-a H (C-a :log) screen command to toggle logging to the configured file. If so, then the following is nearly equivalent (requires tmux 1.3+ to support #W in the pipe-pane shell command; pipe-pane is available in tmux 1.0+): In a configuration file (e.g. .tmux.conf ): bind-key H pipe-pane -o "exec cat >>$HOME/'#W-tmux.log'" Use tmux rename-window <hostname> (C-b ,) to rename an existing window, or use tmux new-window -n <hostname> 'ssh <hostname>' to start a new tmux window, or use tmux new-session -n <hostname> 'ssh <hostname>' to start a new tmux session. Use C-b H to toggle the logging. There is no notification that the log has been toggled, but you could add one if you wanted: bind-key H pipe-pane -o "exec cat >>$HOME/'#W-tmux.log'" \; display-message 'Toggled logging to $HOME/#W-tmux.log' Note: The above line is shown as if it were in a configuration file (either .tmux.conf or one you source ). tmux needs to see both the backslash and the semicolon; if you want to configure this from a shell (e.g. tmux bind-key … ), then you will have to escape or quote both characters appropriately so that they are delivered to tmux intact. There does not seem to be a convenient way to show different messages for toggling on/off when using only a single binding (you might be able to rig something up with if-shell , but it would probably be ugly). If two bindings are acceptable, then try this: bind-key H pipe-pane "exec cat >>$HOME/'#W-tmux.log'" \; display-message 'Started logging to $HOME/#W-tmux.log' bind-key h pipe-pane \; display-message 'Ended logging to $HOME/#W-tmux.log'
{ "source": [ "https://unix.stackexchange.com/questions/5832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2459/" ] }