source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k chars) | response (string, 0 to 28.8k chars) | metadata (dict) |
---|---|---|---|
31,414 | I know that shell scripts just run commands as if they were executed at the command prompt. I'd like to be able to run shell scripts as if they were functions... That is, taking an input value or string into the script. How do I approach doing this? | The shell command and any arguments to that command appear as numbered shell variables: $0 has the string value of the command itself, something like script , ./script , /home/user/bin/script or whatever. Any arguments appear as "$1" , "$2" , "$3" and so on. The count of arguments is in the shell variable "$#" . Common ways of dealing with this involve shell commands getopts and shift . getopts is a lot like the C getopt() library function. shift moves the value of $2 to $1 , $3 to $2 , and so on; $# gets decremented. Code ends up looking at the value of "$1" , doing things using a case … esac to decide on an action, and then doing a shift to move $1 to the next argument. It only ever has to examine $1 , and maybe $# . | {
"source": [
"https://unix.stackexchange.com/questions/31414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15417/"
]
} |
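A minimal sketch of the pattern described above (a hypothetical script, not taken from the thread; the option names -v and -f are made up for illustration):

#!/bin/sh
# Walk the positional parameters with case ... esac and shift.
echo "script: $0, argument count: $#"
while [ "$#" -gt 0 ]; do
    case "$1" in
        -v) verbose=1 ;;            # a simple flag
        -f) file="$2"; shift ;;     # an option that takes a value
        *)  echo "positional argument: $1" ;;
    esac
    shift
done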
31,476 | To all the 'sed' doctors out there: How can you get 'sed' to extract a regular expression it has matched in a
line? In other words, I want just the string corresponding to the regular
expression with all the non-matching characters from the containing line stripped away. I tried using the back-reference feature like below (the regular expression to be isolated gets inserted where the arrow points):

          |
          v
sed -n 's/.*\( \).*/\1/p'

this works for some expressions like sed -n 's/.*\(CONFIG_[a-zA-Z0-9_]*\).*/\1/p' which neatly extracts all macro names starting with 'CONFIG_ ....' ( found in some '*.h' file ) and prints them all out line by line CONFIG_AT91_GPIO
CONFIG_DRIVER_AT91EMAC
.
.
CONFIG_USB_ATMEL
CONFIG_USB_OHCI_NEW
.
etc. BUT the above breaks down for something like sed -n 's/.*\([0-9][0-9]*\).*/\1/p' this always returns single digits like 7
9
.
.
6 rather than extracting a contiguous number field such as 8908078
89670890
.
.
.
23019
.
etc. P.S.: I would be grateful for feedback on how this is achieved in 'sed'.
I know how to do this with 'grep' and 'awk'.
I would like to find out if my - albeit limited - understanding of
'sed' has holes in it and if there is a way to do this in 'sed' which I have simply overlooked. | When a regexp contains groups, there may be more than one way to match a string against it: regexps with groups are ambiguous. For example, consider the regexp ^.*\([0-9][0-9]*\)$ and the string a12 . There are two possibilities: Match a against .* and 2 against [0-9]* ; 1 is matched by [0-9] . Match a1 against .* and the empty string against [0-9]* ; 2 is matched by [0-9] . Sed, like all other regexp tools out there, applies the earliest longest match rule: it first tries to match the first variable-length portion against a string that's as long as possible. If it finds a way to match the rest of the string against the rest of the regexp, fine. Otherwise, sed tries the next longest match for the first variable-length portion and tries again. Here, the match with the longest string first is a1 against .* , so the group only matches 2 . If you want the group to start earlier, some regexp engines let you make the .* less greedy, but sed doesn't have such a feature. So you need to remove the ambiguity with some additional anchor. Specify that the leading .* cannot end with a digit, so that the first digit of the group is the first possible match. If the group of digits cannot be at the beginning of the line: sed -n 's/^.*[^0-9]\([0-9][0-9]*\).*/\1/p' If the group of digits can be at the beginning of the line, and your sed supports the \? operator for optional parts: sed -n 's/^\(.*[^0-9]\)\?\([0-9][0-9]*\).*/\2/p' If the group of digits can be at the beginning of the line, sticking to standard regexp constructs: sed -n -e 's/^.*[^0-9]\([0-9][0-9]*\).*/\1/p' -e t -e 's/^\([0-9][0-9]*\).*/\1/p' By the way, it's that same earliest longest match rule that makes [0-9]* match the digits after the first one, rather than the subsequent .* . Note that if there are multiple sequences of digits on a line, your program will always extract the last sequence of digits, again because of the earliest longest match rule applied to the initial .* . If you want to extract the first sequence of digits, you need to specify that what comes before is a sequence of non-digits. sed -n 's/^[^0-9]*\([0-9][0-9]*\).*$/\1/p' More generally, to extract the first match of a regexp, you need to compute the negation of that regexp. While this is always theoretically possible, the size of the negation grows exponentially with the size of the regexp you're negating, so this is often impractical. Consider your other example: sed -n 's/.*\(CONFIG_[a-zA-Z0-9_]*\).*/\1/p' This example actually exhibits the same issue, but you don't see it on typical inputs. If you feed it hello CONFIG_FOO_CONFIG_BAR , then the command above prints out CONFIG_BAR , not CONFIG_FOO_CONFIG_BAR . There's a way to print the first match with sed, but it's a little tricky: sed -n -e 's/\(CONFIG_[a-zA-Z0-9_]*\).*/\n\1/' -e T -e 's/^.*\n//' -e p (Assuming your sed supports \n to mean a newline in the s replacement text.) This works because sed looks for the earliest match of the regexp, and we don't try to match what precedes the CONFIG_… bit. Since there is no newline inside the line, we can use it as a temporary marker. The T command says to give up if the preceding s command didn't match. When you can't figure out how to do something in sed, turn to awk. The following command prints the earliest longest match of a regexp: awk 'match($0, /[0-9]+/) {print substr($0, RSTART, RLENGTH)}' And if you feel like keeping it simple, use Perl.
perl -l -ne '/[0-9]+/ && print $&' # first match
perl -l -ne '/^.*([0-9]+)/ && print $1' # last match | {
"source": [
"https://unix.stackexchange.com/questions/31476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6918/"
]
} |
31,486 | I am interested in rendering a torrent file into a readable form (to see what files it references, what tracker information it contains etc.). What tool can I use to do just this? | btshowmetainfo , formerly included in the BitTorrent distribution but now largely installed with BitTornado (a fork of the BitTorrent 3.x codebase), does just that. $ btshowmetainfo amd64cd-5.1.2.iso.torrent
btshowmetainfo 20030621 - decode BitTorrent metainfo files
metainfo file.: amd64cd-5.1.2.iso.torrent
info hash.....: e30c05f2330ba4869eefb90bf5978a505303b235
file name.....: amd64cd-5.1.2.iso
file size.....: 253325312 (966 * 262144 + 94208)
announce url..: http://tracker.netbsd.org:6969/announce | {
"source": [
"https://unix.stackexchange.com/questions/31486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
31,531 | I'm parsing a mailbox file that stores e-mail server reports for unsuccessfully delivered e-mail. I wish to extract bad e-mail addresses, so that I remove them from the system. The log file looks like this: ...some content...
The mail system
<[email protected]>: host mx1.hotmail.com[65.54.188.94] said: 550
Requested action not taken: mailbox unavailable (in reply to RCPT TO
command)
...some content...
The mail system
<[email protected]>: host viking.optimumpro.net[79.101.51.82] said: 550
Unknown user (in reply to RCPT TO command)
...some content...
The mail system
<[email protected]>: host mta5.am0.yahoodns.net[74.6.140.64] said: 554
delivery error: dd This user doesn't have a yahoo.com account
([email protected]) [0] - mta1172.mail.sk1.yahoo.com (in reply to end
of DATA command)
...etc. E-mail address comes 2 lines after a line with "The mail system". Using grep like this gives me the "The mail system" line and the next two lines: grep -A 2 "The mail system" mbox_file However, I don't know how to remove the "The mail system" line and the second empty line from this output. I guess I could write PHP/Perl/Python script to do it, but I wonder if this is possible with grep or some other standard tool. I tried to give negative offset to -B parameter: grep -A 2 -B -2 "The mail system" mbox_file But grep complains: grep: -2: invalid context length argument Is there a way to do this with grep? | The simplest way to solve it using grep only, is to pipe one more inverted grep at the end.
For example: grep -A 4 "The mail system" temp.txt | grep -v "The mail system" | grep -v '^\d*$' | {
"source": [
"https://unix.stackexchange.com/questions/31531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11704/"
]
} |
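As an alternative sketch (any POSIX awk should work; mbox_file is the file name from the question), the line two lines below each match can be printed directly, without the extra inverted greps:

awk '/The mail system/ {n = NR + 2} NR == n' mbox_file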
31,549 | I would like to see what hosts are in my known_hosts file but it doesn't appear to be human readable. Is it possible to read it? More specifically there is a host that I can connect to via several names and I want to find out what the fingerprint I expect for it from my known hosts file. Update: I'm using OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009 A line from my known_hosts file looks something like this, |1|guO7PbLLb5FWIpxNZHF03ESTTKg=|r002DA8L2JUYRVykUh7jcVUHeYE= ssh-rsa AAAAB3NzaC1yc2EAAFADAQABAAABAQDWp73ulfigmbbzif051okmDMh5yZt/DlZnsx3DEOYHu3Nu/+THJnUAfkfEc1XkOFiFgbUyK/08Ty0K6ExUaffb1ERfXXyyp63rpCTHOPonSrnK7adl7YoPDd4BcIUZd1Dk7HtuShMmuk4l83X623cr9exbfm+DRaeyFNMFSEkMzztBYIkhpA2DWlDkd90OfVAvyoOrJPxztmIZR82qu/5t2z58sJ6Jm2xdp2ckySgXulq6S4k+hnnGuz2p1klviYCWGJMZfyAB+V+MTjGGD/cj0SkL5v/sa/Fie1zcv1SLs466x3H0kMllz6gAk0/FMi7eULspwnIp65g45qUAL3Oj | You've got HashKnownHosts set to " yes " in your ssh_config file, so the hostnames aren't available in plaintext. If you know the hostname you're looking for ahead of time, you can search for it with: ssh-keygen -H -F hostname
# Or, if SSH runs on port other than 22. Use literal brackets [].
ssh-keygen -H -F '[hostname]:2222' Here's the relevant section from the ssh-keygen(1) man page: -F hostname Search for the specified hostname in a known_hosts file, listing any
occurrences found. This option is useful to find hashed host names
or addresses and may also be used in conjunction with the -H option
to print found keys in a hashed format. | {
"source": [
"https://unix.stackexchange.com/questions/31549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7918/"
]
} |
31,595 | I've read in many places that Linux creates a kernel thread for each user thread in a Java VM. (I see the term "kernel thread" used in two different ways: a thread created to do core OS work and a thread the OS is aware of and schedules to perform user work. I am talking about the latter type.) Is a kernel thread the same as a kernel process, since Linux processes support shared memory spaces between parent and child, or is it truly a different entity? | There is absolutely no difference between a thread and a process on Linux. If you look at clone(2) you will see a set of flags that determine what is shared, and what is not shared, between the threads. Classic processes are just threads that share nothing; you can share what components you want under Linux. This is not the case on other OS implementations, where there are much more substantial differences. | {
"source": [
"https://unix.stackexchange.com/questions/31595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15243/"
]
} |
31,607 | When I try to connect to the Internet using pon provider , I get this error: error sending pppoe packet: Network is down
error receiving pppoe packet: Network is down If I configure the Internet with pppoeconf , then run pon provider , the connection works. I should not have to run pppoeconf every time I turn on my computer. How can I connect to the Internet, with pon without having to run pppoeconf every time? Update: When I installed Debian, the installer could not establish a DHCP connection, so I skipped the "Configure network" option. I have found, running this command allows me to start the Internet, without having to configure pppoeconf again. ifconfig eth0 up
pon dsl-provider Is there some place I should add ifconfig eth0 up so that it begins during startup and shutdown or when I run pon or poff ? | There is absolutely no difference between a thread and a process on Linux. If you look at clone(2) you will see a set of flags that determine what is shared, and what is not shared, between the threads. Classic processes are just threads that share nothing; you can share what components you want under Linux. This is not the case on other OS implementations, where there are much more substantial differences. | {
"source": [
"https://unix.stackexchange.com/questions/31607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
31,669 | I like to create an image backup for the first time I'm backing up a system. After this first time I use rsync to do incremental backups. My usual image backup is as follows: Mount and zero out the empty space: dd if=/dev/zero of=temp.dd bs=1M rm temp.dd umount and dd the drive while compressing it dd if=/dev/hda conv=sync,noerror bs=64K | gzip -c > /mnt/sda1/hda.ddimg.gz to put the system back to normal, I will usually do a gunzip -c /mnt/sda1/hda.img.gz | dd of=/dev/hda conv=sync,noerror bs=64K This is really straightforward and allows me to save the 'whole drive' but really just save the used space. Here is the problem. Let's say I do the above but not on a clean system and don't get the rsync backups going soon enough and there are files that I want to access that are on the image. Let's say I don't have the storage space to actually unzip and dd the image to a drive but want to mount the image to get individual files off of it.... Is this possible? Normally, one wouldn't compress the dd image, which will allow you to just mount the image using -o loop ... but this isn't my case... Any suggestions for mounting the compressed img on the fly? Would using AVFS to 'mount' the gz file then mounting the internal dd.img work (I don't think so... but would need verification...)? | It depends on whether the disk image is a full disk image, or just a partition. Washing the partition(s) If the disk is in good working condition, you will get better compression if you wash the empty space on the disk with zeros. If the disk is failing, skip this step. If you're imaging an entire disk then you will want to wash each of the partitions on the disk. CAUTION: Be careful, you want to set the of to a file in the mounted partition, NOT THE PARTITION ITSELF! mkdir image_source
sudo mount /dev/sda1 image_source
dd if=/dev/zero of=image_source/wash.tmp bs=4M
rm image_source/wash.tmp
sudo umount image_source Making a Partition Image mkdir image
sudo dd if=/dev/sda1 of=image/sda1_backup.img bs=4M Where sda is the name of the device, and 1 is the partition number. Adjust accordingly for your system if you want to image a different device or partition. Making a Whole Disk Image mkdir image
sudo dd if=/dev/sda of=image/sda_backup.img bs=4M Where sda is the name of the device. Adjust accordingly for your system if you want to image a different device. Compression Make a "squashfs" image that contains the full uncompressed image. sudo apt-get install squashfs-tools
mksquashfs image squash.img Streaming Compression To avoid making a separate temporary file the full size of the disk, you can stream into a squashfs image. mkdir empty-dir
mksquashfs empty-dir squash.img -p 'sda_backup.img f 444 root root dd if=/dev/sda bs=4M' Mounting a compressed partition image First mount the squashfs image, then mount the partition image stored in the mounted squashfs image. mkdir squash_mount
sudo mount squash.img squash_mount Now you have the compressed image mounted, mount the image itself (that is inside the squashfs image) mkdir compressed_image
sudo mount squash_mount/sda1_backup.img compressed_image Now your image is mounted under compressed_image . EDIT: If you wanted to simply restore the disk image onto a partition at this point (instead of mounting it to browse/read the contents), just dd the image at squash_mount/sda1_backup.img onto the destination instead of doing mount . Mounting a compressed full disk image This requires you to use a package called kpartx. kpartx allows you to mount individual partitions in a full disk image. sudo apt-get install kpartx First, mount your squashed partition that contains the full disk image mkdir compressed_image
sudo mount squash.img compressed_image Now you need to create devices for each of the partitions in the full disk image: sudo kpartx -a compressed_image/sda_backup.img This will create devices for the partitions in the full disk image at /dev/mapper/loopNpP where N is the number assigned for the loopback device, and P is the partition number. For example: /dev/mapper/loop0p1 . Now you have a way to mount the individual partitions in the full disk image: mkdir fulldisk_part1
sudo mount /dev/mapper/loop0p1 fulldisk_part1 | {
"source": [
"https://unix.stackexchange.com/questions/31669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4631/"
]
} |
31,672 | I would like to reduce the size of the font of the GRUB boot loader. Is it possible and if so how? | After some research based on the answers of @fpmurphy and @hesse, also based on a comprehensive thread at ubuntuforums and on Fedora Wiki , I found out how to reduce the font size of GRUB2. Choose a font, in this example I chose DejaVuSansMono.ttf Convert the font into a format GRUB understands: sudo grub2-mkfont -s 14 -o /boot/grub2/DejaVuSansMono.pf2 /usr/share/fonts/dejavu/DejaVuSansMono.ttf Edit the /etc/default/grub file adding a line: GRUB_FONT=/boot/grub2/DejaVuSansMono.pf2 Update GRUB configuration with: BIOS: sudo grub2-mkconfig -o /boot/grub2/grub.cfg EFI: sudo grub2-mkconfig -o /boot/efi/EFI/{distro}/grub.cfg # distro on RHEL8 is {'redhat'} reboot. The resolution of GRUB display may also affect the size of the font, more on resolution etc. on the ubuntuforums link above. | {
"source": [
"https://unix.stackexchange.com/questions/31672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7892/"
]
} |
31,673 | I've hacked on a lot of shell scripts, and sometimes the simplest things baffle me. Today I ran across a script that made extensive use of the : (colon) bash builtin. The documentation seems simple enough: : (a colon)
: [arguments] Do nothing beyond expanding arguments and performing redirections. The return status is zero. However I have previously only seen this used in demonstrations of shell expansion. The use case in the script I ran across made extensive use of this structure: if [ -f ${file} ]; then
grep some_string ${file} >> otherfile || :
grep other_string ${file} >> otherfile || :
fi There were actually hundreds of greps, but they are just more of the same. No input/output redirects are present other than the simple structure above. No return values are checked later in the script. I am reading this as a useless construct that says "or do nothing". What purpose could ending these greps with "or do nothing" serve? In what case would this construct cause a different outcome than simply leaving off the || : from all instances? | It appears the : s in your script are being used in lieu of true . If grep doesn't find a match in the file, it will return a nonzero exit code; as jw013 mentions in a comment, if errexit is set, probably by -e on the shebang line, the script would exit if any of the grep s fail to find a match. Clearly, that's not what the author wanted, so (s)he added || : to make the exit status of that particular compound command always zero, like the more common (in my experience) || true / || /bin/true . | {
"source": [
"https://unix.stackexchange.com/questions/31673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1925/"
]
} |
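A small demonstration of why this matters under errexit (hypothetical file names; assumes somefile.txt exists):

#!/bin/bash -e
# A grep that finds nothing exits with status 1, which would end the script
# under -e unless the status is masked with "|| :" (or "|| true").
grep no_such_string somefile.txt >> otherfile || :
echo "still running"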
31,679 | I'm trying to increase the maximum number of open files for the current user > ulimit -n
1024 I attempt to increase and fail as follows > ulimit -n 4096
bash: ulimit: open files: cannot modify limit: Operation not permitted So I do the natural thing and try to run with temp permission, but fail > sudo ulimit -n 4096
sudo: ulimit: command not found Questions How to increase ulimit? Why is this happening? Using Fedora 14 | ulimit is a shell built-in, not an external command. It needs to be built in because it acts on the shell process itself, like cd : the limits, like the current directory, are a property of that particular process. sudo bash -c 'ulimit -n 4096' would work, but it would change the limit for the bash process invoked by sudo only, which would not help you. There are two values for each limit: the hard limit and the soft limit. Only root can raise the hard limit; anyone can lower the hard limit, and the soft limit can be modified in either direction with the only constraint that it cannot be higher than the hard limit. The soft limit is the actual value that matters. Therefore you need to arrange that all your processes have a hard limit for open files of at least 4096. You can keep the soft limit at 1024. Before launching that process that requires a lot of files, raise the soft limit. In /etc/security/limits.conf , add the lines paislee hard nofile 4096
paislee soft nofile 1024 where paislee is the name of the user you want to run your process as. In the shell that launches the process for which you want a higher limit, run ulimit -Sn unlimited to raise the soft limit to the hard limit. | {
"source": [
"https://unix.stackexchange.com/questions/31679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12870/"
]
} |
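To inspect both values for open files in the current shell:

ulimit -Hn          # hard limit
ulimit -Sn          # soft limit (the one that is enforced)
ulimit -Sn 4096     # raise the soft limit, allowed only up to the hard limit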
31,695 | I have seen in some screen-shots (can't remember where on the web) that the terminal can display the [username@machine /]$ in bold letters. I'm looking forward to getting this too because I always find myself scrolling through long outputs to find out with difficulty the first line after my command. How can I make the user name etc. bold or coloured? | You should be able to do this by setting the PS1 prompt variable in your ~/.bashrc file like this: PS1='[\u@\h \w]\$ ' To make it colored (and possibly bold - this depends on whether your terminal emulator has enabled it) you need to add escape color codes: PS1='\[\e[1;91m\][\u@\h \w]\$\[\e[0m\] ' Here, everything not being escaped between the 1;91m and 0m parts will be colored in the 1;91 color (bold red). Put these escape codes around different parts of the prompt to use different colors, but remember to reset the colors with 0m or else you will have colored terminal output as well. Remember to source the file afterwards to update the current shell: source ~/.bashrc | {
"source": [
"https://unix.stackexchange.com/questions/31695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7892/"
]
} |
31,723 | I want to see all bash commands that have been run on a Linux server across multiple user accounts. The specific distribution I'm using is CentOS 5.7. Is there a way to globally search .bash_history files on a server or would it be a more home-grown process of locate | cat | grep ? (I shudder just typing that out). | Use getent to enumerate the home directories. getent passwd |
cut -d : -f 6 |
sed 's:$:/.bash_history:' |
xargs -d '\n' grep -s -H -e "$pattern" If your home directories are in a well-known location, it could be as simple as grep -e "$pattern" /home/*/.bash_history Of course, if a user uses a different shell or a different value of HISTFILE , this won't tell you much. Nor will this tell you about commands that weren't executed through a shell, or about aliases and functions and now-removed external commands that were in some user directory early in the user's $PATH . If what you want to know is what commands users have run, you need process accounting or some fancier auditing system; see Monitoring activity on my computer. , How to check how long a process ran after it finished? . | {
"source": [
"https://unix.stackexchange.com/questions/31723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
31,726 | I need to determine an appropriate directory naming structure for a package management system. The original directory structure was not POSIX-compliant in any way and certainly not UNIX-style (you'll notice it's similar to GoboLinux). The structure looked somewhat like this: /Applications - applications for users (but that users have not themselves installed) /System/AppResolve - application resolution (effectively /bin ) /System/LibResolve - library resolution (effectively /lib ) /System/Utilities/Applications - essential applications for system operation /System/Utilities/Libraries - essential libraries for system operation Now I need to find a way to represent this directory structure on a more UNIX-like system. AppResolve and LibResolve aren't an issue since /lib and /bin work fine for this, the issue is with the other directories. Under each of the other directories, applications live in their own folder, so for example you might have this kind of path: /System/Utilities/Applications/Tar/1.22/bin/tar Of course, the /bin/tar symlink would resolve to this binary. So the question is this , I need to take this kind of structure and rearrange it to fit within the UNIX-style of naming directories (particularly so that it works with the existing structure on Linux). I thought of the following, but I think it's repetitive and not very nice: /usr/app/user/applications/... /usr/app/system/applications/Tar/1.22/bin/tar /usr/app/system/libraries/... Suggestions? FOR CLARIFICATION: This isn't asking for a mapping to existing UNIX directories; it's asking for the most appropriate leading path for those "user" and "system" directories given the UNIX-naming convention (3-letter directories, etc.) | Use getent to enumerate the home directories. getent passwd |
cut -d : -f 6 |
sed 's:$:/.bash_history:' |
xargs -d '\n' grep -s -H -e "$pattern" If your home directories are in a well-known location, it could be as simple as grep -e "$pattern" /home/*/.bash_history Of course, if a user uses a different shell or a different value of HISTFILE , this won't tell you much. Nor will this tell you about commands that weren't executed through a shell, or about aliases and functions and now-removed external commands that were in some user directory early in the user's $PATH . If what you want to know is what commands users have run, you need process accounting or some fancier auditing system; see Monitoring activity on my computer. , How to check how long a process ran after it finished? . | {
"source": [
"https://unix.stackexchange.com/questions/31726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15553/"
]
} |
31,753 | I want to grep the output of my ls -l command: -rw-r--r-- 1 root root 1866 Feb 14 07:47 rahmu.file
-rw-r--r-- 1 rahmu user 95653 Feb 14 07:47 foo.file
-rw-r--r-- 1 rahmu user 1073822 Feb 14 21:01 bar.file I want to run grep rahmu on column $3 only, so the output of my grep command should look like this: -rw-r--r-- 1 rahmu user 95653 Feb 14 07:47 foo.file
-rw-r--r-- 1 rahmu user 1073822 Feb 14 21:01 bar.file What's the simplest way to do it? The answer must be portable across many Unices, preferably focusing on Linux and Solaris. NB: I'm not looking for a way to find all the files belonging to a given user. This example was only given to make my question clearer. | One more time awk saves the day! Here's a straightforward way to do it, with a relatively simple syntax: ls -l | awk '{if ($3 == "rahmu") print $0;}' or even simpler: (Thanks to Peter.O in the comments) ls -l | awk '$3 == "rahmu"' | {
"source": [
"https://unix.stackexchange.com/questions/31753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
31,760 | On wikipedia, the article for .sh says: For the .sh file extension type, see Bourne shell . How about other unix shells? I know that the shebang is used inside the file to indicate an interpreter for execution, but I wonder: What are the pros and cons for file extensions vs no file extensions? | I would only call .sh something that is meant to be portable (and hopefully is portable). Otherwise I think it's just better to hide the language. The careful reader will find it in the shebang line anyway. (In practice, .bash or .zsh , etc. suffixes are rarely used.) | {
"source": [
"https://unix.stackexchange.com/questions/31760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
31,763 | How do I remove a bridge that has an IP address that was brought up manually and isn't in /etc/network/interfaces? $ ifconfig br100
br100 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:172.16.0.5 Bcast:172.16.0.255 Mask:255.255.255.0 Can't delete it: # brctl delbr br100
bridge br100 is still up; can't delete it Can't bring it down with ifdown: # ifdown br100
ifdown: interface br100 not configured | Figured it out: # ip link set br100 down
# brctl delbr br100 | {
"source": [
"https://unix.stackexchange.com/questions/31763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/663/"
]
} |
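On systems where brctl is deprecated, the same result should be achievable with iproute2 alone (a sketch, assuming the bridge is still named br100):

ip link set br100 down
ip link delete br100 type bridge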
31,766 | How can I extract some of the folders of a stock FreeBSD install from the ISO without actually installing FreeBSD? I am trying to build a number of cross-compilers for various major versions of FreeBSD and need to get the libc and some includes and would like to be able to extract /usr/include and /usr/lib or rather parts of them ... Edit: given the first response I feel I have to elaborate a bit. It is trivial to mount an ISO file and I know how to do that on a number of platforms (e.g. on my Linux box: mount -o loop FreeBSD-7.0-RELEASE-amd64-disc1.iso freebsd7/ ). However, when you mount the installation ISOs for FreeBSD you will notice that they don't contain a folder usr as can be easily seen from the output of find -type d -name usr while inside the folder in which the ISO is mounted. Evidently the files are stored away in some format and I need to be able to parse whatever meta-information exists to find what file is the archive that contains the stuff I need to extract and then extract it. | Figured it out: # ip link set br100 down
# brctl delbr br100 | {
"source": [
"https://unix.stackexchange.com/questions/31766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
31,779 | Is there a tool available for Linux systems that can measure the "quality" of entropy on the system? I know how to count the entropy: cat /proc/sys/kernel/random/entropy_avail And I know that some systems have "good" sources of entropy (hardware entropy keys), and some don't (virtual machines). But is there a tool that can provide a metric as to the "quality" of the entropy on the system? | http://www.fourmilab.ch/random/ works for me. sudo apt-get install ent
head -c 1M /dev/urandom > /tmp/out
ent /tmp/out | {
"source": [
"https://unix.stackexchange.com/questions/31779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14352/"
]
} |
31,807 | I opened a file using vim on Ubuntu, and the following is displayed at the bottom of the screen: "file.py" [noeol] 553L, 16620C What does noeol indicate? | Unix editors like vi and vim will always put newlines ( \n ) at the end of every line - especially including the last line. If there is no end-of-line ( eol ) on the last line, then it is an unusual situation and the file most certainly was not created by a standard UNIX editor. This unusual situation is brought to your notice by the [noeol] flag in the vim editor; other editors probably have similar flags and notifications. | {
"source": [
"https://unix.stackexchange.com/questions/31807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16377/"
]
} |
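To check the last byte of a file from the shell, and append a newline only when it is missing (a sketch using the file name from the question; assumes the file is not empty):

tail -c 1 file.py | od -An -c                        # prints \n if a final newline is present
tail -c 1 file.py | read -r _ || echo >> file.py     # append one only if it is missing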
31,818 | I'm a Windows guy, dual booted recently, and now I'm using Linux Mint 12 When a Windows desktop freezes I refresh , or if I am using a program I use alt + F4 to exit the program or I can use ctrl + alt + delete and this command will allow me to fix the Windows desktop by seeing what program is not responding and so on. Mint freezes fewer times than my XP, but when it does, I don't know what to do, I just shut down the pc and restart it. So is there a command to fix Linux when it freezes? | You can try Ctrl + Alt + * to kill the front process ( Screen locking programs on Xorg 1.11 ) or Ctrl + Alt + F1 to open a terminal, launch a command like ps , top , or htop to see running processes and launch kill on not responding process. Note: if not installed, install htop with sudo apt-get install htop . Also, once done in your Ctrl + Alt + F1 virtual console, return to the desktop with Ctrl + Alt + F7 . | {
"source": [
"https://unix.stackexchange.com/questions/31818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15433/"
]
} |
31,821 | Simply typing http://downloads.sourceforge.net/project/romfs/genromfs/0.5.2/genromfs-0.5.2.tar.gz works fine on a browser, but I'm trying to download from a CLI environment with limited utilities. The following just returns an empty file: curl http://downloads.sourceforge.net/project/romfs/genromfs/0.5.2/genromfs-0.5.2.tar.gz How do I get genromfs-0.5.2.tar.gz from sourceforge using curl? | You can do curl -L http://downloads.sourceforge.net/project/romfs/genromfs/0.5.2/genromfs-0.5.2.tar.gz > genromfs.tar.gz to download the file. The -L tells curl to follow any redirects, which sourceforge normally does. If wget is available, that would be far simpler. | {
"source": [
"https://unix.stackexchange.com/questions/31821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14849/"
]
} |
31,824 | I have detached a process from my terminal, like this: $ process & That terminal is now long closed, but process is still running, and I want to send some commands to that process's stdin. Is that possible? | Yes, it is. First, create a pipe: mkfifo /tmp/fifo .
Use gdb to attach to the process: gdb -p PID Then close stdin: call close (0) ; and open it again: call open ("/tmp/fifo", 0600) Finally, write away (from a different terminal, as gdb will probably hang): echo blah > /tmp/fifo | {
"source": [
"https://unix.stackexchange.com/questions/31824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15487/"
]
} |
31,924 | Maybe I am being daft but can you replace all the characters from where the cursor is to the end of line by one command? Then use . to do the same replace on the next line and so on. | If I understood your question properly, try this: C (that's a capital C) will delete everything from the cursor to the end of the line and put you in INSERT mode, then you write your replacement, leave INSERT mode, use . to repeat the process somewhere else. | {
"source": [
"https://unix.stackexchange.com/questions/31924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10526/"
]
} |
31,947 | Using version control systems I get annoyed at the noise when the diff says No newline at end of file . So I was wondering: How to add a newline at the end of a file to get rid of those messages? | Here you go : sed -i -e '$a\' file And alternatively for OS X sed : sed -i '' -e '$a\' file This adds \n at the end of the file only if it doesn't already end with a newline. So if you run it twice, it will not add another newline: $ cd "$(mktemp -d)"
$ printf foo > test.txt
$ sed -e '$a\' test.txt > test-with-eol.txt
$ diff test*
1c1
< foo
\ No newline at end of file
---
> foo
$ echo $?
1
$ sed -e '$a\' test-with-eol.txt > test-still-with-one-eol.txt
$ diff test-with-eol.txt test-still-with-one-eol.txt
$ echo $?
0 How it works: $ denotes the end of file a\ appends the following text (which is nothing, in this case) on a new line In other words, if the last line contains a character that is not newline, append a newline. | {
"source": [
"https://unix.stackexchange.com/questions/31947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
31,953 | I got some Mercurial repositories which are served by Apache over HTTP. But there is a dedicated user performing some automated tests, which needs to check out the repositories locally. Recently this started to fail, seemingly due to lacking rights for files in the largefiles subdirectory in .hg : -rw------- 2 www-data www-data 6.3M 2012-01-02 17:23 9358b828fb64feb37d3599a8735320687fa8a3b2 Default umask should be 022. And I used the setgid settings for the directories in .hg according to the multiple committers wiki page , which does not cover .hg/largefiles though. However, as far as I understand it, setting the gid for this directory wouldn't solve the problem, that hg sets such restrictive rights on those files. My other user trying to access these repositories via the filesystem is also in the www-data group, thus an additional read right for group would be sufficient to solve my problem. How can I convince Mercurial, or the system to grant this right properly for new files? I am using: Mercurial Distributed SCM (version 2.1) | Here you go : sed -i -e '$a\' file And alternatively for OS X sed : sed -i '' -e '$a\' file This adds \n at the end of the file only if it doesn't already end with a newline. So if you run it twice, it will not add another newline: $ cd "$(mktemp -d)"
$ printf foo > test.txt
$ sed -e '$a\' test.txt > test-with-eol.txt
$ diff test*
1c1
< foo
\ No newline at end of file
---
> foo
$ echo $?
1
$ sed -e '$a\' test-with-eol.txt > test-still-with-one-eol.txt
$ diff test-with-eol.txt test-still-with-one-eol.txt
$ echo $?
0 How it works: $ denotes the end of file a\ appends the following text (which is nothing, in this case) on a new line In other words, if the last line contains a character that is not newline, append a newline. | {
"source": [
"https://unix.stackexchange.com/questions/31953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15651/"
]
} |
31,996 | When I list the details of a key I get output like this: $ gpg --edit-key SOMEID
pub [..] created: [..] expires: [..] usage:SC
[..]
sub [..] created: [..] expires: [..] usage: E Or even usage: SCA on another key (the master-key part). What does these abbreviation in the usage field mean? I can derive that: S -> for signing
E -> for encrypting But what about C and A ? And are there more? And where to look stuff like this up? | Ok, the gpg manual does not seem to mention these abbreviations. Thus, one has to look at the source. For example under Debian/Ubuntu: $ apt-get source gnupg2
$ cd gnupg2-2.0.17
$ cscope -bR
$ grep 'usage: %' . -r --exclude '*po*'
$ vim g10/keyedit.c
jump to usage: %
jump to definition of `usagestr_from_pk` From the code one can derive the following table: ───────────────────────────────
Constant            Character
───────────────────────────────
PUBKEY_USAGE_SIG    S
PUBKEY_USAGE_CERT   C
PUBKEY_USAGE_ENC    E
PUBKEY_USAGE_AUTH   A
─────────────────────────────── Thus, for example, usage: SCA means that the sub-key can be used for signing, for creating a certificate and authentication purposes. | {
"source": [
"https://unix.stackexchange.com/questions/31996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
32,001 | When I open the file in Vim, I see strange ^M characters. Unfortunately, the world's favorite search engine does not do well with special characters in queries, so I'm asking here: What is this ^M character? How could it have got there? How do I get rid of it? | The ^M is a carriage-return character. If you see this, you're probably looking at a file that originated in the DOS/Windows world, where an end-of-line is marked by a carriage return/newline pair, whereas in the Unix world, end-of-line is marked by a single newline. Read this article for more detail, and also the Wikipedia entry for newline . This article discusses how to set up vim to transparently edit files with different end-of-line markers. If you have a file with ^M at the end of some lines and you want to get rid of them, use this in Vim: :s/^M$// (Press Ctrl + V Ctrl + M to insert that ^M .) | {
"source": [
"https://unix.stackexchange.com/questions/32001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4297/"
]
} |
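Outside of vim, the same conversion can be done with common tools (a sketch; dos2unix may need to be installed separately, and the file names are hypothetical):

dos2unix file.txt                      # convert in place
tr -d '\r' < file.txt > file.unix      # or strip the carriage returns with tr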
32,008 | Can I mount a file system image without root permission? Normally I would do: mount -o loop DISK_IMAGE FOLDER Without using sudo or setting the suid on mount , is there any suitable way to do this? I know I can use fusermount with some ISO images, but that is pretty limited, even for ISO images, some of my images cannot be mounted, but mount always works. | You can't mount anything that the administrator hasn't somehow given you permission to mount. Only root can call the mount system call. The reason for this is that there are many ways to escalate privileges through mounting, such as mounting something over a system location, making files appear to belong to another user and exploiting a program that relies on file ownership, creating setuid files, or exploiting bugs in filesystem drivers. The mount command is setuid root. But if you aren't root, it only lets you mount things that are mentioned in fstab . The fusermount command is setuid root. It only lets you mount things through a FUSE driver, and restricts your abilities to provide files with arbitrary ownership or permissions that way (under most setups, all files on a FUSE mount belong to you). Your best bet is to find a FUSE filesystem that's capable of reading your disk image. For ISO 9660 images, try both fuseiso and UMfuse's ISO 9660 support (available under Debian as the fuseiso9660 package ). | {
"source": [
"https://unix.stackexchange.com/questions/32008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
32,018 | I want to know which files have the string $Id$ . grep \$Id\$ my_dir/mylist_of_files returns 0 occurrences. I discovered that I have to use grep \$Id$ my_dir/mylist_of_files Then I see that the $Id is colored in the output, i.e. it has been matched. How could I match the second $ and why doesn't \$Id\$ work. It doesn't matter if the second $ is the last character or not. I use grep 2.9. Before posting my question, I used google... I found an answer To search for a $ (dollar sign) in the file named test2, enter: grep \\$ test2 The \\ (double backslash) characters are necessary in order
to force the shell to pass a \$ (single backslash, dollar sign) to the
grep command. The \ (single backslash) character tells the grep
command to treat the following character (in this example the $) as a
literal character rather than an expression character. Use the fgrep
command to avoid the necessity of using escape characters such as the
backslash. but I don't understand why grep \$Id works and why grep \\$Id\\$ doesn't. I'm a little bit confused... | There's 2 separate issues here. grep uses Basic Regular Expressions (BRE), and $ is a special character in BRE's only at the end of an expression. The consequence of this is that the 2 instances of $ in $Id$ are not equal. The first one is a normal character and the second is an anchor that matches the end of the line. To make the second $ match a literal $ you'll have to backslash escape it, i.e. $Id\$ . Escaping the first $ also works: \$Id\$ , and I prefer this since it looks more consistent.¹ There are two completely unrelated escaping/quoting mechanisms at work here: shell quoting and regex backslash quoting. The problem is many characters that regular expressions use are special to the shell as well, and on top of that the regex escape character, the backslash, is also a shell quoting character. This is why you often see messes involving double backslashes, but I do not recommend using backslashes for shell quoting regular expressions because it is not very readable. Instead, the simplest way to do this is to first put your entire regex inside single quotes as in 'regex' . The single quote is the strongest form of quoting the shell has, so as long as your regex does not contain single quotes, you no longer have to worry about shell quoting and can focus on pure BRE syntax. So, applying this back to your original example, let's throw the correct regex ( \$Id\$ ) inside single quotes. The following should do what you want: grep '\$Id\$' my_dir/my_file The reason \$Id\$ does not work is because after shell quote removal (the more correct way of saying shell quoting) is applied, the regex that grep sees is $Id$ . As explained in (1.), this regex matches a literal $Id only at the end of a line because the first $ is literal while the second is a special anchor character. ¹ Note also that if you ever switch to Extended Regular Expressions (ERE), e.g. if you decided to use egrep (or grep -E ), the $ character is always special. In ERE's $Id$ would never match anything because you can't have characters after the end of a line, so \$Id\$ would be the only way to go. | {
"source": [
"https://unix.stackexchange.com/questions/32018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/393/"
]
} |
32,096 | The environment variable for the bash prompt is called PS1 (usually set in ~/.bashrc). What does PS1 stand for? Is there a PS2? | PS1 stands for "Prompt String One" or "Prompt Statement One", the first prompt string (that you see at a command line). Yes, there is a PS2 and more! Please read this article and the Arch wiki and of course The Bash Reference Manual . | {
"source": [
"https://unix.stackexchange.com/questions/32096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2683/"
]
} |
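PS2 is the secondary prompt, shown when a command continues onto the next line; its default is usually "> ". A quick way to see and change it (illustrative session):

$ echo "line one
> line two"          # the leading "> " here is PS2
$ PS2='... '         # change it for the current shell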
32,155 | I am looking for file "WSFY321.c" in a huge directory hierarchy. Usually I would use GNU find : find . -name "WSFY321.c" But I do not know the case, it could be uppercase, lowercase, or a mix of both. What is the easiest way to find this file? Is there something better than find . | grep -i "WSFY321.c" ? | Recent versions of GNU find have an -iname flag, for case-insensitive name searches. find . -iname "WSFY321.c" | {
"source": [
"https://unix.stackexchange.com/questions/32155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
32,162 | I have a shell scripting problem where I'm given a directory full of input files (each file containing many input lines), and I need to process them individually, redirecting each of their outputs to a unique file (aka, file_1.input needs to be captured in file_1.output, and so on). Pre-parallel , I would just iterate over each file in the directory and perform my command, while doing some sort of timer/counting technique to not overwhelm the processors (assuming that each process had a constant runtime). However, I know that won't always be the case, so using a "parallel" like solution seems the best way to get shell script multi-threading without writing custom code. While I have thought of some ways to whip up parallel to process each of these files (and allowing me to manage my cores efficiently), they all seem hacky. I have what I think is a pretty easy use case, so would prefer to keep it as clean as possible (and nothing in the parallel examples seems to jump out as being my problem). Any help would be appreciated! input directory example: > ls -l input_files/
total 13355
location1.txt
location2.txt
location3.txt
location4.txt
location5.txt Script: > cat proces_script.sh
#!/bin/sh
customScript -c 33 -I -file [inputFile] -a -v 55 > [outputFile] Update :
After reading Ole's answer below, I was able to put together the missing pieces for my own parallel implementation. While his answer is great, here is my addition research and notes I took: Instead of running my full process, I figured to start with a proof of concept command to prove out his solution in my environment. See my two different implementations (and notes): find /home/me/input_files -type f -name *.txt | parallel cat /home/me/input_files/{} '>' /home/me/output_files/{.}.out Uses find (not ls, that can cause issues) to find all applicable files within my input files directory, and then redirects their contents to a separate directory and file. My issue from above was reading and redirecting (the actual script was simple), so replacing the script with cat was a fine proof of concept. parallel cat '>' /home/me/output_files/{.}.out ::: /home/me/input_files/* This second solution uses parallel's input variable paradigm to read the files in, however for a novice, this was much more confusing. For me, using find a and pipe met my needs just fine. | GNU Parallel is designed for this kind of tasks: parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output ::: *.input or: ls | parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output It will run one jobs per CPU core. You can install GNU Parallel simply by: wget https://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem Watch the intro videos for GNU Parallel to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1 | {
"source": [
"https://unix.stackexchange.com/questions/32162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15720/"
]
} |
32,180 | I have seen the following technique used many times on many different shells, to test if a variable is empty: if [ "x$1" = "x" ]; then
# Variable is empty
fi Are there any advantages on using this over the more canonical if [ -z "$1" ] ? Could it be a portability issue? | Some historical shells implemented a very simple parser that could get confused by things like [ -n = "" ] where the first operand to = looks like an operator, and would parse this as [ -n = ] or cause a syntax error. In [ "x$1" = x"" ] , the x prefix ensures that x"$1" cannot possibly look like an operator, and so the only way the shell can parse this test is by treating = as a binary operator. All modern shells, and even most older shells still in operation, follow the POSIX rules which mandate that all test expressions of up to 4 words be parsed correctly. So [ -z "$1" ] is a proper way of testing if $1 is empty , and [ "$x" = "$y" ] is a proper way to test the equality of two variables. Even some current shells can get confused with longer expressions, and a few expressions are actually ambiguous, so avoid using the -a and -o operators to construct longer boolean tests, and instead use separate calls to [ and the shell's own && and || boolean operators. | {
"source": [
"https://unix.stackexchange.com/questions/32180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
32,182 | I have a script which generates a daily report which I want to serve to the so called general public. The problem is I don't want to add to my headaches maintance of a HTTP server (e.g. Apache) with all the configurations and security implications. Is there a dead simple solution for serving one small HTML page without the effort of configuring a full blown HTTP server? | Try SimpleHTTPServer : python -m SimpleHTTPServer
# or the Python 3 equivalent
python3 -m http.server It will serve whatever's in the CWD (e.g. index.html) at http://0.0.0.0:8000 . | {
"source": [
"https://unix.stackexchange.com/questions/32182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2122/"
]
} |
32,206 | Is it possible to set the audio volume using the terminal instead of clicking the speaker icon in the top bar? The reason I want to do this is that my keyboard does not have volume increase/decrease buttons and I find it annoying to reach for the mouse. | For interactive usage you can use alsamixer . For scripting (e.g. binding to key combinations) take a look at amixer . alsamixer is included by default in most systems. To set the master volume use: # Gets a list of simple mixer controls
$ amixer scontrols Then set it to the desired volume, as an example $ amixer sset 'Master' 50% | {
"source": [
"https://unix.stackexchange.com/questions/32206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10061/"
]
} |
32,210 | I'm confused with using single or double brackets. Look at this code: dir="/home/mazimi/VirtualBox VMs"
if [[ -d ${dir} ]]; then
echo "yep"
fi It works perfectly although the string contains a space. But when I change it to single bracket: dir="/home/mazimi/VirtualBox VMs"
if [ -d ${dir} ]; then
echo "yep"
fi It says: ./script.sh: line 5: [: /home/mazimi/VirtualBox: binary operator expected When I change it to: dir="/home/mazimi/VirtualBox VMs"
if [ -d "${dir}" ]; then
echo "yep"
fi It works fine. Can someone explain what is happening? When should I assign double quotes around variables like "${var}" to prevent problems caused by spaces? | The single bracket [ is actually an alias for the test command, it's not syntax. One of the downsides (of many) of the single bracket is that if one or more of the operands it is trying to evaluate return an empty string, it will complain that it was expecting two operands (binary). This is why you see people do [ x$foo = x$blah ] , the x guarantees that the operand will never evaluate to an empty string. The double bracket [[ ]] , on the other hand, is syntax and is much more capable than [ ] . As you found out, it does not have the "missing operand" issue and it also allows for more C-like syntax with >, <, >=, <=, !=, ==, &&, || operators. My recommendation is the following: If your interpreter is #!/bin/bash , then always use [[ ]] It is important to note that [[ ]] is not supported by all POSIX shells, however many shells do support it such as zsh and ksh in addition to bash | {
"source": [
"https://unix.stackexchange.com/questions/32210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13865/"
]
} |
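A small demonstration of the empty-operand problem with an unquoted variable (hypothetical values):

foo=""
[ $foo = bar ] && echo yes       # error: unary operator expected, $foo expands to nothing
[ "$foo" = bar ] && echo yes     # fine once quoted
[[ $foo = bar ]] && echo yes     # fine even unquoted, since [[ ]] is shell syntax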
32,290 | I am new to bash script programming. I want to implement a bash script 'deploymLog', which accepts as input one string argument(name). [root@localhost Desktop]# ./deploymLog.sh name here I want to pass the string argument(name) through command line As an initial step, I need to append the current timestamp along with this input string to a log file say Logone.txt in current directory in the below format: [name]=[System time timestamp1] How it is possible? | $> cat ./deploymLog.sh
#!/bin/bash
name=$1
log_file="Logone.txt"
if [[ -n "$name" ]]; then
echo "$1=$( date +%s )" >> ${log_file}
else
echo "argument error"
fi The first argument from a command line can be found with the positional parameter $1 . [[ -n "$name" ]] tests to see if $name is not empty. date +%s returns the current timestamp in Unix time. The >> operator is used to write to a file by appending to the existing data in the file. $> ./deploymLog.sh tt
$> cat Logone.txt
tt=1329810941
$> ./deploymLog.sh rr
$> cat Logone.txt
tt=1329810941
rr=1329810953 For more readable timestamp you could play with date arguments. | {
"source": [
"https://unix.stackexchange.com/questions/32290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
32,351 | For example, in X session, I can use Ctrl - Alt - L to lock the screen, so it would ask for password to unlock and prevent somebody from messing with mine computer. But if I have an open terminal session on one of the tty's (which I can access with Ctrl - Alt - F1 , for example) - then it is not locked, and somebody can still use it to do some harm. Is there a way to 'lock' that command line (with some background processes running in it, maybe)? | vlock will do as you ask. However, if you want to run background processes, consider screen instead, which will let you also log off and keep processes running in the background, and then reattach -- even when logged in from alternate places. | {
"source": [
"https://unix.stackexchange.com/questions/32351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15487/"
]
} |
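Typical usage, as a sketch (option support may vary between vlock versions; the session name is made up):

vlock              # lock the current virtual console
vlock -a           # lock all virtual consoles
screen -S work     # start a named session; detach with Ctrl-a d
screen -r work     # reattach later, even from a different login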
32,409 | set and shopt are both shell builtins that control various options. I often forget which options are set by which command, and which option sets/unsets ( set -o/+o , shopt -s/-u ). Why are there two different commands that seemingly do the same thing (and have different arguments to do so)? Is there any easy way/mnemonic to remember which options go with which command? | As far as I know, the set -o options are the ones that are inherited from other Bourne-style shells (mostly ksh), and the shopt options are the ones that are specific to bash. There's no logic that I know of. | {
"source": [
"https://unix.stackexchange.com/questions/32409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11750/"
]
} |
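To list and toggle the two families of options:

set -o                   # list the set-style options and their current state
set -o noclobber         # turn one on; set +o noclobber turns it off
shopt                    # list the bash-specific options
shopt -s globstar        # enable an option; shopt -u globstar disables it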
32,435 | I have a CentOS 5.7 VPS using bash as its shell that displays a branded greeting immediately after logging in via SSH. I've been trying to modify it, but can't seem to find where it is in the usual places. So far I've looked in the motd file and checked sshd_config for banner file settings. A banner file is not set. Where else can I look for where the login message might be? | Traditional unix systems display /etc/motd after the user is successfully authenticated and before the user's shell is invoked. On modern systems, this is done by the pam_motd PAM module, which may be configured in /etc/pam.conf or /etc/pam.d/* to display a different file. The ssh server itself may be configured to print /etc/motd if the PrintMotd option is not turned off in /etc/sshd_config . It may also print the time of the previous login if PrintLastLog is not turned off. Another traditional message might tell you whether that You have new mail or You have mail . On systems with PAM, this is done by the pam_mail module. Some shells might print a message about available mail. After the user's shell is launched, the user's startup files may print additional messages. For an interactive login, if the user's login shell is a Bourne-style shell, look in /etc/profile , ~/.profile , plus ~/.bash_profile and ~/.bash_login for bash. For an interactive login to zsh, look in /etc/zprofile , /etc/zlogin , /etc/zshrc , ~/.zprofile , ~/.zlogin and ~/.zshrc . For an interactive login to csh, look in /etc/csh.login and ~/.login . If the user's login shell is bash and this is a non-interactive login, then bash executes ~/.bashrc (which is really odd, since ~/.bashrc is executed for interactive shells only if the shell is not a login shell). This can be a source for trouble; I recommend including the following snippet at the top of ~/.bashrc to bail out if the shell is not interactive: if [[ $- != *i* ]]; then return; fi | {
"source": [
"https://unix.stackexchange.com/questions/32435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
32,460 | Is there any way to exclude commands like rm -rf , svn revert from being stored in bash history? Actually I have issued them by mistake a number of times, even though I had no intention of doing so, just because I was doing things quickly. This has resulted in the loss of a lot of the work I had done so far. | You might want $HISTIGNORE : "A colon-separated list of patterns used to decide which command lines should be saved on the history list." This line in your ~/.bashrc should do the job: HISTIGNORE='rm *:svn revert*' Also, you can add a space at the beginning of a command to exclude it from history. This works as long as $HISTCONTROL contains ignorespace or ignoreboth , which is the default on any distro I've used. | {
"source": [
"https://unix.stackexchange.com/questions/32460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15804/"
]
} |
32,564 | I tried dd, dd_rescue and ddrescue ; all failed. I thought these tools bypass the filesystem and make a bitwise copy. dd is fooled: it finishes but just produces a small file and states it's finished. dd_rescue and ddrescue are complaining about read errors and are intolerably slow. These tools can copy only a few MB in 10 minutes. IMPORTANT: VLC is unable to open the DVD. Why is this happening, and why are these tools failing? AnyDVD makes the disc copyable in a second on a Win7 host. It says that the UDF filesystem is patched; curiously, it also says that there are no bad sectors. The whole disc can be copied in 10 minutes. UPDATE: As for the solution, see my similar question on superuser . | I think that the simplest answer is that dd, dd_rescue and ddrescue are not designed to defeat copy protection schemes. They make no assumptions about the format of the data and try to maintain the integrity of the whole of the original on-disk data. In the case of dd I suspect that it is terminating due to an intentional read error on the disk that is part of the copy protection scheme. It would help to confirm this if you included the commandline output from dd with your question. You may also find some read errors recorded in the dmesg command output. You may get dd to copy more of the file by passing the noerror flag to it on the commandline. However you may find that this just leaves you with corruption in your final image. | {
"source": [
"https://unix.stackexchange.com/questions/32564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4928/"
]
} |
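To illustrate the noerror suggestion from the answer above, a hedged sketch (the device name and block size are assumptions, and this only skips unreadable sectors; it does not defeat deliberate copy protection):

    # Continue past read errors; 'sync' pads unreadable blocks so offsets stay aligned
    dd if=/dev/sr0 of=dvd.iso bs=2048 conv=noerror,sync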
32,569 | This is the error I am getting and it's failing because of a variable whose value is supposed to be 2 (I am getting this using a select * from tabel ).
I am getting spaces in that variable. + 0 !=
2
./setjobs[19]: 0: not found. How do I remove all those spaces or a newline from that variable?
Can tr , sed , or anything help? This is what I am doing: set_jobs_count=$(echo "set heading off;
select count(*) from oppar_db
where ( oppar_db_job_name, oppar_db_job_rec ) in ($var) ;" | \
sqlplus -s ${OP_ORA_USER}/${OP_ORA_PASS}@$OPERATIONAL_DB_NAME) This works as suggested: | sed 's/[[:space:]]//g' But I still obtain a value like : set_jobs_count=
2 | The reason sed 's/[[:space:]]//g' leaves a newline in the output is because the data is presented to sed a line at a time. The substitution can therefore not replace newlines in the data (they are simply not part of the data that sed sees). Instead, you may use tr tr -d '[:space:]' which will remove space characters,
form feeds,
new-lines,
carriage returns,
horizontal tabs,
and vertical tabs. | {
"source": [
"https://unix.stackexchange.com/questions/32569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
32,574 | First, this question is related but definitely not the same as this very nice question: Difference between nohup, disown and & I want to understand something: when I do '&', I'm forking, right? Is it ever useful to do "nohup ... &" or is simply & sufficient? Could someone show a case where you'd be using '&' and still would want to use 'nohup'? | First of all, every time you execute a command, your shell will fork a new process, regardless of whether you run it with & or not. & only means you're running it in the background. Note this is not very accurate. Some commands, like cd , are shell functions and will usually not fork a new process. type cmd will usually tell you whether cmd is an external command or a shell function. type type tells you that type itself is a shell function. nohup is something different. It tells the new process to ignore SIGHUP . It is the signal sent by the kernel when the parent shell is closed. To answer your question, do the following: run emacs & (by default it should run in a separate X window). On the parent shell, run exit . You'll notice that the emacs window is killed, despite running in the background. This is the default behavior and nohup is used precisely to modify that. Running a job in the background (with & or bg , I bet other shells have other syntaxes as well) is a shell feature, stemming from the ability of modern systems to multitask. Instead of forking a new shell instance for every program you want to launch, modern shells ( bash , zsh , ksh , ...) have the ability to manage a list of programs (or jobs ). Only one of them at a time can be in the foreground , meaning it gets the shell focus. I wish someone could expand more on the differences between a process running in the foreground and one in the background (the main one being access to stdin / stdout ). In any case, this does not affect the way the child process reacts to SIGHUP . nohup does. | {
"source": [
"https://unix.stackexchange.com/questions/32574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11923/"
]
} |
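A small sketch of the experiment described in the answer above, using sleep as a stand-in for a real job (nothing here is specific to emacs):

    # Plain background job: it is sent SIGHUP when the terminal hangs up
    # (for example when the SSH connection drops or the terminal window is closed)
    sleep 1000 &

    # nohup'd background job: SIGHUP is ignored, so it survives the hangup
    nohup sleep 1000 > /dev/null 2>&1 &

    # After logging back in, typically only the nohup'd job is still there
    pgrep -f 'sleep 1000'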
32,626 | There is a standard command for file splitting - split . For example, if I want to split a words file in several chunks of 10000 lines, I can use: split -dl 10000 words wrd It would generate several files of the form wrd.01 , wrd.02 and so on. But I want to have a specific extension for those files - for example, I want to get wtd.01.txt , wrd.02.txt files. Is there a way to do it? | This wasn't available back then but with more recent versions ( β₯ 8.16 ) of gnu split one can use the --additional-suffix switch to have control over the resulting extension. From man split : --additional-suffix=SUFFIX
append an additional SUFFIX to file names. so when using that option: split -dl 10000 --additional-suffix=.txt words wrd the resulting pieces will automatically end in .txt : wrd00.txt
wrd01.txt
......... | {
"source": [
"https://unix.stackexchange.com/questions/32626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15487/"
]
} |
32,633 | I have a directory with symbolic links to other directories. I want to archive the symbolic links, not as links but as archives containing the files of directories they refer to, using tar command. How can I do this? | Use the -h tar option. From the man page: -h, --dereference
don't archive symlinks; archive the files they point to | {
"source": [
"https://unix.stackexchange.com/questions/32633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15928/"
]
} |
32,678 | The obvious ls -dR does not work. I am currently using find /path/ -type d -ls but the output is not what I need (plain listing of sub-folders) Is there a way out? | Assuming you just want the name of each directory: find /path/ -type d -print | {
"source": [
"https://unix.stackexchange.com/questions/32678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15346/"
]
} |
32,795 | I am a new eCryptfs user and I have a very basic question that I wasn't able to find an answer to anywhere.
I am interested in using eCryptfs via my Synology NAS that uses Linux. While trying to encrypt my folder (EXT4) via Synology's encryption app (eCryptfs) I encounter errors stating that my filenames cannot exceed 45 characters in length (so, no encryption). If the limit really is 45 characters, eCryptfs may not be a usable tool for most. What is the maximum allowed filename size when encrypting files and folders with eCryptfs?
Is Linux 255 characters? | Full disclosure: I am one of the authors and the current maintainer of the eCryptfs userspace utilities. Great question! Linux has a maximum filename length of 255 characters for most filesystems (including EXT4), and a maximum path of 4096 characters. eCryptfs is a layered filesystem. It stacks on top of another filesystem such as EXT4, which is actually used to write data to the disk. eCryptfs always encrypts file contents, but it can optionally encrypt (obscure) filenames. If filenames are not encrypted, then you can safely write filenames of up to 255 characters and encrypt their contents, as the filenames written to the lower filesystem will simply match. While an attacker would not be able to read the contents of index.html or budget.xls , they would know what file names exist. That may (or may not) leak sensitive information depending on your use case. If filenames are encrypted, things get a little more complicated. eCryptfs prepends a bit of data on the front of the encrypted filename, such that it can identify encrypted filenames definitively. Also, the encryption itself involves "padding" the filename. For instance, I have an encrypted file, ~/.bashrc . This filename is encrypted using my key to: /home/kirkland/.Private/ECRYPTFS_FNEK_ENCRYPTED.dWek2i3.WxXtwxzQdkM23hiYK757lNI7Ydf0xqZ1LpDovrdnruDb1-5l67.EU-- Clearly, that 7 character filename now requires more than 7 characters to be encrypted. Empirically, we have found that character filenames longer than 143 characters start requiring >255 characters to encrypt. So we (as eCryptfs upstream developers) typically recommend you limit your filenames to ~140 characters. Now, all that said, the Synology NAS is a commercial product that embeds and uses eCryptfs and Linux to encrypt and secure data on the device. We (the upstream developers of eCryptfs) have nothing to do with Synology or their products, though we're generally happy to see eCryptfs used in the wild . It seems to me that their recommendation of 45 characters is either a typographical error (from our 140 character recommendation), or simply a far more conservative estimate. | {
"source": [
"https://unix.stackexchange.com/questions/32795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15987/"
]
} |
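Given the ~140-character guideline in the answer above, it may be worth scanning for over-long names before turning on filename encryption; a hedged sketch (the share path is an assumption):

    # Print any path whose final component (the file or directory name) exceeds 140 characters
    find /volume1/share | awk -F/ 'length($NF) > 140'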
32,829 | When I am in vim I can change the tab size with the following command: :set ts=4 Is it possible to set tab size for cat command output too? | The first command here emulates the formatting you see in vim . It intelligently expands tabs to the equivalent number of spaces, based on a tab-STOP (ts) setting of every 4 columns. printf "ab\tcd\tde\n" |expand -t4 Output ab cd de To keep the tabs as tabs and have the tab STOP positions set to every 4th column, then you must change the way the environment works with a tab-char (just as vim does with the :set ts=4 command) For example, in the terminal, you can set the tab STOP to 4 with this command; tabs 4; printf "ab\tcd\tde\n" Output ab cd de | {
"source": [
"https://unix.stackexchange.com/questions/32829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10867/"
]
} |
32,845 | I have this very simple line in a bash script which executes successfully (i.e. producing the _data.tar file), except that it doesn't exclude the sub-directories it is told exclude via the --exclude option: /bin/tar -cf /home/_data.tar --exclude='/data/sub1/*' --exclude='/data/sub2/*' --exclude='/data/sub3/*' --exclude='/data/sub4/*' --exclude='/data/sub5/*' /data Instead, it produces a _data.tar file that contains everything under /data, including the files in the subdirectories I wanted to exclude. Any idea why? and how to fix this? Update I implemented my observations based on the link provided in the first answer below (top level dir first, no whitespace after last exclude): /bin/tar -cf /home/_data.tar /data --exclude='/data/sub1/*' --exclude='/data/sub2/*' --exclude='/data/sub3/*' --exclude='/data/sub4/*' --exclude='/data/sub5/*' But that didn't help. All "excluded" sub-directories are present in the resulting _data.tar file. This is puzzling. Whether this is a bug in current tar (GNU tar 1.23, on a CentOS 6.2, Linux 2.6.32) or "extreme sensitivity" of tar to whitespaces and other easy-to-miss typos, I consider this a bug. For now. This is horrible : I tried the insight suggested below (no trailing /* ) and it still doesn't work in the production script: /bin/tar -cf /home/_data.tar /data --exclude='/data/sub1' --exclude='/data/sub2' --exclude='/data/sub3' --exclude='/data/sub4' I can't see any difference between what I tried and what @Richard Perrin tried, except for the quotes and 2 spaces instead of 1. I am going to try this (must wait for the nightly script to run as the directory to be backed up is huge) and report back. /bin/tar -cf /home/_data.tar /data --exclude=/data/sub1 --exclude=/data/sub2 --exclude=/data/sub3 --exclude=/data/sub4 I am beginning to think that all these tar --exclude sensitivities aren't tar's but something in my environment, but then what could that be? It worked! The last variation tried (no single-quotes and single-space instead of double-space between the --exclude s) tested working. Weird but accepting. Unbelievable! It turns out that an older version of tar (1.15.1) would only exclude if the top-level dir is last on the command line. This is the exact opposite of how version 1.23 requires. FYI. | If you want to exclude an entire directory, your pattern should match that directory, not files within it. Use --exclude=/data/sub1 instead of --exclude='/data/sub1/*' Be careful with quoting the patterns to protect them from shell expansion. See this example, with trouble in the final invocation: $ for i in 0 1 2; do mkdir -p /tmp/data/sub$i; echo foo > /tmp/data/sub$i/foo; done
$ find /tmp/data
/tmp/data
/tmp/data/sub2
/tmp/data/sub2/foo
/tmp/data/sub0
/tmp/data/sub0/foo
/tmp/data/sub1
/tmp/data/sub1/foo
$ tar -zvcf /tmp/_data.tar /tmp/data --exclude='/tmp/data/sub[1-2]'
tar: Removing leading `/' from member names
/tmp/data/
/tmp/data/sub0/
/tmp/data/sub0/foo
$ tar -zvcf /tmp/_data.tar /tmp/data --exclude=/tmp/data/sub[1-2]
tar: Removing leading `/' from member names
/tmp/data/
/tmp/data/sub0/
/tmp/data/sub0/foo
$ echo tar -zvcf /tmp/_data.tar /tmp/data --exclude=/tmp/data/sub[1-2]
tar -zvcf /tmp/_data.tar /tmp/data --exclude=/tmp/data/sub[1-2]
$ tar -zvcf /tmp/_data.tar /tmp/data --exclude /tmp/data/sub[1-2]
tar: Removing leading `/' from member names
/tmp/data/
/tmp/data/sub2/
/tmp/data/sub2/foo
/tmp/data/sub0/
/tmp/data/sub0/foo
/tmp/data/sub2/
tar: Removing leading `/' from hard link targets
/tmp/data/sub2/foo
$ echo tar -zvcf /tmp/_data.tar /tmp/data --exclude /tmp/data/sub[1-2]
tar -zvcf /tmp/_data.tar /tmp/data --exclude /tmp/data/sub1 /tmp/data/sub2 | {
"source": [
"https://unix.stackexchange.com/questions/32845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12498/"
]
} |
32,857 | I've used unix for quite a while, and for the last couple years I've felt like swap is an anachronism, but I'd be curious what other folks think. My argument is roughly this (assuming no global ulimit or twiddling of OOM settings): There is little value in swap because if you need to swap out to disk,
odds are it's going to be a vicious cycle where an app will continue
to eat not only real memory, but swap as well until it gets OOM
reaped (_if_ it gets OOM reaped).
If you have swap enabled, it will only prolong this death march to
the detriment of other processes - and in the worst case where the
process is not OOM reaped in a timely manner, grind the system to
a halt.
Without swap, it will probably get OOM reaped sooner (if at all) For any service that is tuned for performance, I would think that understanding the upper limits of it's resource usage would be key to tuning it in the first place, in which case you know how much you need. I can't imagine many situations (some, but not many) where you'd suspend a running process and it could swap out to make room for other things, but you'd still lose your sockets if you did that, so forcing a core-dump via gcc or copying the memory out by hand would be functionally equivalent. I definitely wouldn't want swap on an embedded system (even though it may have a smaller available ram), if you run out of ram I'd rather have my process die than tear up a million-write-per-sector flash memory drive over a weekend by wear-leveling the sectors down to the nub. Any unix-beards out there have any compelling reasons to keep swap around? UPDATE answers && analysis: CONFIRMED? - fork() requires the same amount of memory for the child process as the parent Modern fork() is copy-on-write for children on POSIX (in general), but Linux and FreeBSD specifically, and I'm assuming OSX by extrapolation. I consider this part of the anachronistic luggage that swap carries with it. Curiously, This Solaris article claims that even though Solaris uses Copy-on-Write with fork(), that you should have at least 2x (!) the parent process size in free virtual memory for the fork() not to crap out in the middle. While the Solaris element somewhat busts the argument that swap is an anachronism - I think enough operating systems correctly implement CoW in such a way that it's more important to dispel the myth than to mark it as further justification for swap. Since. Lets face it. At this point the people who actually use Solaris are probably just Oracle guys. No offense Solaris! CONFIRMED - tmpfs/ramfs files can go to swap as a convienence when tmpfs/ramfs fills up Don't use no-limit tmpfs/ramfs! Always explicitly define the amount of ram that you want tmpfs/ramfs to use. PLAUSABLE - Have a little swap 'just in case' One of my old bosses used to have a great saying, 'you don't know what you don't know' - essentially, you can't make a decision based on information you don't have yet. This is a plausible argument for swap to me, however - I suspect that the types of things you'd do to detect if your application is swapping out or not would be heavier than checking to see if malloc() succeeds or catching the exception from a failed new(). This may be useful in cases where you're running a desktop and have a bunch of random things going on, but even still - if something goes nuts I'd rather it be OOM reaped than diving into swap-hell. That's just me. BUSTED! - On Solaris , swap is important for a couple of reasons tmpfs - states The amount of free space available to tmpfs depends on the amount of unallocated swap space in the system. The size of a tmpfs file system grows to accommodate the files written to it, but there are some inherent tradeoffs for heavy users of tmpfs. Tmpfs shares resources with the data and stack segments of executing programs. The execution of very large programs can be affected if tmpfs file systems are close to their maximum allowable size. Tmpfs is free to allocate all but 4MB of the systemβs swap space. Solaris facts and myths about swap - states Virtual memory today consists of the sum total of physical RAM and swap space on disk. Solaris DOES NOT require any swap space to be configured at all. 
If you choose this option, once RAM is full, you will not be able to start new processes. . I'm unsure if this means that the maximum virtual map you can create is ram+swap , or if you could still do something like mmap() a file larger than ram and rely on mmap()'s lazy initialization.. While you can probably run Solaris these days fine without swap, it seems like it's less friendly about it than other POSIXy operating systems. BUSTED! Popular Linux hibernation tools appear to rely on swap By default, TuxOnIce looks like it relies on swap for hibernation - although other backends exist. However, if you're not running a box that needs to hibernate, I would still stand behind the statement that 'swap is anacronistic on linux' | Don't confuse (the) swap (as a disk area) and (to) swap (as a method to move memory pages from RAM to disk and reciprocally). Excessive swapping is something to be avoided for performance reasons but having a swap area isn't necessarily a problem. On systems, like Linux, that overcommit memory, i.e. that allow processes to allocate more memory than available, running out of RAM with not enough swap to handle the situation will trigger the OOM killer. You have to trust the algorithm used to select the "right" process to kill, and accept one or more of your processes to be killed without being given a chance to shut down properly. Here is a famous analogy that explain why the OOM killer might not be a good idea at all. On systems like Solaris, that do not overcommit memory, i.e that make sure a memory reservation is always backed by virtual memory, whether in RAM or on disk, having a sufficient swap area is absolutely necessary otherwise a potentially significant part of the RAM will be wasted. | {
"source": [
"https://unix.stackexchange.com/questions/32857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14801/"
]
} |
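Two Linux knobs that are useful to check when reasoning about the overcommit and OOM behaviour described in the answer above (these are read-only queries, not recommended settings):

    # Overcommit policy: 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting
    cat /proc/sys/vm/overcommit_memory

    # How aggressively the kernel swaps (0-100; lower values prefer keeping pages in RAM)
    cat /proc/sys/vm/swappiness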
32,907 | Take the following script: #!/bin/sh
sed 's/(127\.0\.1\.1)\s/\1/' [some file] If I try to run this in sh ( dash here), it'll fail because of the parentheses, which need to be escaped. But I don't need to escape the backslashes themselves (between the octets, or in the \s or \1 ). What's the rule here? What about when I need to use {...} or [...] ? Is there a list of what I do and don't need to escape? | There are two levels of interpretation here: the shell, and sed. In the shell, everything between single quotes is interpreted literally, except for single quotes themselves. You can effectively have a single quote between single quotes by writing '\'' (close single quote, one literal single quote, open single quote). Sed uses basic regular expressions . In a BRE, in order to have them treated literally, the characters $.*[\^ need to be quoted by preceding them by a backslash, except inside character sets ( [β¦] ). Letters, digits and (){}+?| must not be quoted (you can get away with quoting some of these in some implementations). The sequences \( , \) , \n , and in some implementations \{ , \} , \+ , \? , \| and other backslash+alphanumerics have special meanings. You can get away with not quoting $^ in some positions in some implementations. Furthermore, you need a backslash before / if it is to appear in the regex outside of bracket expressions. You can choose an alternative character as the delimiter by writing, e.g., s~/dir~/replacement~ or \~/dir~p ; you'll need a backslash before the delimiter if you want to include it in the BRE. If you choose a character that has a special meaning in a BRE and you want to include it literally, you'll need three backslashes; I do not recommend this, as it may behave differently in some implementations. In a nutshell, for sed 's/β¦/β¦/' : Write the regex between single quotes. Use '\'' to end up with a single quote in the regex. Put a backslash before $.*/[\]^ and only those characters (but not inside bracket expressions). (Technically you shouldn't put a backslash before ] but I don't know of an implementation that treats ] and \] differently outside of bracket expressions.) Inside a bracket expression, for - to be treated literally, make sure it is first or last ( [abc-] or [-abc] , not [a-bc] ). Inside a bracket expression, for ^ to be treated literally, make sure it is not first (use [abc^] , not [^abc] ). To include ] in the list of characters matched by a bracket expression, make it the first character (or first after ^ for a negated set): []abc] or [^]abc] (not [abc]] nor [abc\]] ). In the replacement text: & and \ need to be quoted by preceding them by a backslash,
as do the delimiter (usually / ) and newlines. \ followed by a digit has a special meaning. \ followed by a letter has a special meaning (special characters) in some implementations, and \ followed by some other character means \c or c depending on the implementation. With single quotes around the argument ( sed 's/β¦/β¦/' ), use '\'' to put a single quote in the replacement text. If the regex or replacement text comes from a shell variable, remember that The regex is a BRE, not a literal string. In the regex, a newline needs to be expressed as \n (which will never match unless you have other sed code adding newline characters to the pattern space). But note that it won't work inside bracket expressions with some sed implementations. In the replacement text, & , \ and newlines need to be quoted. The delimiter needs to be quoted (but not inside bracket expressions). Use double quotes for interpolation: sed -e "s/$BRE/$REPL/" . | {
"source": [
"https://unix.stackexchange.com/questions/32907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11908/"
]
} |
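A few concrete cases of the quoting rules summarised in the answer above; the file names are placeholders:

    # Literal dots in the regex need a backslash
    sed 's/127\.0\.0\.1/localhost/' hosts.txt

    # Parentheses are literal in a BRE; \(...\) creates a capture group
    sed 's/\(127\.0\.1\.1\)[[:space:]]/\1/' somefile

    # Choosing another delimiter avoids escaping every slash in a path
    sed 's|/usr/local/bin|/opt/bin|' paths.txt

    # In the replacement text, & and \ are special and must be escaped
    sed 's/price/\&euro;/' page.html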
32,908 | File1 contents: line1-file1 "1"
line2-file1 "2"
line3-file1 "3"
line4-file1 "4" File2 contents: line1-file2 "25"
line2-file2 "24"
Pointer-file2 "23"
line4-file2 "22"
line5-file2 "21" After the execution of perl/shell script, File2 content should become: line1-file2 "25"
line2-file2 "24"
line1-file1 "1"
line2-file1 "2"
line3-file1 "3"
line4-file1 "4"
Pointer-file2 "23"
line4-file2 "22"
line5-file2 "21" i.e paste the contents of File1 in File2 before the line that contains "Pointer". | The sed utility has a function for that and can do the modification in-place: sed -i -e '/Pointer/r file1' file2 But this puts your Pointer line above the file1 contents. To put it below, delay line output: sed -n -i -e '/Pointer/r file1' -e 1x -e '2,${x;p}' -e '${x;p}' file2 With GNU sed : sed '/Pointer/e cat file1' file2 As per the manual for the e [command] : Note that, unlike the r command, the output of the command will be
printed immediately; the r command instead delays the output to the
end of the current cycle. | {
"source": [
"https://unix.stackexchange.com/questions/32908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15896/"
]
} |
32,941 | There is probably a simple trick to do this, but I can't figure it out from the man page. How do I cut the last 1MB from a file of undetermined size, for example by using dd ? | Well, assuming you have stat and bash , you can get the file size with: stat -c %s your_file If you want to extract the last $amount bytes for that file with dd , you could: dd if=your_file of=extracted_part \
bs=1 count=$amount \
skip=$(( $(stat -c %s your_file) - $amount )) But the saner approach would be to use tail : tail -c $(( 1024*1024 )) your_file > target_file | {
"source": [
"https://unix.stackexchange.com/questions/32941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14866/"
]
} |
32,946 | A host-ping test that always works correctly from the interactive bash command line in Cygwin , is behaving incorrectly in its own crontab--always selecting the second host--and I can't figure out why: SHELL=/bin/bash
*/29 7-23 * * * [ -n "$(pidof unison)" ] || (partner=5.174.63.120; ping -A -c5 $partner 6 7 |grep -w "ttl" || partner=5.3.172.247; time nice unison-sync $partner &> /tmp/sync.master.dev.log )
which ping
/usr/bin/ping This basically wants 5.174.63.120 to be the 1st choice sync host preference, however if it doesn't ping back at the moment then use 5.3.172.247 instead for this round. If unison is not already running, that is. Yet running from the command-line always works as expected, echoing the first IP address if available, else the second: partner=5.174.63.120; ping -A -c5 $partner 6 7 |grep -w "ttl" || partner=5.3.172.247; echo $partner In Ubuntu this works both in crontab and cli. Is there a better way I can accomplish this, that is both compact and still portable on both my OSes (Linux Ubuntu 11.10 and Cygwin under Windows7,32bit)? Even better, I'd like to generalize my host-checking to more that two, but still code it into a concise crontab line: Use A if reachable, else use B if reachable, else use C if reachable, ... else just use Z. | Well, assuming you have stat and bash , you can get the file size with: stat -c %s your_file If you want to extract the last $amount bytes for that file with dd , you could: dd if=your_file of=extracted_part \
bs=1 count=$amount \
skip=$(( $(stat -c %s your_file) - $amount )) But the saner approach would be to use tail : tail -c $(( 1024*1024 )) your_file > target_file | {
"source": [
"https://unix.stackexchange.com/questions/32946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13496/"
]
} |
32,986 | I'm looking for a behavior that is similar to how vim(1) handles its split windows with ^w = . I know tmux(1) has predefined layouts with ^b Meta[1-5] , but this likely does not have the layout that I am currently using. When splitting a window, it halves the current window for both panes. Split again, and it halves that pane into two new. Combine vertical and horizontal splits, and they continue to halve each other, each new pane getting smaller and smaller. How can I keep the new layout I've just created, but have all vertical and horizontal splits equally balanced, like vim(1) does with ^w = ? | Vertically select-layout even-vertical Usually assigned to: Ctrl + b , Alt + 2 Horizontally select-layout even-horizontal Usually assigned to: Ctrl + b , Alt + 1 | {
"source": [
"https://unix.stackexchange.com/questions/32986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/646/"
]
} |
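If you want something closer to vim's Ctrl-w =, the layout commands from the answer above can be bound to keys of your choice in ~/.tmux.conf; a hedged sketch (the key choices are arbitrary):

    # ~/.tmux.conf -- prefix then '=' balances panes into a grid
    bind = select-layout tiled
    # equal-width columns and equal-height rows
    bind | select-layout even-horizontal
    bind _ select-layout even-vertical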
32,988 | I am running the following command on an ubuntu system: dd if=/dev/random of=rand bs=1K count=2 However, every time I run it, I end up with a file of a different size. Why is this? How can I generate a file of a given size filled with random data? | You're observing a combination of the peculiar behavior of dd with the peculiar behavior of Linux's /dev/random . Both, by the way, are rarely the right tool for the job. Linux's /dev/random returns data sparingly. It is based on the assumption that the entropy in the pseudorandom number generator is extinguished at a very fast rate. Since gathering new entropy is slow, /dev/random typically relinquishes only a few bytes at a time. dd is an old, cranky program initially intended to operate on tape devices. When you tell it to read one block of 1kB, it attempts to read one block. If the read returns less than 1024 bytes, tough, that's all you get. So dd if=/dev/random bs=1K count=2 makes two read(2) calls. Since it's reading from /dev/random , the two read calls typically return only a few bytes, in varying number depending on the available entropy. See also When is dd suitable for copying data? (or, when are read() and write() partial) Unless you're designing an OS installer or cloner, you should never use /dev/random under Linux, always /dev/urandom . The urandom man page is somewhat misleading; /dev/urandom is in fact suitable for cryptography, even to generate long-lived keys. The only restriction with /dev/urandom is that it must be supplied with sufficient entropy; Linux distributions normally save the entropy between reboots, so the only time you might not have enough entropy is on a fresh installation. Entropy does not wear off in practical terms. For more information, read Is a rand from /dev/urandom secure for a login key? and Feeding /dev/random entropy pool? . Most uses of dd are better expressed with tools such as head or tail . If you want 2kB of random bytes, run head -c 2k </dev/urandom >rand With older Linux kernels, you could get away with dd if=/dev/urandom of=rand bs=1k count=2 because /dev/urandom happily returned as many bytes as requested. But this is no longer true since kernel 3.16, it's now limited to 32MB . In general, when you need to use dd to extract a fixed number of bytes and its input is not coming from a regular file or block device, you need to read byte by byte: dd bs=1 count=2048 . | {
"source": [
"https://unix.stackexchange.com/questions/32988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16055/"
]
} |
33,049 | I have piped a line in a bash script and want to check if the pipe has data before feeding it to a program. Searching, I found test -t 0 , but it doesn't work here: it always returns false.
So how to be sure that the pipe has data? Example: echo "string" | [ -t 0 ] && echo "empty" || echo "fill" Output: fill echo "string" | tail -n+2 | [ -t 0 ] && echo "empty" || echo "fill" Output: fill Unlike Standard/canonical way to test whether foregoing pipeline produced output? the input needs to be preserved to pass it to the program. This generalizes How to pipe output from one process to another but only execute if the first has output? which focuses on sending email. | There's no way to peek at the content of a pipe using commonly available shell utilities, nor is there a way to read a character from the pipe then put it back. The only way to know that a pipe has data is to read a byte, and then you have to get that byte to its destination. So do just that: read one byte; if you detect an end of file, then do what you want to do when the input is empty; if you do read a byte then fork what you want to do when the input is not empty, pipe that byte into it, and pipe the rest of the data. first_byte=$(dd bs=1 count=1 2>/dev/null | od -t o1 -A n | tr -dc 0-9)
if [ -z "$first_byte" ]; then
# stuff to do if the input is empty
else
{
printf "\\$first_byte"
cat
} | {
# stuff to do if the input is not empty
}
fi The ifne utility from Joey Hess's moreutils runs a command if its input is not empty. It usually isn't installed by default, but it should be available or easy to build on most unix variants. If the input is empty, ifne does nothing and returns the status 0, which cannot be distinguished from the command running successfully. If you want to do something if the input is empty, you need to arrange for the command not to return 0, which can be done by having the success case return a distinguishable error status: ifne sh -c 'do_stuff_with_input && exit 255'
case $? in
0) echo empty;;
255) echo success;;
*) echo failure;;
esac test -t 0 has nothing to do with this; it tests whether standard input is a terminal. It doesn't say anything one way or the other as to whether any input is available. | {
"source": [
"https://unix.stackexchange.com/questions/33049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14866/"
]
} |
33,110 | I have a file with lines as follows: ...
... <230948203[234]>, ...
... <234[24]>, ...
.. I would like to use sed to remove the characters < , and > from every line. I tried using sed 's/<>,//g' but it didn't work (it didn't change anything). Do I need to escape these special characters? Is it possible to delete multiple characters using a single sed command? | With sed : sed 's|[<>,]||g' With tr : tr -d '<>,' | {
"source": [
"https://unix.stackexchange.com/questions/33110",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
33,121 | The --help output for curl lists a --resolve option, which states --resolve <host:port:address> Force resolve of HOST:PORT to ADDRESS I'm not having any luck getting it to work though. The basic command I'm trying to run is curl --resolve foo.example.com:443:localhost https://foo.example.com:443/ and I keep getting the response Couldn't resolve host 'foo.example.com' . I want do do this because I'm testing a certificate for foo.example.com, but I'm not testing it on the actual server. Instead, I'm testing it on a dummy machine. I know that I can edit /etc/hosts so that foo.example.com resolves to localhost, but this curl approach seems like it would be the "correct" way to go, if I could make it work. Does anybody see what I'm doing wrong here? | It appears that the address needs to be a numeric IP address, not a hostname. Try this: curl --resolve foo.example.com:443:127.0.0.1 https://foo.example.com:443/ | {
"source": [
"https://unix.stackexchange.com/questions/33121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16105/"
]
} |
33,155 | My desktop system is: $ uname -a
Linux xmachine 3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:25:36 UTC 2011 i686 i686 i386 GNU/Linux By running ps a | grep getty , I get this output: 900 tty4 Ss+ 0:00 /sbin/getty -8 38400 tty4
906 tty5 Ss+ 0:00 /sbin/getty -8 38400 tty5
915 tty2 Ss+ 0:00 /sbin/getty -8 38400 tty2
917 tty3 Ss+ 0:00 /sbin/getty -8 38400 tty3
923 tty6 Ss+ 0:00 /sbin/getty -8 38400 tty6
1280 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1
5412 pts/1 S+ 0:00 grep --color=auto getty I think ttyX processes are for input/output devices but I am not quite sure. Based on this I am wondering why there are 6 ttyX processes running? I have only one input device (keyboard) actually. | This happens because one getty process is running on each virtual console (VC) between tty1 and tty6 . You can access them by changing your active virtual console using Alt - F1 through Alt - F6 ( Ctrl - Alt - F1 and Ctrl - Alt - F6 respectively if you are currently within X). For more information on what a TTY is, see this question , and for information on virtual consoles, see this Wikipedia article . | {
"source": [
"https://unix.stackexchange.com/questions/33155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12224/"
]
} |
33,157 | I can't find any documentation about the sed -e switch. For a simple replace, do I need it? e.g. sed 's/foo/bar/' vs sed -e 's/foo/bar/' | From the man page: -e script, --expression=script
add the script to the commands to be executed So you can use multiple -e options to build up a script out of many parts. $ sed -e "s/foo/bar/" -e "/FOO/d" Would first replace foo with bar and then delete every line containing FOO . | {
"source": [
"https://unix.stackexchange.com/questions/33157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16424/"
]
} |
33,191 | I have follow this guide ( Virtualization With KVM On Ubuntu 11.10 ) to setup my KVM (Virtual Machines Software) on my Ubuntu 11.10 Server. However, I didn't setup my VM's IP address when creating the VM, instead of using: vmbuilder kvm ubuntu --suite=oneiric --flavour=virtual --arch=amd64 --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system --ip=192.168.0.101 --gw=192.168.0.1 --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=howtoforge --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/var/lib/libvirt/images/vm1/boot.sh --mem=256 --hostname=vm1 --bridge=br0 I used: (I deleted "--ip=192.168.0.101 --gw=192.168.0.1" from the command line) vmbuilder kvm ubuntu --suite=oneiric --flavour=virtual --arch=amd64 --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=howtoforge --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/var/lib/libvirt/images/vm1/boot.sh --mem=256 --hostname=vm1 --bridge=br0 I have set up the network bridge as the guide instructed and the new VM's interface is connected to the network bridge. I assume the KVM will assign my VM via DHCP but I don't have information on my new VM's IP address, where can I find the VM's IP address and SSH to the new VM? Thanks. [Notes: I have managed to login the VM without knowing the IP address of the VM. Using " Xming + SSH with X Graphic Forwarding " But there is no DHCP ip address assigned to my VM, Besides the above question, I have another question here: How to enable the DCHP on my VM so when I use Xming to login via "virt viewer" I can at least see my IP address is there.] | You can run arp -n to see what IP your virtual machine pick up. In that way, you don't have to login guest vm and type ifconfig . The blog below has more details and includes a perl script which automates finding the address of a virtual machine. Tip: Find the IP address of a virtual machine | {
"source": [
"https://unix.stackexchange.com/questions/33191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16111/"
]
} |
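Expanding slightly on the arp -n tip in the answer above: the guest's MAC address from libvirt can be matched against the host's ARP table. The VM name and MAC shown are placeholders in this hedged sketch:

    # Show the MAC address libvirt assigned to the guest
    virsh dumpxml vm1 | grep -i 'mac address'

    # Look that MAC up in the host's ARP table (the guest must have sent some traffic first)
    arp -n | grep -i '52:54:00:xx:yy:zz'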
33,204 | After I make&make install vim from source, I found many symbolic links of vim in /usr/local/bin , such as evim, rvim, view... The vim(1) man page said that "rvim" is equivalent to "vim -Z" and so on. Now I wonder: can I make such a symbolic link with ln(1) myself, and if so, how? | You can't without writing a bit of code. Those symlink shortcuts work because vim is written that way. It looks at how (with what name) it was started and acts as if it had been called with the appropriate command line options. This behavior is hardcoded in the executable, it is not a trick done by the symbolic link. So if you want to do that yourself, the easiest is to write a small wrapper script that exec s vim with the options you want: #!/bin/sh
exec vim <options you want> "$@" The "$@" at the end simply passes any command line options given to the script along to vim. | {
"source": [
"https://unix.stackexchange.com/questions/33204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9931/"
]
} |
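If you would rather have one wrapper behave differently depending on the name it is invoked by (mimicking what vim itself does), you can dispatch on $0; a small hedged sketch:

    #!/bin/sh
    # Symlink this script as rvim, view, etc.; it picks options based on the name used to call it
    case "$(basename "$0")" in
        rvim) exec vim -Z "$@" ;;   # restricted mode
        view) exec vim -R "$@" ;;   # read-only mode
        *)    exec vim "$@" ;;
    esac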
33,243 | GNOME 3 is not compatible with Compiz, so I cannot use the desktop zoom feature from Compiz. I googled for a long time, and it seems GNOME still does not support desktop zoom -- does it? I am using GNOME 3.2.1 | For those who don't mind using keyboard shortcuts instead of the mouse scrollwheel, here they are (tested with Gnome 3.14.2): Super + Alt + 8 : Toggle zoom enabled/disabled (when enabled, the next two keyboard shortcuts become active) Super + Alt + + : Zoom in (increases zoom factor by 1.0) Super + Alt + - : Zoom out (decreases zoom factor by 1.0, until it is 1.0) (Yes, decreasing zoom factor all the way down to 1.0 will look unzoomed, but zoom (and its keyboard shortcuts) remain active.) | {
"source": [
"https://unix.stackexchange.com/questions/33243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5075/"
]
} |
33,249 | I forgot how many RAM (DIMM) modules are installed on my laptop. I do not want to unscrew it but want to look it up on the console using bash. How do I gather this information? | Since you don't mention, I'm assuming this is on Linux. Any of the following should show you (with root): dmidecode -t memory dmidecode -t 16 lshw -class memory | {
"source": [
"https://unix.stackexchange.com/questions/33249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
33,253 | When I use PuTTY to connect to a specific Linux server via the SSH protocol, and I try to edit a file using the nano editor, the "enter" does not update the display. When I press enter to insert another line break, the following lines do not move down. However, if I save the file and re-open it, the new line breaks are there. I have further discovered that this only occurs on the first 3-4 lines of the file. This particular server runs CentOS 6. When I connect to a different server, I don't have the same problem. Where does the problem lie and how do I fix it? Running infocmp $TERM reports: # Reconstructed via infocmp from file: /usr/share/terminfo/l/linux
linux|linux console,
am, bce, ccc, eo, mir, msgr, xenl, xon,
colors#8, it#8, ncv#18, pairs#64,
acsc=+\020\,\021-\030.^Y0\333`\004a\261f\370g\361h\260i\316j\331k\277l\332m\300n\305o~p\304q\304r\304s_t\303u\264v\301w\302x\263y\363z\362{\343|\330}\234~\376,
bel=^G, blink=\E[5m, bold=\E[1m, civis=\E[?25l\E[?1c,
clear=\E[H\E[J, cnorm=\E[?25h\E[?0c, cr=^M,
csr=\E[%i%p1%d;%p2%dr, cub1=^H, cud1=^J, cuf1=\E[C,
cup=\E[%i%p1%d;%p2%dH, cuu1=\E[A, cvvis=\E[?25h\E[?8c,
dch=\E[%p1%dP, dch1=\E[P, dim=\E[2m, dl=\E[%p1%dM,
dl1=\E[M, ech=\E[%p1%dX, ed=\E[J, el=\E[K, el1=\E[1K,
flash=\E[?5h\E[?5l$<200/>, home=\E[H, hpa=\E[%i%p1%dG,
ht=^I, hts=\EH, ich=\E[%p1%d@, ich1=\E[@, il=\E[%p1%dL,
il1=\E[L, ind=^J,
initc=\E]P%p1%x%p2%{256}%*%{1000}%/%02x%p3%{256}%*%{1000}%/%02x%p4%{256}%*%{1000}%/%02x,
kb2=\E[G, kbs=\177, kcbt=\E[Z, kcub1=\E[D, kcud1=\E[B,
kcuf1=\E[C, kcuu1=\E[A, kdch1=\E[3~, kend=\E[4~, kf1=\E[[A,
kf10=\E[21~, kf11=\E[23~, kf12=\E[24~, kf13=\E[25~,
kf14=\E[26~, kf15=\E[28~, kf16=\E[29~, kf17=\E[31~,
kf18=\E[32~, kf19=\E[33~, kf2=\E[[B, kf20=\E[34~,
kf3=\E[[C, kf4=\E[[D, kf5=\E[[E, kf6=\E[17~, kf7=\E[18~,
kf8=\E[19~, kf9=\E[20~, khome=\E[1~, kich1=\E[2~,
kmous=\E[M, knp=\E[6~, kpp=\E[5~, kspd=^Z, nel=^M^J, oc=\E]R,
op=\E[39;49m, rc=\E8, rev=\E[7m, ri=\EM, rmacs=\E[10m,
rmam=\E[?7l, rmir=\E[4l, rmpch=\E[10m, rmso=\E[27m,
rmul=\E[24m, rs1=\Ec\E]R, sc=\E7, setab=\E[4%p1%dm,
setaf=\E[3%p1%dm,
sgr=\E[0;10%?%p1%t;7%;%?%p2%t;4%;%?%p3%t;7%;%?%p4%t;5%;%?%p5%t;2%;%?%p6%t;1%;%?%p7%t;8%;%?%p9%t;11%;m,
sgr0=\E[0;10m, smacs=\E[11m, smam=\E[?7h, smir=\E[4h,
smpch=\E[11m, smso=\E[7m, smul=\E[4m, tbc=\E[3g,
u6=\E[%i%d;%dR, u7=\E[6n, u8=\E[?6c, u9=\E[c,
vpa=\E[%i%p1%dd, | Since you don't mention, I'm assuming this is on Linux. Any of the following should show you (with root): dmidecode -t memory dmidecode -t 16 lshw -class memory | {
"source": [
"https://unix.stackexchange.com/questions/33253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16167/"
]
} |
33,255 | I am having a hard time defining and running my own shell functions in zsh. I followed the instructions in the official documentation and tried an easy example first, but I failed to get it to work. I have a folder: ~/.my_zsh_functions
echo "Hello world";
} I defined FPATH to include the path to the folder ~/.my_zsh_functions : export FPATH=~/.my_zsh_functions:$FPATH I can confirm that the folder .my_zsh_functions is in the functions path with echo $FPATH or echo $fpath However, if I then try the following from the shell: > autoload my_function
> my_function I get: zsh: my_test_function: function definition file not found Is there anything else I need to do to be able to call my_function ? Update: The answers so far suggest sourcing the file with the zsh functions. This makes sense, but I am bit confused. Shouldn't zsh know where those files are with FPATH ? What is the purpose of autoload then? | In zsh, the function search path ($fpath) defines a set of directories, which contain files that can be marked to be loaded automatically when the function they contain is needed for the first time. Zsh has two modes of autoloading files: Zsh's native way and another mode that resembles ksh's autoloading. The latter is active if the KSH_AUTOLOAD option is set. Zsh's native mode is the default and I will not discuss the other way here (see "man zshmisc" and "man zshoptions" for details about ksh-style autoloading). Okay. Say you got a directory `~/.zfunc' and you want it to be part of the function search path, you do this: fpath=( ~/.zfunc "${fpath[@]}" ) That adds your private directory to the front of the search path. That is important if you want to override functions from zsh's installation with your own (like, when you want to use an updated completion function such as `_git' from zsh's CVS repository with an older installed version of the shell). It is also worth noting, that the directories from `$fpath' are not searched recursively. If you want your private directory to be searched recursively, you will have to take care of that yourself, like this (the following snippet requires the `EXTENDED_GLOB' option to be set): fpath=(
~/.zfuncs
~/.zfuncs/**/*~*/(CVS)#(/N)
"${fpath[@]}"
) It may look cryptic to the untrained eye, but it really just adds all directories below `~/.zfunc' to `$fpath', while ignoring directories called "CVS" (which is useful, if you're planning to checkout a whole function tree from zsh's CVS into your private search path). Let's assume you got a file `~/.zfunc/hello' that contains the following line: printf 'Hello world.\n' All you need to do now is mark the function to be automatically loaded upon its first reference: autoload -Uz hello "What is the -Uz about?", you ask? Well, that's just a set of options that will cause `autoload' to do the right thing, no matter what options are being set otherwise. The `U' disables alias expansion while the function is being loaded and the `z' forces zsh-style autoloading even if `KSH_AUTOLOAD' is set for whatever reason. After that has been taken care of, you can use your new `hello' function: zsh% hello
Hello world. A word about sourcing these files: That's just wrong . If you'd source that `~/.zfunc/hello' file, it would just print "Hello world." once. Nothing more. No function will be defined. And besides, the idea is to only load the function's code when it is required . After the `autoload' call the function's definition is not read. The function is just marked to be autoloaded later as needed. And finally, a note about $FPATH and $fpath: Zsh maintains those as linked parameters. The lower case parameter is an array. The upper case version is a string scalar, that contains the entries from the linked array joined by colons in between the entries. This is done, because handling a list of scalars is way more natural using arrays, while also maintaining backwards compatibility for code that uses the scalar parameter. If you choose to use $FPATH (the scalar one), you need to be careful: FPATH=~/.zfunc:$FPATH will work, while the following will not: FPATH="~/.zfunc:$FPATH" The reason is that tilde expansion is not performed within double quotes. This is likely the source of your problems. If echo $FPATH prints a tilde and not an expanded path then it will not work. To be safe, I'd use $HOME instead of a tilde like this: FPATH="$HOME/.zfunc:$FPATH" That being said, I'd much rather use the array parameter like I did at the top of this explanation. You also shouldn't export the $FPATH parameter. It is only needed by the current shell process and not by any of its children. Update Regarding the contents of files in `$fpath': With zsh-style autoloading, the content of a file is the body of the function it defines. Thus a file named "hello" containing a line echo "Hello world." completely defines a function called "hello". You're free to put hello () { ... } around the code, but that would be superfluous. The claim that one file may only contain one function is not entirely correct, though. Especially if you look at some functions from the function based completion system (compsys) you'll quickly realise that that is a misconception. You are free to define additional functions in a function file. You are also free to do any sort of initialisation, that you may need to do the first time the function is called. However, when you do you will always define a function that is named like the file in the file and call that function at the end of the file, so it gets run the first time the function is referenced. If - with sub-functions - you didn't define a function named like the file within the file, you'd end up with that function having function definitions in it (namely those of the sub-functions in the file). You would effectively be defining all your sub-functions every time you call the function that is named like the file. Normally, that is not what you want, so you'd re-define a function, that's named like the file within the file. I'll include a short skeleton, that will give you an idea of how that works: # Let's again assume that these are the contents of a file called "hello".
# You may run arbitrary code in here, that will run the first time the
# function is referenced. Commonly, that is initialisation code. For example
# the `_tmux' completion function does exactly that.
echo initialising...
# You may also define additional functions in here. Note, that these
# functions are visible in global scope, so it is paramount to take
# care when you're naming these so you do not shadow existing commands or
# redefine existing functions.
hello_helper_one () {
printf 'Hello'
}
hello_helper_two () {
printf 'world.'
}
# Now you should redefine the "hello" function (which currently contains
# all the code from the file) to something that covers its actual
# functionality. After that, the two helper functions along with the core
# function will be defined and visible in global scope.
hello () {
printf '%s %s\n' "$(hello_helper_one)" "$(hello_helper_two)"
}
# Finally run the redefined function with the same arguments as the current
# run. If this is left out, the functionality implemented by the newly
# defined "hello" function is not executed upon its first call. So:
hello "$@" If you'd run this silly example, the first run would look like this: zsh% hello
initialising...
Hello world. And consecutive calls will look like this: zsh% hello
Hello World. I hope this clears things up. (One of the more complex real-world examples that uses all those tricks is the already mentioned ` _tmux ' function from zsh's function based completion system.) | {
"source": [
"https://unix.stackexchange.com/questions/33255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
33,271 | We are attempting to speed up the installation of oracle nodes for RAC installation.
This requires that we get ssh installed and configured so that it doesn't prompt for a password. The problem is:
On first usage, we are prompted for RSA key fingerprint is 96:a9:23:5c:cc:d1:0a:d4:70:22:93:e9:9e:1e:74:2f.
Are you sure you want to continue connecting (yes/no)? yes Is there a way to avoid that or are we doomed to connect at least once on every server from every server manually? | Update December 2019: As Chris Adams pointed out below, there has been a fairly significant change to Openssh in the 6.5 years since this answer was written, and there is a new option that is much safer than the original advice below: * ssh(1): expand the StrictHostKeyChecking option with two new
settings. The first "accept-new" will automatically accept
hitherto-unseen keys but will refuse connections for changed or
invalid hostkeys. This is a safer subset of the current behaviour
of StrictHostKeyChecking=no. The second setting "off", is a synonym
for the current behaviour of StrictHostKeyChecking=no: accept new
host keys, and continue connection for hosts with incorrect
hostkeys. A future release will change the meaning of
StrictHostKeyChecking=no to the behaviour of "accept-new". bz#2400 So instead of setting StrictHostKeyChecking no in your ssh_config file, set StrictHostKeyChecking accept-new . Set StrictHostKeyChecking no in your /etc/ssh/ssh_config file, where it will be a global option used by every user on the server. Or set it in your ~/.ssh/config file, where it will be the default for only the current user. Or you can use it on the command line: ssh -o StrictHostKeyChecking=no -l "$user" "$host" Here's an explanation of how this works from man ssh_config (or see this more current version ): StrictHostKeyChecking If this flag is set to "yes", ssh will never automatically add
host keys to the $HOME/.ssh/known_hosts file, and refuses to
connect to hosts whose host key has changed.
This provides maximum protection
against trojan horse attacks, however, can be
annoying when the /etc/ssh/ssh_known_hosts file is
poorly maintained,
or connections to new hosts are frequently made. This
option forces the user to manually add all new hosts. If this
flag is set to "no", ssh will automatically add new host keys to
the user known hosts files. If this flag is set to "ask", new
host keys will be added to the user known host files only after
the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed.
The host keys of known hosts will be verified automatically in
all cases. The argument must be "yes", "no" or "ask". The
default is "ask". | {
"source": [
"https://unix.stackexchange.com/questions/33271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6184/"
]
} |
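For a cluster-provisioning case like the one in the question above, the option can also be scoped to just those hosts in ~/.ssh/config rather than set globally; the host pattern and known-hosts file name here are assumptions:

    # ~/.ssh/config
    Host node*.rac.example.com
        StrictHostKeyChecking accept-new
        UserKnownHostsFile ~/.ssh/known_hosts.rac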
33,279 | As a follow-up to my previous question , if I have multiple files of the form sw.ras.001
sw.ras.002
sw.ras.003
… What command can I use to remove the ras. in the middle of all the files? | You can do this with a fairly small modification of either answer from the last question: rename s/ras\.// sw.ras.* or for file in sw.ras.*; do
mv "$file" "${file/ras./}"
done Explanation: rename is a perl script that takes a perl regular expression and a list of files, applies the regex to each file's name in turn, and renames each file to the result of applying the regex. In our case, ras is matched literally and \. matches a literal . (as . alone indicates any character other than a newline), and it replaces that with nothing. The for loop takes all files that start with sw.ras. (standard shell glob) and loops over them. ${var/search/replace} searches $var for search and replaces the first occurrence with replace , so ${file/ras./} returns $file with the first ras. removed. The command thus renames the file to the same name minus ras. . Note that with this search and replace, . is taken literally, not as a special character. | {
"source": [
"https://unix.stackexchange.com/questions/33279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15417/"
]
} |
33,284 | Recently, my external hard drive enclosure failed (the hard drive itself powers up in another enclosure). However, as a result, it appears its EXT4 file system is corrupt. The drive has a single partition and uses a GPT partition table (with the label ears ). fdisk -l /dev/sdb shows: Device Boot Start End Blocks Id System
/dev/sdb1 1 1953525167 976762583+ ee GPT testdisk shows the partition is intact: 1 P MS Data 2049 1953524952 1953522904 [ears] ... but the partition fails to mount: $ sudo mount /dev/sdb1 a
mount: you must specify the filesystem type
$ sudo mount -t ext4 /dev/sdb1 a
mount: wrong fs type, bad option, bad superblock on /dev/sdb1, fsck reports an invalid superblock: $ sudo fsck.ext4 /dev/sdb1
e2fsck 1.42 (29-Nov-2011)
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb1 and e2fsck reports a similar error: $ sudo e2fsck /dev/sdb1
Password:
e2fsck 1.42 (29-Nov-2011)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1 dumpe2fs also: $ sudo dumpe2fs /dev/sdb1
dumpe2fs 1.42 (29-Nov-2011)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb1 mke2fs -n (note, -n ) returns the superblocks: $ sudo mke2fs -n /dev/sdb1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61054976 inodes, 244190363 blocks
12209518 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848 ... but trying "e2fsck -b [block]" for each block fails: $ sudo e2fsck -b 71663616 /dev/sdb1
e2fsck 1.42 (29-Nov-2011)
e2fsck: Invalid argument while trying to open /dev/sdb1 However as I understand, these are where the superblocks were when the filesystem was created, which does not necessarily mean they are still intact. I've also ran a testdisk deep search if anyone can decypher the log. It mentions many entry like: recover_EXT2: s_block_group_nr=1/7452, s_mnt_count=6/20,
s_blocks_per_group=32768, s_inodes_per_group=8192
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 244190363
recover_EXT2: part_size 1953522904
recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed Running e2fsck with those values gives: e2fsck: Bad magic number in super-block while trying to open /dev/sdb1 I tried that with all superblocks in the testdisk.log for i in $(grep e2fsck testdisk.log | uniq | cut -d " " -f 4); do
sudo e2fsck -b $i -B 4096 /dev/sdb1
done ... all with the same e2fsck error message. In my last attempt, I tried different filesystem offsets. For each offset i , where i is one of 31744, 32768, 1048064, 1049088: $ sudo losetup -v -o $i /dev/loop0 /dev/sdb ... and running testdisk /dev/loop0 , I didn't find anything interesting. I've been fairly exhaustive, but is there any way to recover the file system without resorting to low-level file recovery tools ( foremost / photorec )? | Unfortunately, I was unable to recover the file system and had to resort to lower-level data recovery techniques (nicely summarised in Ubuntu's Data Recovery wiki entry), of which Sleuth Kit proved most useful. Marking as answered for cleanliness' sake. | {
"source": [
"https://unix.stackexchange.com/questions/33284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/635/"
]
} |
33,336 | I remote copied a file to my laptop using: scp someFile [email protected]:/home/USER/put/it/some/where/oh/damn/you/here I want to be able to autocomplete the remote path by hitting tab. | Make sure that you've turned on the fancy autocompletion. On many distributions, this means your ~/.bashrc needs to contain . /etc/bash_completion . You'll need to have passwordless authentication set up, i.e. with a key that's already loaded in ssh-agent . Establishing an SSH connection is slow, so you can considerably speed up completions by establishing a connection once and for all and using that connection thereafter. The relatively complicated way to do that is to open a master SSH connection with ssh -N -M target-host after setting up master-slave connections in ~/.ssh/config ; see Multiple ssh sessions in single command for instructions (you need the ControlMaster and ControlPath options). The simple method is to mount the remote filesystem over SSHFS and use cp with normal shell completion. mkdir ~/remote
sshfs USER@SERVER:/home/USER ~/remote
cp -p someFile ~/remote/put/it/some/where/oh/damn/you/here | {
"source": [
"https://unix.stackexchange.com/questions/33336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
33,339 | I'm trying to use the curl command to access a http url with a exclamation mark ( ! ) in its path. e.g: curl -v "http://example.org/!287s87asdjh2/somepath/someresource" the console replies with bash: ... event not found . What is going on here? and what would be the proper syntax to escape the exclamation mark? | The exclamation mark is part of history expansion in bash. To use it you need it enclosed in single quotes (eg: 'http://example.org/!132' ). You might try to directly escape it with a backslash ( \ ) before the character (eg: "http://example.org/\!132" ). However, even though a backslash before the exclamation mark does prevent history expansion, the backslash is not removed in such a case. So it's better to use single quotes, so you're not passing a literal backslash to curl as part of the URL. | {
"source": [
"https://unix.stackexchange.com/questions/33339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16211/"
]
} |
33,389 | In bash you can repeat the last command by entering !! , or the third last command !-3 for example. Is there a quick way to repeat the last 3 commands, without having to type out !-1; !-2; !-3 explicitly? | fc -N -1 Where the -N is the last N commands you want to repeat. This will open an editor with the last N commands in it. You can edit the commands as desired and when you close the editor, they will all be run in sequence. | {
"source": [
"https://unix.stackexchange.com/questions/33389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16346/"
]
} |
33,394 | I logged into a Linux server (RHEL) from a Linux desktop. There is no error message at login, but I failed to launch firefox and see the following error message: [myname@myserver ~]$ firefox &
[1] 8806
[myname@myserver ~]$ X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
The application 'firefox' lost its connection to the display localhost:11.0;
most likely the X server was shut down or you killed/destroyed
the application. I tried to run the following command [myname@myserver ~]$ xhost + but get the following error message: X11 connection rejected because of wrong authentication.
X connection to localhost:11.0 broken (explicit kill or server shutdown). I also tried to run [myname@myserver ~]$ echo $DISPLAY and got the following result localhost:11.0 I tried to search this problem from SO but I had no luck. What is the problem and how can I make the firefox work? It seems that the X window cannot be opened. | fc -N -1 Where the -N is the last N commands you want to repeat. This will open an editor with the last N commands in it. You can edit the commands as desired and when you close the editor, they will all be run in sequence. | {
"source": [
"https://unix.stackexchange.com/questions/33394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43800/"
]
} |
33,396 | I have recently installed XUbuntu 11.10 64bit, but I am having problem compiling the most simple pthread example. Here is the code pthread_simple.c : #include <stdio.h>
#include <pthread.h>
main() {
pthread_t f2_thread, f1_thread;
void *f2(), *f1();
int i1,i2;
i1 = 1;
i2 = 2;
pthread_create(&f1_thread,NULL,f1,&i1);
pthread_create(&f2_thread,NULL,f2,&i2);
pthread_join(f1_thread,NULL);
pthread_join(f2_thread,NULL);
}
void *f1(int *x){
int i;
i = *x;
sleep(1);
printf("f1: %d",i);
pthread_exit(0);
}
void *f2(int *x){
int i;
i = *x;
sleep(1);
printf("f2: %d",i);
pthread_exit(0);
} And here is the compile command gcc -lpthread pthread_simple.c The results: lptang@tlp-linux:~/test/test-pthread$ gcc -lpthread pthread_simple.c
/tmp/ccmV0LdM.o: In function `main':
pthread_simple.c:(.text+0x2c): undefined reference to `pthread_create'
pthread_simple.c:(.text+0x46): undefined reference to `pthread_create'
pthread_simple.c:(.text+0x57): undefined reference to `pthread_join'
pthread_simple.c:(.text+0x68): undefined reference to `pthread_join'
collect2: ld returned 1 exit status Does anyone know what's causing the problem? | Recent versions of gcc require that libraries follow the object or source files on the command line. So to compile this it should be: gcc pthread_simple.c -lpthread Normally, though, pthread code is compiled this way: gcc -pthread pthread_simple.c | {
"source": [
"https://unix.stackexchange.com/questions/33396",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16240/"
]
} |
33,450 | How can I check if hyperthreading is enabled on a Linux machine, using a perl script to check for it? I'm trying the following way: dmidecode -t processor | grep HTT Let me know if I'm on right track. | I have always just used the following and looked at 'Thread(s) per core:'. hostname:~ # lscpu
Architecture: x86_64
CPU(s): 24
Thread(s) per core: 2 <-- here
Core(s) per socket: 6
CPU socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Stepping: 2
CPU MHz: 1596.000
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K Note, however, this technique will fail if any logical processor has been turned off with a simple echo 0 > /sys/devices/system/cpu/cpuX/online | {
"source": [
"https://unix.stackexchange.com/questions/33450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16270/"
]
} |
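A hedged shell sketch of the same check (it parses lscpu rather than the dmidecode approach mentioned in the question, and assumes the label text matches the output shown above):
# prints "hyperthreading enabled" when each core exposes more than one thread
threads=$(lscpu | awk -F: '/^Thread\(s\) per core/ {gsub(/ /, "", $2); print $2}')
if [ "${threads:-1}" -gt 1 ]; then
    echo "hyperthreading enabled"
else
    echo "hyperthreading disabled or not present"
fi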
33,502 | Should LVM be used for the partitions when creating VM images (e.g., KVM images)? It seems like it adds complexity if you want to, say, mount a qcow2 image in the host if the image has LVM partitions. On the other hand, it doesn't seem like the advantages of LVM partitions are as significant on a VM image, since it's much easier to take a VM offline and resize partitions than it is for a physical system. | "It depends." If you are on an environment that you control (vmware or kvm or whatever), and can make your own decisions about disk performance QoS, then I'd recommend not using LVM inside your VMs. It doesn't buy you much flexibility that you couldn't get at the hypervisor level. Remember, the hypervisor is already effectively performing these tasks. If you want to be able to arbitrarily resize file systems (a fine idea), just create a separate virtual disk for each filesystem. One thing you might think of as you go down this road. You don't even necessarily need to put partitions on your virtual disks this way. For example, you can create a virtual disk for /home ; it is /dev/vdc inside your vm. When creating the filesystem, just do something like mke2fs -j /dev/vdc instead of specifying a partition. This is a fine idea, but...most tools (and other admins who come after you) will expect to see partitions on every disk. I'd recommend just putting a single partition on the disk and be done with it. It does mean one more step when resizing the filesystem, though. And don't forget to properly align your partitions - starting the first partition at 1MB is a good rule of thumb. All that said - Doing this all at the hypervisor level means that you probably have to reboot the VM to resize partitions. Using LVM would allow you to hot-add a virtual disk (presuming your hypervisor/OS combination allows this), and expand the filesystem without a reboot. This is definitely a plus. Meanwhile, if you are using a cloud provider, it's more subtle. I don't know much about Azure, GCP, or any of the smaller players, so I can't help there. With AWS you can follow my advice above and you'll often be just fine. You can (now) increase the size of EBS volumes (virtual disks) on-the-fly, and resize partitions, etc. However, in the general case, it might make sense to put everything on a single big EBS volume, and use LVM (or, I suppose, plain partitions). Amazon gives you an IOPS limit on each volume. By default, this limit scales with the size of the volume. e.g., for gp2 volumes you get 3 IOPS per GiB (minimum of 100 IOPS). See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html For most workloads, you will want all your available IOPS to be available to any filesystem, depending on the need at the moment. So it makes sense to make one big EBS volume, get all your IOPS in one bucket, and partition/LVM it up. Example: 3 disks with independent filesystems/swap areas, each 100GB in size. Each gets 300 IOPS. Performance is limited to 300 IOPS on each disk. 1 disk, 300GB in size. LVM partitions on the disk of 100GB each. The disk gets 900 IOPS. Any of the partitions can use all 900 IOPS. | {
"source": [
"https://unix.stackexchange.com/questions/33502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/663/"
]
} |
33,555 | May i know the max partition size supported by an Linux system. And how much logical and primary Partition as we can create in an disk installed by linux system? | How Many Partitions I believe other, faster and better people have already answered this perfectly. :) There Is Always One More Limit For the following discussion, always remember that limits are theoretical. Actual limitations are often less than the theoretical limits because either other theoretical limits constrain things. (PCs are very, very complex things indeed these days) there are always more bugs. (this answer not excluded) When Limits are Violated What happens when these limits are violated isn't simple, either. For instance, back in the days of 10GB disks, you could have multi-gigabyte partitions, but some machines couldn't boot code stored after the 1,024th cylinder. This is why so many Linux installers still insist on a separate, small /boot partition in the beginning of the disk. Once you managed to boot, things were just fine. Size of partitions: MS-DOS Partition Table (MBR) MS-DOS stores partitions in a (start,size) format, each of which is 32 bits wide. Each number used to encode cylinder-head-sector co-ordinates in the olden days. Now it simply includes an arbitrary sector number (the disk manages the translation from that to medium-specific co-ordinates). The kernel source for the βMS-DOSβ partition type suggests partition sizes are 32 bits wide, in sectors. Which gives us 2^32 * 512, or 2^41 bytes, or 2^21 binary Megabytes, or 2,097,152 Megabytes, or 2,048 Gigabytes, or 2 Terabytes (minus one sector). GUID Partition Table (GPT) If you're using the GUID Partition Table (GPT) disk label, your partition table is stored as a (start,end) pair. Both are 8 bytes long (64 bits), which allows for quite a lot more than you're likely to ever use: 2^64 512-byte sectors, or 2^73 bytes (8 binary zettabytes), or 2^33 terabytes. If you're booting off of a UEFI ROM rather than the traditional CP/M-era BIOS, you've already got GPT. If not you can always choose to use GPT as your disklabel. If you have a newish disk, you really should. Sector Sizes A sector has been 512 bytes for a long while. This is set to change to 4,096 bytes. Many disks already have this, but emulate 512 byte sectors. When the change comes to the foreground and the allocation unit becomes 4,096 byte sectors, and LBAs address 4,096 byte sectors, all the sizes above will change by 3 binary orders of magnitude: multiply them all by 8 to get the new, scary values. Logical Volume Manager If you use LVM, whatever volume you make must also be supported by LVM, since it sits between your partitions and filesystems. According to the LVM2 FAQ , LVM2 supports up to 8EB (exabytes) on Linux 2.6 on 64-bit architectures; 16TB (terabytes) on Linux 2.6 running on 32-bit architectures; and 2TB on Linux 2.4. Filesystem Limits Of course, these are the size limits per partition (or LVM volume), which is what you're asking. But the point of having partitions is usually to store filesystems, and filesystems have their own limits. In fact, what types of limits a filesystem has depends on the filesystem itself! The only global limits are the maximum size of the filesystem and the maximum size of each file in it. EXT4 allows partitions up to 16TB per file and 1EB (exabyte) per volume. However, it uses 32-bit block numbers, so you'd need to increase the default 4,096-byte block size. 
This may not be possible on your kernel and architecture, so 16TB per volume may be more realistic on a PC. ZFS allows 16EB files and 16EB volumes, but doubtless it has its own other, unforeseen limits too. Wikipedia has a very nice table of these limits for most filesystems known to man . In Practice If you're using Linux 2.6 or newer on 64-bit machines and GPT partitions, it looks like you should only worry about the choice of filesystem and its limits. Even then, it really shouldn't worry you that much. You probably shouldn't be creating single files of 16TB anyway, and 1 exabyte (1,048,576 TB) will be a surreal limitation for a while. If you're using MBR, and need more than 2 binary terabytes, you should switch to UEFI and GPT because you're operating under a 2TB-per-partition limit (this may be less than trivial on an already deployed computer) Please note that I'm an old fart, and I use binary units when I'm calculating multiples of powers of two. Disk manufacturers like to cheat (and have convinced us they always did this, even though we know they didn't) by using decimal units. So the largest β2TBβ disk is still smaller than 2 binary terabytes, and you won't have trouble. Unless you use Logical Volume Manager or RAID-0. | {
"source": [
"https://unix.stackexchange.com/questions/33555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13636/"
]
} |
33,557 | I have an already established ssh connection between two machines. Is there a way to send commands to the remote machine from a shell script that is run on the local machine, using the already open connection, and without starting another ssh session? | It's very simple with recent enough versions of OpenSSH if you plan in advance. Open a master connection the first time. For subsequent connections, route slave connections through the existing master connection. In your ~/.ssh/config , set up connection sharing to happen automatically: ControlMaster auto
ControlPath ~/.ssh/control:%h:%p:%r If you start an ssh session to the same (user, port, machine) as an existing connection, the second session will be tunneled over the first. Establishing the second connection requires no new authentication and is very fast. | {
"source": [
"https://unix.stackexchange.com/questions/33557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16309/"
]
} |
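For reference, a typical ~/.ssh/config stanza for the approach above; the ControlPersist line is optional (it needs a reasonably recent OpenSSH) and keeps the master connection alive for a while after the first session exits:
cat >> ~/.ssh/config <<'EOF'
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control:%h:%p:%r
    ControlPersist 10m
EOF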
33,596 | I'm trying to make a PCMCIA tuner card work in my headless home server, running Debian Squeeze. Now, as I have very big troubles finding the correct command line to capture, transcode end stream the video to the network using VLC, I decided to go step by step, and work first on local output. That's where the problem comes in: there seems to be no framebuffer device (/dev/fb0) to access for displaying graphics on the attached screen! And indeed I noticed I don't have the Linux penguin image at boot (didn't pay attention before as screen is attached, but always off, and anyway computer is always on). As I'm not very familiar with Linux graphics, I would like to understand: Is this related to my particular hardware (see below)? Or is it specific to Debian Squeeze/ a kernel version/... ? Is there some driver I need to manually install/load? Now some general information: The computer has no dedicated graphic card, but an embedded graphic chipset (Intel G31 Express), embedded on the motherboard (Gigabyte G31M-ES2L) I don't want to install a full featured X server, just have a framebuffer device for this particular test Any ideas/comments on the issue? | I can address your question, having previously worked with the Linux FB. How Linux Does Its FB. First you need to have FrameBuffer support in your kernel, corresponding to your hardware. Most modern distributions have support via kernel modules. It does not matter if your distro comes preconfigured with a boot logo, I don't use one and have FB support. It does not matter if you have a dedicated graphics card, integrated will work as long as the Hardware Framebuffer is supported. You don't need X, which is the the most enticing aspect of having the FrameBuffer. Some people don't know better, so they advocated some form of X to workaround their misunderstandings. You don't need to work with the FB directly, which many people incorrectly assume. A very awesome library for developing with FrameBuffer is DirectFB it even has some basic acceleration support. I always suggest at least checking it out, if you are starting a full-featured FB based project (Web Browser, Game, GUI ...) Specific To Your Hardware Use the Vesa Generic FrameBuffer, its modules is called vesafb . You can load it, if you have it available, with the commands modprobe vesafb . many distributions preconfigure it disabled, you can check in /etc/modprobe.d/ . blacklist vesafb might need to be commented out with a # , in a blacklist-framebuffer.conf or other blacklist file. The Best option, is a Hardware specific KMS driver. The main one for Intel is Intel GMA, not sure what its modules are named. You will need to read up about it from your distro documents. This is the best performing FB option, I personally would always go KMS first if possible. Use the Legacy Hardware specific FB Drivers, Not recommended as they are sometimes buggy. I would avoid this option, unless last-resort necessary. I believe this covers all your questions, and should provide the information to get that /dev/fb0 device available. Anything more specific would need distribution details, and if you are somewhat experienced, RTFM should be all you need. (after reading this). I hope I have helped, Your lucky your asking about one of my topics! This is a neglected subject on UNIX-SE, as not everybody (knowingly) uses the Linux FrameBuffer. NOTE: UvesaFB Or VesaFB? You may have read people use uvesafb over vesafb , as it had better performance. 
This WAS generally true, but not in a modern distro with modern Hardware. If your Graphics Hardware supports protected mode VESA (VESA >= 2.0 ), and you have a somewhat recent kernel vesafb is now a better choice. | {
"source": [
"https://unix.stackexchange.com/questions/33596",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16324/"
]
} |
33,598 | An application contacts an Internet host on each start of it. It seems to take just an instant. I don't know if it is HTTP, plain TCP or UDP or what is used but tend to suppose HTTP. What tool can I use to track this connection? | I can address your question, having previously worked with the Linux FB. How Linux Does Its FB. First you need to have FrameBuffer support in your kernel, corresponding to your hardware. Most modern distributions have support via kernel modules. It does not matter if your distro comes preconfigured with a boot logo, I don't use one and have FB support. It does not matter if you have a dedicated graphics card, integrated will work as long as the Hardware Framebuffer is supported. You don't need X, which is the the most enticing aspect of having the FrameBuffer. Some people don't know better, so they advocated some form of X to workaround their misunderstandings. You don't need to work with the FB directly, which many people incorrectly assume. A very awesome library for developing with FrameBuffer is DirectFB it even has some basic acceleration support. I always suggest at least checking it out, if you are starting a full-featured FB based project (Web Browser, Game, GUI ...) Specific To Your Hardware Use the Vesa Generic FrameBuffer, its modules is called vesafb . You can load it, if you have it available, with the commands modprobe vesafb . many distributions preconfigure it disabled, you can check in /etc/modprobe.d/ . blacklist vesafb might need to be commented out with a # , in a blacklist-framebuffer.conf or other blacklist file. The Best option, is a Hardware specific KMS driver. The main one for Intel is Intel GMA, not sure what its modules are named. You will need to read up about it from your distro documents. This is the best performing FB option, I personally would always go KMS first if possible. Use the Legacy Hardware specific FB Drivers, Not recommended as they are sometimes buggy. I would avoid this option, unless last-resort necessary. I believe this covers all your questions, and should provide the information to get that /dev/fb0 device available. Anything more specific would need distribution details, and if you are somewhat experienced, RTFM should be all you need. (after reading this). I hope I have helped, Your lucky your asking about one of my topics! This is a neglected subject on UNIX-SE, as not everybody (knowingly) uses the Linux FrameBuffer. NOTE: UvesaFB Or VesaFB? You may have read people use uvesafb over vesafb , as it had better performance. This WAS generally true, but not in a modern distro with modern Hardware. If your Graphics Hardware supports protected mode VESA (VESA >= 2.0 ), and you have a somewhat recent kernel vesafb is now a better choice. | {
"source": [
"https://unix.stackexchange.com/questions/33598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
33,617 | I've just set up a new machine with Ubuntu Oneiric 11.10 and then run apt-get update
apt-get upgrade
apt-get install git Now if I run git --version it tells me I have git version 1.7.5.4 but on my local machine I have the much newer git version 1.7.9.2 I know I can install from source to get the newest version, but I thought that it was a good idea to use the package manager as much as possible to keep everything standardized. So is it possible to use apt-get to get a newer version of git , and what is the right way to do it? | Here are the commands you need to run, if you just want to get it done: sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version As of Dec 2018, I got git 2.20.1 that way, while the version in the Ubuntu Xenial repositories was 2.7.4. If your system doesn't have add-apt-repository , you can install it via: sudo apt-get install python-software-properties software-properties-common | {
"source": [
"https://unix.stackexchange.com/questions/33617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
33,629 | How can I create a new file and fill it with 1 Gigabyte worth of random data? I need this to test some software. I would prefer to use /dev/random or /dev/urandom . | On most unices: head -c 1G </dev/urandom >myfile If your head doesn't understand the G suffix you can specify the size in bytes: head -c 1073741824 </dev/urandom >myfile If your head doesn't understand the -c option (it's common but not POSIX; you probably have OpenBSD): dd bs=1024 count=1048576 </dev/urandom >myfile Do not use /dev/random on Linux, use /dev/urandom . | {
"source": [
"https://unix.stackexchange.com/questions/33629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4/"
]
} |
33,638 | I have a number of files, I want to check that all those files have the same content. What command line could I use to check that? Usage could be something like: $ diffseveral file1 file2 file3 file4 Result: All files equals OR Files are not all equals | With GNU diff, pass one of the files as an argument to --from-file and any number of others as operand: $ diff -q --from-file file1 file2 file3 file4; echo $?
0
$ echo >>file3
$ diff -q --from-file file1 file2 file3 file4; echo $?
Files file1 and file3 differ
1 You can use globbing as usual. For example, if the current directory contains file1 , file2 , file3 and file4 , the following example compares file2 , file3 and file4 to file1 . $ diff -q --from-file file*; echo $? Note that the 'from file' must be a regular file, not a pipe, because diff will read it multiple times. | {
"source": [
"https://unix.stackexchange.com/questions/33638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
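As a hedged sketch of the diffseveral interface the question wished for, built on the --from-file approach from the answer (the function name and messages are invented):
diffseveral() {
    first=$1
    shift
    if diff -q --from-file "$first" "$@" >/dev/null; then
        echo "All files equal"
    else
        echo "Files are not all equal"
        return 1
    fi
}
# usage: diffseveral file1 file2 file3 file4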
33,650 | I'm reading from a serial port connected to a gps device sending nmea strings. A simplified invocation to illustrate my point: $ awk '{ print $0 }' /dev/ttyPSC9
GPGGA,073651.000,6310.1043,N,01436.1539,E,1,07,1.0,340.2,M,33.3,M,,0000*56
$GPGSA,A,3,28,22,09,27,01,19,17,,,,,,2.3,1.0,2.0*39
$GPRMC,073651.000,A,6310.1043,N,01436.1539,E,0.42,163.42,070312,,,A*67
GPGGA,073652.000,6310.1043,N,01436.1540,E,1,07,1.0,339.2,M,33.3,M,,0000*55
$GPGSA,A,3,28,22,09,27,01,19,17,,,,,,2.3,1.0,2.0*39 If I instead try to read from a pipe, awk buffers the input before sending it to stdout. $ cat /dev/ttyPSC9 | awk '{ print $0 }'
<long pause>
GPGGA,073651.000,6310.1043,N,01436.1539,E,1,07,1.0,340.2,M,33.3,M,,0000*56
$GPGSA,A,3,28,22,09,27,01,19,17,,,,,,2.3,1.0,2.0*39
$GPRMC,073651.000,A,6310.1043,N,01436.1539,E,0.42,163.42,070312,,,A*67
GPGGA,073652.000,6310.1043,N,01436.1540,E,1,07,1.0,339.2,M,33.3,M,,0000*55
$GPGSA,A,3,28,22,09,27,01,19,17,,,,,,2.3,1.0,2.0*39 How can I avoid the buffering? Edit : Kyle Jones suggested that cat is buffering it's output but that doesn't appear to be happening: $ strace cat /dev/ttyPSC9 | awk '{ print $0 }'
write(1, "2,"..., 2) = 2
read(3, "E"..., 4096) = 1
write(1, "E"..., 1) = 1
read(3, ",0"..., 4096) = 2 When I think about it: I thought that a program used line buffering when writing to a terminal and "regular buffering" for all other cases. Then, why is cat not buffering more? Is the serial port signalling EOF? Then why is cat not terminated? My awk is mawk 1.2. | I know it is an old question, but a one-liner may help those who come here searching: cat /dev/ttyPSC9 | awk '{ print $0; system("")}' system("") does the trick, and is POSIX compliant. Non-posix systems: beware. There exists a more specific function fflush() that does the same, but is not available in older versions of awk. An important piece of information from the docs regarding the use of system("") : gawk treats this use of the system() function as a special case and is
smart enough not to run a shell (or other command interpreter) with
the empty command. Therefore, with gawk, this idiom is not only
useful, it is also efficient. | {
"source": [
"https://unix.stackexchange.com/questions/33650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5694/"
]
} |
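Two hedged variants of the same idea for the setup in that question, one for awks that provide fflush() and one using mawk's own interactive mode (behaviour can differ between awk implementations):
awk '{ print $0; fflush() }' /dev/ttyPSC9
# mawk specifically also has an interactive mode that unbuffers its writes:
mawk -W interactive '{ print $0 }' /dev/ttyPSC9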
33,688 | I am trying to install git. I run the following command: sudo apt-get install git-core git-gui git-doc But receive the following error: sudo: apt-get: command not found What should I do? | Since you're using CentOS 5, the default package manager is yum , not apt-get . To install a program using it, you'd normally use the following command: $ sudo yum install <packagename> However, when trying to install git this way, you'll encounter the following error on CentOS 5: $ sudo yum install git
Setting up Install Process
Parsing package install arguments
No package git available.
Nothing to do This tells you that the package repositories that yum knows about don't contain the required rpms (RPM Package Manager files) to install git . This is presumably because CentOS 5 is based on RHEL 5, which was released in 2007, before git was considered a mature version control system. To get around this problem, we need to add additional repositories to the list that yum uses (We're going to add the RPMforge repository, as per these instructions ). This assumes you want the i386 packages. Test by running uname -i . If you want the x86_64 packages, replace all occurrences of i386 with x86_64 in the following commands First, download the rpmforge-release package: $ wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.3-1.el5.rf.i386.rpm Next, verify and install the package: $ sudo rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ rpm -K rpmforge-release-0.5.3-1.el5.rf.i386.rpm
$ sudo rpm -i rpmforge-release-0.5.3-1.el5.rf.i386.rpm And now we should be able to install git : $ sudo yum install git-gui yum will work out the dependencies, and ask you at relevant points if you want to proceed. Press y for Yes, and n or return for No. | {
"source": [
"https://unix.stackexchange.com/questions/33688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16183/"
]
} |
33,786 | When I start screen , I get a message giving the version, copyright, and bug-reporting email address. I don't want to see this every time I start screen . Searching the man page didn't seem to result in a solution, and I am hoping that the experts here know a way to bypass this info page. | There's a setting for that in screenrc : # Don't display the copyright page
startup_message off # default: on You could set that system-wide (in /etc/screenrc ) or in your ~/.screenrc . | {
"source": [
"https://unix.stackexchange.com/questions/33786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
33,801 | Under what circumstances would ls -lh show a total that is less than the sum of the individual files? For example: $ ls -lh /var/lib/nova/instances/_base
total 100G
-rw-rw-r-- 1 nova nova 4.3M 2012-02-14 14:07 00000001
-rw-rw-r-- 1 nova nova 5.7M 2012-02-14 14:07 00000002
-rw-rw-r-- 1 nova nova 42G 2012-03-08 15:24 1574bddb75c78a6fd2251d61e2993b5146201319.part
-rw-rw-r-- 1 libvirt-qemu kvm 24M 2012-02-14 14:07 77de68daecd823babbb58edb1c8e14d7106e83bb_sm
-rw-r--r-- 1 libvirt-qemu kvm 65G 2012-03-02 12:43 bd307a3ec329e10a2cff8fb87480823da114f8f4
-rw-rw-r-- 1 libvirt-qemu kvm 160G 2012-02-24 16:06 ephemeral_0_160_None
-rw-rw-r-- 1 libvirt-qemu kvm 80G 2012-02-24 22:38 ephemeral_0_80_None
-rw-r--r-- 1 libvirt-qemu kvm 10G 2012-02-24 22:37 fe5dbbcea5ce7e2988b8c69bcfdfde8904aabc1f
-rw-r--r-- 1 libvirt-qemu kvm 10G 2012-02-24 11:09 fe5dbbcea5ce7e2988b8c69bcfdfde8904aabc1f_sm Edit: now showing some extra flags are per request in comment: $ ls -aiFlh /var/lib/nova/instances/_base/
total 143G
29884440 drwxrwxr-x 2 nova nova 4.0K 2012-03-08 15:45 ./
29884427 drwxr-xr-x 6 nova nova 4.0K 2012-03-08 15:05 ../
29884444 -rw-rw-r-- 1 nova nova 4.3M 2012-02-14 14:07 00000001
29884445 -rw-rw-r-- 1 nova nova 5.7M 2012-02-14 14:07 00000002
29884468 -rw-r--r-- 1 nova nova 65G 2012-03-08 15:59 1574bddb75c78a6fd2251d61e2993b5146201319.converted
29884466 -rw-rw-r-- 1 nova nova 58G 2012-03-08 15:35 1574bddb75c78a6fd2251d61e2993b5146201319.part
29884446 -rw-rw-r-- 1 libvirt-qemu kvm 24M 2012-02-14 14:07 77de68daecd823babbb58edb1c8e14d7106e83bb_sm
29884467 -rw-r--r-- 1 libvirt-qemu kvm 65G 2012-03-02 12:43 bd307a3ec329e10a2cff8fb87480823da114f8f4
29884443 -rw-rw-r-- 1 libvirt-qemu kvm 160G 2012-02-24 16:06 ephemeral_0_160_None
29884442 -rw-rw-r-- 1 libvirt-qemu kvm 80G 2012-02-24 22:38 ephemeral_0_80_None
29884447 -rw-r--r-- 1 libvirt-qemu kvm 10G 2012-02-24 22:37 fe5dbbcea5ce7e2988b8c69bcfdfde8904aabc1f
29884441 -rw-r--r-- 1 libvirt-qemu kvm 10G 2012-02-24 11:09 fe5dbbcea5ce7e2988b8c69bcfdfde8904aabc1f_sm | It will happen if you have sparse files: $ mkdir test; cd test
$ truncate -s 1000000000 file-with-zeroes
$ ls -l
total 0
-rw-r--r-- 1 gim gim 1000000000 03-08 22:18 file-with-zeroes A sparse file is a file which has not been populated with filesystem blocks (or only partially). When you read a non-populated zone of a sparse file you will obtain zeros. Such blank zones do not require actual disk space, and the 'total' reported by ls corresponds to the disk space occupied by the files (just like du ). | {
"source": [
"https://unix.stackexchange.com/questions/33801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/663/"
]
} |
33,844 | I'd like to change group id of a specific group. There are so may solution for changing the gid of a file or directories. But that's not what I want. Is there a way to do that? | The GID is the primary identifier of the group. As far as the system is concerned, a different GID is a different group. So to change the GID, you're going to have to modify all the places where that GID is used. You should avoid treating the GID as significant and use group names instead; you can change the name of a group with a single command (on Linux: groupmod -n NEW_GROUP_NAME OLD_GROUP_NAME ). However, if you do really want to change the GID, this is how: First, you may need to log out users in the group and kill processes who have that group as their effective, real or saved group. Change the entry in the group database. On Linux, run groupmod -g NEWGID GROUPNAME . On other systems, use that system's administration tool, or vigr if available, or edit /etc/group as applicable. Change the group of all the files on your system that belong to the old group. find / -gid OLDGID ! -type l -exec chgrp NEWGID {} \; chgrp clears suid and sgid flags, restore those. If you have any archive that uses the old GID, rebuild it. If you have any configuration file or script that references the old GID, update it. Restart all processes that must use the new GID. | {
"source": [
"https://unix.stackexchange.com/questions/33844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11295/"
]
} |
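A condensed, hedged sketch of the sequence above (the group name and IDs are placeholders; run it only after stopping processes that use the old group, and remember to restore any setuid/setgid bits afterwards):
GROUPNAME=mygroup OLDGID=1005 NEWGID=2005
groupmod -g "$NEWGID" "$GROUPNAME"
find / -gid "$OLDGID" ! -type l -exec chgrp "$NEWGID" {} \;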
33,850 | How can I Get a list of all files modified , say 3 months ago.
I checked this question but I was not able to apply it to my scenario.
I am trying this now , it seems to be working , but I know there should be a better way using find. ls -ltR | grep -v '2011-05' | grep -v '2011-06' | grep -v '2011-07' | grep -v '2011-08 | One solution is: find . -type f -mtime 90 That finds files that was last modified 90 days ago (in those 24 hours that started 91 x 24 hours ago and ended 90 x 24 hours ago). find . -type f -mtime -90 finds files that were modified in the last 90 days (or in the future). find . -type f -mtime +90 finds files that were modified at least 91 days ago (at least in POSIX compliant find implementations). | {
"source": [
"https://unix.stackexchange.com/questions/33850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16430/"
]
} |
33,909 | I have a bunch of files from log1 to log164 . I'm trying to LIST the directory (sorted) in a UNIX terminal but the sort functions are only providing the format like this: home:logs Home$ ls -1 | sort
log1.gz
log10.gz
log100.gz
log101.gz
log102.gz
log103.gz
log104.gz
log105.gz
log106.gz
...etc What I want is home:logs Home$ ls -1 | sort
log1.gz
log2.gz
log3.gz
log4.gz
log5.gz
log6.gz
log7.gz
...{more here}
log99.gz
log100.gz
log101.gz
log102.gz
...etc Any suggestions in what I could use to do this? | Why not use the built-in GNU ls feature for this particular case: -v: natural sort of (version) numbers within text For example: ls -1v log* | {
"source": [
"https://unix.stackexchange.com/questions/33909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
33,943 | I want to set up a plugin for the Geany editor on a Debian system. It's a theme changing plugin, so I am following this manual . It says: The simplest way to do this is to copy the contents of the archive
into the ~/.config/geany/filedefs/ folder. I don't understand this. What do they mean by ~/.config ? Is that the default directory where Geany is installed? I have its files at /usr/lib/geany but that doesn't seem to be location they are talking about. | ~ is your home directory, usually /home/username . A file or folder name starting with a . is the Linux version of a hidden file/folder. So ~/.config is a hidden folder within your home directory. Open up your file browser to your home folder, then find the option to show hidden files and folders. If you don't see .config , you'll have to create it. Then navigate into it, find or create the geany folder, go into that, then find or create a folder named filedefs . You can then put the relevant files into there. .config is a convention, defined by XDG Base Directory Specification see also https://stackoverflow.com/questions/1024114/location-of-ini-config-files-in-linux-unix | {
"source": [
"https://unix.stackexchange.com/questions/33943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13654/"
]
} |
34,004 | I am trying to code a shell-script that uses a ssh-connection for doing "heartbeats". I want to terminate the client- and server-side of that connection after a certain timeout (after the connection drops). What I found so far: TCPKeepAlive yes/no for ssh and sshd ClientAliveCountMax for sshd ClientAliveInterval for sshd ServerAliveCountMax for ssh ServerAliveInterval for ssh To change "ClientAliveCountMax" I would have to modify the sshd_config on each target machine (this option is disabled by default). So my question is - can I use "TCPKeepAlive" for my purposes, too (without changing anything else on the source/target machines)? Target operating system is SLES11 SP2 - but I do not think that is relevant here. | You probably want to use the ServerAlive settings for this. They do not require any configuration on the server, and can be set on the command line if you wish. ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=1 $HOST This will send a ssh keepalive message every 5 seconds, and if it comes time to send another keepalive, but a response to the last one wasn't received, then the connection is terminated. The critical difference between ServerAliveInterval and TCPKeepAlive is the layer they operate at. TCPKeepAlive operates on the TCP layer. It sends an empty TCP ACK packet. Firewalls can be configured to ignore these packets, so if you go through a firewall that drops idle connections, these may not keep the connection alive. ServerAliveInterval operates on the ssh layer. It will actually send data through ssh, so the TCP packet has encrypted data in and a firewall can't tell if its a keepalive, or a legitimate packet, so these work better. | {
"source": [
"https://unix.stackexchange.com/questions/34004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11083/"
]
} |
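The same settings can live in a client config file instead of the command line; a hedged example using the values from the answer:
cat >> ~/.ssh/config <<'EOF'
Host *
    ServerAliveInterval 5
    ServerAliveCountMax 1
EOF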
34,008 | Is it possible to use the find command so that it searches the files within a tar.gz archive also using wildcards? like find archive.tar.gz --name *foo* | How about just: $ tar tf archive.tar.gz | grep foo | {
"source": [
"https://unix.stackexchange.com/questions/34008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6425/"
]
} |
34,011 | I need to expand a glob pattern (like ../smth*/* , or /etc/cron*/ ) into a list of files, programmatically. What would be the best way to do it? | Just let it expand inside an array declaration's right side: list=(../smth*/) # grab the list
echo "${#list[@]}" # print array length
echo "${list[@]}" # print array elements
for file in "${list[@]}"; do echo "$file"; done # loop over the array Note that the shell option `nullglob` needs to be set.
It is not set by default.
It prevents an error in case the glob (or one of multiple globs) does not match any name. Set it in bash with shopt -s nullglob or in zsh or yash with set -o nullglob , though in zsh (where nullglob initially came from), you'd rather use the (N) glob qualifier to avoid having to change a global setting: list=( ../smth*/(N) ) The ksh93 equivalent: list=( ~(N)../smth*/ ) | {
"source": [
"https://unix.stackexchange.com/questions/34011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15487/"
]
} |
34,116 | I have a CentOS 5.7 server that will be backing up its files nightly. I am concerned that visitors to the various sites that the server hosts will experience degraded performance while the backup is transferring across the network. Is it possible to limit a process's maximum allowed throughput to a network interface? I would like to limit the SSH-based file transfer to only half of my available bandwidth. This could be on the server or client side; that is, I'd be happy to do this on either the client that initiates the connection or the server that receives the connection. (Unfortunately, I can't add an interface to dedicate to backups. I could increase my available throughput, but that would merely mean that the network transfer would complete faster, but still max the total capacity of the connection while doing it.) Some Background Perhaps some background is in order. Stepping back, I had a problem with not having enough local space to create the backup itself. Enter SSHFS! The backup is saved to what is ostensibly a local drive so that no backup bits are ever on the web server itself. Why is that important? Because that would seem to invalidate the use of the venerable rsync --bwlimit . rsync isn't actually doing the transfer nor can it because I can't even spare the space to save the backup file. I can hear you ask: "So wait, why do you even need to make a backup file? Why not just rsync the source files and folders?" Because an annoying thing called "Plesk" is in the mix! This is my client-facing web host which uses Plesk for convenience. As such, I use Plesk to initiate the backups because Plesk adds all sorts of extra magic to the backup that makes consuming it during a restoration procedure very safe. sad face | You can use iptables to mark a packet (--pid-owner ...), then use tc to shape the traffic.
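A rough, hedged sketch of that combination (the interface name, uid and rates here are invented; note that modern kernels only keep the --uid-owner/--gid-owner forms of the owner match, so matching a single PID may not be available):
# mark outgoing packets belonging to uid 1001, then put marked traffic in a shaped HTB class
iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 10
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 4mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10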
Also "--sid-owner" can be used to include threads and children of that process. http://www.frozentux.net/iptables-tutorial/iptables-tutorial.html#OWNERMATCH Match --pid-owner Kernel 2.3, 2.4, 2.5 and 2.6 Example iptables -A OUTPUT -m owner --pid-owner 78 Explanation This match is used to match packets based on the Process ID (PID) that was responsible for them. This match is a bit harder to use, but one example would be only to allow PID 94 to send packets from the HTTP port (if the HTTP process is not threaded, of course). Alternatively we could write a small script that grabs the PID from a ps output for a specific daemon and then adds a rule for it. For an example, you could have a rule as shown in the Pid-owner.txt example | {
"source": [
"https://unix.stackexchange.com/questions/34116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
34,140 | Is there a way to tell the kernel to give back the free disk space now? Like a write to something in /proc/ ? Using Ubuntu 11.10 with ext4. This is probably an old and very repeated theme.
After hitting 0 space only noticed when my editor couldn't save source code files I have open, which to my horror now have 0 byte size in the folder listing, I went on a deleting spree. I deleted 100's of MB of large files both from user and from root, and did some hardlinking too. Just before I did apt-get clean there was over 900MB in /var/cache/apt/archives, now there is only 108KB: # du
108 /var/cache/apt/archives An hour later still no free space and cannot save my precious files opened in the editor, but notice the disparity below: # sync; df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda4 13915072 13304004 0 100% / Any suggestions? I shut off some services/processes but not sure how to check who might be actively eating disk space. More info # dumpe2fs /dev/sda4
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 884736
Block count: 3534300
Reserved block count: 176715
Free blocks: 422679
Free inodes: 520239
First block: 0
Block size: 4096
Fragment size: 4096 | Check with lsof to see if there are files held open. Space will not be freed until they are closed. sudo /usr/sbin/lsof | grep deleted will tell you which deleted files are still held open. | {
"source": [
"https://unix.stackexchange.com/questions/34140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13496/"
]
} |
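If the lsof output above shows a large deleted file still held open, one hedged follow-up is to truncate it through /proc instead of restarting the service (PID and FD are placeholders taken from that output, and the process must tolerate its file shrinking):
: > /proc/PID/fd/FD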
34,174 | I'm running CentOS 5.7 and I have a backup utility that has the option of dumping its backup file to stdout . The backup file is rather large (multiple gigabytes). The target is an SSHFS filesystem. To ensure that I don't hog the bandwidth and degrade the performance of the network, I would like to limit the speed with which data is written to the "disk". How can I limit the ability of stdout based on a byte number? For example, limiting a process's ability to write to about 768Bps. | You can add a rate limiting tool to your pipeline. For example there is pv which has a rate-limiting option: -L RATE, --rate-limit RATE Limit the transfer to a maximum of RATE bytes per second. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on An alternative is the tool buffer which has: -u microseconds After every write pause for this many microseconds. Defaults to zero. (Surprisingly a small sleep, 100 usecs, after each write can greatly
enhance throughput on some drives.) | {
"source": [
"https://unix.stackexchange.com/questions/34174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
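A hedged sketch of how pv could sit in the backup pipeline from that question (backup_command, the host and the path are placeholders; -L takes bytes per second, matching the 768 figure asked about):
backup_command | pv -q -L 768 | ssh user@backuphost 'cat > /backups/dump.img'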