source_id | question | response | metadata
---|---|---|---|
52,331 | I've been trying to hack a little code extraction script together, but I can't get it to work. My goal is to examine all .txt files in a directory. If a file contains a line which doesn't start with a tab and includes cat.*.c, then extract lines from there (exclusive) to the last line which starts with } (inclusive) and save them to a file with the same name as the source, except with a .c extension. My first stab at trying to find it was this: find . -name "*.txt" -print0 | xargs -0 awk '/[^ \t]cat .*.c/,/[^ \t]}/' I'm not sure why, but the tab matching doesn't work. Obviously I would need to do a bit more. I'll need to loop through the files from find and grab the file directory & name... filename=$(basename "$1")
filename="${filename%.*}"
dirname=`dirname "$1" Firstly, though, I need to figure out how to get the text I want. Is awk an appropriate tool for the job? Would sed / grep be a better choice? Any help is greatly appreciated! Thank you! P.S. I've tried searching around, but the tab issue seems to be unique to me. And lopsided matching (ex/inclusive) seems to be infrequently used also... | I solved the problem. There are two ways: M1 Change the redirection from &>> to 2>&1 . So now crontab -e looks like */1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1 I believe the above works because by default cron is using sh to run the task instead of bash so &>> is not supported by sh . M2 Change the default shell by adding SHELL=/bin/bash in the crontab -e file. | {
"source": [
"https://unix.stackexchange.com/questions/52331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
52,376 | I add this rule: sudo iptables -t nat -A OUTPUT -d a.b.c.d -p tcp \
--dport 1723 -j DNAT --to-destination a.b.c.d:10000 When I restart the computer, the rules are deleted. Why? What can I do to make the rules persist? | There is no option in iptables which will make your rules permanent. But you can use iptables-save and iptables-restore to fulfill your task. First add the iptables rule using the command you gave. Then save the iptables rules to some file like /etc/iptables.conf using the following command: $ iptables-save > /etc/iptables.conf Add the following command in /etc/rc.local to reload the rules on every reboot. $ iptables-restore < /etc/iptables.conf | {
"source": [
"https://unix.stackexchange.com/questions/52376",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16798/"
]
} |
52,534 | Suppose there is a column of numeric values like the following: File1: 1
2
3
3
3
4
4
4
5
6 I want the output: 3
4 That is, only the repeated lines. Are there any command line tools to find this out in Linux? (NB: The values are numerically sorted). | You can use uniq(1) for this if the file is sorted: uniq -d file.txt If the file is not sorted, run it through sort(1) first: sort file.txt | uniq -d This will print out the duplicates only. Technically the input does not need to be in sorted order, but the duplicates in the file need to be consecutive. The usual way to achieve that is to sort the file. | {
"source": [
"https://unix.stackexchange.com/questions/52534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22979/"
]
} |
52,634 | I have been searching for a while and I can't find the definition of a regular file. My path is permanent (I start at / ) and I am connecting with scp root@IP: /path/to/picture.jpg This results in an inquiry for a password and then... scp: .: not a regular file | When copying a directory, you should use the -r option: scp -r root@IP:/path/to/file /path/to/filedestination | {
"source": [
"https://unix.stackexchange.com/questions/52634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26192/"
]
} |
52,643 | I'm using archlinux. It never auto-suspended before a recent system upgrade (maybe I updated the kernel?). I think it is related to laptop-mode or acpid , so I stopped them: /etc/rc.d/laptop-mode stop
/etc/rc.d/acpid stop I also edit /etc/laptop-mode/laptop-mode.conf : ENABLE_LAPTOP_MODE_TOOLS=0 Then I edit /etc/acpi/actions/lm_lid.sh , commented out the last line: # /usr/sbin/laptop_mode auto But all of above don't work. Following lines were found in /var/log/kernel.log (unrelated lines omitted): Oct 23 15:29:20 localhost kernel: [18617.549098] PM: Syncing filesystems ... done.
Oct 23 15:29:20 localhost kernel: [18618.001898] PM: Preparing system for mem sleep
Oct 23 15:29:30 localhost kernel: [18618.039565] Freezing user space processes ... (elapsed 0.01 seconds) done.
Oct 23 15:29:30 localhost kernel: [18618.052596] Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done.
Oct 23 15:29:30 localhost kernel: [18618.065999] PM: Entering mem sleep
Oct 23 15:29:30 localhost kernel: [18618.066167] Suspending console(s) (use no_console_suspend to debug)
Oct 23 15:29:30 localhost kernel: [18618.097917] sd 0:0:0:0: [sda] Synchronizing SCSI cache
Oct 23 15:29:30 localhost kernel: [18618.098103] sd 0:0:0:0: [sda] Stopping disk
Oct 23 15:29:30 localhost kernel: [18618.270537] snd_hda_intel 0000:00:14.2: power state changed by ACPI to D3hot
Oct 23 15:29:30 localhost kernel: [18619.274374] PM: suspend of devices complete after 1196.192 msecs
Oct 23 15:29:30 localhost kernel: [18619.274691] PM: late suspend of devices complete after 0.313 msecs
Oct 23 15:29:30 localhost kernel: [18619.440877] ohci_hcd 0000:00:14.5: wake-up capability enabled by ACPI
Oct 23 15:29:30 localhost kernel: [18619.642144] ACPI: Waking up from system sleep state S3
Oct 23 15:29:30 localhost kernel: [18620.049424] PM: noirq resume of devices complete after 333.503 msecs
Oct 23 15:29:30 localhost kernel: [18620.049852] PM: early resume of devices complete after 0.334 msecs
Oct 23 15:29:30 localhost kernel: [18622.418605] PM: resume of devices complete after 2371.906 msecs
Oct 23 15:29:30 localhost kernel: [18622.419018] PM: Finishing wakeup.
Oct 23 15:29:30 localhost kernel: [18622.419019] Restarting tasks ... done.
Oct 23 15:29:30 localhost kernel: [18622.464752] video LNXVIDEO:01: Restoring backlight state I think this is not caused by pm-suspend , because /var/log/pm-suspend.log doesn't log anything. I don't want my laptop to go to sleep when I close the lid. How can I do that? Kernel version: 3.6.2-1-ARCH | Edit /etc/systemd/logind.conf and make sure you have HandleLidSwitch=ignore which will make it ignore the lid being closed. (You may need to also undo the other changes you've made.) Then, you'll want to reload logind.conf to make your changes go into effect (thanks to Ehtesh Choudhury for pointing this out in the comments): systemctl restart systemd-logind Full details over at the archlinux Wiki . The man page for logind.conf also has the relevant information, HandlePowerKey= , HandleSuspendKey= , HandleHibernateKey= , HandleLidSwitch= Controls whether logind shall handle the system power and sleep
keys and the lid switch to trigger actions such as system power-off
or suspend. Can be one of "ignore", "poweroff", "reboot", "halt", "kexec",
"suspend", "hibernate", "hybrid-sleep" and "lock". If "ignore", logind will
never handle these keys. If "lock", all running sessions will be
screen-locked; otherwise, the specified action will be taken in the
respective event. Only input devices with the "power-switch" udev tag
will be watched for key/lid switch events. HandlePowerKey= defaults to "poweroff". HandleSuspendKey= and HandleLidSwitch= default to "suspend". HandleHibernateKey= defaults to "hibernate". | {
"source": [
"https://unix.stackexchange.com/questions/52643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17745/"
]
} |
52,667 | I am using Ubuntu on Virtual Box and I have a folder which is shared between the host (Windows) and the VM (Ubuntu). When I open any file in the share folder in Ubuntu, I can not change it as its owner is set to root. How can I change the ownership to myself? Here is the output of ls -l : -rwxrwxrwx 1 root root 0 2012-10-05 19:17 BuildNotes.txt The output of df is: m@m-Linux:~/Desktop/vbox_shared$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 29640780 10209652 17925440 37% /
none 509032 260 508772 1% /dev
none 513252 168 513084 1% /dev/shm
none 513252 88 513164 1% /var/run
none 513252 0 513252 0% /var/lock
none 513252 0 513252 0% /lib/init/rw
Ubuntu 214153212 31893804 182259408 15% /media/sf_Ubuntu
/dev/sr0 53914 53914 0 100% /media/VBOXADDITIONS_4.2.0_80737
Ubuntu 214153212 31893804 182259408 15% /home/m/Desktop/vbox_shared The options in the VM are automount, and readonly is not checked. I tried to use /media/sf_Ubuntu , but I get a permission error: m@m-Linux:/media$ ls -l
total 10
drwxrwx--- 1 root vboxsf 4096 2012-10-23 15:35 sf_Ubuntu
drwxrwx--- 2 root vboxsf 4096 2012-10-21 23:41 sf_vbox_shared
dr-xr-xr-x 6 m m 2048 2012-09-13 07:19 VBOXADDITIONS_4.2.0_80737
m@m-Linux:/media$ cd sf_Ubuntu/
bash: cd: sf_Ubuntu/: Permission denied
m@m-Linux:/media$ cd sf_vbox_shared/
bash: cd: sf_vbox_shared/: Permission denied Please note that I am in the group vboxsf : m@m-Linux:~$ id
uid=1000(m) gid=1000(m) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),119(admin),122(sambashare),1000(m),1001(vboxsf) | The regular way of getting access to the files now, is to allow VirtualBox to automount the shared folder (which will make it show up under /media/sf_directory_name ) and then to add your regular Ubuntu user to the vboxsf group (as root # ). # usermod -aG vboxsf <youruser> By default, without manual action, the mounts look like this, drwxrwx--- 1 root vboxsf 40960 Oct 23 10:42 sf_<name> so the vboxsf group has full access. By adding your user to that group, you gain full access. So you wouldn't worry about changing their permissions (which don't make sense on the Windows host), you just give yourself access. In this specific case, this is the automounted Shared Folder, Ubuntu 214153212 31893804 182259408 15% /media/sf_Ubuntu and it is that directory that should be used to access to the Shared Folder, by putting the local user into the vboxsf group. If you want a 'better' link under your user's home directory, you could always create a symbolic link. ln -s /media/sf_Ubuntu /home/m/Desktop/vbox_shared You will need to reboot your VM for these changes to take effect If you manually mount the shared folder, then you need to use the relevant options on the mount command to set the folder with the right ownership (i.e. the gid, uid and umask options to mount ). This is because the Host OS doesn't support the same permission system as Linux, so VirtualBox has no way of knowing who should own the files. However, I strongly recommend just configuring the shared folder to be auto-mounted (it's a setting on the Shared Folder configuration in VirtualBox itself). For the avoidance of doubt, I do not believe you can change permissions normally anyway, on that filesystem if it's mounted in the regular way, tony@jabba:/media/sf_name$ ls -l tst.txt
-rwxrwx--- 1 root vboxsf 2283 Apr 4 2012 tst.txt
tony@jabba:/media/sf_name$ sudo chown tony tst.txt
[sudo] password for tony:
tony@jabba:/media/sf_name$ ls -l tst.txt
-rwxrwx--- 1 root vboxsf 2283 Apr 4 2012 tst.txt
tony@jabba:/media/sf_name$ | {
"source": [
"https://unix.stackexchange.com/questions/52667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10722/"
]
} |
52,762 | I am trying to sort on multiple columns. The results are not as expected. Here's my data (people.txt): Simon Strange 62
Pete Brown 37
Mark Brown 46
Stefan Heinz 52
Tony Bedford 50
John Strange 51
Fred Bloggs 22
James Bedford 21
Emily Bedford 18
Ana Villamor 44
Alice Villamor 50
Francis Chepstow 56 The following works correctly: bash-3.2$ sort -k2 -k3 <people.txt
Emily Bedford 18
James Bedford 21
Tony Bedford 50
Fred Bloggs 22
Pete Brown 37
Mark Brown 46
Francis Chepstow 56
Stefan Heinz 52
John Strange 51
Simon Strange 62
Ana Villamor 44
Alice Villamor 50 But, the following does not work as expected: bash-3.2$ sort -k2 -k1 <people.txt
Emily Bedford 18
James Bedford 21
Tony Bedford 50
Fred Bloggs 22
Pete Brown 37
Mark Brown 46
Francis Chepstow 56
Stefan Heinz 52
John Strange 51
Simon Strange 62
Ana Villamor 44
Alice Villamor 50 I was trying to sort by surname and then by first name, but you will see the Villamors are not in the correct order. I was hoping to sort by surname, and then when surnames matched, to sort by first name. It seems there is something about how this should work I don't understand. I could do this another way of course (using awk), but I want to understand sort. I am using the standard Bash shell on Mac OS X. | A key specification like -k2 means to take all the fields from 2 to the end of the line into account. So Villamor 44 ends up before Villamor 50 . Since these two are not equal, the first comparison in sort -k2 -k1 is enough to discriminate these two lines, and the second sort key -k1 is not invoked. If the two Villamors had had the same age, -k1 would have caused them to be sorted by first name. To sort by a single column, use -k2,2 as the key specification. This means to use the fields from #2 to #2, i.e. only the second field. sort -k2 -k3 <people.txt is redundant: it's equivalent to sort -k2 <people.txt . To sort by last names, then first names, then age, run the following command: sort -k2,2 -k1,1 <people.txt or equivalently sort -k2,2 -k1 <people.txt since there are only these three fields and the separators are the same. In fact, you will get the same effect from sort -k2,2 <people.txt , because sort uses the whole line as a last resort when all the keys in a subset of lines are identical. Also note that the default field separator is the transition between a non-blank and a blank, so the keys will include the leading blanks (in your example, for the first line, the first key will be "Emily" , but the second key " Bedford" . Add the -b option to strip those blanks: sort -b -k2,2 -k1,1 It can also be done on a per-key basis by adding the b flag at the end of the key start specification: sort -k2b,2 -k1,1 <people.txt But something to bear in mind: as soon as you add one such flag to the key specification, the global flags (like -n , -r ...) no longer apply to them so it's better to avoid mixing per-key flags and global flags. | {
"source": [
"https://unix.stackexchange.com/questions/52762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24395/"
]
} |
52,779 | I use sed to quickly delete lines with specific position as sed '1d'
sed '5d' But, what if I want to delete the last line of the file and I don't know the count of lines (I know I can get that using wc and several other tricks). Currently, using a workaround with head and tail combined with wc to do so. Any quick twists here? | in sed $ is the last line so to delete the last line: sed '$d' <file> Updated to include a way to delete multiple lines from the bottom of the file: tac <file> | tail -n +3 | tac > <new_file> tac reads the file backwards ( cat backwards) tail -n +3 reads the input starting at the nth line tac reads the input and reverses the order back to the original | {
"source": [
"https://unix.stackexchange.com/questions/52779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
52,800 | I am trying to do an IF statement from the output of an executed commmand. Here is how I am trying to do it, but it doesn't work. Does anyone know the right way to do this? if [ "`netstat -lnp | grep ':8080'`" == *java* ]; then
echo "Found a Tomcat!"
fi EDIT:
I wonder if there is a way to do it by capturing the exit code? | Use the bash [[ conditional construct and prefer the $( <command> ) command substitution convention. Additionally, [[ prevents word splitting of variable values, therefore there is no need to quote the command substitution bit. if [[ $(netstat -lnp | grep ':8080') == *java* ]]; then
echo "Found a Tomcat!"
fi
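As for the edit about capturing the exit code: that works too, since grep exits nonzero when nothing matches, so the pipeline's status can be tested directly. A minimal sketch ( -q suppresses grep's output):
if netstat -lnp | grep ':8080' | grep -q java; then
echo "Found a Tomcat!"
fi
| {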
"source": [
"https://unix.stackexchange.com/questions/52800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7586/"
]
} |
52,814 | This is probably very simple, but I can't figure it out. I have a directory structure like this ( dir2 is inside dir1 ): /dir1
/dir2
|
--- file1
|
--- file2 What is the best way to 'flatten' this directory structure in such a way as to get file1 and file2 into dir1 not dir2 ? | You can do this with GNU find and GNU mv : find /dir1 -mindepth 2 -type f -exec mv -t /dir1 -i '{}' + Basically, the way that works if that find goes through the entire directory tree and for each file ( -type f ) that is not in the top-level directory ( -mindepth 2 ), it runs a mv to move it to the directory you want ( -exec mv … + ). The -t argument to mv lets you specify the destination directory first, which is needed because the + form of -exec puts all the source locations at the end of the command. The -i makes mv ask before overwriting any duplicates; you can substitute -f to overwrite them without asking (or -n to not ask or overwrite). As Stephane Chazelas points out, the above only works with GNU tools (which are standard on Linux, but not most other systems). The following is somewhat slower (because it invokes mv multiple times) but much more universal: find /dir1 -mindepth 2 -type f -exec mv -i '{}' /dir1 ';' POSIXly, passing more than one argument, but using sh to reorder the list of arguments for mv so the target directory comes last: LC_ALL=C find /dir1 -path '/dir1/*/*' -type f -exec sh -c '
exec mv "$@" /dir1' sh {} + | {
"source": [
"https://unix.stackexchange.com/questions/52814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26274/"
]
} |
52,820 | I am using AIX 6.1 ksh shell. I want to use one liner to do something like this: cat A_FILE | skip-first-3-bytes-of-the-file I want to skip the first 3 bytes of the first line; is there a way to do this? | Old school — you could use dd : dd if=A_FILE bs=1 skip=3 The input file is A_FILE , the block size is 1 character (byte), skip the first 3 'blocks' (bytes). (With some variants of dd such as GNU dd , you could use bs=1c here — and alternatives like bs=1k to read in blocks of 1 kilobyte in other circumstances. The dd on AIX does not support this, it seems; the BSD (macOS Sierra) variant doesn't support c but does support k , m , g , etc.) There are other ways to achieve the same result, too: sed '1s/^...//' A_FILE This works if there are 3 or more characters on the first line. tail -c +4 A_FILE And you could use Perl, Python and so on too. | {
"source": [
"https://unix.stackexchange.com/questions/52820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26326/"
]
} |
52,855 | I am running KVM on RHEL6, and I have created several virtual machines in it. Issuing ifconfig command to the host system command line shows a list of virbr0, virbr1... and vnet0, vnet2... Are they the IP addresses of the the guest OS? What are the differences between virbr# and vnet#? | Those are network interfaces, not IP addresses. A network interface can have packets from any protocol exchanged on them, including IPv4 or IPv6, in which case they can be given one or more IP addresses. virbr are bridge interfaces. They are virtual in that there's no network interface card associated to them. Their role is to act like a real bridge or switch, that is switch packets (at layer 2) between the interfaces (real or other) that are attached to it just like a real ethernet switch would. You can assign an IP address to that device, which basically gives the host an IP address on that subnet which the bridge connects to. It will then use the MAC address of one of the interfaces attached to the bridge. The fact that their name starts with vir doesn't make them any different from any other bridge interface, it's just that those have been created by libvirt which reserves that name space for bridge interfaces vnet interfaces are other types of virtual interfaces called tap interfaces. They are attached to a process (in this case the process runnin the qemu-kvm emulator). What the process writes to that interface will appear as having been received on that interface by the host and what the host transmits on that interface is available for reading by that process. qemu typically uses it for its virtualized network interface in the guest. Typically, a vnet will be added to a bridge interface which means plugging the VM into a switch. | {
"source": [
"https://unix.stackexchange.com/questions/52855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21784/"
]
} |
52,961 | Is it possible to do this: ssh user@socket command /path/to/file/on/local/machine That is to say, I want to execute a remote command using a local file in one step, without first using scp to copy the file. | You missed just one symbol =) ssh user@socket command < /path/to/file/on/local/machine
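For instance, a common use of this pattern is feeding a local script to a remote interpreter (the file name here is hypothetical):
ssh user@socket 'bash -s' < /path/to/local/script.sh
| {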
"source": [
"https://unix.stackexchange.com/questions/52961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15359/"
]
} |
52,996 | I'm trying to emulate an EFI environment using QEMU (KVM); VirtualBox takes 15 minutes to boot in EFI mode using archboot. Using legacy BIOS mode, I can boot using this command: root@citsnmaiko-deb:/home/maiko/uefi/ovmf# qemu-system-x86_64 -kernel ../bzImage -initrd ../rootfs.gz -append "rw root=/dev/ram0 ramdisk_size=40960" and it works with my custom kernel and file system. file ../bzImage
../bzImage: Linux kernel x86 boot executable bzImage, version 3.6.1 (root@citsnmaiko-deb) #4 , RO-rootFS, swap_dev 0x3, Normal VGA it has EFI support too. I'm trying to do the same with EFI files that I downloaded from here wget http://ufpr.dl.sourceforge.net/project/edk2/OVMF/OVMF-X64-r11337-alpha.zip -P ovmf
cd ovmf/
unzip -x OVMF-X64-r11337-alpha.zip
# rename the files for QEMU find them
mv OVMF.fd bios.bin
mv CirrusLogic5446.rom vgabios-cirrus.bin
# start QEMU
root@citsnmaiko-deb:/home/maiko/uefi/ovmf# qemu-system-x86_64 -L . -kernel ../bzImage -initrd ../rootfs.gz -append "rw root=/dev/ram0 ramdisk_size=40960"
Could not open option rom 'linuxboot.bin': No such file or directory
pci_add_option_rom: failed to find romfile "pxe-e1000.bin" And I'm dropped in an EFI shell, not able to boot. If I use the latest Ubuntu release with the same EFI environment root@citsnmaiko-deb:/home/maiko/uefi/ovmf# qemu-system-x86_64 -L . -boot d -cdrom ../ubuntu-12.10-desktop-amd64.iso
pci_add_option_rom: failed to find romfile "pxe-e1000.bin" ... the boot process works fine. I've tried to replace the Ubuntu boot files with mine but maybe I don't completely understand its functionality. When I just replace the files after mounting the ISO: mkdir tmp
bsdtar xf ubuntu-12.10-desktop-amd64.iso -C tmp
cp bzImage tmp/casper/vmlinuz
cp rootfs.gz tmp/casper/initrd.lz
genisoimage -o customUbuntu.iso tmp/
qemu-system-x86_64 -L . -boot d -cdrom customUbuntu.iso the same EFI Shell appears. Is it ok? initrd.lz and rootfs.gz are interchangeable, right? How about bzImage and vmlinuz ? What am I missing? | OVMF supports -boot since r13683 , and supports -kernel -append -initrd since r13923 . Download OVMF-0.1+r14071-1.1.x86_64.rpm or a newer version. Extract bios.bin from the rpm: rpm2cpio OVMF-0.1+r14071-1.1.x86_64.rpm | cpio -idmv Specify the firmware parameter for QEMU: qemu-kvm -bios ./usr/share/qemu-ovmf/bios/bios.bin -m 1G -cdrom boot.iso (Tested with Fedora's boot.iso created with special measures ) I also tested qemu -kernel -append -initrd with kernels 3.5, 3.6, and 3.8. EFI firmware has format and file hierarchy requirements for an ISO image to be bootable ( 1 ), and others for disks. Your modified ISO image probably did not meet the requirements, so the firmware did not recognize it. EFI firmware also has format requirements for the binary to execute, so your bzImage or whatever kernel image needs to be built with EFISTUB. You can boot a kernel from the EFI shell with parameters manually specified. Examples: 2 . You can create a startup.nsh to save a little typing. You can use bootloaders to have more complete management. You need to learn these: 2 EFI firmware saves boot options in NVRAM. QEMU currently does not preserve NVRAM, so boot options are lost once you close QEMU. Without boot options, the firmware tries to find \EFI\BOOT\BOOTX64.EFI to execute but it's not here, so it does not know what to boot and leaves control to you. What you need to do to boot the kernel in the EFI shell is just enter a filesystem, navigate to the proper path, and execute a binary. fs0:
cd EFI\fedora
grub.efi or vmlinuz.efi ... OVMF supports virtio-scsi since EDK2 r13867.
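The startup.nsh mentioned above would simply contain those same shell commands, e.g. (a sketch assuming the Fedora-style layout from the examples):
fs0:
cd EFI\fedora
grub.efi
| {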
"source": [
"https://unix.stackexchange.com/questions/52996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9216/"
]
} |
53,017 | I downloaded lessn to my webserver and unzipped it. It contains a folder named - . I assumed I know how to deal with that, but I don't. I tried cd -- - , but that doesn't have the desired effect. Using quotes doesn't seem to affect it either. I put slashes all over the place, to no avail. What's the proper way to change into this folder? | You want to avoid it from being a parameter, thus we try to prepend something to it. The current directory can be accessed with . , thus the subfolder - can be accessed alternatively with ./- . cd ./- The reason that cd -- - doesn't work is because this is implemented differently if you compare rm (see man rm ) to cd (see man bash or man cd ), cd has a different interpretation which sees - as a parameter (see man bash or man cd ). It should also be noted that cd is a shell builtin function, as can be read in this answer : cd is not an external command - it is a shell builtin function. It runs in the context of the current shell, and not, as external commands do, in a fork/exec'd context as a separate process. There is an external cd command, but it does something entirely different . This explains why the implementation is different, as Bash and Coreutils are two different things. Let's just suppose you wouldn't believe this, how do we confirm that? Use which and type . $ which cd && type cd
which: no cd in (/usr/local/bin:/usr/bin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.7.2:/usr/games/bin
cd is a shell builtin
$ which rm && type rm
/bin/rm
/bin/rm is /bin/rm See man which for more information, and man bash or man type for type | {
"source": [
"https://unix.stackexchange.com/questions/53017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14860/"
]
} |
53,072 | When using find , how do I return the file name and the line number when searching for a string? I manage to return the file name in one command and the line numbers with another one, but I can't seem to combine them. File names: find . -type f -exec grep -l 'string to search' {} \; Line numbers: find . -type f -exec grep -n 'string to search' {} \; | The command line switch -H forces grep to print the file name, even with just one file. % grep -n 7 test.in
7:7
% grep -Hn 7 test.in
test.in:7:7 -H, --with-filename
Print the filename for each match. Note that as Kojiro says in a comment , this is not part of the POSIX standard; it is in both GNU and BSD grep, but it's possible some systems don't have it (e.g. Solaris). | {
"source": [
"https://unix.stackexchange.com/questions/53072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26452/"
]
} |
53,144 | I know that to remove a scheduled at job I have to use atrm "numjob1 numjob2" , but is there an easy way to do that for all the jobs? | You can run this command to remove all the jobs in the atq : for i in `atq | awk '{print $1}'`;do atrm $i;done
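An equivalent one-liner without the shell loop, assuming GNU xargs ( -r skips running atrm when the queue is empty):
atq | awk '{print $1}' | xargs -r atrm
| {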
"source": [
"https://unix.stackexchange.com/questions/53144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26488/"
]
} |
53,154 | I thought Moving tmux pane to window was the same question but it doesn't seem to be. Coming from using GNU screen regularly, I'm looking for tmux to do the same things. On of the things I do regularly is have a couple of different windows open, one with some code open in vim, and a couple of terminals windows open to test the code, and sometimes a window or two for various other things. I split the screen vertically and will often have the vim window in the top pane and then one of the other windows in the bottom bane. The main commands I then use are Ctrl a , Tab to rotate among the panes and Ctrl a , n to rotate between the windows within a pane. For instance, with vim in the top pane, I switch to the bottom pane and then rotate through the other terminals, doing whatever I need. The screen stays split the whole time. The problem is I can't find comparable capability to screen's Ctrl a , n in tmux. Switching windows seems to not work inside a pane, but jumps entirely. If the screen is split the only two options seem to be to jump to some window that isn't split and then split it or to do a sub-split of a pane. Neither are what I was looking for. Suggestions (besides just sticking with screen)? | I believe what you are looking for is Ctrl b +( → , ← , ↑ , ↓ ). Those will allow you to move between the panes. | {
"source": [
"https://unix.stackexchange.com/questions/53154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26490/"
]
} |
53,270 | If I issue the "top" command and receive results such as: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
00001 bob 25 0 77380 1212 1200 R 95.8 0.0 89122:13 fee
00002 bob 25 0 77380 1196 1184 R 95.4 0.0 88954:14 fi
00003 sam 18 0 427m 16m 6308 R 30.0 0.1 54:46.43 fo
00004 sam 18 0 427m 16m 6308 R 26.5 0.1 52:55.33 fum Question: What are the units in the "TIME+" column? What I have tried: (please suggest a better strategy for searching documentation ...) man top | grep -C 4 time or man top | grep <X> when I substitute minute , hour , day , or HH for X ... | minutes:seconds.hundredths Searching for “TIME+” or for “seconds” gives the answer, kind of (I wouldn't call the man page clear). This format is inherited from BSD, you also get it with ps u or ps l under Linux. | {
"source": [
"https://unix.stackexchange.com/questions/53270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19809/"
]
} |
53,310 | I have a string in the following format id;some text here with possible ; inside and want to split it into 2 strings at the first occurrence of the ; . So, it should be: id and some text here with possible ; inside I know how to split the string (for instance, with cut -d ';' -f1 ), but it will split into more parts since I have ; inside the left part. | cut sounds like a suitable tool for this: bash-4.2$ s='id;some text here with possible ; inside'
bash-4.2$ id="$( cut -d ';' -f 1 <<< "$s" )"; echo "$id"
id
bash-4.2$ string="$( cut -d ';' -f 2- <<< "$s" )"; echo "$string"
some text here with possible ; inside But read is even more suitable: bash-4.2$ IFS=';' read -r id string <<< "$s"
bash-4.2$ echo "$id"
id
bash-4.2$ echo "$string"
some text here with possible ; inside | {
"source": [
"https://unix.stackexchange.com/questions/53310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14725/"
]
} |
53,313 | Possible Duplicate: How to tell what type of filesystem you’re on? Find filesystem of an unmounted partition from a script How can I quickly check the filesystem of the partition? Can I do that by using df ? | Yes, according to man df you can: -T, --print-type print file system type Another way is to use the mount command. Without parameters it lists the currently mounted devices, including their file systems. In case you need to find out only one certain file system, it is easier to use the stat command's -f option instead of parsing out one value from the above mentioned commands' output.
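For example, with GNU stat ( %T prints the file system type in human-readable form; use whichever mount point you care about):
stat -f -c %T /home
| {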
"source": [
"https://unix.stackexchange.com/questions/53313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24522/"
]
} |
53,415 | I am downloading an ISO image of Lubuntu; they have two versions: 32 and 64. But why do they call the 64 version amd64 since they say that it works for Intel also? | Because AMD was the first one to release 64-bit x86 (x86-64) CPUs. the AMD64 architecture was positioned by AMD from the beginning as an evolutionary way to add 64-bit computing capabilities to the existing x86 architecture, as opposed to Intel's approach of creating an entirely new 64-bit architecture with IA-64. The first AMD64-based processor, the Opteron, was released in April 2003. In fact, in the kernel the 64-bit support is called 'x86_64' to refer to the fact that both AMD and Intel (and others) implement those instructions. | {
"source": [
"https://unix.stackexchange.com/questions/53415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19195/"
]
} |
53,456 | In this question I asked how to prevent a media failure from halting the system boot process. However, I got two suggestions for /etc/fstab options nobootwait nofail What is the difference between the two? | Firstly nofail allows the boot sequence to continue even if the drive fails to mount. This is what fstab(5) says about nobootwait The mountall(8) program that mounts filesystem during boot also recognises additional options that the ordinary mount(8) tool does not. These are: bootwait which can be applied to remote filesystems
mounted outside of /usr or /var, without which mountall(8) would not hold up the boot for these; nobootwait which can be applied to
non-remote filesystems to explicitly instruct mountall(8) not to hold up the boot for them; optional which causes the entry to be ignored if the filesystem type is not known at boot time; and showthrough which permits a mountpoint to be mounted before its parent mountpoint (this latter should be used carefully, as it can cause boot hangs). fstab(5) has this to say about nofail nofail do not report errors for this device if it does not
exist.
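As an illustration, a hypothetical /etc/fstab entry for a secondary data disk that should not block booting could look like this (the UUID and mount point are made up):
UUID=0123-4567 /mnt/data ext4 defaults,nofail 0 2
| {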
"source": [
"https://unix.stackexchange.com/questions/53456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
53,513 | Is it possible to make Linux kernel completely ignore the floppy disk controller? I do not have the drive but obviously my motherboard does contain the controller. I would like to disable the /dev/fd0 device node somehow to avoid Thunar and other tools detecting it and probing it. | On Ubuntu, the floppy driver is loaded as a module. You can blacklist this module so it doesn't get loaded: echo "blacklist floppy" | sudo tee /etc/modprobe.d/blacklist-floppy.conf
sudo rmmod floppy
sudo update-initramfs -u Immediately and upon rebooting, the floppy driver should be banished for good. | {
"source": [
"https://unix.stackexchange.com/questions/53513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17611/"
]
} |
53,581 | I have a Fedora machine that I can SSH to. One of the programs I'd like to use occasionally uses the function keys. The problem is that I'm SSH'ing from an Android tablet (ASUS Transformer Infinity) with a physical keyboard, but no F1 - F12 keys. So, until the terminal app I'm using (VX ConnectBot) decides to add them as a feature, I'm looking for a way to send them using the rest of the keyboard. I can use all printable ASCII characters, Esc , Ctrl , Shift , Enter , and Tab . | Terminals only understand characters, not keys. So all function keys are encoded as sequences of characters, using control characters. Apart from a few common ones that have an associated control character ( Tab is Ctrl+I , Enter is Ctrl+M , Esc is Ctrl+[ ), function keys send escape sequences, beginning with Ctrl+[ [ or Ctrl+[ O . You can use the tput command to see what escape sequence applications expect for each function key on your terminal. These sequences are stored in the terminfo database. For example, the shell snippet below shows the escape sequences corresponding to each function key. $ for x in {1..12}; do echo -n "F$x "; tput kf$x | cat -A; echo; done
F1 ^[OP
F2 ^[OQ
F3 ^[OR
F4 ^[OS
F5 ^[[15~
F6 ^[[17~
F7 ^[[18~
F8 ^[[19~
F9 ^[[20~
F10 ^[[21~
F11 ^[[23~
F12 ^[[24~ Another way to see the escape sequence for a function key is to press Ctrl + V in a terminal application that doesn't rebind the Ctrl + V key (such as the shell). Ctrl + V inserts the next character (which will be the escape character) literally, and you'll be able to see the rest of the sequence, which consists of ordinary characters. Since the sequences may be awkward to type, do investigate changing the key bindings in your application or using another terminal emulator. Also, note that you may have a time limit: some applications only recognize escape sequences if they come in fast enough, so that they can give a meaning to the Esc key alone. | {
"source": [
"https://unix.stackexchange.com/questions/53581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26681/"
]
} |
53,627 | I have no root access on my machine at work, but I have sudo permission to use sudo yum (and only yum). Recently I accidentally installed a faulty repository (dropbox), and now I'd like to remove it. Since I have no write access to the yum.repos.d directory, manually editing or removing the repo file is out of the question. I know you can install repos using yum (it's what I did), but can you remove a repo using yum? Using Scientific Linux 6. By the way, I know I can yum --disablerepo= to ignore the problematic repo. But I would like to remove it for good, because it's also causing problems with the graphical package manager (it keeps popping up notifications saying the updates couldn't be retrieved). | you can remove the repo with yum-config-manager but not with yum : yum-config-manager --disable repository
yum-config-manager --add-repo http://www.example.com/example.repo EDIT: you need some way of running this as root (ie. sudo) | {
"source": [
"https://unix.stackexchange.com/questions/53627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5025/"
]
} |
53,641 | Everyone knows how to make a unidirectional pipe between two programs (bind stdout of the first one and stdin of the second one): first | second . But how do you make a bidirectional pipe, i.e. cross-bind the stdin and stdout of two programs? Is there an easy way to do it in a shell? | Well, it's fairly "easy" with named pipes ( mkfifo ). I put easy in quotes because unless the programs are designed for this, deadlock is likely. mkfifo fifo0 fifo1
( prog1 > fifo0 < fifo1 ) &
( prog2 > fifo1 < fifo0 ) &
( exec 30<fifo0 31<fifo1 ) # write can't open until there is a reader
# and vice versa if we did it the other way Now, there is normally buffering involved in writing stdout. So, for example, if both programs were: #!/usr/bin/perl
use 5.010;
say 1;
print while (<>); you'd expect an infinite loop. But instead, both would deadlock; you would need to add $| = 1 (or equivalent) to turn off output buffering. The deadlock is caused because both programs are waiting for something on stdin, but they're not seeing it because it's sitting in the stdout buffer of the other program, and hasn't yet been written to the pipe. Update : incorporating suggestions from Stéphane Chazelas and Joost: mkfifo fifo0 fifo1
prog1 > fifo0 < fifo1 &
prog2 < fifo0 > fifo1 does the same, is shorter, and more portable. | {
"source": [
"https://unix.stackexchange.com/questions/53641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
53,737 | I would like to list all files in the order of big to small in size and the files could be present anywhere in a certain folder. | Simply use something like: ls -lS /path/to/folder/ Capital S . This will sort files by size. Also see: man ls -S sort by file size If you want to sort in reverse order, just add -r switch. Update: To exclude directories (and provided none of the file names or symlink targets contain newline characters): ls -lS | grep -v '^d' Update 2: I see now how it still shows symbolic links, which could be folders. Symbolic links always start with a letter l, as in link. Change the command to filter for a - . This should only leave regular files: ls -lS | grep '^-' On my system this only shows regular files. update 3: To add recursion, I would leave the sorting of the lines to the sort command and tell it to use the 5th column to sort on. ls -lR | grep '^-' | sort -k 5 -rn -rn means Reverse and numeric to get the biggest files at the top. Down side of this command is that it does not show the full path of the files. If you do need the full path of the files, use something like this: find . -type f -exec du -h {} + | sort -r -h The find command will recursively find all files in all sub directories of . and call du -h (meaning disk usage -humanreadable ) and then sort the output again. If your find / sort doesn't support -h , replace with du -k and sort -rn . Note that size and disk usage are not the same thing. | {
"source": [
"https://unix.stackexchange.com/questions/53737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26770/"
]
} |
53,739 | I can use help with this bit of bash I am trying to write. The scripts purpose is to speed up my development while working with multiple projects.
I labeled the parts that I have questions about in the code. # is there a way to persist this through working enviornments besides this?
declare -x WORKING=`cat ~/.working`
#alias p='builtin cd $WORKING && pwd && ls'
alias pj='builtin cd $WORKING/public/javascripts && pwd && ls'
function pp {
echo `pwd` > ~/.working
}
# is there a way to close the scope of this function?
function p {
# how do I process flags here?
# -f and -d etc. can exist but may not
# either way I want $1 to be set to the first string if there
# is one
if [ -z "$1" ]
then
echo '*'
builtin cd $WORKING && pwd && ls
return
fi
BACK=`pwd`
builtin cd $WORKING
#f=`find . -iname "$1"`
f=( `echo $(find . -type d -o -type f -iname "$1") | grep -v -E "git|node"` )
#echo ${f[1]}
if [ -z "${f[0]}" ]
then
return
fi
if [ -z "${f[1]}" ]
then
# how can I write this as a switch?
if [ -f ${f[0]} ]
then
vim ${f[0]}
return
fi
if [ -d ${f[0]} ]
then
builtin cd ${f[0]}
return
fi
else
echo "multiple found"
#for path in $f
#do
# sort files and dirs
# sort dirs by path
# sort files by path
#done
# display dirs one color
# display files another color
# offer choices
# 1) open all files
# 2) open a file
# 3) cd to selected directory
# 4) do nothing
fi
# nothing found
builtin $BACK
} | | {
"source": [
"https://unix.stackexchange.com/questions/53739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9110/"
]
} |
53,789 | I'm trying to set up a multiple-line PS1 for zsh, but \n isn't parsed by zsh: PS1="%~\n %> " How should I set it up? | Use $'\n' For example, PROMPT="firstline"$'\n'"secondline " or NEWLINE=$'\n'
PROMPT="firstline${NEWLINE}secondline " | {
"source": [
"https://unix.stackexchange.com/questions/53789",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
53,814 | On the bash command-line, ↑ gives me the previous command. On the command-lines in ipython or matlab , when I type a few characters, ↑ gives me the previously entered command starting with those characters . How can I enable exactly this behaviour in bash ? I am aware of more advanced ways of searching through the command-line history, but sometimes a simple way is more convenient. | The readline commands that you are looking for are the history-search-* commands: history-search-forward Search forward through the history for the string of characters between the start of the current line and the current cursor position (the point). This is a non-incremental search. history-search-backward Search backward through the history for the string of characters between the start of the current line and the point. This is a non-incremental search. Binding these in your .inputrc , like so: "\e[A": history-search-backward # arrow up
"\e[B": history-search-forward # arrow down will allow you to enter the first characters of a command, and then use the Up and Down keys to move through only those commands in your .bash_history that begin with that string. For example, entering vi and the Up would take you to the first previous command beginning with vi , like vim somefile . Entering Up would take you to the next previous instance, and so on. You can read more about all of the readline bindings here: http://linux.about.com/library/cmd/blcmdl3_readline.htm | {
"source": [
"https://unix.stackexchange.com/questions/53814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15654/"
]
} |
53,841 | I needed a timer which starts at the very beginning of the script and stops at the end. | If you want the duration in seconds, at the top use start=$SECONDS and at the end duration=$(( SECONDS - start ))
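Put together, a minimal sketch of such a script:
#!/bin/bash
start=$SECONDS
# ... the actual work of the script goes here ...
duration=$(( SECONDS - start ))
echo "Script took ${duration} seconds"
| {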
"source": [
"https://unix.stackexchange.com/questions/53841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22979/"
]
} |
53,912 | I am trying to install Pass: the standard Unix password manager , however, when I try to add passwords to the application I get these errors gpg: Kelly's Passwords: skipped: No public key
gpg: [stdin]: encryption failed: No public key GPG Public Keys? When I type in the command gpg --list-keys I get: /home/khays/.gnupg/pubring.gpg
------------------------------
pub 2048R/64290B2D 2012-11-05
uid Kelly Hays <[email protected]>
sub 2048R/0DF57DA8 2012-11-05 I am a little lost as to how to remedy this, any ideas? | How did you create the password store? pass init "Kelly's Passwords" ? If so, this is wrong, you should have called pass init 64290B2D . And if pass insert foo then fails with:
gpg: [stdin]: encryption failed: public key not found then you have to trust your own key first ( gpg --edit-key 64290B2D , trust , 5 , save ). | {
"source": [
"https://unix.stackexchange.com/questions/53912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26858/"
]
} |
53,923 | I am working with a bash script that someone else wrote and I see the following line: cp -v ${LOG_DIR}/${APPLICATION}\.*.log ${ARCHIVED_LOG_DIR} The files with which it's working are all named like: EXAMPLE.command1.log EXAMPLE.command2.log Is there any reason for the backslash escaping the dot since a dot isn't treated specially in filename expansions? What are the implications of doing this vs without the backslash as such?: cp -v ${LOG_DIR}/${APPLICATION}.*.log ${ARCHIVED_LOG_DIR} | | {
"source": [
"https://unix.stackexchange.com/questions/53923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2566/"
]
} |
54,953 | BACKGROUND Trevor logs into his account on ssh://foobar.university.edu as one of the developers on the box, and he gets the message: id: cannot find name for group ID 131 Trevor then checks this out using vim /etc/group PROBLEM Trevor discovers that there is no 131 anywhere in the /etc/group file. Trevor then runs id ... > id trevor
uid=4460(trevor) gid=131 groups=48(foobar),51(doobar),131 To discover his primary group apparently does not have a name attached to it. QUESTIONS What likely happened to cause this circumstance on the foobar.university.edu box ? Suppose trevor wants to fix this (e.g., by just creating a "trevor" group that maps to GID 131) what is the best way to do this without potentially breaking anything else on the server ? | What likely happened is that the UID and GID are provided to the server via LDAP. If the /etc/group file doesn't contain the translation for the GID, then the server administrators likely just failed to update the group definitions. What can you do? Not much. The user id is controlled by the administrator. (Now if you happen to have ROOT privileges, you can add the group into /etc/group . You should also check to see if any other user accounts are using the same group, and if they are, name the group appropriately).
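With root access, the standard way to create that mapping is groupadd rather than hand-editing /etc/group . A sketch (run as root, and only after confirming GID 131 is not meant for some other group):
groupadd -g 131 trevor
| {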
"source": [
"https://unix.stackexchange.com/questions/54953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26239/"
]
} |
54,975 | Ex.: an sshd is configured to only listen on wlan0. So, besides checking the sshd_config, how can I check which interface a daemon is listening on? Can netstat do it? How? (OS: openwrt or scientific linux or openbsd) UPDATE: I thought sshd could be limited to an interface... but no... (192.168.1.5 is on wlan0...) # grep ^ListenAddress /etc/ssh/sshd_config
ListenAddress 192.168.1.5:22
#
# lsof -i -n -P
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 23952 root 3u IPv4 1718551 0t0 TCP 192.168.1.5:22 (LISTEN)
#
# ss -lp | grep -i ssh
0 128 192.168.1.5:ssh *:* users:(("sshd",23952,3))
#
# netstat -lp | grep -i ssh
tcp 0 0 a.lan:ssh *:* LISTEN 23952/sshd
# | (You might have to install the package ip on openwrt (v12 / attitude adjustment).) ifconfig/netstat etc. are considered deprecated, so you should use (as root) ss -nlput | grep sshd to show the TCP/UDP sockets on which a running program whose name contains the string sshd is listening. -n no port to name resolution -l only listening sockets -p show processes listening -u show udp sockets -t show tcp sockets Then you get a list like this one: tcp LISTEN 0 128 *:22 *:* users:(("sshd",3907,4))
tcp LISTEN 0 128 :::22 :::* users:(("sshd",3907,3))
tcp LISTEN 0 128 127.0.0.1:6010 *:* users:(("sshd",4818,9))
tcp LISTEN 0 128 ::1:6010 :::* users:(("sshd",4818,8)) The interesting thing is the 5th column, which shows a combination of IP address and port: *:22 listen on port 22 on every available IPv4 address :::22 listen on port 22 on every available IP address (I do not write IPv6, as IP is IPv6 per RFC 6540 ) 127.0.0.1:6010 listen on IPv4 address 127.0.0.1 (localhost/loopback) and port 6010 ::1:6010 listen on IP address ::1 (0:0:0:0:0:0:0:1 in full notation, also localhost/loopback) and port 6010 You then want to know which interfaces have an IPv4 address (to cover 1.) ip -4 a
# or "ip -4 address"
# or "ip -4 address show" or an IP address (to cover 2.) ip -6 a
# or "ip -6 address
# or "ip -6 address show (if you do not add the option for IP ( -6 ) or IPv4 ( -4 ) both are shown) You can also have an look that output and search for e.g. 127.0.0.1 or any other IP/IPv4-address # here a demo where i show all addresses of the device "lo" (loopback)
ip a show dev lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever The lines beginning with inet and inet6 show that these IPs are bound to this interface, you may have many of these lines per interface: he-ipv6: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
link/sit 192.0.2.1 peer 192.0.2.3
inet6 2001:db8:12::1/64 scope global
valid_lft forever preferred_lft forever
inet6 2001:db8::2/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::1111:1111/128 scope link
valid_lft forever preferred_lft forever and in a script: address="127.0.0.1"
for i in $(grep ':' /proc/net/dev | cut -d ':' -f 1 | tr -d ' ') ; do
if $(ip address show dev $i | grep -q "${address}") ; then
echo "${address} found on interface ${i}"
fi
done (replace "127.0.0.1") | {
"source": [
"https://unix.stackexchange.com/questions/54975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18526/"
]
} |
54,987 | How do I determine the version of a CentOS server without access to any graphical interface? I've tried several commands: # cat /proc/version
Linux version 2.6.18-128.el5 ([email protected])
(gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) …
# cat /etc/issue
Red Hat Enterprise Linux Server release 5.3 (Tikanga) but which one is correct: 4.1.2-4 from /proc/version or 5.3 from /etc/issue ? | In cases like CentOS the actual version is usually placed in /etc/*elease . cat /etc/*elease granted this file usually holds the version of the entire OS minus the kernel (since you can choose which to load).
This file will have the same information as /etc/issue but with CentOS instead of RedHat | {
"source": [
"https://unix.stackexchange.com/questions/54987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9938/"
]
} |
55,069 | I want to accumulate the line size of a number of files contained in a folder. I have written the following script: let a=0
let num=0
for i in folder/*
do
num=`cat $i | wc -l`
a=$a+$num
done
echo $a What I am getting at the end of the script is 123+234+432+... and not the result of the arithmetic operation of addition. | Your arithmetic evaluation syntax is wrong. Use any of the following (the first is extremely portable but slow, the second is POSIX and portable except to the Bourne shell and earlier versions of the Almquist shell, the last three require ksh , bash or zsh ): a=`expr "$a" + "$num"`
a=$(($a+$num))
((a=a+num))
let a=a+num
((a+=num)) Or you can just skip the entire for loop and just do: wc -l folder/* Or, if you only want the total: cat folder/* | wc -l Or with zsh and its mult_ios option: wc -l < folder/*
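Applied to the original script, a corrected loop might look like this (redirecting into wc -l avoids the useless cat and keeps the file name out of the output):
a=0
for i in folder/*
do
num=$(wc -l < "$i")
a=$((a + num))
done
echo "$a"
| {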
"source": [
"https://unix.stackexchange.com/questions/55069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3915/"
]
} |
55,076 | Playing with xmodmap I encountered a modifier key I hadn't heard of: Mode_switch . It seems to have something to do with inserting special characters. I assigned it to a key but it seems to have no effect. What is it for? Is it different from ISO_Level3_Shift (Alt Gr) ? | Mode_switch is the old-style (pre-XKB) name of the key that is called AltGr on many keyboard layouts. It is similar to Shift , in that when you press a key that corresponds to a character, you get a different character if Shift or AltGr is also pressed. Unlike Shift , Mode_switch is not a modifier in the X11 sense because it normally applies to characters, not to function keys, so applications only need to perform a character lookup to obtain the desired effect. ISO_Level3_Shift is the XKB version of this key. Generally speaking, XKB is a lot more complicated and can do some extra fancy stuff. XKB's mechanism is more general as it allows keyboard layouts to vary in which keys are influenced by which modifiers; it generalizes sticky ( CapsLock -style) and simultaneous-press ( Shift -style) modifiers and so on. | {
"source": [
"https://unix.stackexchange.com/questions/55076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7047/"
]
} |
55,106 | We have several user accounts that we create for automated tasks that require fine-grained permissions, such as file transfer across systems, monitoring, etc. How do we lock down these user accounts so that these "users" have no shell and are not able to login? We want to prevent the possibility that someone can SSH in as one of these user accounts. | You can use the usermod command to change a user's login shell. usermod -s /sbin/nologin myuser or usermod -s /usr/sbin/nologin myuser If your OS does not provide /sbin/nologin, you can set the shell to a NOOP command such as /bin/false: usermod -s /bin/false myuser | {
"source": [
"https://unix.stackexchange.com/questions/55106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26962/"
]
} |
55,203 | Is it possible to configure bash in such a way that on the first tab autocomplete it lists all possible files and on subsequent ones cycles through the choices? Both options are easy to do separately and I could bind them to different keys, but the above would be perfect, but I can't find anything about it on the net. | This seems close to what you want: bind "TAB:menu-complete"
bind "set show-all-if-ambiguous on" | {
"source": [
"https://unix.stackexchange.com/questions/55203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27023/"
]
} |
55,212 | I'd like to do some general disk io monitoring on a debian linux server. What are the tools I should know about that monitor disk io so I can see if a disk's performance is maxed out or spikes at certain time throughout the day? | For disk I/O trending there are a few options. My personal favorite is the sar command from sysstat . By default, it gives output like this: 09:25:01 AM CPU %user %nice %system %iowait %steal %idle
09:35:01 AM all 0.11 0.00 0.01 0.00 0.00 99.88
09:45:01 AM all 0.12 0.00 0.01 0.00 0.00 99.86
09:55:01 AM all 0.09 0.00 0.01 0.00 0.00 99.90
10:05:01 AM all 0.10 0.00 0.01 0.02 0.01 99.86
Average: all 0.19 0.00 0.02 0.00 0.01 99.78 The %iowait is the time spent waiting on I/O. Using the Debian package, you must enable the stat collector via the /etc/default/sysstat config file after package installation. To see current utilization broken out by device, you can use the iostat command, also from the sysstat package: $ iostat -x 1
Linux 3.5.2-x86_64-linode26 (linode) 11/08/2012 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.84 0.00 0.08 1.22 0.07 97.80
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
xvda 0.09 1.02 2.58 0.49 112.79 12.11 40.74 0.15 48.56 3.88 1.19
xvdb 1.39 0.43 4.03 1.82 43.33 18.43 10.56 0.66 112.73 1.93 1.13 Some other options that can show disk usage in trending graphs are munin and cacti .
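On Debian, turning the sar collector on is a one-line change in the file mentioned above (variable name as shipped by the sysstat package): # /etc/default/sysstat
ENABLED="true"
followed by restarting the sysstat service. | {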
"source": [
"https://unix.stackexchange.com/questions/55212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27029/"
]
} |
55,359 | I would like to get the multi pattern match with implicit AND between patterns, i.e. equivalent to running several greps in a sequence: grep pattern1 | grep pattern2 | ... So how do I convert it to something like: grep pattern1 & pattern2 & pattern3 I would like to use a single grep because I am building the arguments dynamically, so everything has to fit in one string. Using a filter is a system feature, not a grep feature, so it is not an argument here. Don't confuse this question with: grep "pattern1\|pattern2\|..." This is an OR multi pattern match. I am looking for an AND pattern match. | To find the lines that match each and every one of a list of patterns, agrep (the original one, now shipped with glimpse , not the unrelated one in the TRE regexp library ) can do it with this syntax: agrep 'pattern1;pattern2' With GNU grep , when built with PCRE support, you can do: grep -P '^(?=.*pattern1)(?=.*pattern2)' With ast grep : grep -X '.*pattern1.*&.*pattern2.*' (adding the .* s because <x>&<y> matches strings that match both <x> and <y> exactly ; a&b would never match, as there's no string that can be both a and b at the same time). If the patterns don't overlap, you may also be able to do: grep -e 'pattern1.*pattern2' -e 'pattern2.*pattern1' The best portable way is probably with awk as already mentioned: awk '/pattern1/ && /pattern2/' Or with sed : sed -e '/pattern1/!d' -e '/pattern2/!d' Or perl : perl -ne 'print if /pattern1/ && /pattern2/' Please beware that all those will have different regular expression syntaxes. | {
"source": [
"https://unix.stackexchange.com/questions/55359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5884/"
]
} |
55,392 | I have the following bash script: #!/bin/bash
upperlim=10
for i in {0..10}
do
echo $i
done
for i in {0..$upperlim}
do
echo $i
done The first for loop ( without the variable upperlim in the loop control) works fine, but the second for loop ( with the variable upperlim in the loop control) does not. Is there any way that I can modify the second for loop so that it works? Thanks for your time. | The reason for this is the order in which things occur in bash. Brace expansion occurs before variables are expanded. In order to accomplish your goal, you need to use C-style for loop: upperlim=10
for ((i=0; i<=upperlim; i++)); do
echo "$i"
done
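An alternative that keeps the list-style loop is command substitution with seq , which is expanded after the variable (standard on Linux systems): for i in $(seq 0 "$upperlim"); do
echo "$i"
done
Both loops count from 0 to 10 here. | {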
"source": [
"https://unix.stackexchange.com/questions/55392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
55,395 | I am a computational scientist, and I run a lot of lengthy calculations on Linux. Specifically, I run molecular dynamics (MD) simulations using the GROMACS package. These simulations can take days or weeks, running on (for example) 8 to 24 cores. I have access to several nodes of a cluster, which means that at any given time, I am running approximately 4 or 5 jobs (each on a different node, and each on 8-24 cores). The problem is that the simulations take a variable amount of time. I like to keep all nodes working on simulations around the clock, but to start a new simulation, I need to log in with a terminal and do some manual work. But I always forget how much time is left in a simulation, so I always end up constantly checking them. Is there any way that I can receive an e-mail when a Linux process finishes? Could there be a Linux program that does this? That way I would know when to log in with a terminal and prepare the next simulation. I am using Ubuntu Linux. Thanks for your time. | Yeah, there is a command: echo "Process done" | mail -s "Process done" [email protected] Where -s "text" is the subject, and the echo gives mail some text for the message body.
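To fire the notification exactly when a long job ends, chain the mail after it (the job name and address are placeholders): ./run_simulation && echo "Simulation finished" | mail -s "Simulation finished" [email protected]
Using ; instead of && sends the mail even if the job exits with an error. | {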
"source": [
"https://unix.stackexchange.com/questions/55395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
55,423 | I know I can change some fundamental settings of the Linux console, things like fonts, for instance, with dpkg-reconfigure console-setup . But I'd like to change things like blinkrate, color, and shape (I want my cursor to be a block, at all times). I've seen people accomplishing this. I just never had a chance to ask those people how to do that. I don't mean terminal emulator windows, I mean the Linux text console, you reach with Ctrl + Alt + F-key I'm using Linux Mint at the moment, which is a Debian derivate. I'd like to know how to do that in Fedora as well, though. Edit: I might be on to something I learned from this website , how to do the changes I need. But I'm not finished yet. I've settled on using echo -e "\e[?16;0;200c" for now, but I've got a problem: when running applications like vim or irssi , or attaching a screen session, the cursor reverts back to being a blinking gray underscore. And of course, it only works on this one tty all other text consoles are unaffected. So how can I make those changes permanent? How can I populate them to other consoles? | GitHub Gist: How to change cursor shape, color, and blinkrate of Linux Console I define the following cursor formatting settings in my .bashrc file (or /etc/bashrc ): ##############
# pretty prompt and font colors
##############
# alter the default colors to make them a bit prettier
echo -en "\e]P0000000" #black
echo -en "\e]P1D75F5F" #darkred
echo -en "\e]P287AF5F" #darkgreen
echo -en "\e]P3D7AF87" #brown
echo -en "\e]P48787AF" #darkblue
echo -en "\e]P5BD53A5" #darkmagenta
echo -en "\e]P65FAFAF" #darkcyan
echo -en "\e]P7E5E5E5" #lightgrey
echo -en "\e]P82B2B2B" #darkgrey
echo -en "\e]P9E33636" #red
echo -en "\e]PA98E34D" #green
echo -en "\e]PBFFD75F" #yellow
echo -en "\e]PC7373C9" #blue
echo -en "\e]PDD633B2" #magenta
echo -en "\e]PE44C9C9" #cyan
echo -en "\e]PFFFFFFF" #white
clear #for background artifacting
# set the default text color. this only works in tty (eg $TERM == "linux"), not pts (eg $TERM == "xterm")
setterm -background black -foreground green -store
# http://linuxgazette.net/137/anonymous.html
cursor_style_default=0 # hardware cursor (blinking)
cursor_style_invisible=1 # hardware cursor (blinking)
cursor_style_underscore=2 # hardware cursor (blinking)
cursor_style_lower_third=3 # hardware cursor (blinking)
cursor_style_lower_half=4 # hardware cursor (blinking)
cursor_style_two_thirds=5 # hardware cursor (blinking)
cursor_style_full_block_blinking=6 # hardware cursor (blinking)
cursor_style_full_block=16 # software cursor (non-blinking)
cursor_background_black=0 # same color 0-15 and 128-infinity
cursor_background_blue=16 # same color 16-31
cursor_background_green=32 # same color 32-47
cursor_background_cyan=48 # same color 48-63
cursor_background_red=64 # same color 64-79
cursor_background_magenta=80 # same color 80-95
cursor_background_yellow=96 # same color 96-111
cursor_background_white=112 # same color 112-127
cursor_foreground_default=0 # same color as the other terminal text
cursor_foreground_cyan=1
cursor_foreground_black=2
cursor_foreground_grey=3
cursor_foreground_lightyellow=4
cursor_foreground_white=5
cursor_foreground_lightred=6
cursor_foreground_magenta=7
cursor_foreground_green=8
cursor_foreground_darkgreen=9
cursor_foreground_darkblue=10
cursor_foreground_purple=11
cursor_foreground_yellow=12
cursor_foreground_white=13
cursor_foreground_red=14
cursor_foreground_pink=15
cursor_styles="\e[?${cursor_style_full_block};${cursor_foreground_black};${cursor_background_green}c" # only seems to work in tty
# http://www.bashguru.com/2010/01/shell-colors-colorizing-shell-scripts.html
prompt_foreground_black=30
prompt_foreground_red=31
prompt_foreground_green=32
prompt_foreground_yellow=33
prompt_foreground_blue=34
prompt_foreground_magenta=35
prompt_foreground_cyan=36
prompt_foreground_white=37
prompt_background_black=40
prompt_background_red=41
prompt_background_green=42
prompt_background_yellow=43
prompt_background_blue=44
prompt_background_magenta=45
prompt_background_cyan=46
prompt_background_white=47
prompt_chars_normal=0
prompt_chars_bold=1
prompt_chars_underlined=4 # doesn't seem to work in tty
prompt_chars_blinking=5 # doesn't seem to work in tty
prompt_chars_reverse=7
prompt_reset=0
#start_prompt_coloring="\e[${prompt_chars_bold};${prompt_foreground_black};${prompt_background_green}m"
start_prompt_styles="\e[${prompt_chars_bold}m" # just use default background and foreground colors
end_prompt_styles="\e[${prompt_reset}m"
PS1="${start_prompt_styles}[\u@\h \W] \$${end_prompt_styles}${cursor_styles} "
##############
# end pretty prompt and font colors
############## | {
"source": [
"https://unix.stackexchange.com/questions/55423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1290/"
]
} |
55,533 | I think I rather understand how file permissions work in Linux. However, I don't really understand why they are split into three levels and not into two. I'd like the following issues answered: Is this deliberate design or a patch? That is - was the owner/group permissions designed and created together with some rationale or did they come one after another to answer a need? Is there a scenario where the user/group/other scheme is useful but a group/other scheme will not suffice? Answers to the first should quote either textbooks or official discussion boards. Use cases I have considered are: private files - very easily obtainable by making a group per-user, something that is often done as is in many systems. allowing only the owner (e.g. system service) to write to a file, allowing only a certain group to read, and deny all other access - the problem with this example is that once the requirement is for a group to have write access, the user/group/other fails with that. The answer for both is using ACLs, and doesn't justify, IMHO, the existence of owner permissions. NB I have refined this question after having the question closed in superuser.com . EDIT corrected "but a group/owner scheme will not suffice" to "...group/other...". | History Originally, Unix only had permissions for the owning user, and for other users: there were no groups. See the documentation of Unix version 1, in particular chmod(1) . So backward compatibility, if nothing else, requires permissions for the owning user. Groups came later. ACLs allowing involving more than one group in the permissions of a file came much later. Expressive power Having three permissions for a file allows finer-grained permissions than having just two, at a very low cost (a lot lower than ACLs). For example, a file can have mode rw-r----- : writable only by the owning user, readable by a group. Another use case is setuid executables that are only executable by one group. For example, a program with mode rwsr-x--- owned by root:admin allows only users in the admin group to run that program as root. “There are permissions that this scheme cannot express” is a terrible argument against it. The applicable criterion is, are there enough common expressible cases that justify the cost? In this instance, the cost is minimal, especially given the other reasons for the user/group/other triptych. Simplicity Having one group per user has a small but not insignificant management overhead. It is good that the extremely common case of a private file does not depend on this. An application that creates a private file (e.g. an email delivery program) knows that all it needs to do is give the file the mode 600. It doesn't need to traverse the group database looking for the group that only contains the user — and what to do if there is no such group or more than one? Coming from another direction, suppose you see a file and you want to audit its permissions (i.e. check that they are what they should be). It's a lot easier when you can go “only accessible to the user, fine, next” than when you need to trace through group definitions. (Such complexity is the bane of systems that make heavy use of advanced features such as ACLs or capabilities.) Orthogonality Each process performs filesystem accesses as a particular user and a particular group (with more complicated rules on modern unices, which support supplementary groups). The user is used for a lot of things, including testing for root (uid 0) and signal delivery permission (user-based). 
There is a natural symmetry between distinguishing users and groups in process permissions and distinguishing users and groups in filesystem permissions.
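As a concrete illustration of the modes discussed above (file, program and group names are made up): chmod 640 report.txt # rw-r----- : owner writes, group reads, others get nothing
chown root:admin prog && chmod 4750 prog # rwsr-x--- : setuid root, executable only by group admin
Neither mode is expressible with only a group/other split. | {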
"source": [
"https://unix.stackexchange.com/questions/55533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27191/"
]
} |
55,546 | Possible Duplicate: Removing control chars (including console codes / colours) from script output I'm working on a script to work alongside a program that I'm writing. What i'm trying to do is assess the level of completion of a number of clients using another script. My script does pretty much exactly as I want, but there are some color constants that are inserted into my XML output, which mess things up when I'm parsing the XML into PHP later. long story short, I have a sed expression that I'm using which removes the first portion of the color constants, but it fails to remove the trailing escape sequence which looks something like ^[(B - thus my problem. Here is the sed sequence I'm using: sed -r 's/\x1B\[([0-9]{1,3}((;[0-9]{1,3})*)?)?[m|K]//g' I'm wondering if there is a separate sed sequence I could run afterwards to remove the trailing sequence. I've tried using sed 's/\^\[\\(B//' But it doesn't seem to remove anything.
I don't necessarily need an answer, if someone has a good guide on sed and color codes, that would also be super helpful. I've googled around, but the answers I've found just seem to remove the opening portion of the color. Thanks for your help. | ANSI escape codes start with ESC or character \033 Color codes (a subset of them) are \033[Xm where X is a semicolon-separated list of digits, possibly empty (this means a reset). m is a literal m. Since I keep forgetting these codes myself I "documented" them on https://github.com/seveas/hacks/blob/master/ansi.py . As for a sed expression: I'd go for s/\x1B\[[0-9;]*[JKmsu]//g
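The trailing ^[(B mentioned in the question is a different beast: ESC ( B is a charset-designation sequence (select US-ASCII for G0), so it needs its own pattern. A combined sketch: sed -e 's/\x1B\[[0-9;]*[JKmsu]//g' -e 's/\x1B([A-Za-z0-9]//g'
The second expression assumes a single designator character after \x1B( , which covers the common cases like ^[(B . | {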
"source": [
"https://unix.stackexchange.com/questions/55546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27197/"
]
} |
55,558 | I'm trying to set up a shell script so that it runs background processes, and when I Ctrl c the shell script, it kills the children, then exits. The best that I've managed to come up with is this. It appears that the kill 0 -INT also kills the script before the wait happens, so the shell script dies before the children complete. Any ideas on how I can make this shell script wait for the children to die after sending INT ? #!/bin/bash
trap 'killall' INT
killall() {
echo "**** Shutting down... ****"
kill 0 -INT
wait # Why doesn't this wait??
echo DONE
}
process1 &
process2 &
process3 &
cat # wait forever | Your kill command is backwards. Like many UNIX commands, options that start with a minus must come first, before other arguments. If you write kill -INT 0 it sees the -INT as an option, and sends SIGINT to 0 ( 0 is a special number meaning all processes in the current process group). But if you write kill 0 -INT it sees the 0 , decides there's no more options, so uses SIGTERM by default. And sends that to the current process group, the same as if you did kill -TERM 0 -INT (it would also try sending SIGTERM to -INT , which would cause a syntax error, but it sends SIGTERM to 0 first, and never gets that far.) So your main script is getting a SIGTERM before it gets to run the wait and echo DONE . Add trap 'echo got SIGTERM' TERM at the top, just after trap 'killall' INT and run it again to prove this. As Stephane Chazelas points out, your backgrounded children ( process1 , etc.) will ignore SIGINT by default. In any case, I think sending SIGTERM would make more sense. Finally, I'm not sure whether a kill to the process group is guaranteed to reach the children first. Ignoring signals while shutting down might be a good idea. So try this: #!/bin/bash
trap 'killall' INT
killall() {
trap '' INT TERM # ignore INT and TERM while shutting down
echo "**** Shutting down... ****" # added double quotes
kill -TERM 0 # fixed order, send TERM not INT
wait
echo DONE
}
./process1 &
./process2 &
./process3 &
cat # wait forever | {
"source": [
"https://unix.stackexchange.com/questions/55558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27205/"
]
} |
55,564 | I happen to know there is a slight difference between adduser and useradd . (i.e., adduser has additional features to useradd , such as creating a home directory.) Then what is the relation between addgroup and groupadd ? Is there a preferred way to create a group? | On most distributions adduser and addgroup are interactive 'convenience' wrappers around the commands useradd and groupadd . You can find addgroup using the command which addgroup ; on my machine (Ubuntu 11.04) this lives in /usr/sbin/addgroup . On my box addgroup is a perl script that prompts for various options (interactively) before invoking the groupadd command. groupadd is usually preferable for scripting (say, if you want to create users in batch), whereas addgroup is more user friendly (especially if you are unfamiliar with all the options and flags). Of course addgroup also takes many options on the command line when you invoke it, but it is primarily intended as an interactive script. Interestingly, on my box addgroup is a symlink to adduser ; the script checks the name it was invoked under and performs different actions accordingly.
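For scripted group creation everything goes on the groupadd command line (the GID here is arbitrary): groupadd -g 1500 developers
whereas the adduser / addgroup wrappers are tuned for interactive use. | {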
"source": [
"https://unix.stackexchange.com/questions/55564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23595/"
]
} |
55,636 | Is there a way to logically combine two shell commands that are invoked with find - exec ? For instance to print out all the .csv files that contain the string foo together with its occurrence I would like to do: find . -iname \*.csv -exec grep foo {} && echo {} \; but bash complains with "missing argument to '-exec' " | -exec is a predicate that runs a command (not a shell) and evaluates to true or false based on the outcome of the command (zero or non-zero exit status). So: find . -iname '*.csv' -exec grep foo {} \; -print would print the file path if grep finds foo in the file. Instead of -print you can use another -exec predicate or any other predicate find . -iname '*.csv' -exec grep foo {} \; -exec echo {} \; See also the ! and -o find operators for negation and or . Alternatively, you can start a shell as: find . -iname '*.csv' -exec sh -c '
grep foo "$1" && echo "$1"' sh {} \; Or to avoid having to start a shell for every file: find . -iname '*.csv' -exec sh -c '
for file do
grep foo "$file" && echo "$file"
done' sh {} + | {
"source": [
"https://unix.stackexchange.com/questions/55636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24044/"
]
} |
55,755 | Is there a simple way I can echo a file, skipping the first and last lines? I was looking at piping from head into tail , but for those it seems like I would have to know the total lines from the outset. I was also looking at split , but I don't see a way to do it with that either. | Just with sed , without any pipes: sed '1d;$d' file.txt NOTE 1 means first line, d means delete, ; separates the two commands, and $ means the last line. More readable: sed -e '1d' -e '$d' file.txt
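With GNU coreutils the same trimming works without sed (the negative head count is a GNU extension): tail -n +2 file.txt | head -n -1
tail -n +2 starts output at line 2 and head -n -1 stops one line before the end. | {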
"source": [
"https://unix.stackexchange.com/questions/55755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/394/"
]
} |
55,770 | On my machine I get the following output when I run these commands: $ echo foos > myfile
$ hexdump myfile
6f66 736f 000a The output from hexdump is little-endian. Does this mean that my machine is little-endian, or does hexdump always use little-endian format? | The traditional BSD hexdump utility uses the platform's endianness, so the output you see means your machine is little-endian. Use hexdump -C (or od -t x1 ) to get consistent byte-by-byte output irrespective of the platform's endianness.
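For the file above, the byte-by-byte view looks like this regardless of host endianness: $ hexdump -C myfile
00000000  66 6f 6f 73 0a                                    |foos.|
00000005
Each byte appears in file order: f=66, o=6f, s=73, and the trailing newline is 0a. | {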
"source": [
"https://unix.stackexchange.com/questions/55770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
55,777 | What is the difference between yum update and yum upgrade , and when should I use one over the other? | yum upgrade forces the removal of obsolete packages, while yum update may or may not also do this. The removal of obsolete packages can be risky, as it may remove packages that you use. This makes yum update the safer option. From man yum : update If run without any packages, update will update every currently installed package. If one or more packages or package globs are specified, Yum will only update the listed packages. While updating packages, yum will ensure that all dependencies are satisfied. (See Specifying package names for more information) If the packages or globs specified match to packages which are not currently installed then update will not install them. update operates on groups, files, provides and filelists just like the "install" command. If the main obsoletes configure option is true (default) or the --obsoletes flag is present yum will include package obsoletes in its calculations - this makes it better for distro-version changes, for example: upgrading from somelinux 8.0 to somelinux 9. upgrade Is the same as the update command with the --obsoletes flag set . See update for more details. | {
"source": [
"https://unix.stackexchange.com/questions/55777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9866/"
]
} |
55,779 | Given a linux TCP socket's inode (obtained via /proc/<pid>/fd ), is there a faster way to look up the information that I can get from /proc/net/tcp about this socket? I have written a troubleshooting tool which monitors processes and prints realtime information about IO operations ( strace -type info collected into higher level abstractions and presented in a less raw way), but on a heavily loaded network server, I am finding that the time it takes to look up socket info (e.g. the foreign address/port) is prohibitive simply due to the very large size of /proc/net/tcp (about 2MB on the server I'm currently looking at). I can manage this somewhat with caching, but this necessarily introduces latency and makes me wonder about the absurdity of an "API" that requires reading and parsing 2MB worth of ASCII text in order to find info on a socket. | The kernel exposes the same information over netlink via NETLINK_INET_DIAG (generalized as sock_diag in later kernels), which is what ss from iproute2 uses instead of parsing /proc/net/tcp . Queries can be filtered on the kernel side, and each returned record carries the socket's inode number in its idiag_inode field, so matching an inode from /proc/<pid>/fd does not require reading the whole table; the ss source and the inet_diag kernel code document the message format. | {
"source": [
"https://unix.stackexchange.com/questions/55779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27298/"
]
} |
55,791 | I've run into a bit of a puzzle and haven't had much luck finding a solution. Right now I am (sadly) connected to the net via Verizon 3G. They filter all incoming traffic so it is impossible for me to open ports to accept connections. I currently have a Linux virtual machine at linode.com, and the thought crossed my mind to install pptpd and attempt to do some iptables port forwarding. I have pptpd installed and my home machine connects happily. That said, here's some general info: Server (Debian) WAN IP: x.x.x.x on eth0 - pptpd IP: y.y.y.1 on ppp0 - Client VPN IP: y.y.y.100 To verify I wasn't going insane, I attempted some connections from the server to the open ports on the client, and the client does accept the connections via the VPN IP. What I want to accomplish is this: Internet -> WAN IP:Port -> Forward to Client VPN IP:Port So for instance, if I had port 6000 open on my client, a person could telnet in to x.x.x.x:6000, and the server would catch that and forward it to 192.168.3.100:6000. I have tried at least 20 different Googled up iptables configs and none have worked yet. Does anyone have any ideas, or perhaps even a totally different approach I might not be aware of? The goal here is to listen through a horribly firewalled connection, preferably both TCP and UDP traffic. | You need to do three things on your VPN server (the Linode) to make this work: You must enable IP forwarding: sysctl -w net.ipv4.ip_forward=1 Set up destination NAT (DNAT) to forward the port. You've probably already figured this out because it's standard port forwarding stuff, but for completeness: iptables -t nat -A PREROUTING -d x.x.x.x -p tcp --dport 6000 -j DNAT --to-dest y.y.y.100:6000 Set up source NAT (SNAT) so that from your VPN client's perspective, the connection is coming from the VPN server: iptables -t nat -A POSTROUTING -d y.y.y.100 -p tcp --dport 6000 -j SNAT --to-source y.y.y.1 The reason you need the SNAT is because otherwise your VPN client will send its return packets straight to the host which initiated the connection (z.z.z.z) via its default gateway (i.e. Verizon 3G), and not via the VPN. Thus the source IP address on the return packets will be your Verizon 3G address, and not x.x.x.x. This causes all sorts of problems, since z.z.z.z really initiated the connection to x.x.x.x. In most port forwarding setups, the SNAT is not needed because the host performing the port forwarding is also the default gateway for the destination host (e.g. a home router). Also note that if you want to forward port 6000 to a different port (say 7000), then the SNAT rule should match on 7000, not 6000. | {
"source": [
"https://unix.stackexchange.com/questions/55791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27300/"
]
} |
55,913 | What's the easiest way to find an unused local port? Currently I'm using something similar to this: port=$RANDOM
quit=0
while [ "$quit" -ne 1 ]; do
netstat -a | grep $port >> /dev/null
if [ $? -gt 0 ]; then
quit=1
else
port=`expr $port + 1`
fi
done It feels awfully roundabout, so I'm wondering if there's a simpler path such as a builtin that I've missed. | If your application supports it, you can try passing port 0 to the application. If your application passes this to the kernel, the port will be dynamically allocated at request time, and is guaranteed not to be in use (allocation will fail if all ports are already in use). Otherwise, you can do this manually. The script in your question has a race condition; the only way to avoid it is to atomically check if a port is free by trying to open it. If the port is in use, the program should quit with a failure to open the port. For example, say you're trying to listen with GNU netcat. #!/bin/bash
read lower_port upper_port < /proc/sys/net/ipv4/ip_local_port_range
while :; do
for (( port = lower_port ; port <= upper_port ; port++ )); do
nc -l -p "$port" 2>/dev/null && break 2
done
done | {
"source": [
"https://unix.stackexchange.com/questions/55913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19636/"
]
} |
56,004 | Every time update-grub is run all hard drives are scanned. Each drives that is in standby state will spin up to go idle. This is a waste of energy. We use update-grub version 1.98: # update-grub -v
grub-mkconfig (GRUB) 1.98+20100804-14+squeeze1 Regression There is a GRUB_DISABLE_OS_PROBER=true option in the /etc/default/grub file. But that seems to only work from version 2 and up. At least it doesn't stop scanning all drives in our version 1.98. There is a /etc/grub.d/20_linux_xen script that might be run as a part of update-grub. After removing execute rights for all users with chmod a-x /etc/grub.d/20_linux_xen all drives do still spin up. How to stop update-grub from scanning each and every hard drive? | In file /etc/grub.d/30_os-prober the line OSPROBED="`os-prober | tr ' ' '^' | paste -s -d ' '`" makes all drives spin (standby -> idle). Os-prober is a utility to find Linux installations on drives other than your boot drive. It is the os-prober that needs to be disabled. One way is to remove the package: apt-get --purge remove os-prober . Another way is to remove executable rights for os-prober . First find the location of os-prober using $ which os-prober . Output might look like: /usr/bin/os-prober . Then remove the executable rights for all users for that file: # chmod a-x /usr/bin/os-prober Another way is to remove executable rights for 30_os-prober . Find the location of 30_os-prober using $ locate /30_os-prober . Output might look like: /etc/grub.d/30_os-prober . Then remove the executable rights for all users for that file: # chmod a-x /etc/grub.d/30_os-prober Yet another way is to skip the execution of /etc/grub.d/30_os-prober . For example by making the GRUB_DISABLE_OS_PROBER=true option work in our grub version 1.98. This can be done by inserting in file /etc/grub.d/30_os-prober the code below the line set -e : ... if [ "x${GRUB_DISABLE_OS_PROBER}" = "xtrue" ]; then
exit 0
fi | {
"source": [
"https://unix.stackexchange.com/questions/56004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17560/"
]
} |
56,051 | In bash I often use for-loops such as the following for file in *.type; do
somecommand "$file";
done; to perform an operation for all files matching *.type . If no file with this ending is found in the working directory the asterisk is not expanded and usually I will get an error message saying that somecommand didn't find the file. I can immediately think of several ways to avoid this error. But adding a conditional does not seem to be very elegant. Is there a short and clean way to achieve this? | Yes, run the following command: shopt -s nullglob It will nullify the match and no error will be triggered. If you want this behaviour by default, add the command to your ~/.bashrc . If you want to detect a null glob in POSIX shell, try for i in *.txt; do
[ "$i" = '*.txt' ] && [ ! -e '*.txt' ] && continue
done See http://mywiki.wooledge.org/NullGlob | {
"source": [
"https://unix.stackexchange.com/questions/56051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18047/"
]
} |
56,055 | I'm trying to setup a ftp account for a user that has read/write access to one folder: /items/myuser I've set up the account with username 'myuser' in Linux with the adduser command. After that, I've changed my /etc/vsftpd.conf file to: chroot_local_user=NO
chroot_list_enable=YES
user_config_dir=/etc/vsftpd_user_conf I've also edited the /etc/vsftpd.chroot_list and added 'myuser' to this list. After that, I've edited /etc/vsftpd_user_conf/myuser and added the following line: local_root = /items/myuser After that, I've created this local_root folder and ran the following commands: chown myuser:myuser /items/myuser
chmod ug-w /items/myuser
mkdir /items/myuser/homefolder
mount --bind /items/myuser /items/myuser/homefolder I can login and I'm restricted to this folder and its subfolders but when I want to write a file, I'm getting a 553 error. Does anyone know what I've forgotten to do? Thanks in advance! | A 553 reply usually means vsftpd itself is refusing the upload rather than the filesystem. First check that write_enable=YES is set, either in /etc/vsftpd.conf or in your per-user file (a user_config_dir file can override the global setting); uploads are disabled by default. Also remember that you removed write permission from /items/myuser yourself with chmod ug-w , so even with uploads enabled the user can only write inside a writable subdirectory such as the bind-mounted homefolder . | {
"source": [
"https://unix.stackexchange.com/questions/56055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27454/"
]
} |
56,083 | I am trying to write bash shell script in Ubuntu 11.10 Linux distro, that will get executed automatically on logging into the system. But I am not able to figure out that what to write in script that by it will get automatically executed on logging in. | If you want it to be global, modify /etc/profile or add a script to /etc/profile.d If you want it to be user-specific, modify ~/.profile | {
"source": [
"https://unix.stackexchange.com/questions/56083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20231/"
]
} |
56,084 | I have a symbolic link to a file in one directory. I would like to have that same link in another directory. How do I copy a symbolic link? I tried to cp the symbolic link but this copies the file it points to instead of the symbolic link itself. | Use cp -P (capital P) to never traverse any symbolic link and copy the symbolic link instead. This can be combined with other options such as -R to copy a directory hierarchy — cp -RL traverses all symbolic links to directories, cp -RP copies all symbolic links as such. cp -R might do one or the other depending on the unix variants; GNU cp (as found on CentOS) defaults to -P . Even with -P , you can copy the target of a symbolic link to a directory on the command line by adding a / at the end: cp -RP foo/ bar copies the directory tree that foo points to. GNU cp has a convenient -a option that combines -R , -P , -p and a little more. It makes an exact copy of the source (as far as possible), preserving the directory hierarchy, symbolic links, permissions, modification times and other metadata. | {
"source": [
"https://unix.stackexchange.com/questions/56084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6250/"
]
} |
56,093 | I want to do some simple computation of the number of lines per minute added to a log file. I also want to store the count for each second. What I need is the output of the following command as a list which will be updated every second: watch -n1 'wc -l my.log' How can I output the 'update' of the 'watch' command as a list? | You can use the -t switch to watch which causes it not to print header. However, that will still clear the screen so you might be better off with a simple shell loop: while sleep 1; do
wc -l my.log
done One of the advantages is, that you can easily add other commands (e.g. date ) and/or pipe the output through sed to reformat it. By the way, if you swap sleep 1 with wc in the loop, it will automatically terminate on errors. | {
"source": [
"https://unix.stackexchange.com/questions/56093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27485/"
]
} |
56,123 | I use this cat foo.txt | sed '/bar/d' to remove lines containing the string bar in the file. I would like however to remove those lines and the line directly after it . Preferably in sed , awk or other tool that's available in MinGW32. It's a kind of reverse of what I can get in grep with -A and -B to print matching lines as well as lines before/after the matched line. Is there any easy way to achieve it? | If you have GNU sed (so non-embedded Linux or Cygwin): sed '/bar/,+1 d' If you have bar on two consecutive lines, this will delete the second line without analyzing it. For example, if you have a 3-line file bar / bar / foo , the foo line will stay. | {
"source": [
"https://unix.stackexchange.com/questions/56123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10745/"
]
} |
56,131 | Dealing with a subversion repo and a new user that didn't quite understand how it worked. Long story short, since their local structure was messed up due to copying random .svn folders about, I did the following: copied the local structure to a folder called staging recursively deleted all .svn folders from the staging directory checked out the repo to a "clean folder" Now we're at the last step — getting the staging folder contents to overwrite the clean contents. I need to have a command copy the contents of the staging directory to the clean directory, removing everything that is only in the clean directory, BUT leaving the clean folder's .svn folders in tact. This sounds like a job for rsync. Would the following command be correct? rsync -avr --exclude=.svn* [staging] [clean] | The -C option for rsync may be what you want. Don't let the short description in the man page fool you though. It would seem like the option only applies to CVS, but depending on your rsync version it will skip the files of almost every common version control system in existence. # rsync version: 3.0.7
-C, --cvs-exclude auto-ignore files in the same way CVS does
...
The exclude list is initialized to exclude the following items
(these initial items are marked as perishable — see the FILTER
RULES section):
RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
*.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
*.Z *.elc *.ln core .svn/ .git/ .bzr/
then, files listed in a $HOME/.cvsignore are added to the list
and any files listed in the CVSIGNORE environment variable (all
cvsignore names are delimited by whitespace). | {
"source": [
"https://unix.stackexchange.com/questions/56131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2517/"
]
} |
56,197 | I've noticed that ls -l doesn't only change the formatting of the output, but also how directory symlinks are handled: > ls /rmn
biweekly.sh daily.sh logs ...
> ls -l /rmn
lrwxrwxrwx 1 root root 18 Feb 11 2011 /rmn -> /root/maintenance/ I'd like to get a detailed listing of what's in /rmn , not information about the /rmn symlink. One work-around I can think of is to create a shell function that does something like this: cd /rmn
ls -l
cd - But that seems too hacky, especially since it messes up the next use of cd - . Is there a better way? (I'm on CentOS 2.6.9) | See if your ls has the options: -H, --dereference-command-line
follow symbolic links listed on the command line
--dereference-command-line-symlink-to-dir
follow each command line symbolic link that points to a directory If those don't help, you can make your macro work without messing up cd - by doing: (cd /rmn ; ls -l) which runs in a subshell. | {
"source": [
"https://unix.stackexchange.com/questions/56197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27537/"
]
} |
56,199 | I'm trying to run a minecraft server on linux. Running the server starts an important interactive session. I can run the server in the background by appending & at the end of the command and log off the server. But then I don't know how to get back to that interactive session when I log back in. I know about screen , but it seems like there should be a better way of running processes in the background and being able to go into them later. | See if your ls has the options: -H, --dereference-command-line
follow symbolic links listed on the command line
--dereference-command-line-symlink-to-dir
follow each command line symbolic link that points to a directory If those don't help, you can make your macro work without messing up cd - by doing: (cd /rmn ; ls -l) which runs in a subshell. | {
"source": [
"https://unix.stackexchange.com/questions/56199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27785/"
]
} |
56,294 | My employer wants me and the team of developers to communicate with them using gotomeeting.com service . Is it possible to use GoToMeeting in Debian? I know that officially GoToMeeting supports only Mac and Win. The reason: I am very happy with software development under Linux and don't want to migrate to Windows just because of one or two programs. | Use the HTML5 version, which runs fine under Chrome, no need to use phone for voice. Open it at: https://app.gotomeeting.com Or if you want to directly open the meeting using the ID: https://app.gotomeeting.com/index.html?meetingid=<id> Create Shortcut You can create a Web App in Chrome, clicking in the sandwich icon, then Install GoToMeeting . If this entry is not available, you can use More Tools > Create Shortcut . | {
"source": [
"https://unix.stackexchange.com/questions/56294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7288/"
]
} |
56,316 | I just did my first install of any Linux OS, and I accidentally selected "Desktop GUI" in the install, but I want to build everything myself. Is there any way by which I can remove the GUI environment without re-installing OS? | Debian uses tasksel for installing software for a specific system. The command gives you some information: > tasksel --list-tasks
i desktop Graphical desktop environment
u web-server Web server
u print-server Print server
u dns-server DNS server
u file-server File server
u mail-server Mail server
u database-server SQL database
u ssh-server SSH server
u laptop Laptop
u manual manual package selection The command above lists all tasks known to tasksel . The line desktop should print an i in front. If that is the case you can have a look at all packages which this task usually installs: > tasksel --task-packages desktop
twm
eject
openoffice.org
xserver-xorg-video-all
cups-client
… On my system the command outputs 36 packages. You can uninstall them with the following command: > apt-get purge $(tasksel --task-packages desktop) This takes the list of packages (output of tasksel ) and feeds it into the purge command of apt-get . Now apt-get tells you what it wants to uninstall from the system. If you confirm it everything will be purged from your system. | {
"source": [
"https://unix.stackexchange.com/questions/56316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27626/"
]
} |
56,340 | Background : I need to receive an alert when my server is down. When the server is down, maybe the Sysload collector will not be able to send any alert. To receive an alert when the server is down, I have an external source (server) to detect it. Question : Is there any way (i prefer bash script) to detect when my server is down or offline and sends an alert message (Email + SMS)? | If you have a separate server to run your check script on, something like this would do a simple Ping test to see if the server is alive: #!/bin/bash
SERVERIP=192.168.2.3
[email protected]
ping -c 3 $SERVERIP > /dev/null 2>&1
if [ $? -ne 0 ]
then
# Use your favorite mailer here:
mailx -s "Server $SERVERIP is down" -t "$NOTIFYEMAIL" < /dev/null
fi You can cron the script to run periodically. If you don't have mailx, you'll have to replace that line with whatever command line email program you have and probably change the options. If your carrier provides an SMS email address, you can send the email to that address. For example, with AT&T, if you send an email to phonenumber @txt.att.net, it will send the email to your phone. Here's a list of email to SMS gateways: http://en.wikipedia.org/wiki/List_of_SMS_gateways If your server is a publicly accessible webserver, there are some free services to monitor your website and alert you if it's down, search the web for free website monitoring to find some. | {
"source": [
"https://unix.stackexchange.com/questions/56340",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23069/"
]
} |
56,343 | I want to do something like this in Bash: how to format the path in a zsh prompt? But everything I try results in the PWD being fixed to the first directory I start my terminal in. Strangely I've also got a function in my PS1 to put the current git branch in the prompt and that always updates so I'm confused as to why the PWD gets stuck. My current prompt is here incidentally. I tried replacing \w with $(pwd|grep --color=always /) but that just gets stuck. I also tried doing it using a bash string replacement but that doesn't work either. ${PWD////$bldred/$bldblu} ($bldred and $bldblu are defined in my prompt.sh). | If you have a separate server to run your check script on, something like this would do a simple Ping test to see if the server is alive: #!/bin/bash
SERVERIP=192.168.2.3
[email protected]
ping -c 3 $SERVERIP > /dev/null 2>&1
if [ $? -ne 0 ]
then
# Use your favorite mailer here:
mailx -s "Server $SERVERIP is down" -t "$NOTIFYEMAIL" < /dev/null
fi You can cron the script to run periodically. If you don't have mailx, you'll have to replace that line with whatever command line email program you have and probably change the options. If your carrier provides an SMS email address, you can send the email to that address. For example, with AT&T, if you send an email to phonenumber @txt.att.net, it will send the email to your phone. Here's a list of email to SMS gateways: http://en.wikipedia.org/wiki/List_of_SMS_gateways If your server is a publicly accessible webserver, there are some free services to monitor your website and alert you if it's down, search the web for free website monitoring to find some. | {
"source": [
"https://unix.stackexchange.com/questions/56343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27010/"
]
} |
56,421 | I have a directory with plenty of .txt.gz files (where the names do not follow a specific pattern.) What is the simplest way to gunzip them? I want to preserve their original names, so that they go from whatevz.txt.gz to whatevz.txt | How about just this? $ gunzip *.txt.gz gunzip will create a gunzipped file without the .gz suffix and remove the original file by default (see below for details). *.txt.gz will be expanded by your shell to all the files matching. This last bit can get you into trouble if it expands to a very long list of files. In that case, try using find and -exec to do the job for you. From the man page gzip(1) : gunzip takes a list of files on its command line and replaces each file
whose name ends with .gz, -gz, .z, -z, or _z (ignoring case) and which
begins with the correct magic number with an uncompressed file without the
original extension. Note about 'original name' gzip can store and restore the filename used at compression time. Even if you rename the compressed file, you can be surprised to find out it restores to the original name again. From the gzip manpage: By default, gzip keeps the original file name and timestamp in the compressed
file. These are used when decompressing the file with the -N option. This is
useful when the compressed file name was truncated or when the time stamp was
not preserved after a file transfer. And these file names stored in metadata can also be viewed with file : $ echo "foo" > myfile_orig
$ gzip myfile_orig
$ mv myfile_orig.gz myfile_new.gz
$ file myfile_new.gz
myfile_new.gz: gzip compressed data, was "myfile_orig", last modified: Mon Aug 5 08:46:39 2019, from Unix
$ gunzip myfile_new.gz # gunzip without -N
$ ls myfile_*
myfile_new
$ rm myfile_*
$ echo "foo" > myfile_orig
$ gzip myfile_orig
$ mv myfile_orig.gz myfile_new.gz
# gunzip with -N
$ gunzip -N myfile_new.gz # gunzip with -N
$ ls myfile_*
myfile_orig | {
"source": [
"https://unix.stackexchange.com/questions/56421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26674/"
]
} |
56,429 | Input file1 is: dog 123 4335
cat 13123 23424
deer 2131 213132
bear 2313 21313 I give the match the pattern from in other file ( like dog 123 4335 from file2). I match the pattern of the line is dog 123 4335 and after printing
all lines without match line my output is: cat 13123 23424
deer 2131 213132
bear 2313 21313 If only use without address of line only use the pattern, for example 1s how to match and print the lines? | In practice, I'd probably use Aet3miirah's answer most of the time, and alexey's answer is wonderful for navigating through the lines (also, it works with less ). OTOH, I really like another approach (which is kind of the reversed Gilles' answer ): sed -n '/dog 123 4335/,$p' When called with the -n flag, sed does not print by default the lines it processes anymore. Then we use a 2-address form that says to apply a command from the line matching /dog 123 4335/ until the end of the file (represented by $ ). The command in question is p , which prints the current line. So, this means "print all lines from the one matching /dog 123 4335/ until the end." | {
"source": [
"https://unix.stackexchange.com/questions/56429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24337/"
]
} |
56,444 | If I run export TEST=foo
echo $TEST It outputs foo. If I run TEST=foo echo $TEST It does not. How can I get this functionality without using export or a script? | This is because the shell expands the variable in the command line before it actually runs the command and at that time the variable doesn't exist. If you use TEST=foo; echo $TEST it will work. export will make the variable appear in the environment of subsequently executed commands (for on how this works in bash see help export ). If you only need the variable to appear in the environment of one command, use what you have tried, i.e.: TEST=foo your-application The shell syntax describes this as being functionally equivalent to: export TEST=foo
your-application
unset TEST See the specification for details. Interesting part is, that the export command switches the export flag for the variable name . Thus if you do: unset TEST
export TEST
TEST="foo" TEST will be exported even though it was not defined at the time when it was exported. However further unset should remove the export attribute from it. | {
"source": [
"https://unix.stackexchange.com/questions/56444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1138/"
]
} |
56,453 | My machine is a server so I want to ignore connections being made to my server (e.g. when someone visits my website). I want to see only connections/requests being made by my server to other places. How do I see only those outgoing connections? EDIT: I'm new to these type of things. What I'm trying to do is just see if anything from my server is being sent out other than data for my web apps. For example, if someone visits my websites, then obviously my server will send out data to the client's browser. But suppose there's also code somewhere in my web app's framework that sends statistical data to somewhere else I'm not aware of. I'd like to see those places my server is sending data to, if any. It's probably not likely, but suppose you decide to use a php or nodejs framework that you didn't write: there's a small chance it may send some type of data somewhere. If so, that's what I'd like to see. | Use netstat . For example $ netstat -nputw
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
[...]
tcp 0 0 192.168.25.222:22 192.168.0.134:42903 ESTABLISHED 32663/sshd: gert [p lists all UDP ( u ), TCP ( t ) and RAW ( w ) outgoing connections (not using l or a ) in a numeric form ( n , prevents possible long-running DNS queries) and includes the program ( p ) associated with that. Consider adding the c option to get output being updated continuously. | {
"source": [
"https://unix.stackexchange.com/questions/56453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5536/"
]
} |
56,481 | Possible Duplicate: Is it possible to install the linux kernel alone? I had watched the documentary Revolution OS and there is a basic operating System by GNU and the kernel by Linux. Then there come distributions which are modified versions of the Linux operating system. I want the Operating System which is the default Linux operating system and not any distribution. I have tried to look at the Linux website but there is information about distributions only. Is the default Linux OS not available for users? | Linux by itself is not very useful because there are no applications: it is purely a kernel. In fact, when the kernel finishes booting, the first thing it does is launch an application called init . If that application isn't there, you get a big error message, and you can't do anything with it*. Distributions are so named because they distribute the Linux kernel along with a set of applications. Likewise, the GNU utilities by themselves are not useful without a kernel. You could put them on a storage medium and turn on a computer, but there is nothing there to run those programs. Also, even if there were something that started init , init and all the other programs rely on the kernel for services. For instance, the first thing that the program that is usually called init does is open a file /etc/inittab ; to open that file, it calles a function open() ; that function is provided by the kernel. Now, you can build a distribution that has no (or few) GNU applications. See Alpine Linux for an example. This is why I do not call Linux GNU/Linux; when I say Linux, I am not referring to the subset of Linux systems that have GNU utilities. *Technically, there are some things you can do with just the kernel. | {
"source": [
"https://unix.stackexchange.com/questions/56481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27726/"
]
} |
56,485 | I have an existing CentOS 6 installation working wonderfully and its owner now wants to add a separate Debian installation onto the same machine. He only accesses this host via the command line and via a NoMachine VNC-type GUI connection, since it's located in a data center and thus out of reach. My questions: Can I create a "hot partition" on the existing CentOS 6 installation without destroying what we have now? If I can safely add a new partition, is there any tool or method the user could use to choose which OS it will boot into? Something like a boot picker. Remember he does not have physical access to the machine. I'm coming from a Mac/Windows background and thus don't know the answers.
Suggestions or ideas are welcome. | Linux by itself is not very useful because there are no applications: it is purely a kernel. In fact, when the kernel finishes booting, the first thing it does is launch an application called init . If that application isn't there, you get a big error message, and you can't do anything with it*. Distributions are so named because they distribute the Linux kernel along with a set of applications. Likewise, the GNU utilities by themselves are not useful without a kernel. You could put them on a storage medium and turn on a computer, but there is nothing there to run those programs. Also, even if there were something that started init , init and all the other programs rely on the kernel for services. For instance, the first thing that the program that is usually called init does is open a file /etc/inittab ; to open that file, it calles a function open() ; that function is provided by the kernel. Now, you can build a distribution that has no (or few) GNU applications. See Alpine Linux for an example. This is why I do not call Linux GNU/Linux; when I say Linux, I am not referring to the subset of Linux systems that have GNU utilities. *Technically, there are some things you can do with just the kernel. | {
"source": [
"https://unix.stackexchange.com/questions/56485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27730/"
]
} |
56,495 | What are the practical differences from a sysadmin point of view when deploying services on a unix based system? | The traditional way of daemonizing is: fork()
setsid()
close(0) /* then reopen /dev/null as fds 0, 1 and 2 */
close(1)
close(2)
fork() This ensures that the process is no longer in the same process group as the terminal and thus won't be killed together with it. The IO redirection is to make output not appear on the terminal.
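A rough shell-level sketch of the same idea, for comparison (myservice is a hypothetical placeholder; setsid(8) here is the util-linux utility that wraps the setsid() call above):
# run the service in a new session, detached from the controlling terminal,
# with stdin/stdout/stderr redirected to /dev/null
setsid myservice < /dev/null > /dev/null 2>&1 &
This is only an approximation; a proper daemon performs the double fork() sequence shown above in C. | {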
"source": [
"https://unix.stackexchange.com/questions/56495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27659/"
]
} |
56,523 | I have set up two monitors in my system. One is powered by the HDMI port and the other one is powered by the normal analogue port of the same GPU (Nvidia Ge-force 210). I just set up twin display in Nvidia settings but can't see Cinnamon's panel on the second monitor. How can I fix this? | In Cinnamon 2.6 and later you can have additional panels on any monitor without installing additional software. Just right-click the panel, click on Modify panel ... and then on Add panel . The top and bottom edges of all monitors should get highlighted and a new panel will be set up where you click. It works perfectly. Here is a github post from when it was merged. Moreover, if you add a panel to a second monitor and add a Window list applet to it, it will only show you the windows on that monitor. It made me very happy! In addition, if you want the windows to align towards the left instead of the right, you can drag and drop the Window list to the left side of the panel (red colored). | {
"source": [
"https://unix.stackexchange.com/questions/56523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27743/"
]
} |
56,524 | If I buy a piece of x86 32-bit or 64-bit software but I don't receive the source code, and I need to modify the software, I'll need to convert the machine code back into a high level language or at least assembly code. Is there a good utility to go from machine code to C? I assume that it would attempt to identify whether the program was compiled with a C compiler as opposed to C++ or Objective C or anything else. Thanks. | Getting assembly is the easy part: any disassembler will produce it, e.g. objdump -d ./program for an ELF binary, or IDA for interactive work. Going all the way back to C is much harder, and no tool will give you the original source: variable names, comments, macros and most type information are simply gone after compilation. What decompilers produce is functionally equivalent, C-like pseudocode. The best-known one is the commercial Hex-Rays decompiler (a plugin for IDA Pro); free alternatives such as Boomerang exist but produce considerably rougher output. Decompilers generally do not need to be told which high-level language the binary was compiled from, since they reconstruct C-like code from the machine code itself, although C++ binaries (name mangling, vtables, exceptions) are noticeably harder to make sense of. Finally, check the software's licence: many explicitly forbid reverse engineering and modification. | {
"source": [
"https://unix.stackexchange.com/questions/56524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27744/"
]
} |
56,531 | In the old days I just modified /etc/inittab . Now, with systemd, it seems to start tty[1-6] automatically, how should I disable tty[4-6]? Looks like there's only one systemd service file, and it uses a %I to discern different tty sessions. I hope I don't need to remove that service, and create each getty@tty1.service manually. | There is no real need to disable "extra" TTYs as under systemd gettys are generated on demand: see man systemd-getty-generator for details. Note that, by default, this automatic spawning is done for the VTs up to VT6 only (to mimic traditional Linux systems). As Lennart says in a blog post 1 : In order to make things more efficient login prompts are now started on demand only. As you switch to the VTs the getty service is instantiated to getty@tty2.service, getty@tty5.service and so on. Since we don't have to unconditionally start the getty processes anymore this allows us to save a bit of resources, and makes start-up a bit faster. If you do wish to configure a specific number of gettys, you can, just modify logind.conf ( /etc/systemd/logind.conf ) with the appropriate entry, in this example 3: NAutoVTs=3 1. In fact the entire series of posts (currently numbering 18), systemd for Administrators , is well worth reading.
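You can watch the on-demand behaviour for yourself (a quick check on any systemd machine):
# before visiting tty3, its getty unit is inactive
systemctl status getty@tty3.service
# press Ctrl-Alt-F3 to switch there, switch back, and check again: now it is active
Only the consoles you actually visit get a getty process. | {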
"source": [
"https://unix.stackexchange.com/questions/56531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
56,545 | I want to write a Makefile to find specific files and run a Python script on each file. The Python script accepts sys.stdin input. find $(W)/$(OVE) -name "*.xml" -print | \
while read x ; do \
cat $x | /opt/exp/bin/python2.7 process_results.py > $(W)/$(OVE)/$(dirname $x)_$(basename $x).xml \
done The output is a $(dirname $x)_$(basename $x).xml file, i.e. an empty _.xml file. When I run this command on the command line it works properly, but in the Makefile it doesn't. Can you help me figure out what is wrong with this command? | Make expands every $ in a recipe before the shell ever sees it, which is exactly what produces the empty _.xml : make replaces $(dirname $x) and $(basename $x) with nothing (they are not defined make variables or functions) and $x with the value of the (undefined) make variable x . Every $ that is meant for the shell must be written as $$ , while $(W) and $(OVE) should keep their single $ because those are make variables you do want expanded. You also need a ; before done , since the backslash continuations join the whole recipe into a single shell command line. A corrected version of the recipe (same logic, with the shell's dollars doubled):
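find $(W)/$(OVE) -name "*.xml" -print | \
while read x ; do \
cat "$$x" | /opt/exp/bin/python2.7 process_results.py > $(W)/$(OVE)/$$(dirname "$$x")_$$(basename "$$x").xml ; \
done
This keeps your original naming scheme; note that dirname output can itself contain slashes, so the corresponding directories under $(W)/$(OVE) must exist. | {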
"source": [
"https://unix.stackexchange.com/questions/56545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27751/"
]
} |
56,549 | I have Linux Mint 13, 64-bit version with Cinnamon, an Nvidia GeForce 320M, proprietary driver 304.54 if I'm not mistaken. I installed bumblebee following this guide: sudo add-apt-repository ppa:bumblebee/stable
sudo apt-get update
sudo apt-get install bumblebee bumblebee-nvidia linux-headers-generic Rebooted, and now I have 640x480 resolution, half-broken gnome 3 was loaded (there are no buttons to log out or reboot, no animations of course). When I try to run nvidia-settings I get: You do not appear to be using the NVIDIA X driver.
Please edit your X configuration file (just run `nvidia-xconfig` as root),
and restart the X server. What shall I do to fix this? I want my discrete graphics card and Cinnamon back. | Bumblebee is only meant for NVIDIA Optimus laptops, where an Intel GPU drives the display and the NVIDIA chip is enabled on demand. On a machine where the NVIDIA card is the one actually driving the screen it just gets in the way: X comes up without the NVIDIA driver in an unaccelerated fallback mode (hence the 640x480 resolution and the "not using the NVIDIA X driver" message), and without 3D acceleration Mint drops you from Cinnamon into the broken-looking GNOME fallback session. The way back is to remove Bumblebee and reinstall the proprietary driver:
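sudo apt-get purge bumblebee bumblebee-nvidia
sudo apt-get install --reinstall nvidia-current nvidia-settings
sudo nvidia-xconfig
(These are the usual package names on Mint 13/Ubuntu 12.04; adjust them if you had installed a different driver package.) Then reboot. If X still starts in low resolution, delete the old /etc/X11/xorg.conf , run sudo nvidia-xconfig again to generate a fresh one, and reboot once more. | {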
"source": [
"https://unix.stackexchange.com/questions/56549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27755/"
]
} |
56,625 | I have files a and b and I would like to output lines of b that changed since it was cloned from a . Just the modified lines, no surrounding context, no diff offset marks. How can I do that using shell scripting? (No Python/Perl/PHP/...) Sed and awk are acceptable solutions. For now, what I am doing is diff -y with --suppress-common-lines and sed using regex backreferences to just fetch the right part after the whitespace. There must be a better way? Using perl (which is forbidden), I'd do something like this: diff -y --suppress-common-lines -W $COLUMNS Eclipse_Preferences_Export_*.epf | perl -pe 's/.*\t|\t(.*)$/\1/g' | With GNU diffutils package's diff this will output only lines from file b which either were modified or newly inserted: diff --unchanged-line-format= --old-line-format= --new-line-format='%L' a b | {
"source": [
"https://unix.stackexchange.com/questions/56625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10283/"
]
} |
56,655 | Is there any difference between these two? [[ $a == z* ]] and [ $a == z* ] Can I have an example where they would have different outputs? Furthermore, how does the working of [[ ]] differ from [ ] ? | The difference between [[ … ]] and [ … ] is mostly covered in Why does parameter expansion with spaces without quotes work inside double brackets "[[" but not inside single brackets "["? .
Crucially, [[ … ]] is special syntax, whereas [ is a funny-looking name for a command. [[ … ]] has special syntax rules for what's inside, [ … ] doesn't. With the added wrinkle of a wildcard, here's how [[ $a == z* ]] is evaluated: Parse the command: this is the [[ … ]] conditional construct around the conditional expression $a == z* . Parse the conditional expression: this is the == binary operator, with the operands $a and z* . Expand the first operand into the value of the variable a . Evaluate the == operator: test if the value of the variable a matches the pattern z* . Evaluate the conditional expression: its result is the result of the conditional operator. The command is now evaluated, its status is 0 if the conditional expression was true and 1 if it was false. Here's how [ $a == z* ] is evaluated: Parse the command: this is the [ command with the arguments formed by evaluating the words $a , == , z* , ] . Expand $a into the value of the variable a . Perform word splitting and filename generation on the parameters of the command. For example, if the value of a is the 6-character string foo b* (obtained by e.g. a='foo b*' ) and the list of files in the current directory is ( bar , baz , qux , zim , zum ), then the result of the expansion is the following list of words: [ , foo , bar , baz , == , zim , zum , ] . Run the command [ with the parameters obtained in the previous step. With the example values above, the [ command complains of a syntax error and returns the status 2. Note: In [[ $a == z* ]] , at step 3, the value of a does not undergo word splitting and filename generation, because it's in a context where a single word is expected (the left-hand argument of the conditional operator == ). In most cases, if a single word makes sense at that position then variable expansion behaves like it does in double quotes. However, there's an exception to that rule: in [[ abc == $a ]] , if the value of a contains wildcards, then abc is matched against the wildcard pattern. For example, if the value of a is a* then [[ abc == $a ]] is true (because the wildcard * coming from the unquoted expansion of $a matches bc ) whereas [[ abc == "$a" ]] is false (because the ordinary character * coming from the quoted expansion of $a does not match bc ). Inside [[ … ]] , double quotes do not make a difference, except on the right-hand side of the string matching operators ( = , == , != and =~ ).
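A quick way to see the difference in a live shell (a sketch; the exact error from [ depends on which files happen to match in the current directory):
$ a='foo b*'
$ [[ $a == z* ]] && echo match || echo no match
no match
$ [ $a == z* ] && echo match || echo no match
bash: [: too many arguments
no match
With [[ … ]] the variable expands to a single word and the test is well defined; with [ … ] the unquoted expansion has already become several arguments by the time [ runs. | {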
"source": [
"https://unix.stackexchange.com/questions/56655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
56,717 | I bungled the commands and wrote sh -man Now I've entered a program called sh-3.2 that is seemingly impossible to exit. Ctrl c , Ctrl z , or Ctrl x does not work. exit , quit , q , : q also does not work. All google answers are for exiting shell scripts programmatically. | Ctrl + D does the trick for me. It is the -n flag ( sh -man is parsed as the options -m , -a and -n , not as a request for the manual) that introduces this behaviour: with -n the shell only reads commands and checks their syntax without executing them, which is why exit appears to do nothing. End-of-input ( Ctrl + D ) still terminates the shell. | {
"source": [
"https://unix.stackexchange.com/questions/56717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26674/"
]
} |
56,734 | I'm trying to write a one-liner that can probe the output of df -h and alert when one of the partitions is out [or almost out] of space. It's the part using xargs that's kicking me in the ass now... echo 95 | xargs -n1 -I{} [ {} -ge 95 ] && echo "No Space on disk {}% full -- remove old backups please" How can I make the second {} show "95" too? | That && is not part of the xargs command, it's a completely separate invocation. I think you'll want to execute a shell explicitly: echo 95 | xargs -I_percent sh -c '[ "$1" -ge 95 ] && echo "No Space on disk $1% full -- remove old backups please"' sh _percent Note also that I'm using _percent instead of {} to avoid extra quoting headaches with the shell. It's not a shell variable; still just an xargs replacement string.
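Tying it back to df , a sketch of the full pipeline (df -P gives stable POSIX output; the threshold and message are just examples):
df -P | awk 'NR > 1 { sub(/%/, "", $5); print $5 }' | \
xargs -I_percent sh -c '[ "$1" -ge 95 ] && echo "No space on disk: $1% full -- remove old backups please"' sh _percent
awk strips the % sign so that the shell's -ge test receives a plain number, one value per mounted filesystem. | {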
"source": [
"https://unix.stackexchange.com/questions/56734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21688/"
]
} |
56,765 | I'm trying to create a user without a password like this: sudo adduser \
--system \
--shell /bin/bash \
--gecos ‘User for managing of git version control’ \
--group \
--disabled-password \
--home /home/git \
git It's created fine. But when I try to log in as the git user I'm prompted for a password: su git
Password:... When I leave it empty I get an error: su: Authentication failed What's wrong? | You've created a user with a “disabled password”, meaning that there is no password that will let you log in as this user. This is different from creating a user that anyone can log in as without supplying a password, which is achieved by specifying an empty password and is very rarely useful. In order to execute commands as such “system” users who don't log in normally, you need to hop via the root account: su -c 'su git -c "git init"' or sudo -u git git init If you want certain users to be able to run commands as the git user without letting them run commands as root, set up sudo (run visudo as root and add a line like %gitters ALL = (git) ALL ). | {
"source": [
"https://unix.stackexchange.com/questions/56765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27850/"
]
} |
56,810 | I would like to add text to the end of a filename but before the extension. Right now I am trying for f in *.shp; do echo $f_poly; done and the output is,
Quercus_agrifolia.shp_poly
Quercus_corrugata.shp_poly
Quercus_cortesii.shp_poly
Quercus_costaricensis.shp_poly
Quercus_havardii.shp_poly
Quercus_hemisphaerica.shp_poly
Quercus_kelloggii.shp_poly
Quercus_knoblochii.shp_poly
Quercus_laceyi.shp_poly I want it to be, Quercus_acutifolia_poly.shp
Quercus_agrifolia_poly.shp
Quercus_corrugata_poly.shp
Quercus_cortesii_poly.shp
Quercus_costaricensis_poly.shp
Quercus_havardii_poly.shp
Quercus_hemisphaerica_poly.shp
Quercus_kelloggii_poly.shp
Quercus_knoblochii_poly.shp
Quercus_laceyi_poly.shp | Using standard POSIX parameter expansion : for f in *.shp; do printf '%s\n' "${f%.shp}_poly.shp"; done
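If the goal is to actually rename the files rather than just print the new names (an assumption about your intent), the same expansion works with mv ; the -- guards against filenames starting with a dash:
for f in *.shp; do mv -- "$f" "${f%.shp}_poly.shp"; done
Running the printf version first makes a handy dry run. | {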
"source": [
"https://unix.stackexchange.com/questions/56810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27799/"
]
} |
56,837 | For Bash versions prior to "GNU bash, Version 4.2" are there any equivalent alternatives for the -v option of the test command? For example: shopt -os nounset
test -v foobar && echo foo || echo bar
# Output: bar
foobar=
test -v foobar && echo foo || echo bar
# Output: foo | Portable to all POSIX shells: if [ -n "${foobar+1}" ]; then
echo "foobar is defined"
else
echo "foobar is not defined"
fi Make that ${foobar:+1} if you want to treat foobar the same way whether it is empty or not defined. You can also use ${foobar-} to get an empty string when foobar is undefined and the value of foobar otherwise (or put any other default value after the - ). In ksh, if foobar is declared but not defined, as in typeset -a foobar , then ${foobar+1} expands to the empty string. Zsh doesn't have variables that are declared but not set: typeset -a foobar creates an empty array. In bash, arrays behave in a different and surprising way. ${a+1} only expands to 1 if a is a non-empty array, e.g. typeset -a a; echo ${a+1} # prints nothing
e=(); echo ${e+1} # prints nothing!
f=(''); echo ${f+1} # prints 1 The same principle applies to associative arrays: array variables are treated as defined if they have a non-empty set of indices. A different, bash-specific way of testing whether a variable of any type has been defined is to check whether it's listed in ${!PREFIX*} . This reports empty arrays as defined, unlike ${foobar+1} , but reports declared-but-unassigned variables ( unset foobar; typeset -a foobar ) as undefined. case " ${!foobar*} " in
*" foobar "*) echo "foobar is defined";;
*) echo "foobar is not defined";;
esac This is equivalent to testing the return value of typeset -p foobar or declare -p foobar , except that typeset -p foobar fails on declared-but-unassigned variables. In bash, like in ksh, set -o nounset; typeset -a foobar; echo $foobar triggers an error in the attempt to expand the undefined variable foobar . Unlike in ksh, set -o nounset; foobar=(); echo $foobar (or echo "${foobar[@]}" ) also triggers an error. Note that in all situations described here, ${foobar+1} expands to the empty string if and only if $foobar would cause an error under set -o nounset . | {
"source": [
"https://unix.stackexchange.com/questions/56837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27412/"
]
} |
56,941 | I know what it does, but I don't know why . What attack(s) does it prevent? Is it relevant for all kinds of authentication methods? (hostbased, password, publickey, keyboard-interactive ...) | The UseDNS option is mostly useless. If the client machines are out there on the Internet, there is a high chance that they don't have any reverse DNS, their reverse DNS doesn't resolve forward, or their DNS doesn't provide any information other than “belongs to this ISP” which the IP address already tells you. In typical configurations, DNS is only used for logging. It can be used for authentication, but only if IgnoreRhosts no is specified in sshd_config . This is for compatibility with old installations that used rsh, where you can say “the user called bob on the machine called darkstar may log in as alice without showing any credentials” (by writing darkstar bob in ~alice/.rhosts ). It is only secure if you trust all the machines that may possibly be connecting to the ssh server. In other words, this is very very rarely usable in a secure way. Given that the DNS lookup doesn't provide any useful information except in very peculiar circumstances, it should be turned off. As far as I can tell, the only reason it's on by default is that it's technically more secure (if you're concerned about authentication, not availability), even though that only applies to a tiny set of circumstances. Another argument for turning off this feature is that every superfluous feature is an unnecessary security risk .
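Turning it off is a one-line change (the path and reload command below are the usual Debian-style ones; adjust for your distribution):
# in /etc/ssh/sshd_config
UseDNS no
Then reload sshd, e.g. with service ssh reload , for the change to take effect. | {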
"source": [
"https://unix.stackexchange.com/questions/56941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5528/"
]
} |
57,013 | Is there a way to zip all files in a given directory with the zip command? I've heard of using *.* , but I want it to work for extensionless files, too. | You can just use * ; there is no need for *.* . File extensions are not special on Unix. * matches zero or more characters—including a dot. So it matches foo.png , because that's zero or more characters (seven, to be exact). Note that * by default doesn't match files beginning with a dot (neither does *.* ). This is often what you want. If not, in bash, if you shopt -s dotglob it will (but will still exclude . and .. ). Other shells have different ways (or none at all) of including dotfiles. Alternatively, zip also has a -r (recursive) option to do entire directory trees at once (and not have to worry about the dotfile problem): zip -r myfiles.zip mydir where mydir is the directory containing your files. Note that the produced zip will contain the directory structure as well as the files. As peterph points out in his comment, this is usually seen as a good thing: extracting the zip will neatly store all the extracted files in one subdirectory. You can also tell zip to not store the paths with the -j / --junk-paths option. The zip command comes with documentation telling you about all of its (many) options; type man zip to see that documentation. This isn't unique to zip; you can get documentation for most commands this way.
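A sketch of the dotglob variant mentioned above (bash-specific):
shopt -s dotglob      # make * match dotfiles as well
zip myfiles.zip *
shopt -u dotglob      # restore the default
With zip -r mydir this is unnecessary: recursion picks up dotfiles on its own, since it is the shell glob, not zip, that skips them. | {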
"source": [
"https://unix.stackexchange.com/questions/57013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19064/"
]
} |
57,124 | I have a variable whose value is found using an SQL query. I want to remove the newline character from that variable since I want to concatenate this variable with another. Below is the code: dt=`sqlplus -s user/pwd@servicename <<EOF
set feedback off;
set head off;
select replace(to_char((sysdate-7),'YYYYMonDD')||'_'||to_char((sysdate-1),'YYYYMonDD'),chr(10), '') from dual;
exit;
EOF`
echo "test $dt" | If you are using bash , you can use Parameter Expansion: dt=${dt//$'\n'/} # Remove all newlines.
dt=${dt%$'\n'} # Remove a trailing newline. The following should work in /bin/sh as well: dt="${dt%
}" # Remove a trailing newline. | {
"source": [
"https://unix.stackexchange.com/questions/57124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28036/"
]
} |
57,138 | On my local machine, I run: ssh -X [email protected] (For completeness, I have also tested all of the following using -Y with identical results). As expected, this accesses remotemachine.com fine, and all appears well. If I then attempt to run xcalc however, I get: connect /tmp/.X11-unix/X0: No such file or directory
Error: Can't open display: localhost:10.0 But, $ ls -la /tmp/.X11-unix/
total 36
drwxrwxrwt 2 root root 4096 2012-11-23 09:29 .
drwxrwxrwt 8 root root 32768 2012-11-29 08:22 ..
srwxrwxrwx 1 root root 0 2012-11-23 09:29 X0 So not only does /tmp/.X11-unix/X0 exist, it has universal r/w/x permissions! I've previously used x-forwarding without issue, though not in some time... uname -a on the server for reference: Linux machinename 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux Been searching around on the web for a couple hours now without success. Other mentions of the same problem, but no solutions. | If you have an X server running and the DISPLAY environment variable is set to :0 , that tells applications to connect to the X server using a unix domain socket which is generally to be found on Linux in /tmp/.X11-unix/X0 (though see below about the abstract namespace on recent Linux). When you ssh to machine remotemachine , sshd on remotemachine sets DISPLAY to localhost:10 (for instance), which this time means that X connections are to be done over TCP to port 6010 of machine localhost. sshd on remotemachine listens for connections there and forwards any incoming connection to the ssh client. The ssh client then tries to connect to /tmp/.X11-unix/X0 (on the local end, not the remote) to contact your X server. Now, maybe you don't have an X server running (are you on Mac?) or maybe the unix domain socket is not to be found in /tmp/.X11-unix which would mean ssh hasn't been configured properly at compile time. To figure out what the proper path is for the unix socket, you could try a strace -e connect xlogo (or the equivalent on your system) on your local machine to see what a normal X application does. netstat -x | grep X may also give a clue. For the record, on a Linux Debian wheezy machine here, Xorg listens on both /tmp/.X11-unix/X0 in the filesystem and /tmp/.X11-unix/X0 on the abstract namespace (generally written @/tmp/.X11-unix/X0 ). From strace , X11 applications seem to now use that abstract namespace by default, which explains why those still work if /tmp/.X11-unix is removed, while ssh doesn't use that abstract namespace. | {
"source": [
"https://unix.stackexchange.com/questions/57138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11231/"
]
} |
57,222 | I'd like to use the Unix column command to format some text. I have fields delimited by tabs, but within each field there are also spaces. column delimits on white space (tabs and spaces). How can I make column only use tabs as the delimiter? I was trying to specify tab as the delimiter using: cat myfile | column -t -s"\t" | column -t -s '\t' would separate columns on \ and t characters. column -s \t is the same as column -s 't' , as the backslash is interpreted as a quoting operator by the shell. Here you want to pass a real TAB character to column. With ksh93, zsh, bash, mksh, busybox sh or FreeBSD sh: column -ts $'\t' Or enter a real tab character by typing Ctrl-V Tab at the shell prompt (within quotes or preceded by a backslash as the tab character is a token separator in the shell syntax just like space), or use "$(printf '\t')" (those double quotes are needed to disable the split+glob operator as the tab character also happens to be in the default value of $IFS ).
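A quick illustration (the input below has one tab between the two fields; the spaces inside fields survive, and the exact padding varies by column implementation):
$ printf 'one\ttwo three\nfour five\tsix\n' | column -ts $'\t'
one        two three
four five  six
Each tab becomes a column boundary, while the embedded spaces are left alone. | {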
"source": [
"https://unix.stackexchange.com/questions/57222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26274/"
]
} |
57,305 | Is it possible to compare two directories with rsync and only print the differences? There's a dry-run option, but when I increase verbosity to a certain level, every file compared is shown. ls -alR and diff is no option here, since there are hardlinks in the source making every line different. (Of course, I could delete this column with perl.) | You will probably have to run something like rsync -avun --delete in both directions. But what are you actually trying to accomplish? Update : rsync -avun --delete $TARGET $SOURCE | grep "^deleting " will give you a list of files that do not exist in the target directory. We grep for "deleting" because each such line starts with deleting ..file.. rsync -avun $SOURCE $TARGET will give you a list of "different" files (including new files). | {
"source": [
"https://unix.stackexchange.com/questions/57305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18250/"
]
} |
57,309 | I have a small home router running OpenWrt (a kind of embedded Linux for routers). It has five Ethernet ports, one labeled WAN and four labeled LAN 1 to 4. It has the following network interfaces defined as per ifconfig : root@TIBERIUS: ~ > ifconfig | grep Link
br-lan Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
eth0 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
lan1 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
lan2 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
lan3 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
lan4 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
lo Link encap:Local Loopback
pppoe-wan Link encap:Point-to-Point Protocol
wan Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
wlan0 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0 As you can see, quite a number of devices, but only one MAC address. I understand some of those devices are virtual. Let's put aside lo and pppoe-wan , that's the loopback device and my PPPoE connection. But for the rest of those, how am I supposed to be able to tell whether they are physical or virtual? I understand there is a naming convention for labeling virtual interfaces like eth0.1 , but that is obviously not adhered to here. Let's see the output of ifconfig for two of these interfaces: root@TIBERIUS: ~ > ifconfig wan
wan Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15007 errors:0 dropped:0 overruns:0 frame:0
TX packets:12055 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13341276 (12.7 MiB) TX bytes:1831757 (1.7 MiB)
root@TIBERIUS: ~ > ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:23:CD:20:C3:B0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25799 errors:0 dropped:0 overruns:23 frame:0
TX packets:25294 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15481996 (14.7 MiB) TX bytes:15160380 (14.4 MiB)
Interrupt:4 Apart from the obscure detail of txqueuelen having a non-zero value for eth0, the only striking difference is that eth0 has an Interrupt entry, which as far as I know is a hardware feature. So is that how you tell whether a network interface is physical or not, by looking for an Interrupt entry in ifconfig ? Or is there a better way? A simple and straightforward way to find out whether a network device is physical or virtual? Note there is a related question but while it does have an accepted answer, it isn't conclusive. Update In reply to derobert's answer, here's information derived from ls -l /sys/class/net :
eth0 -> ../../devices/platform/ag71xx.0/net/eth0
lan1 -> ../../devices/platform/dsa.0/net/lan1
lan2 -> ../../devices/platform/dsa.0/net/lan2
lan3 -> ../../devices/platform/dsa.0/net/lan3
lan4 -> ../../devices/platform/dsa.0/net/lan4
lo -> ../../devices/virtual/net/lo
pppoe-wan -> ../../devices/virtual/net/pppoe-wan
wan -> ../../devices/platform/dsa.0/net/wan [Addendum to this list: wlan0 would have shown up as well as wlan0 -> ../../devices/platform/ath9k/net/wlan0 , but when I copied the above list I had WLAN disabled, which is why it didn't show up.] I would say eth0 is the only device. Not clear what dsa.0 is. And in reply to Bryan Agee's answer: root@TIBERIUS: ~ > cat /etc/config/network
config interface 'loopback'
option ifname 'lo'
option proto 'static'
option ipaddr '127.0.0.1'
option netmask '255.0.0.0'
config interface 'eth'
option ifname 'eth0'
option proto 'none'
config interface 'lan'
option ifname 'lan1 lan2 lan3 lan4'
option type 'bridge'
option proto 'static'
option ipaddr '192.168.33.1'
option netmask '255.255.255.0'
config interface 'wan'
option ifname 'wan'
option proto 'pppoe'
option username '…'
option password '…' | You can check /sys : anthony@Zia:/sys/class/net$ ls -l /sys/class/net/
total 0
lrwxrwxrwx 1 root root 0 Dec 11 15:38 br0 -> ../../devices/virtual/net/br0
lrwxrwxrwx 1 root root 0 Dec 11 15:38 lan -> ../../devices/pci0000:00/0000:00:1e.0/0000:07:01.0/net/lan
lrwxrwxrwx 1 root root 0 Dec 11 15:38 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx 1 root root 0 Dec 11 15:38 tun0 -> ../../devices/virtual/net/tun0 So, actual devices show in /sys/class/net. Note that aliases (like lan:0) do not (so you can tell which are aliases). And you can clearly see which are actual hardware (lan) and which aren't (br0, lo, tun0). To clarify: you can tell which are real in the above because the virtual ones are all in virtual. And lan is on the PCI bus. In your case, you have six: eth0, wan, and lan1–4. This is rather odd, since you say you only have five ports total. I'd guess eth0 is actually hardwired to a switch-ish chip, and the other 5 ports are ports on that switch. wlan0 is probably real as well (it would be the wireless adapter), though it isn't showing in /sys…. So, I'd say that for all practical purposes, your real ports are wan, lan1–4, and wlan0. br-lan is a bridge set up to make all 4 lan ports function as a switch (so you might be able to split that switch).
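A small sketch to list only hardware-backed interfaces (physical devices expose a device symlink under /sys/class/net/<if>/ , virtual ones do not):
for i in /sys/class/net/*; do
    [ -e "$i/device" ] && echo "${i##*/}"
done
On the router above this should print the platform-backed interfaces (eth0, the dsa.0 switch ports, wlan0) and skip br-lan, lo and pppoe-wan. | {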
"source": [
"https://unix.stackexchange.com/questions/57309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27995/"
]
} |
57,311 | I am used to four workspaces in Gnome, but now that I have Cinnamon installed I only have two. Can I increase the number? If so, how? | What Cinnamon version do you use? As far as I know, their latest version can do this seamlessly. In my Linux Mint 14 I can just use Ctrl Alt Up to show all workspaces, and then click + button on the right edge of the screen to add new workspace. You may want to check cinnamon 1.6 release page . Clem already explained how to do this over there. | {
"source": [
"https://unix.stackexchange.com/questions/57311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28135/"
]
} |
57,439 | Is there any way to prevent mc from taking 10-30 seconds to open? | mc/subshell integration is a frequent culprit; to verify, try: alias mc="mc --nosubshell" | {
"source": [
"https://unix.stackexchange.com/questions/57439",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28212/"
]
} |
57,538 | Is there an easy way to do something like tail -f mylogfile but to have the changes of more than one file displayed (maybe with the file name added as prefix to each line)? Or maybe a GUI tool? I am running Debian. | Have you tried tail -f file1 file2 ? It appears to do exactly what you want, at least on my FreeBSD machine. Perhaps the tail that comes with a Debian system can do it too?
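It does: GNU tail, which Debian ships, accepts several files and labels each chunk of output with a ==> filename <== header (a header per block rather than a prefix per line). For a split view with one region per file there is also the multitail package:
tail -f /var/log/syslog /var/log/auth.log     # headers mark which file each block came from
multitail /var/log/syslog /var/log/auth.log   # apt-get install multitail
Both are available in the standard Debian repositories. | {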
"source": [
"https://unix.stackexchange.com/questions/57538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28267/"
]
} |