source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
157,022 | Nginx does not follow symbolic links. I get a 404 error. In my directory, I have this link: lrwxrwxrwx 1 root root 48 Sep 23 08:52 modules -> /path/to/dir/ but the files stored in /path/to/dir aren't found. | I inserted disable_symlinks off; into my nginx.conf and that resolved it; it works fine now! http { disable_symlinks off; } | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/157022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64116/"
]
} |
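A minimal sketch of applying and verifying that fix; the web-root path below is a placeholder, and on many setups the cleaner alternative is to make sure the symlink target is readable by the nginx worker user rather than disabling the check:

```sh
# Hypothetical paths; adjust to your own server block.
ls -ld /var/www/html/modules /path/to/dir    # the target must be readable by the nginx user
sudo nginx -t                                # validate nginx.conf after adding "disable_symlinks off;"
sudo nginx -s reload                         # apply the change without a full restart
```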
157,047 | In a script I'd like to purge a mercurial repository but be able to retain a number of (configurable) file patterns which I read from $FILENAME. The syntax for the hg command is hg purge --all --exclude PATTERN1 --exclude PATTERN2 ... Thus if $FILENAME contains a list of file patterns (one pattern per line), each pattern must be prepended by an "--exclude " in order to construct the command line My current approach is to use for construction of the argument list grep -v -E "^[[:blank:]]*$" $FILENAME | sed "s/^/--exclude /g" | xargs echo which also will skip empty lines and those which only contain tabs or spaces which would result in an error if used to construct the above command line. Thus in total: hg purge --all `grep -v -E "^[[:blank:]]*$" $FILENAME | sed "s/^/--exclude /g" | xargs echo` Is there a nicer way, maybe with some xargs arguments which I'm unaware of? | Seems there is even a shorthand way in mercurial itself, making use of file lists (suggested by mg in #mercurial): hg purge --all --exclude "listfile:$FILENAME" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85105/"
]
} |
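Both variants from the exchange above, side by side, as a hedged sketch; exclude-patterns.txt stands in for $FILENAME:

```sh
FILENAME=exclude-patterns.txt   # one glob pattern per line; blank lines are skipped below

# Long form: turn every non-blank line into an "--exclude PATTERN" pair
hg purge --all $(grep -vE '^[[:blank:]]*$' "$FILENAME" | sed 's/^/--exclude /')

# Shorthand: let Mercurial read the pattern list itself
hg purge --all --exclude "listfile:$FILENAME"
```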
157,059 | I'm getting the following errors every 10-30 seconds on a virtual Red Hat Enterprise Linux 6.5 2 Server on Amazons EC2 . Sep 23 09:57:05 ServerName init: ttyS0 (/dev/ttyS0) main process (1612) terminated with status 1Sep 23 09:57:05 ServerName init: ttyS0 (/dev/ttyS0) main process ended, respawningSep 23 09:57:05 ServerName agetty[1613]: /dev/ttyS0: tcgetattr: Input/output error Does anyone know what is causing this and how I could go about fixing it? Thanks. | A virtual Red Hat installation probably doesn't have any serial ports connected (which is what /dev/ttyS0 is: COM1 in DOS parlance), so trying to start agetty to listen to the serial port is doomed to fail. Find the line in /etc/inittab that contains agetty and ttyS0 and change respawn to off . EDIT: In case the system is using upstart, as in redhat 6, do stop ttyS0 to stop the service now, and do echo manual | sudo tee /etc/init/ttyS0.override to prevent starting the service after a reboot according to https://askubuntu.com/a/468250/146273 For documentation purposes, you might also consider doing: sudo tee -a /etc/init/ttyS0.conf <<EOF# Disabled. See https://unix.stackexchange.com/a/157489/9745EOF Further reading: http://upstart.ubuntu.com/cookbook/#disabling-a-job-from-automatically-starting | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83673/"
]
} |
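The upstart-specific steps from the answer collected into one runnable sequence (RHEL 6 with upstart assumed; on inittab-based systems edit /etc/inittab instead):

```sh
sudo stop ttyS0                                   # stop the respawning getty right away
echo manual | sudo tee /etc/init/ttyS0.override   # keep it from starting again after a reboot
sudo tail -f /var/log/messages                    # confirm the respawn messages have stopped
```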
157,060 | After system packages are updated with "yum update", config files which could not be overwritten are not replaced, but we can find *.rpmnew files near by. By design system administrator must merge config files. In Gentoo Linux there is a etc-update tool , which allow to merge config file changes interactively, like that: Beginning of differences between /etc/pear.conf and /etc/._cfg0000_pear.conf[...]End of differences between /etc/pear.conf and /etc/._cfg0000_pear.conf1) Replace original with update2) Delete update, keeping original as is3) Interactively merge original with update4) Show differences again I wonder if there is a way to merge configs interactively in RHEL/Fedora/CentOS? | This yum plugin adds the "--merge-conf" command line option. With this option, Yum will ask you what to do with config files which have changed on updating a package. https://apps.fedoraproject.org/packages/yum-plugin-merge-conf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16253/"
]
} |
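A hedged sketch of using the plugin mentioned above on RHEL/CentOS/Fedora; the package name may differ slightly between releases:

```sh
sudo yum install yum-plugin-merge-conf   # provides the --merge-conf option
sudo yum update --merge-conf             # prompts interactively for each changed config file

# Without the plugin, locate leftover config copies and merge them by hand:
find /etc -name '*.rpmnew' -o -name '*.rpmsave'
```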
157,092 | I'd like to know the equivalent of cat inputfile | sed 's/\(.\)/\1\n/g' | sort | uniq -c presented in https://stackoverflow.com/questions/4174113/how-to-gather-characters-usage-statistics-in-text-file-using-unix-commands for production of character usage statistics in text files for binary files counting simple bytes instead of characters, i.e. output should be in the form of 18383 5712543 4411555 127 8393 0 It doesn't matter if the command takes as long as the referenced one for characters. If I apply the command for characters to binary files the output contains statistics for arbitrary long sequences of unprintable characters (I don't seek explanation for that). | With GNU od : od -vtu1 -An -w1 my.file | sort -n | uniq -c Or more efficiently with perl (also outputs a count (0) for bytes that don't occur): perl -ne 'BEGIN{$/ = \4096}; $c[$_]++ for unpack("C*"); END{for ($i=0;$i<256;$i++) { printf "%3d: %d\n", $i, $c[$i]}}' my.file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63502/"
]
} |
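A quick way to sanity-check the GNU od pipeline from the answer on a tiny file whose byte counts are known in advance:

```sh
printf 'abcaa\0' > sample.bin                     # bytes: 97 98 99 97 97 0
od -vtu1 -An -w1 sample.bin | sort -n | uniq -c
#   1   0
#   3  97
#   1  98
#   1  99
```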
157,097 | I am trying to configure OpenWrt on my device and got out of space. I was downloading some tooling packages. Now how can I determine their weights so that decide what to uninstall? Is it possible to display the size of installed packages with OPKG? | Not every OpenWrt environment ist set up the same way, so my answer is a shot in the dark... The example output is taken from OpenWrt-12.09 on a "TP-Link TL-WDR4300". ssh into your router. Check your filesytsems. root@AP9:~# dfFilesystem 1K-blocks Used Available Use% Mounted onrootfs 5184 2124 3060 41% //dev/root 2048 2048 0 100% /romtmpfs 63340 948 62392 1% /tmptmpfs 512 0 512 0% /dev/dev/mtdblock3 5184 2124 3060 41% /overlayoverlayfs:/overlay 5184 2124 3060 41% //dev/sda1 31234700 593536 29075728 2% /mnt/sda1 /dev/sda1 is the micro SD card of my UMTS stick... just ignore this. Many routers are flashed in a similar fashion like seen here: A readonly root filesytem is made pseudo writable by an overlay filesystem. Look inside /overlay ... root@AP9:~# cd /overlay/usr/lib/opkg/info/root@AP9:/overlay/usr/lib/opkg/info# ls *.list | tail -3usb-modeswitch-data.listusb-modeswitch.listzlib.list This directory contains the info about additionally installed packages. The files ending with .list are lists of files installed by the package with the similar name (without .list ): root@AP9:/overlay/usr/lib/opkg/info# cat zlib.list /usr/lib/libz.so.1.2.7/usr/lib/libz.so.1/usr/lib/libz.so Package zlib has 3 files installed. root@AP9:/overlay/usr/lib/opkg/info# du $(cat zlib.list) 71 /usr/lib/libz.so.1.2.71 /usr/lib/libz.so.11 /usr/lib/libz.so Package zlib has 73kbytes of installed files. A crude 1-liner to glue this all together and it's shortened output: # awk 'BEGIN{D="cd /overlay/usr/lib/opkg/info&&";C=D"ls *.list";while(C|getline>0){P=substr(F=$1,1,length($1)-5);J=D"du -sk $(cat "F")";s=0;while(J|getline>0){s+=$1;t+=$1}close(J);print s"\t"P}print t"\t---TOTAL---"}'26 blkid30 block-mount17 chat55 comgt6 kmod-fs-exportfs(((...some lines skipped...)))14 portmap48 swap-utils223 usb-modeswitch-data45 usb-modeswitch73 zlib4184 ---TOTAL--- HTH! Added 2014-10-17: The following output is taken from OpenWrt-12.09 on a "TP-Link TL-WR703N" and shows how to add sorting the output by package size. Have a look on where and how the variable S comes into the game... # awk 'BEGIN{D="cd /overlay/usr/lib/opkg/info&&";C=D"ls *.list";S="sort -n";while(C|getline>0){P=substr(F=$1,1,length($1)-5);J=D"du -sk $(cat "F")";s=0;while(J|getline>0){s+=$1;t+=$1}close(J);print s"\t"P|S}close(S);print t"\t---TOTAL---"}'5 kmod-lib-crc165 luci-proto-3g12 libuuid13 kmod-usb-serial-wwan17 chat24 kmod-usb-acm24 libusb26 blkid30 block-mount41 kmod-usb-serial45 usb-modeswitch48 kmod-usb-serial-option48 swap-utils55 comgt67 kmod-usb-storage148 libblkid154 kmod-scsi-core223 usb-modeswitch-data382 kmod-fs-ext41367 ---TOTAL--- Again: HTH! Added 2018-01-13: The above way was tested on OpenWrt-AA. Now looking at LEDE-17.01 a path has changed: Replacing /overlay with /overlay/upper fixes this. 
Status quo ( opkg-list-user-installed-sorted-by-size not as 1-liner): #!/usr/bin/awk -fBEGIN { D="cd /overlay/upper/usr/lib/opkg/info&&" C=D"ls *.list" S="sort -n" while(C|getline>0) { P=substr(F=$1,1,length($1)-5) J=D"du -sk $(cat "F")" s=0 while(J|getline>0) { s+=$1 t+=$1 } close(J) print s"\t"P|S } close(S) print t"\t---TOTAL---"} Test run: root@zsun0:~# ./opkg-list-user-installed-sorted-by-size8 luci-ssl9 libustream-mbedtls13 px5g-mbedtls338 libmbedtls368 ---TOTAL--- Open question: When did this change in /overlay 's structure happen? LEDE-17 is OpenWrt-CC's successor and I have no systems runnig OpenWrt at hand. So If you need this on OpenWrt-BB or -CC, have a look inside /overlay first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29528/"
]
} |
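The awk one-liners above are dense; here is a more readable shell loop doing the same accounting, assuming the OpenWrt-AA layout (use /overlay/upper/usr/lib/opkg/info on LEDE and newer, as noted at the end of the answer):

```sh
cd /overlay/usr/lib/opkg/info || exit 1
for f in *.list; do
    # total size in kB of all files installed by this package
    size=$(du -sk $(cat "$f") 2>/dev/null | awk '{s+=$1} END{print s+0}')
    printf '%s\t%s\n' "$size" "${f%.list}"
done | sort -n
```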
157,111 | When I go from my graphical session to a virtual console by Ctrl + Alt + F i (with i in 1 - 7 and 9-12) I see a completely black screen. Only on F8 I see the GUI. Not even a blinking coursor on the others. When I enter anything, I can't see anything. What is the problem and how do I fix it? My system $ uname -aLinux pc09 3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux$ cat /etc/issueLinux Mint 17 Qiana \n \l$ lspci | grep VGA01:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX Titan Black] (rev a1)$ lspci -k | grep -A 2 -i "VGA"01:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX Titan Black] (rev a1)Subsystem: NVIDIA Corporation Device 1066Kernel driver in use: nvidia edit: I tried the first steps suggested on http://forums.linuxmint.com/viewtopic.php?f=42&t=168108 and the problem seems to be the framebuffer. I did this: This has been an issue that has been an annoyance with Nvidia proprietary drivers for two or three years, and has kept me away from Ubuntu-based distros for some time. Finally, on the Nvidia forum, I found the workaround I'd been looking for. The problem arises with Nvidia proprietary drivers (Nouveau doesn't show this behavior): when you push ctrl-alt-F1, you get only a black screen or, at best, a flashing cursor that does nothing. The problem apparently, has to do with the way the framebuffer in implemented and this needs to be disabled. To see if this is the problem, first you need to make a couple of minor modifications to /etc/default/grub - but first, make a backup! $ sudo cp /etc/default/grub /etc/default/grub.bak Now edit the file by entering $ sudo pluma /etc/default/grub in the editor, uncomment the lines #GRUB_TERMINAL=console#GRUB_GFXMODE=640x480 by removing the # . Save the file and run undate-grub to implement the changes sudo update-grub Now I have (a low resultion) tty working again :-) | That's because you are using the proprietary NVidia driver. When I was OpenSUSE with the proprietary driver my consoles also would be black, now that I am using Ubuntu again they get an even "cooler" effect: (Don't worry, the screen is fine!) The reason for this seems to be the NVidia kernel driver which, once initialized by the DDX (=device dependant X11) driver, cannot cope with requests from any other video subsystem (such as fbdev, VESA, Linux console , ...). The console will still be activated when switching to it. To verify this, try logging in blindly into the console and enter something that will be easy to notice, such as wall or reboot : <Your username><Your password>echo "Test message" >/tmp/message; wall </tmp/message After returning from the console you should see something like this in any terminal window: Broadcast message from <Your username>@<Hostname> (/dev/tty2) at 23:38 ...Test message Unfortunately I do not know of any way to fix this except for using the OpenSource driver ("nouveau"). VT switching works fine using that driver, but that driver creates other issues (spontaneous crashes and generally less performance in my case). I'm also neither a kernel developer nor an NVidia developer so I can't do much more than analyzing the symptoms myself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4784/"
]
} |
157,112 | The default behavior of du on my system is not the proper default behavior. If I ls my /data folder, I see (removing the stuff that isn't important): ghsghsb -> ghshoperssf -> roperroper Inside each folder is a set of folders with numbers as names. I want to get the total size of all folders named 14 , so I use: du -s /data/*/14 And I see... 161176 /data/ghs/14161176 /data/ghsb/148 /data/hope/14681564 /data/rssf/14681564 /data/roper/14 What I want is only: 161176 /data/ghs/148 /data/hope/14681564 /data/roper/14 I do not want to see the symbolic links. I've tried -L , -D , -S , etc. I always get the symbolic links. Is there a way to remove them? | This isn't du resolving the symbolic links; it's your shell. * is a shell glob; it is expanded by the shell before running any command. Thus in effect, the command you're running is: du -s /data/ghs/14 /data/ghsb/14 /data/hope/14 /data/rssf/14 /data/roper/14 If your shell is bash, you don't have a way to tell it not to expand symlinks. However you can use find (GNU version) instead: find /data -mindepth 2 -maxdepth 2 -type d -name 14 -exec du -s {} + | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34016/"
]
} |
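Two hedged variants of the workaround: the GNU find form from the answer, and a pure-bash filter that drops the symlinked parents from the glob before calling du:

```sh
# GNU find: only real directories two levels down named "14"
find /data -mindepth 2 -maxdepth 2 -type d -name 14 -exec du -s {} +

# Pure bash: expand the glob, skip entries whose parent directory is a symlink
dirs=()
for d in /data/*/14; do
    [ -L "${d%/14}" ] || dirs+=("$d")
done
du -s "${dirs[@]}"
```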
157,124 | When I log into my server (Debian 7) through PuTTY, I get greeted by a message saying: -bash: warning: setlocale: LC_ALL: cannot change locale (en_GB.UTF-8). Then, when I try to run almost any command I get this: perl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = "en_GB:en", LC_ALL = "en_GB.UTF-8", LANG = "en_GB.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C"). I have looked all over the web for help. My /etc/environment file has 'LC_ALL="en_GB.UTF-8"' within it. Typing; locale -a prints the following: locale: Cannot set LC_CTYPE to default locale: No such file or directorylocale: Cannot set LC_MESSAGES to default locale: No such file or directorylocale: Cannot set LC_COLLATE to default locale: No such file or directoryCC.UTF-8POSIX This is the result of locale-gen: root@vps94194:/# locale-gen-bash: locale-gen: command not found The same goes for the update-locale command. I cannot reinstall the locale through aptitude as the error blocks it. I can't use dpkg to reconfigure for the same reason. I really don't know how to fix this. Nothing so far has made any difference. | Use: export LC_ALL=C and install what is needed via aptitude ( locales package or something equivalent). If you still get some error due to previous failure, first run: apt-get install -f | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85148/"
]
} |
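A typical Debian recovery sequence built on the answer; the package and tool names are the standard Debian ones, but double-check on your release:

```sh
export LC_ALL=C             # silence the warnings for the current session
apt-get install -f          # repair any half-configured packages first, if needed
apt-get install locales     # provides locale-gen and update-locale
dpkg-reconfigure locales    # select and generate en_GB.UTF-8
locale -a                   # en_GB.utf8 should now be listed
```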
157,128 | I have a word like "mode v" in a file. I want to replace it with "mode sv". However, all my efforts have failed because there is a space after "mode". | Use: export LC_ALL=C and install what is needed via aptitude ( locales package or something equivalent). If you still get some error due to previous failure, first run: apt-get install -f | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85155/"
]
} |
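For the question as asked in this row (replacing "mode v" with "mode sv"), a minimal sed sketch; quoting the pattern keeps the embedded space intact:

```sh
sed 's/mode v/mode sv/g' file          # print the result to stdout
sed -i 's/mode v/mode sv/g' file       # GNU sed: edit the file in place
```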
157,133 | I have an application server which we start by running this below command and as soon as we run the below commnad, it will start my application server /opt/data/bin/test_start_stop.sh start And to stop my application server, we do it like this - /opt/data/bin/test_start_stop.sh stop So as soon as my application server is running and if I do ps aux , this is what I get - user@machineA:/home/user$ ps aux | grep gld_httpuser 20426 0.0 0.0 8114 912 pts/4 S+ 13:33 0:00 grep gld_httpuser 40509 0.1 0.9 3878120 1263552 ? Sl Sep22 1:53 /opt/data/bin/gld_http Now I am trying to make a shell script in which I need to stop my application server and then I will check whether my process is running after stopping the server or not, if my process is still running, then I need to do kill -9 of my application pid. And I am not sure how to get the pid of my process in the shell script and then kill it. Below is my shell script which I have as of now which will shut down my app server and then check whether my process is still running or not and if by any chance it is still running, then I need to get the pid of my process and do kill -9 on that pid. #!/bin/bash/opt/data/bin/test_start_stop.sh stopif ps aux | grep -v "grep" | grep "gld_http"then echo "Server is still running" # now get the pid of my gld_http process and invoke kill -9 on itelse echo "Server is stopped"fi How can I get the pid of my process and do kill -9 on it in my above shell script? UPDATE:- So my final script will look like this - #!/bin/bash/opt/data/bin/test_start_stop.sh stopif ps aux | grep -v "grep" | grep "gld_http"then echo "Server is still running" # Getting the PID of the process PID=`pgrep gld_http` # Number of seconds to wait before using "kill -9" WAIT_SECONDS=10 # Counter to keep count of how many seconds have passed count=0 while kill $PID > /dev/null do # Wait for one second sleep 1 # Increment the second counter ((count++)) # Has the process been killed? If so, exit the loop. if ! ps -p $PID > /dev/null ; then break fi # Have we exceeded $WAIT_SECONDS? If so, kill the process with "kill -9" # and exit the loop if [ $count -gt $WAIT_SECONDS ]; then kill -9 $PID break fi done echo "Process has been killed after $count seconds." else echo "Server is stopped"fi | First of all; "kill -9" should be a last resort. You can use the kill command to send different signals to a process. The default signal sent by kill is 15 (called SIGTERM). Processes usually catch this signal, clean up and then exit. When you send the SIGKILL signal ( -9 ) a process has no choice but to exit immediately. This may cause corrupted files and leave state files and open sockets that might cause problems when you restart the process. The -9 signal can not be ignored by a process. This is how I would do it: #!/bin/bash# Getting the PID of the processPID=`pgrep gld_http`# Number of seconds to wait before using "kill -9"WAIT_SECONDS=10# Counter to keep count of how many seconds have passedcount=0while kill $PID > /dev/nulldo # Wait for one second sleep 1 # Increment the second counter ((count++)) # Has the process been killed? If so, exit the loop. if ! ps -p $PID > /dev/null ; then break fi # Have we exceeded $WAIT_SECONDS? If so, kill the process with "kill -9" # and exit the loop if [ $count -gt $WAIT_SECONDS ]; then kill -9 $PID break fidoneecho "Process has been killed after $count seconds." | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157133",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64455/"
]
} |
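A compact variant of the same wait-then-escalate idea from the answer, written as a sketch; gld_http and the 10-second grace period come from the question, and a single matching process is assumed:

```sh
/opt/data/bin/test_start_stop.sh stop
pid=$(pgrep gld_http) || { echo "Server is stopped"; exit 0; }   # assumes one match
echo "Server is still running (PID $pid)"
kill "$pid"                                  # polite SIGTERM first
for i in $(seq 1 10); do
    kill -0 "$pid" 2>/dev/null || { echo "Stopped after ${i}s"; exit 0; }
    sleep 1
done
kill -9 "$pid"                               # last resort after the grace period
```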
157,154 | In Windows, if you type LIST DISK using DiskPart in a command prompt it lists all physical storage devices, plus their size, format, etc. What is the equivalent of this in Linux? | There are many tools for that, for example fdisk -l or parted -l , but probably the most handy is lsblk (aka list block devices ): Example $ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 238.5G 0 disk├─sda1 8:1 0 200M 0 part /boot/efi├─sda2 8:2 0 500M 0 part /boot└─sda3 8:3 0 237.8G 0 part├─fedora-root 253:0 0 50G 0 lvm /├─fedora-swap 253:1 0 2G 0 lvm [SWAP]└─fedora-home 253:2 0 185.9G 0 lvm It has many additional options, for example to show filesystems, labels, etc. As always man lsblk is your friend. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/157154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85170/"
]
} |
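A few follow-up invocations that are often useful alongside the plain lsblk shown above:

```sh
lsblk -f                                    # add filesystem type, label and UUID columns
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # pick exactly the columns you care about
sudo fdisk -l                               # partition-table view (closest to "LIST DISK")
sudo parted -l                              # same idea, GPT-aware output
```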
157,164 | I am having this strange problem of my system running out of space constantly. Just yesterday I cleaned up near 15GB of space and today I see it is out of space. I really don't get it. I know some other people also have this problem, but its not .xsession-log . Infact my /var/log is quite small. Here is a screenshot of my disk. I ran it as root. Here is my df [eeuser@roadrunner ~]$ df -hFilesystem Size Used Avail Use% Mounted on/dev/sda5 178G 169G 2.2M 100% /udev 7.9G 4.0K 7.9G 1% /devtmpfs 3.2G 940K 3.2G 1% /runnone 5.0M 0 5.0M 0% /run/locknone 7.9G 2.9M 7.9G 1% /run/shm What seems strange is, disk-analyzer shows usage of / as 100% but only 74.4GB. I have the linux partition of 190GB. Where is my 100GB. [eeuser@roadrunner ~]$ sudo fdisk -l /dev/sdaDisk /dev/sda: 1000.2 GB, 1000204886016 bytes255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x0dc6f614 Device Boot Start End Blocks Id System/dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT/dev/sda2 206848 921806847 460800000 7 HPFS/NTFS/exFAT/dev/sda3 921806848 1541421152 309807152+ 7 HPFS/NTFS/exFAT/dev/sda4 1541423102 1953523711 206050305 5 Extended/dev/sda5 1541423104 1919993855 189285376 83 Linux/dev/sda6 1919995904 1953523711 16763904 82 Linux swap / Solaris | There are many tools for that, for example fdisk -l or parted -l , but probably the most handy is lsblk (aka list block devices ): Example $ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 238.5G 0 disk├─sda1 8:1 0 200M 0 part /boot/efi├─sda2 8:2 0 500M 0 part /boot└─sda3 8:3 0 237.8G 0 part├─fedora-root 253:0 0 50G 0 lvm /├─fedora-swap 253:1 0 2G 0 lvm [SWAP]└─fedora-home 253:2 0 185.9G 0 lvm It has many additional options, for example to show filesystems, labels, etc. As always man lsblk is your friend. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/157164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85181/"
]
} |
157,167 | Classical situation: I ran a bad rm and realized immediately afterwards that I had removed the wrong files. (Nothing critical and I had tolerably recent backups, but still annoying.) Knowing that further disk activity was my enemy if I wanted to recover the files with extundelete or such tools, I immediately powered the machine down physically (i.e., with the power button, not with halt or any such command). This was a laptop with no important tasks running or anything open, so it was an acceptable operation. (By the way, I learned since then that the first thing to do in such a situation would be to estimate first if the missing files may still be opened by a process https://unix.stackexchange.com/a/101247 -- if they are, you should recover them this way rather than power down the machine.) Still, once the machine was powered down I thought for a while and decided the files were not worth the time investment of booting a live system for proper forensics. So I powered the machine back up. And then I discovered that my files were still sitting on disk: the rm hadn't been propagated to disk before I had powered down. I did a little dance and thanked the god of sysadmins for His unexpected forgiveness. My question is now to understand how this was possible, and what is the typical delay before an rm is actually propagated to disk. I know that disk IO isn't flushed immediately but that it sits in memory for some time, but I thought that the disk journal would make sure quickly that pending operations do not get entirely lost. https://unix.stackexchange.com/a/78766 seems to hint at a separate mechanism to flush dirty pages and to flush journal operations but does not give sufficient detail about how the journal would be involved for a rm , and the expected delay before operations are flushed. Some more details: the data was in an ext4 partition inside a LUKS volume, and when booting the machine back up I saw the following in syslog : Sep 24 10:24:58 gamma kernel: [ 11.457007] EXT4-fs (dm-0): 1 orphan inode deletedSep 24 10:24:58 gamma kernel: [ 11.458393] EXT4-fs (dm-0): recovery completeSep 24 10:24:58 gamma kernel: [ 11.482475] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) but I am not confident it is related to the rm . Another question would be whether there is a way to tell the kernel to not perform any of the pending disk operations (but rather, say, dump them somewhere), rather than powering the machine down. (Of course, it sounds dangerous to not perform the pending operations, but this is what would happen when powering the machine down anyway, and it some cases it could save you.) This would be "cleaner", of course, and also interesting for e.g. remote servers where physical powerdown is not an easy option. | It sounds like you've got a decent grasp on what happened. Yes, because you hard-powered-off the system before your changes were committed to disk, they were there when you booted back up. The system caches all writes before flushing them out to disk. There are several options which control this behavior, all located at /proc/sys/vm/dirty_* [ kernel doc ] . Unless a flush is explicitly performed by an application via fsync() [ man 2 fsync ] , the data is committed when it is either old enough, or the write cache is filled up. The definition of "data" as used above includes modification to the directory entry to delete the file. Now, as for the journal, that's one of the common misconceptions of what the journal is for. 
The purpose of a journal is not to ensure changes get replayed, or that data is not lost. The purpose of a journal is to prevent corruption of the filesystem itself, not the files in it. The journal simply contains information about the changes being made, and not (typically) the full data of the change itself. The exact details are dependent upon the filesystem, and journal mode. For ext3/4, see the data mount option in man 8 mount . To answer your supplementary question of whether there's a way to prevent the pending writes without a reboot: From doing a quick read through the kernel source code, it looks like you can use the magic sysrq u command ([ wikipedia ], [ kernel doc ]) to do an emergency remount-read-only operation. It appears this will immediately remount all volumes read-only without a sync operation. To use this, simply press Alt + SysRq + u . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/157167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8446/"
]
} |
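The writeback knobs mentioned in the answer can be inspected directly; the values noted below are common defaults, not guarantees, and the SysRq trigger is as drastic as the keyboard shortcut:

```sh
cat /proc/sys/vm/dirty_expire_centisecs     # age before dirty data must be written back (often 3000 = 30 s)
cat /proc/sys/vm/dirty_writeback_centisecs  # how often the flusher threads wake up (often 500 = 5 s)
sync                                        # force all pending writes out now
# Emergency read-only remount without syncing (magic SysRq "u"):
echo u | sudo tee /proc/sysrq-trigger
```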
157,176 | Installing Google's .deb or .rpm Chrome package adds the Google repository to one's system for automatic updates. Does installing a .deb or .rpm package always add a repository to one's system? If not, how does one verify whether a package will add a repository when installed? | It sounds like you've got a decent grasp on what happened. Yes, because you hard-powered-off the system before your changes were committed to disk, they were there when you booted back up. The system caches all writes before flushing them out to disk. There are several options which control this behavior, all located at /proc/sys/vm/dirty_* [ kernel doc ] . Unless a flush is explicitly performed by an application via fsync() [ man 2 fsync ] , the data is committed when it is either old enough, or the write cache is filled up. The definition of "data" as used above includes modification to the directory entry to delete the file. Now, as for the journal, that's one of the common misconceptions of what the journal is for. The purpose of a journal is not to ensure changes get replayed, or that data is not lost. The purpose of a journal is to prevent corruption of the filesystem itself, not the files in it. The journal simply contains information about the changes being made, and not (typically) the full data of the change itself. The exact details are dependent upon the filesystem, and journal mode. For ext3/4, see the data mount option in man 8 mount . To answer your supplementary question of whether there's a way to prevent the pending writes without a reboot: From doing a quick read through the kernel source code, it looks like you can use the magic sysrq u command ([ wikipedia ], [ kernel doc ]) to do an emergency remount-read-only operation. It appears this will immediately remount all volumes read-only without a sync operation. To use this, simply press Alt + SysRq + u . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/157176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65222/"
]
} |
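For the question asked in this row, a hedged way to check whether a package adds a repository, before and after installation; the Chrome filenames are placeholders:

```sh
# Before installing: list the package contents and look for repository definitions
dpkg-deb -c google-chrome-stable_current_amd64.deb | grep sources.list.d
rpm -qlp google-chrome-stable_current_x86_64.rpm | grep yum.repos.d
# A maintainer script can also add a repo at install time; inspect it with:
dpkg-deb -I google-chrome-stable_current_amd64.deb postinst

# After installing: see what repository definitions now exist
ls /etc/apt/sources.list.d/     # Debian/Ubuntu
ls /etc/yum.repos.d/            # RHEL/Fedora/CentOS
```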
157,211 | I have a kernel in which one initramfs is embedded. I want to extract it. I got the output x86 boot sector when I do file bzImage I have System.map file for this kernel image. Is there any way to extract the embedded initramfs image from this kernel with or without the help of System.map file ? The interesting string found in System map file is: (Just in case it helps) 57312:c17fd8cc T __initramfs_start57316:c19d7b90 T __initramfs_size | There is some information about this in the gentoo wiki: https://wiki.gentoo.org/wiki/Custom_Initramfs#Salvaging It recommends the usage of binwalk which works exceedingly well. I'll give a quick walk-through with an example: first extract the bzImage file with binwalk: > binwalk --extract bzImageDECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------0 0x0 Microsoft executable, portable (PE)18356 0x47B4 xz compressed data9772088 0x951C38 xz compressed data I ended up with three files: 47B4 , 47B4.xz and 951C38.xz > file 47B447B4: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=aa47c6853b19e9242401db60d6ce12fe84814020, stripped Now lets run binwalk again on 47B4 : > binwalk --extract 47B4DECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------0 0x0 ELF, 64-bit LSB executable, AMD x86-64, version 1 (SYSV)9818304 0x95D0C0 Linux kernel version "4.4.6-gentoo (root@host) (gcc version 4.9.3 (Gentoo Hardened 4.9.3 p1.5, pie-0.6.4) ) #1 SMP Tue Apr 12 14:55:10 CEST 2016"9977288 0x983DC8 gzip compressed data, maximum compression, from Unix, NULL date (1970-01-01 00:00:00)<snip> This came back with a long list of found paths and several potentially interesting files. Lets have a look. > file _47B4.extracted/*<snip>_47B4.extracted/E9B348: ASCII cpio archive (SVR4 with no CRC) file E9B348 is a (already decompressed) cpio archive, just what we are looking for! Bingo! To unpack the uncompressed cpio archive (your initramfs!) in your current directory just run > cpio -i < E9B348 That was almost too easy. binwalk is absolutely the tool you are looking for. For reference, I was using v2.1.1 here. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
157,216 | I'm installing netbeans 8.0.1 on Scientific Linux. I have executed the installer .sh but I get the error: java.lang.NoClassDefFoundError thrown from the UncaughtExceptionHandler in thread "main" I found this bug report: https://netbeans.org/bugzilla/show_bug.cgi?id=213437 which suggests the problem is I haven't set my display environment variable before the GUI installer appears. In my bash rc I did: export DISPLAY=local_host:0.0 re-sourced it but the problem still doesn't go away. I am on multiple monitors. Could somebody please help? | There is some information about this in the gentoo wiki: https://wiki.gentoo.org/wiki/Custom_Initramfs#Salvaging It recommends the usage of binwalk which works exceedingly well. I'll give a quick walk-through with an example: first extract the bzImage file with binwalk: > binwalk --extract bzImageDECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------0 0x0 Microsoft executable, portable (PE)18356 0x47B4 xz compressed data9772088 0x951C38 xz compressed data I ended up with three files: 47B4 , 47B4.xz and 951C38.xz > file 47B447B4: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=aa47c6853b19e9242401db60d6ce12fe84814020, stripped Now lets run binwalk again on 47B4 : > binwalk --extract 47B4DECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------0 0x0 ELF, 64-bit LSB executable, AMD x86-64, version 1 (SYSV)9818304 0x95D0C0 Linux kernel version "4.4.6-gentoo (root@host) (gcc version 4.9.3 (Gentoo Hardened 4.9.3 p1.5, pie-0.6.4) ) #1 SMP Tue Apr 12 14:55:10 CEST 2016"9977288 0x983DC8 gzip compressed data, maximum compression, from Unix, NULL date (1970-01-01 00:00:00)<snip> This came back with a long list of found paths and several potentially interesting files. Lets have a look. > file _47B4.extracted/*<snip>_47B4.extracted/E9B348: ASCII cpio archive (SVR4 with no CRC) file E9B348 is a (already decompressed) cpio archive, just what we are looking for! Bingo! To unpack the uncompressed cpio archive (your initramfs!) in your current directory just run > cpio -i < E9B348 That was almost too easy. binwalk is absolutely the tool you are looking for. For reference, I was using v2.1.1 here. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50597/"
]
} |
157,241 | I am currently starting a background process within a subshell and was wondering how can I assign its pid number to a variable outside the subshell scope? I have tried many different ways but MYPID always stays set to 0 . MYPID = 0;({ sleep 2 & }; $MYPID=$!)({ sleep 2 & }; echo $!) > $MYPID The only way it returns a value is with: $MYPID=$({ sleep 2 & }; echo $!) However this discards the background process instruction and it will only return a result after 2 seconds. | bash shouldn't print the job status when non-interactive. If that's indeed for an interactive bash , you can do: { pid=$(sleep 20 >&3 3>&- & echo "$!"); } 3>&1 We want sleep 's stdout to go to where it was before, not the pipe that feeds the $pid variable. So we save the outer stdout in the file descriptor 3 ( 3>&1 ) and restore it for sleep inside the command substitution. So pid=$(...) returns as soon as echo terminates because there's nothing left with an open file descriptor to the pipe that feeds $pid . However note that because it's started from a subshell (here in a command substitution), that sleep will not run in a separate process group. So it's not the same as running sleep 20 & with regards to I/O to the terminal for instance. Maybe better would be to use a shell that supports spawning disowned background jobs like zsh where you can do: sleep 20 &! pid=$! With bash , you can approximate it with: { sleep 20 2>&3 3>&- & } 3>&2 2> /dev/null; pid=$!; disown "$pid" bash outputs the [1] 21578 to stderr. So again, we save stderr before redirecting to /dev/null, and restore it for the sleep command. That way, the [1] 21578 goes to /dev/null but sleep 's stderr goes as usual. If you're going to redirect everything to /dev/null anyway, you can simply do: { apt-get update & } > /dev/null 2>&1; pid=$!; disown "$pid" To redirect only stdout: { apt-get-update 2>&3 3>&- & } 3>&2 > /dev/null 2>&1; pid=$!; disown "$pid" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85236/"
]
} |
157,250 | I have been wondering what would be the best way to get good randomness in bash, i.e., what would be a procedure to get a random positive integer between MIN and MAX such that The range can be arbitrarily large (or at least, say, up to 2 32 -1); Values are uniformly distributed (i.e., no bias); It is efficient. An efficient way to get randomness in bash is to use the $RANDOM variable. However, this only samples a value between 0 and 2 15 -1, which may not be large enough for all purposes. People typically use a modulo to get it into the range they want, e.g., MIN=0MAX=12345rnd=$(( $RANDOM % ($MAX + 1 - $MIN) + $MIN )) This, additionally, creates a bias unless $MAX happens to divide 2 15 -1=32767. E.g., if $MIN is 0 and $MAX is 9, then the values 0 through 7 are slightly more probable than the values 8 and 9, as $RANDOM will never be 32768 or 32769. This bias gets worse as the range increases, e.g., if $MIN is 0 and $MAX is 9999, then the numbers 0 through 2767 have a probability of 4 / 32767 , while the numbers 2768 through 9999 only have a probability of 3 / 32767 . So while the above method fulfills condition 3, it does not fulfill conditions 1 and 2. The best method that I came up with so far in trying to satisfy conditions 1 and 2 was to use /dev/urandom as follows: MIN=0MAX=1234567890while rnd=$(cat /dev/urandom | tr -dc 0-9 | fold -w${#MAX} | head -1 | sed 's/^0*//;') [ -z $rnd ] && rnd=0 (( $rnd < $MIN || $rnd > $MAX ))do :done Basically, just collect randomness from /dev/urandom (might consider to use /dev/random instead if a cryptographically strong pseudorandom number generator is desired, and if you have lots of time, or else maybe a hardware random number generator), delete every character that's not a decimal digit, fold the output to the length of $MAX and cut leading 0's. If we happened to only get 0's then $rnd is empty, so in this case set rnd to 0 . Check if the result is outside our range and if so, then repeat. I forced the "body" of the while loop into the guard here so as to force execution of the body at least once, in the spirit of emulating a do ... while loop, since rnd is undefined to start with. I think I fulfilled conditions 1 and 2 here, but now I screwed up condition 3. It's kinda slow. Takes up to a second or so (tenth of a second when I'm lucky). Actually, the loop is not even guaranteed to terminate (although the probability of termination converges to 1 as time increases). Is there an efficient way to get unbiased random integers, within a pre-specified and potentially large range, in bash? (I'll continue to investigate when time allows, but in the meantime I thought someone here might have a cool idea!) Table of Answers The most basic (and hence portable) idea is to generate a random bitstring just long enough. There are different ways of generating a random bitstring, either using bash's built-in $RANDOM variable or using od and /dev/urandom (or /dev/random ). If the random number is greater than $MAX , start over. Complete bash solution for arbitrary ranges using either $RANDOM or /dev/urandom The general idea Get random bitstring using either openssl or od with /dev/urandom . Beautify with tr . Get random bitstring using od with /dev/random . Beautify with awk . Alternatively, it is possible to use external tools. 
The Perl solution Pro: quite portable, simple, flexible Contra: not for very large numbers above 2 32 -1 The Python solution Pro: simple, flexible, works even for large numbers Contra: less portable The zsh solution Pro: good for people who use zsh anyway Contra: probably even less portable | I see another interesting method from here . rand=$(openssl rand 4 | od -DAn) This one also seems to be a good option. It reads 4 bytes from the random device and formats them as unsigned integer between 0 and 2^32-1 . rand=$(od -N 4 -t uL -An /dev/urandom | tr -d " ") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157250",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56615/"
]
} |
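Building on the od approach above, a hedged sketch that also avoids modulo bias by rejection sampling; it is not fast, but the distribution over [MIN, MAX] is uniform as long as the range fits in 32 bits:

```sh
MIN=0
MAX=1234567890
range=$((MAX - MIN + 1))
limit=$(( (4294967296 / range) * range ))       # largest multiple of range not exceeding 2^32

while
    rand=$(od -N4 -t u4 -An /dev/urandom | tr -d ' ')
    [ "$rand" -ge "$limit" ]                    # reject values that would bias the modulo
do :; done

echo $(( MIN + rand % range ))
```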
157,261 | Let's say I want to parse the same log portion several times. I want to do data=$(grep "initial filter" file.log) and do the next filters on $data. Will $data grow until all memory is used up? | It seems there are no limits except whatever is set by the OS: $ yes=$(yes)bash: xrealloc: cannot allocate 18446744071562067968 bytes (1617920 bytes allocated) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85080/"
]
} |
157,285 | Trying to count number of files in current directory, I found ls -1 | wc -l , which means: send the list of files (where every filename is printed in a new line) to the input of wc, where -l will count the number of lines on input. This makes sense. I decided to try simply ls | wc -l and was very surprised about it also gives me a correct number of files. I wonder why this happens, because ls command with no options prints the filenames on a single line. | From info ls : '-1' '--format=single-column' List one file per line. This is the default for 'ls' when standard output is not a terminal. When you pipe the output of ls , you get one filename per line. ls only outputs the files in columns when the output is destined for human eyes. Here's where ls decides what to do: switch (ls_mode) { case LS_MULTI_COL: /* This is for the 'dir' program. */ format = many_per_line; set_quoting_style (NULL, escape_quoting_style); break; case LS_LONG_FORMAT: /* This is for the 'vdir' program. */ format = long_format; set_quoting_style (NULL, escape_quoting_style); break; case LS_LS: /* This is for the 'ls' program. */ if (isatty (STDOUT_FILENO)) { format = many_per_line; /* See description of qmark_funny_chars, above. */ qmark_funny_chars = true; } else { format = one_per_line; qmark_funny_chars = false; } break; default: abort (); } source: http://git.savannah.gnu.org/cgit/coreutils.git/tree/src/ls.c | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/157285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85258/"
]
} |
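The behaviour is easy to observe directly; a small demonstration, with the usual caveat that filenames containing newlines will throw the count off:

```sh
ls              # columnar output when stdout is a terminal
ls | cat        # one name per line as soon as stdout is a pipe
ls | wc -l      # which is why this counts files correctly
ls -1 | wc -l   # forcing one-per-line gives the same number
```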
157,286 | I would like to copy files with multiple extensions to a single destination directory. For example, I can use the following command to copy all .txt files in the working directory to a directory called destination : cp -v *.txt destination/ And I can use the following to copy all .png directories in the working directory to destination : cp -v *.png destination/ But it's time consuming to type these as separate commands (even with the use of command history). So, is there any way that I can tell cp to copy files with either the pattern *.txt or the pattern *.png to destination ? Ideally, I would like to be able to specify more than two patterns -- like instructing cp to copy all *.txt or *.png or *.jpg files to destination , for example. I'm sure that all of this is possible using a shell script -- I'm using bash , for example -- but is there any way to accomplish it more simply, just from the console? Could I somehow use brace expansion to do it? I know that it is possible to copy all files in the working directory except those matching certain specified patterns , but since my working directory contains far more file extensions that I don't want to copy than those I do , that would be a pain. Do you have any thoughts on this? | Brace expansion will get the job done. man bash and search for Brace Expansion . cp *.{txt,jpg,png} destination/ EDIT: In keeping with the OP's request, the command above was missing the verbose option: cp -v *.{txt,jpg,png} destination/ | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/157286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
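What the brace expansion actually hands to cp, plus a caveat: with default bash settings a pattern that matches nothing is passed through literally and makes cp complain, so the sketch assumes at least one file of each type exists (or that nullglob is set):

```sh
cp -v *.{txt,jpg,png} destination/
# The shell expands the braces first, so cp effectively receives:
#   cp -v *.txt *.jpg *.png destination/
shopt -s nullglob    # optional: make unmatched patterns expand to nothing instead of erroring
```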
157,328 | I have a file which includes comments: foobarstuff#Do not show this...morestuffevenmorestuff#Or this I want to print the file without including any of the comments: foobarstuffmorestuffevenmorestuff There are a lot of applications where this would be helpful. What is a good way to do it? | One way to remove all comments is to use grep with -o option: grep -o '^[^#]*' file where -o : prints only matched part of the line first ^ : beginning of the line [^#]* : any character except # repeated zero or more times Note that empty lines will be removed too, but lines with only spaces will stay. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/157328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61235/"
]
} |
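The grep -o form above next to a sed variant that behaves almost the same (both drop lines that become empty once the comment is removed); which one you prefer is mostly taste:

```sh
grep -o '^[^#]*' file            # keep everything before the first '#' on each line
sed 's/#.*//; /^$/d' file        # strip comments, then drop lines left empty
```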
157,329 | There is apparently a vulnerability (CVE-2014-6271) in bash: Bash specially crafted environment variables code injection attack I am trying to figure out what is happening, but I'm not entirely sure I understand it. How can the echo be executed as it is in single quotes? $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"vulnerablethis is a test EDIT 1 : A patched system looks like this: $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"bash: warning: x: ignoring function definition attemptbash: error importing function definition for `x'this is a test EDIT 2 : There is a related vulnerability / patch: CVE-2014-7169 which uses a slightly different test: $ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test" unpatched output : vulnerablebash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'bash: error importing function definition for `BASH_FUNC_x'test partially (early version) patched output : bash: warning: x: ignoring function definition attemptbash: error importing function definition for `x'bash: error importing function definition for `BASH_FUNC_x()'test patched output up to and including CVE-2014-7169: bash: warning: x: ignoring function definition attemptbash: error importing function definition for `BASH_FUNC_x'test EDIT 3 : story continues with: CVE-2014-7186 CVE-2014-7187 CVE-2014-6277 | bash stores exported function definitions as environment variables. Exported functions look like this: $ foo() { bar; }$ export -f foo$ env | grep -A1 foofoo=() { bar} That is, the environment variable foo has the literal contents: () { bar} When a new instance of bash launches, it looks for these specially crafted environment variables, and interprets them as function definitions. You can even write one yourself, and see that it still works: $ export foo='() { echo "Inside function"; }'$ bash -c 'foo'Inside function Unfortunately, the parsing of function definitions from strings (the environment variables) can have wider effects than intended. In unpatched versions, it also interprets arbitrary commands that occur after the termination of the function definition. This is due to insufficient constraints in the determination of acceptable function-like strings in the environment. For example: $ export foo='() { echo "Inside function" ; }; echo "Executed echo"'$ bash -c 'foo'Executed echoInside function Note that the echo outside the function definition has been unexpectedly executed during bash startup. The function definition is just a step to get the evaluation and exploit to happen, the function definition itself and the environment variable used are arbitrary. The shell looks at the environment variables, sees foo , which looks like it meets the constraints it knows about what a function definition looks like, and it evaluates the line, unintentionally also executing the echo (which could be any command, malicious or not). This is considered insecure because variables are not typically allowed or expected, by themselves, to directly cause the invocation of arbitrary code contained in them. Perhaps your program sets environment variables from untrusted user input. It would be highly unexpected that those environment variables could be manipulated in such a way that the user could run arbitrary commands without your explicit intent to do so using that environment variable for such a reason declared in the code. 
Here is an example of a viable attack. You run a web server that runs a vulnerable shell, somewhere, as part of its lifetime. This web server passes environment variables to a bash script, for example, if you are using CGI, information about the HTTP request is often included as environment variables from the web server. For example, HTTP_USER_AGENT might be set to the contents of your user agent. This means that if you spoof your user agent to be something like '() { :; }; echo foo', when that shell script runs, echo foo will be executed. Again, echo foo could be anything, malicious or not. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/157329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16841/"
]
} |
157,351 | I need some assistance grasping what I'm sure is a fundamental concept in Linux: the limit for open files. Specifically, I'm confused on why open sockets can count towards the total number of "open files" on a system. Can someone please elaborate on the reason why? I understand that this probably goes back to the whole "everything is a file" principle in Linux but any additional detail would be appreciated. | The limit on "open files" is not really just for files. It's a limit on the number of kernel handles a single process can use at one time. Historically, the only thing that programs would typically open a lot of were files, so this became known as a limit on the number of open files. There is a limit to help prevent processes from say, opening a lot of files and accidentally forgetting to close them, which will cause system-wide problems eventually. A socket connection is also a kernel handle. So the same limits apply for the same reasons - it's possible for a process to open network connections and forget to close them. As noted in the comments, kernel handles are traditionally called file descriptors in Unix-like systems. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/157351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
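A few commands for poking at the limits and handles discussed above; <PID> is a placeholder for a real process id:

```sh
ulimit -n                          # soft per-process limit for the current shell
ulimit -Hn                         # hard limit
ls /proc/$$/fd                     # descriptors (files, pipes, sockets) held by this shell
ls /proc/<PID>/fd | wc -l          # rough handle count for another process
lsof -p <PID> | wc -l              # similar view via lsof, sockets included
```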
157,358 | I am trying to run a local Docker Registry with a synchronized storage with my Dropbox so I can have access to built VMs without having to rebuild the images if I move to a different work station. I have launched the Registry with the following: docker run -d -p 5000:5000 --name td-registry -v /Users/andrew/Dropbox/Developer/Docker/Registry:/tmp/registry registry Then, I push an example image that I have built via: docker push localhost:5000/ubunutu Now, I would expect to see some sort of file or directory placed into /Users/andrew/Dropbox/Developer/Docker/Registry ; but that is not the case: $ ls -lah ~/Dropbox/Developer/Docker/Registrytotal 0drwxr-xr-x 2 andrew staff 68B Sep 24 14:39 .drwxr-xr-x 4 andrew staff 136B Sep 24 16:42 .. Am I mounting the data volume incorrectly when launching the Registry container? Edit Here is the output of the docker push , so it's successfully pushing. The push refers to a repository [localhost:5000/ubuntu] (len: 1)Sending image listPushing repository localhost:5000/ubuntu (1 tags)511136ea3c5a: Image successfully pushedbfb8b5a2ad34: Image successfully pushedc1f3bdbd8355: Image successfully pushed897578f527ae: Image successfully pushed9387bcc9826e: Image successfully pushed809ed259f845: Image successfully pushed96864a7d2df3: Image successfully pusheda037e7415015: Image successfully pushed922d395cc25c: Image successfully pushedf9317ffe9a11: Image successfully pushed1a980360e853: Image successfully pushedf759631e9b64: Image successfully pushed194edb5b619b: Image successfully pushed5cf96e6ae328: Image successfully pushedb4d4b1e2e0b3: Image successfully pushed921507f17768: Image successfully pushedb9faffd3f579: Image successfully pushedPushing tag for rev [b9faffd3f579] on {http://localhost:5000/v1/repositories/ubuntu/tags/latest} | If you're using the official registry image, the path at which the local volumes are stored by default is /tmp/registry . Thus that should be your mount point. docker run -d -p 5000:5000 --name td-registry -v /Users/andrew/Dropbox/Developer/Docker/Registry:/tmp/registry registry This is covered in the persistent storage section of the main readme. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41774/"
]
} |
157,361 | Trying to start a Linux container, I get the following: lxc-start: No cgroup mounted on the system OS is Debian 7. | LXC (or other uses of the cgroups facility) requires the cgroups filesystem to be mounted (see §2.1 in the cgroups kernel documentation ). It seems that as of Debian wheezy, this doesn't happen automatically. Add the following line to /etc/fstab : cgroup /sys/fs/cgroup cgroup defaults For a one-time thing, mount it manually: mount -t cgroup cgroup /sys/fs/cgroup | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65344/"
]
} |
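Putting the answer's two options together as a quick check-and-fix sequence on Debian 7 (run as root; <container-name> is a placeholder):

```sh
grep cgroup /proc/mounts || mount -t cgroup cgroup /sys/fs/cgroup   # mount it now if missing
echo 'cgroup /sys/fs/cgroup cgroup defaults' >> /etc/fstab          # and make it permanent
lxc-checkconfig                                                     # recheck LXC's view of the kernel
lxc-start -n <container-name>
```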
157,368 | So I understand that rsync works by using a checksum algorithm that updates files according to file size and date. But isn't this synchronization dependent on how source and destination are called? Doesn't the order of source to destination change the behavior of what gets synced? Let me get to where I am going.. Obviously this input1 rsync -e ssh -varz ~/local/directory/ [email protected]:~/remoteFolder/ is not the same as input2 rsync -e ssh -varz [email protected]:~/remoteFolder/ ~/local/directory/ But I thought that either way you called rsync, the whole point of it was so that any file that is newer is updated between the local and remote destinations. However, it seems that it's dependent on whether your new file is located under the source rather than the destination location. If I updated file1 on my local machine (same, but older file1 on server), saved it, and then used code2 to sync between locations, I noticed that the new version of file1 on my local machine does not upload to the server. It actually overwrote the updated file with the older version. If I use input2 it does upload modified to server. Is there a way to run rsync in which it literally synchronizes new and modified files no matter whether source or destination locations are called first according to newer/modified file location? | No, rsync only ever syncs files in one direction. Think of it as a smart and more capable version of cp (or mv if you use the --remove-source-files option): smart in the sense that it tries to skip data which already exists at the destination, and capable in the sense that it can do a few things cp doesn't, such as removing files at the destination which have been deleted at the source. But at its core, it really just does the same thing, copying from a source to a destination. As Patrick mentioned in a comment, unison will do what you're looking for, updating in both directions. It's not a standard tool, though - many UNIX-like systems will have rsync installed, but usually not unison . But on a Linux distribution you can probably find it in your package manager. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37094/"
]
} |
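A hedged sketch of both approaches: two opposing rsync passes with --update as a rough approximation of two-way sync (deletions are not propagated and "newer wins" on conflicts), and unison for the real thing; user, host and paths are placeholders:

```sh
# Push files that are newer locally, then pull files that are newer remotely
rsync -e ssh -varzu ~/local/directory/ user@host:~/remoteFolder/
rsync -e ssh -varzu user@host:~/remoteFolder/ ~/local/directory/

# True bidirectional synchronisation
unison ~/local/directory ssh://user@host//home/user/remoteFolder
```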
157,381 | Some context about the bug: CVE-2014-6271 Bash supports exporting not just shell variables, but also shell functions to other bash instances, via the process environment to (indirect) child processes. Current bash versions use an environment variable named by the function name, and a function definition starting with “() {” in the variable value to propagate function definitions through the environment. The vulnerability occurs because bash does not stop after processing the function definition; it continues to parse and execute shell commands following the function definition. For example, an environment variable setting of VAR=() { ignored; }; /bin/id will execute /bin/id when the environment is imported into the bash process. Source: http://seclists.org/oss-sec/2014/q3/650 When was the bug introduced, and what is the patch that fully fixes it? (See CVE-2014-7169 ) What are the vulnerable versions beyond noted in the CVE (initially) (3.{0..2} and 4.{0..3})? Has the buggy source code been reused in other projects? Additional information is desirable. Related: What does env x='() { :;}; command' bash do and why is it insecure? | TL;DR The shellshock vulnerability is fully fixed in On the bash-2.05b branch: 2.05b.10 and above (patch 10 included) On the bash-3.0 branch: 3.0.19 and above (patch 19 included) On the bash-3.1 branch: 3.1.20 and above (patch 20 included) On the bash-3.2 branch: 3.2.54 and above (patch 54 included) On the bash-4.0 branch: 4.0.41 and above (patch 41 included) On the bash-4.1 branch: 4.1.14 and above (patch 14 included) On the bash-4.2 branch: 4.2.50 and above (patch 50 included) On the bash-4.3 branch: 4.3.27 and above (patch 27 included) If your bash shows an older version, your OS vendor may still have patched it by themselves, so best is to check. If: env xx='() { echo vulnerable; }' bash -c xx shows "vulnerable", you're still vulnerable. That is the only test that is relevant (whether the bash parser is still exposed to code in any environment variable). Details. The bug was in the initial implementation of the function exporting/importing introduced on the 5 th of August 1989 by Brian Fox, and first released in bash-1.03 about a month later at a time where bash was not in such widespread use, before security was that much of a concern and HTTP and the web or Linux even existed. From the ChangeLog in 1.05 : Fri Sep 1 18:52:08 1989 Brian Fox (bfox at aurel) * readline.c: rl_insert (). Optimized for large amounts of typeahead. Insert all insertable characters at once. * I update this too irregularly. Released 1.03.[...]Sat Aug 5 08:32:05 1989 Brian Fox (bfox at aurel) * variables.c: make_var_array (), initialize_shell_variables () Added exporting of functions. Some discussions in gnu.bash.bug and comp.unix.questions around that time also mention the feature. It's easy to understand how it got there. bash exports the functions in env vars like foo=() { code} And on import, all it has to do is interpret that with the = replaced with a space... except that it should not blindly interpret it. It's also broken in that in bash (contrary to the Bourne shell), scalar variables and functions have a different name space. Actually if you have foo() { echo bar; }; export -f fooexport foo=bar bash will happily put both in the environment (yes entries with same variable name) but many tools (including many shells) won't propagate them. One would also argue that bash should use a BASH_ namespace prefix for that as that's env vars only relevant from bash to bash. 
rc uses a fn_ prefix for a similar feature. A better way to implement it would have been to put the definition of all exported variables in a variable like: BASH_FUNCDEFS='f1() { echo foo;} f2() { echo bar;}...' That would still need to be sanitized but at least that could not be more exploitable than $BASH_ENV or $SHELLOPTS ... There is a patch that prevents bash from interpreting anything else than the function definition in there ( https://lists.gnu.org/archive/html/bug-bash/2014-09/msg00081.html ), and that's the one that has been applied in all the security updates from the various Linux distributions. However, bash still interprets the code in there and any bug in the interpreter could be exploited. One such bug has already been found (CVE-2014-7169) though its impact is a lot smaller. So there will be another patch coming soon. Until a hardening fix that prevents bash to interpret code in any variable (like using the BASH_FUNCDEFS approach above), we won't know for sure if we're not vulnerable from a bug in the bash parser. And I believe there will be such a hardening fix released sooner or later. Edit 2014-09-28 Two additional bugs in the parser have been found (CVE-2014-718{6,7}) (note that most shells are bound to have bugs in their parser for corner cases, that wouldn't have been a concern if that parser hadn't been exposed to untrusted data). While all 3 bugs 7169, 7186 and 7187 have been fixed in following patches, Red Hat pushed for the hardening fix. In their patch, they changed the behaviour so that functions were exported in variables called BASH_FUNC_myfunc() more or less preempting Chet's design decision. Chet later published that fix as an official upstreams bash patch . That hardening patch, or variants of it are now available for most major Linux distribution and eventually made it to Apple OS/X. That now plugs the concern for any arbitrary env var exploiting the parser via that vector including two other vulnerabilities in the parser (CVE-2014-627{7,8}) that were disclosed later by Michał Zalewski (CVE-2014-6278 being almost as bad as CVE-2014-6271) thankfully after most people had had time to install the hardening patch Bugs in the parser will be fixed as well, but they are no longer that much of an issue now that the parser is no longer so easily exposed to untrusted input. Note that while the security vulnerability has been fixed, it's likely that we'll see some changes in that area. The initial fix for CVE-2014-6271 has broken backward compatibility in that it stops importing functions with . or : or / in their name. Those can still be declared by bash though which makes for an inconsistent behaviour. Because functions with . and : in their name are commonly used, it's likely a patch will restore accepting at least those from the environment. Why wasn't it found earlier? That's also something I wondered about. I can offer a few explanations. First, I think that if a security researcher (and I'm not a professional security researcher) had specifically been looking for vulnerabilities in bash, they would have likely found it. For instance, if I were a security researcher, my approaches could be: Look at where bash gets input from and what it does with it. And the environment is an obvious one. Look in what places the bash interpreter is invoked and on what data. Again, it would stand out. The importing of exported functions is one of the features that is disabled when bash is setuid/setgid, which makes it an even more obvious place to look. 
Now, I suspect nobody thought to consider bash (the interpreter) as a threat, or that the threat could have come that way. The bash interpreter is not meant to process untrusted input. Shell scripts (not the interpreter) are often looked at closely from a security point of view. The shell syntax is so awkward and there are so many caveats with writing reliable scripts (ever seen me or others mentioning the split+glob operator or why you should quote variables for instance?) that it's quite common to find security vulnerabilities in scripts that process untrusted data. That's why you often hear that you shouldn't write CGI shell scripts, or setuid scripts are disabled on most Unices. Or that you should be extra careful when processing files in world-writeable directories (see CVE-2011-0441 for instance). The focus is on that, the shell scripts, not the interpreter. You can expose a shell interpreter to untrusted data (feeding foreign data as shell code to interpret) via eval or . or calling it on user provided files, but then you don't need a vulnerability in bash to exploit it. It's quite obvious that if you're passing unsanitized data for a shell to interpret, it will interpret it. So the shell is called in trusted contexts. It's given fixed scripts to interpret and more often than not (because it's so difficult to write reliable scripts) fixed data to process. For instance, in a web context, a shell might be invoked in something like: popen("sendmail -oi -t", "w"); What can possibly go wrong with that? If something wrong is envisaged, that's about the data fed to that sendmail, not how that shell command line itself is parsed or what extra data is fed to that shell. There's no reason you'd want to consider the environment variables that are passed to that shell. And if you do, you realise it's all env vars whose name start with "HTTP_" or are well known CGI env vars like SERVER_PROTOCOL or QUERYSTRING none of which the shell or sendmail have any business to do with. In privilege elevation contexts like when running setuid/setgid or via sudo, the environment is generally considered and there have been plenty of vulnerabilities in the past, again not against the shell itself but against the things that elevate the privileges like sudo (see for instance CVE-2011-3628 ). For instance, bash doesn't trust the environment when setuid or called by a setuid command (think mount for instance that invokes helpers). In particular, it ignores exported functions. sudo does clean the environment: all by default except for a white list, and if configured not to, at least black lists a few that are known to affect a shell or another (like PS4 , BASH_ENV , SHELLOPTS ...). It does also blacklist the environment variables whose content starts with () (which is why CVE-2014-6271 doesn't allow privilege escalation via sudo ). But again, that's for contexts where the environment cannot be trusted: any variable with any name and value can be set by a malicious user in that context. That doesn't apply to web servers/ssh or all the vectors that exploit CVE-2014-6271 where the environment is controlled (at least the name of the environment variables is controlled...) It's important to block a variable like echo="() { evil; }" , but not HTTP_FOO="() { evil; }" , because HTTP_FOO is not going to be called as a command by any shell script or command line. And apache2 is never going to set an echo or BASH_ENV variable. 
It's quite obvious some environment variables should be black-listed in some contexts based on their name , but nobody thought that they should be black-listed based on their content (except for sudo ). Or in other words, nobody thought that arbitrary env vars could be a vector for code injection. As to whether extensive testing when the feature was added could have caught it, I'd say it's unlikely. When you test for the feature , you test for functionality. The functionality works fine. If you export the function in one bash invocation, it's imported alright in another. A very thorough testing could have spotted issues when both a variable and function with the same name are exported or when the function is imported in a locale different from the one it was exported in. But to be able to spot the vulnerability, it's not a functionality test you would have had to do. The security aspect would have had to be the main focus, and you wouldn't be testing the functionality, but the mechanism and how it could be abused. It's not something that developers (especially in 1989) often have at the back of their mind, and a shell developer could be excused to think his software is unlikely to be network exploitable. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/157381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24379/"
]
} |
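A quick way to see the mechanisms discussed in the answer above on a live system is sketched below. It reuses the test command quoted in that answer; the BASH_FUNC_ prefix mentioned in the comments is how most hardened builds encode exported functions, but the exact encoding varies between patches, so treat that part as an assumption.

    # Run under the bash you want to inspect.
    # 1) Parser exposure test taken from the answer (CVE-2014-6271 style):
    env xx='() { echo vulnerable; }' bash -c xx 2>/dev/null | grep -q vulnerable \
        && echo 'functions are still imported from arbitrary variables' \
        || echo 'arbitrary-variable function import is rejected'

    # 2) How does this build encode an exported function in the environment?
    #    Pre-fix builds show foo=() { ...}; hardened builds typically show BASH_FUNC_foo%%=() { ...}
    foo() { echo bar; }
    export -f foo
    env | grep foo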
157,414 | Because of the Shell Shock bug I need to make updates on some of our machines. But I am not sure if all of the packages suggested in apt-get upgrade are OK for my system. In other words I am not sure if there are any dependencies. Our system administrator is not here yet and we cannot contact him. So my question is, how can I only apt-get upgrade the security updates, without having to update everything to the newest available version in debian stable? EDIT SOLUTION apt-get install --only-upgrade bash did the thing for me. On one of our servers, there was still just Debian Squeeze installed. Changing squeeze to wheezy in /etc/apt/sources.list and then running: - apt-get update - apt-get install --only-upgrade bash installed the fixed bash into this older squeeze system. | For Squeeze use squeeze-lts if possible! (i386 and amd64 only...) append this to your sources.list: deb http://http.debian.net/debian squeeze-lts main contrib non-freedeb-src http://http.debian.net/debian squeeze-lts main contrib non-free and then run apt-get updateapt-get install -t squeeze-lts --only-upgrade bash Here is more detail on squeeze-lts: https://wiki.debian.org/LTS/Using If you really want to patch debian lenny check out this gist (but rather consider updating to a newer distro!) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40919/"
]
} |
157,426 | When adding a new user, how is the string validated? I suppose there is a regular expression. What is that regular expression? | The general rule for usernames is that the length must be less than 32 characters. Beyond that, it depends on your distribution what counts as a valid username. In Debian, shadow-utils 4.1 , there is an is_valid_name function in chkname.c : static bool is_valid_name (const char *name){ /* * User/group names must match [a-z_][a-z0-9_-]*[$] */ if (('\0' == *name) || !((('a' <= *name) && ('z' >= *name)) || ('_' == *name))) { return false; } while ('\0' != *++name) { if (!(( ('a' <= *name) && ('z' >= *name) ) || ( ('0' <= *name) && ('9' >= *name) ) || ('_' == *name) || ('-' == *name) || ( ('$' == *name) && ('\0' == *(name + 1)) ) )) { return false; } } return true;} And the length of the username is checked before that: bool is_valid_user_name (const char *name){ /* * User names are limited by whatever utmp can * handle. */ if (strlen (name) > USER_NAME_MAX_LENGTH) { return false; } return is_valid_name (name);} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45370/"
]
} |
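For readers who want the same rule outside of C, here is a rough shell transcription of the Debian check shown above. The [a-z_][a-z0-9_-]*[$] pattern and the length cap come from the answer; treating the cap as exactly 32 is an assumption, since USER_NAME_MAX_LENGTH can differ between builds.

    is_valid_name() {
        local name=$1 max=32
        [ "${#name}" -le "$max" ] || return 1        # length cap, assumed to be 32 here
        # first char: lowercase letter or underscore; rest: [a-z0-9_-]; optional trailing $
        printf '%s\n' "$name" | grep -Eq '^[a-z_][a-z0-9_-]*\$?$'
    }

    is_valid_name 'backup-svc$' && echo valid     # machine-account style name
    is_valid_name '9front'      || echo invalid   # rejected: starts with a digit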
157,427 | Hello I need to grep the output from df. Sadly awk is not an option (even though its the easy option) here I can only use grep. Filesystem 1K-blocks Used Available Use% Mounted onnone 4 0 4 0% /sys/fs/cgroupnone 5120 0 5120 0% /run/locknone 1981488 444 1981044 0% /run/shmnone 102400 64 102336 0% /run/user/dev/sda3 418176236 281033024 137143212 67% /media/mark/7EE21FBAE21F761D So for example I want the $4 column of the line beginning with /dev/sda3 | If you have a version of grep that supports -P (perl-compatible regex, PCREs) and -o (print only the matching string), you can do df | grep -oP '/sda3.* \K\d+(?=\s+\d+%)' Explanation Here, we match /sda3 , then as many characters as possible until we find a stretch of numbers ( \d+ ) which is followed by one or more spaces ( \s+ ), then one or more numbers ( \d+ ) and a % . The foo(?=bar) construct is a positive lookahead , it allows you to search for the string foo only if it is followed by the string bar . The \K is a PCRE trick that means "discard anything matched up to this point". Combined with -o , it lets you use strings that precede your pattern to anchor your match but not print them. Without -P , things are trickier. You would need multiple passes. For example: df | grep -o '/sda3.*%' | grep -Eo '[0-9]+ *[0-9]+%' | grep -Eo '^[0-9]+' Explanation Here, the first grep identifies the right line and prints everything up to the % . The second prints the longest stretch of numbers before a space and another stretch of numbers ending with % and the final one prints the longest stretch of numbers found at the beginning of the line. Since the previous one only printed the free space and the percentage, this is the free space. If your grep doesn't even support -E , you could do: df | grep -o '/sda3.*%' | grep -o '[0-9]* *[0-9]*%' | grep -o '[0-9][0-9]* ' Explanation Here, we can't use + for "one or more" so, for the last grep , we need to specify at least one number, followed by 0 or more ( [0-9][0-9]* ). If you can use other tools of course, things get much easier: df | sed -n '/sda3/{s/ */ /gp}' | cut -d' ' -f4 Explanation The sed won't print anything ( -n ) unless the current line matches sda3 ( /sda3/{} ) and, if it does, it replaces all consecutive spaces with a single one, allowing the use of cut to print the 4th field. Or df | perl -lne 'm#/sda3.+\s(\d+)\s+\d+%# && print $1' Explanation The -l adds a newline to each print call, the -n means "read the input line by line" and the -e lets you pass a script on the command line. The script itself matches sda3 , then any stretch of characters up to whitespace followed by one or more numbers ( \s(\d+) ), then whitespace followed by a stretch of numbers ending with % ( \s+\d+ ). The parentheses capture the part we are interested in which is then printed as $1 . Or df | tr -s ' ' $'\t' | grep sda3 | cut -f4 Explanation Here, we simply use tr to convert multiple consecutive spaces to a tab (the default delimiter of cut ), then grep for sda3 and print the 4th field. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85353/"
]
} |
157,454 | Both my WiFi connection and my Ethernet connection have recently (I'd say about a week ago) become unstable, without any change in the configs. Now in order to connect I've to try several times and it often randomly disconnects (this for the WiFi, I haven't been able to connect via Ethernet at all but I've tried much less often than connecting via WiFi). I've already tried updating and changing the network manager app (tried wicd and NetworManager) without success. Anybody got any idea about how to fix this? I'm on an update ArchLinux, asus k53 | If you have a version of grep that supports -P (perl-compatible regex, PCREs) and -o (print only the matching string), you can do df | grep -oP '/sda3.* \K\d+(?=\s+\d+%)' Explanation Here, we match /sda3 , then as many characters as possible until we find a stretch of numbers ( \d+ ) which is followed by one or more spaces ( \s+ ), then one or more numbers ( \d+ ) and a % . The foo(?=bar) construct is a positive lookahead , it allows you to search for the string foo only if it is followed by the string bar . The \K is a PCRE trick that means "discard anything matched up to this point". Combined with -o , it lets you use strings that precede your pattern to anchor your match but not print them. Without -P , things are trickier. You would need multiple passes. For example: df | grep -o '/sda3.*%' | grep -Eo '[0-9]+ *[0-9]+%' | grep -Eo '^[0-9]+' Explanation Here, the first grep identifies the right line and prints everything up to the % . The second prints the longest stretch of numbers before a space and another stretch of numbers ending with % and the final one prints the longest stretch of numbers found at the beginning of the line. Since the previous one only printed the free space and the percentage, this is the free space. If your grep doesn't even support -E , you could do: df | grep -o '/sda3.*%' | grep -o '[0-9]* *[0-9]*%' | grep -o '[0-9][0-9]* ' Explanation Here, we can't use + for "one or more" so, for the last grep , we need to specify at least one number, followed by 0 or more ( [0-9][0-9]* ). If you can use other tools of course, things get much easier: df | sed -n '/sda3/{s/ */ /gp}' | cut -d' ' -f4 Explanation The sed won't print anything ( -n ) unless the current line matches sda3 ( /sda3/{} ) and, if it does, it replaces all consecutive spaces with a single one, allowing the use of cut to print the 4th field. Or df | perl -lne 'm#/sda3.+\s(\d+)\s+\d+%# && print $1' Explanation The -l adds a newline to each print call, the -n means "read the input line by line" and the -e lets you pass a script on the command line. The script itself matches sda3 , then any stretch of characters up to whitespace followed by one or more numbers ( \s(\d+) ), then whitespace followed by a stretch of numbers ending with % ( \s+\d+ ). The parentheses capture the part we are interested in which is then printed as $1 . Or df | tr -s ' ' $'\t' | grep sda3 | cut -f4 Explanation Here, we simply use tr to convert multiple consecutive spaces to a tab (the default delimiter of cut ), then grep for sda3 and print the 4th field. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85370/"
]
} |
157,477 | Apparently, the shellshock Bash exploit CVE-2014-6271 can be exploited over the network via SSH. I can imagine how the exploit would work via Apache/CGI, but I cannot imagine how that would work over SSH? Can somebody please provide an example how SSH would be exploited, and what harm could be done to the system? CLARIFICATION AFAIU, only an authenticated user can exploit this vulnerability via SSH. What use is this exploit for somebody, who has legitimate access to the system anyway? I mean, this exploit does not have privilege escalation (he cannot become root), so he can do no more than he could have done after simply logging in legitimately via SSH. | One example where this can be exploited is on servers with an authorized_keys forced command. When adding an entry to ~/.ssh/authorized_keys , you can prefix the line with command="foo" to force foo to be run any time that ssh public key is used. With this exploit, if the target user's shell is set to bash , they can take advantage of the exploit to run things other than the command that they are forced to. This would probably make more sense in example, so here is an example: sudo useradd -d /testuser -s /bin/bash testusersudo mkdir -p /testuser/.sshsudo sh -c "echo command=\\\"echo starting sleep; sleep 1\\\" $(cat ~/.ssh/id_rsa.pub) > /testuser/.ssh/authorized_keys"sudo chown -R testuser /testuser Here we set up a user testuser , that forces any ssh connections using your ssh key to run echo starting sleep; sleep 1 . We can test this with: $ ssh testuser@localhost echo something elsestarting sleep Notice how our echo something else doesn't get run, but the starting sleep shows that the forced command did run. Now lets show how this exploit can be used: $ ssh testuser@localhost '() { :;}; echo MALICIOUS CODE'MALICIOUS CODEstarting sleep This works because sshd sets the SSH_ORIGINAL_COMMAND environment variable to the command passed. So even though sshd ran sleep , and not the command I told it to, because of the exploit, my code still gets run. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/157477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
157,494 | I have a process which has been spawned from a shell. It is running as a background process and exporting a DB to a CSV file in /tmp . How can I tell when the background process has completed (finished / quit) or if the CSV file lock has closed? I plan to FTP the file to another host once it's written, but I need the complete file before I start the file transfer. | One example where this can be exploited is on servers with an authorized_keys forced command. When adding an entry to ~/.ssh/authorized_keys , you can prefix the line with command="foo" to force foo to be run any time that ssh public key is used. With this exploit, if the target user's shell is set to bash , they can take advantage of the exploit to run things other than the command that they are forced to. This would probably make more sense in example, so here is an example: sudo useradd -d /testuser -s /bin/bash testusersudo mkdir -p /testuser/.sshsudo sh -c "echo command=\\\"echo starting sleep; sleep 1\\\" $(cat ~/.ssh/id_rsa.pub) > /testuser/.ssh/authorized_keys"sudo chown -R testuser /testuser Here we set up a user testuser , that forces any ssh connections using your ssh key to run echo starting sleep; sleep 1 . We can test this with: $ ssh testuser@localhost echo something elsestarting sleep Notice how our echo something else doesn't get run, but the starting sleep shows that the forced command did run. Now lets show how this exploit can be used: $ ssh testuser@localhost '() { :;}; echo MALICIOUS CODE'MALICIOUS CODEstarting sleep This works because sshd sets the SSH_ORIGINAL_COMMAND environment variable to the command passed. So even though sshd ran sleep , and not the command I told it to, because of the exploit, my code still gets run. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/157494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85404/"
]
} |
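The stored response above repeats the answer to 157,477 and does not address the question actually asked here, namely how to tell when a background export has finished before starting the FTP transfer. A minimal sketch using the shell's own job control is below; the script and file names are placeholders, not taken from the question.

    #!/bin/bash
    ./export_db.sh > /tmp/export.csv &     # hypothetical export job
    export_pid=$!

    wait "$export_pid"                     # blocks until that job exits
    status=$?

    if [ "$status" -eq 0 ]; then
        echo "export finished, transferring /tmp/export.csv"
        # ftp/scp the file here
    else
        echo "export failed with status $status" >&2
    fi

wait only works for children of the same shell; for a process started elsewhere, polling with kill -0 "$pid", or checking whether the file is still open with lsof /tmp/export.csv or fuser /tmp/export.csv, are common fallbacks.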
157,517 | I would like to find out how do I apply the fix for this vulnerability on cygwin. I am running the CYGWIN_NT-6.1 MYHOSTNAME 1.7.30(0.272/5/3) 2014-05-23 10:36 x86_64 Cygwin of cygwin on Windows 7. #bash -version GNU bash, version 4.1.11(2)-release (x86_64-unknown-cygwin) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" env x='() { :;}; echo vulnerable' bash -c "echo this is a test" vulnerable this is a test I tried apt-cyg but it didn't update anything: $ apt-cyg update bash apt-cyg update bash Working directory is /setup Mirror is http://mirrors.kernel.org/sourceware/cygwin --2014-09-25 09:24:14-- http://mirrors.kernel.org/sourceware/cygwin/x86_64/setup.bz2 Resolving mirrors.kernel.org (mirrors.kernel.org)... 149.20.4.71, 149.20.20.135, 2001:4f8:1:10:0:1994:3:14, ... Connecting to mirrors.kernel.org (mirrors.kernel.org)|149.20.4.71|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 431820 (422K) [application/x-bzip2] Saving to: ‘setup.bz2’ 100% [======================================================================================>] 431,820 898KB/s in 0.5s 2014-09-25 09:24:14 (898 KB/s) - ‘setup.bz2’ saved [431820/431820] Updated setup.ini when try to reinstall by running setup-x86_64.exe and going through wizard re-install bash that is showing under shell, it seems like start downloading everything. It should be very quick update but it start downloading for over 15 minutes then I canceled it. I looked around https://cygwin.com site and other forum but so far not any specific update for this vulnerability. | As per the official Cygwin Installation Page : Installing and Updating Cygwin for 64-bit versions of Windows Run setup-x86_64.exe any time you want to update or install a Cygwin package for 64-bit windows. The signature for setup-x86_64.exe can be used to verify the validity of this binary using this public key. I had a hunch this bash was affected to, so about 15 minutes before you posted your question I did as the setup page instructed. There is no need for a 3rd Party Script. I believe the process went different for me because I had not cleaned out my Download Directory at C:\Cygwin64\Downloads The setup utility Scanned my currently installed packages, and I left the defaults alone. As such, all packages in the base system were updated. One of these happened to be the bash that is affected by the CVE-2014-6271. You can see proof that you are protected by the following screenshot: Please note that I do not know if this update protects against the other vulnerabilities that have been discovered, so please follow the above procedure the next few days until this issue is completely fixed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26989/"
]
} |
157,530 | I have multiple parent directories with the same file structure beneath them. Example: parent1/suba/subb/parent2/suba/subb/ When I am in parent1/suba/subb , I would like to change to parent2/suba/subb without doing something like cd ../../../parent2/suba/subb . How can I do this without listing all the subdirectories and ../ s? | You can use the PWD variable and parameter expansion constructs to quickly apply a text transformation to the current directory. cd ${PWD/parent1/parent2} This doesn't have to be exactly a path component, it can be any substring. For example, if the paths are literally parent1 and parent2 , and there is no character 1 further left in the path, you can use cd ${PWD/1/2} . The search string can contain several path components, but then you need to escape the slash. For example, to go from ~/checkout/trunk/doc/frobnicator/widget to ~/checkout/bugfix/src/frobnicator/widget , you can use cd ${PWD/trunk\/doc/bugfix/src} . More precisely, the parent1 part is a shell wildcard pattern, so you can write something like cd ${PWD/tr*c/bugfix/src} . In zsh , you can use the shorter syntax cd parent1 parent2 . Again, you can replace any substring in the path (here, this is exactly a substring, not a wildcard pattern). You can implement a similar function in bash. cd () { local options options=() while [[ $1 = -[!-]* ]]; do options+=("$1"); shift; done if (($# == 2)); then builtin cd "${options[@]}" "${PWD/$1/$2}" else builtin cd "${options[@]}" "$@" fi} Zsh provides completion for the second argument. Implementing this in bash is left as an exercise for the reader. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67877/"
]
} |
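A short demonstration of the ${PWD/.../...} substitution from the answer, using the directory names from the question (the /tmp/demo prefix is just scratch space for the example):

    mkdir -p /tmp/demo/parent1/suba/subb /tmp/demo/parent2/suba/subb
    cd /tmp/demo/parent1/suba/subb
    cd "${PWD/parent1/parent2}"    # replace the first match of parent1 with parent2
    pwd                            # -> /tmp/demo/parent2/suba/subb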
157,541 | I can run this command from my command line prompt: cp -r folder/!(exclude-me) ./ To recursively copy all contents of folder except for the subdirectory named exclude-me into the current directory. This works exactly as intended. However, I need this to work in a bash script I've written, where I have this: if [ -d "folder" ]; then cp -r folder/!(exclude-me) ./ rm -rf folderfi But when I run the script: bash my-script.sh I get this: my-script.sh: line 30: syntax error near unexpected token `('my-script.sh: line 30: ` cp -r folder/!(exclude-me) ./' And I'm at a loss as to why it works from the command prompt, but the exact same line doesn't work in a bash script. | That's because the syntax you're using depends on a particular bash feature which is not activated. You can activate it by adding the relevant command to your script: ## Enable extended globbing featuresshopt -s extglobif [ -d "folder" ]; then cp -r folder/!(exclude-me) ./ && rm -rf folderfi This is the relevant section of man bash : If the extglob shell option is enabled using the shopt builtin, several extended pattern matching operators are recognized. In the following description, a pattern-list is a list of one or more patterns separated by a |. Composite patterns may be formed using one or more of the fol‐ lowing sub-patterns: ?(pattern-list) Matches zero or one occurrence of the given patterns *(pattern-list) Matches zero or more occurrences of the given patterns +(pattern-list) Matches one or more occurrences of the given patterns @(pattern-list) Matches one of the given patterns !(pattern-list) Matches anything except one of the given patterns The reason it is enabled in interactive invocations of bash in your case may be because you have shopt -s extglob in your ~/.bashrc or because you're using https://github.com/scop/bash-completion (as found in the bash-completion package in Debian-based OSes at least) included via ~/.bashrc or /etc/bash.bashrc which does enable extglob upon initialisation . Note that those ksh-style extended glob operators can also be disabled altogether at build time by passing --disable-extended-glob to the configure script in the bash source code, or be enabled by default with --enable-extended-glob-default . However note that extglob breaks POSIX compliance. For instance, while the behaviour for echo !(x) is unspecified in the POSIX sh language, a='!(x)'echo $a Is required to output !(x) assuming the default value of $IFS , not the list of filenames in the current directory other than x , so that shouldn't be done for builds of bash that are intended to be used as sh . In ksh, those X(...) operators are enabled by default but not recognised upon expansions. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85443/"
]
} |
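A minimal, self-contained version of the fix from the answer, showing where shopt -s extglob has to go relative to the !( ) pattern (the folder layout is made up for the demo):

    #!/bin/bash
    shopt -s extglob                        # must be set before the line with !( ) is parsed

    mkdir -p /tmp/extglob-demo/folder/{keep-me,exclude-me}
    cd /tmp/extglob-demo

    if [ -d folder ]; then
        cp -r folder/!(exclude-me) ./       # copies keep-me, skips exclude-me
    fi
    ls                                      # -> folder  keep-me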
157,547 | I encountered this error when updating bash for the CVE-2014-6271 security issue: # yum update bashRunning transaction (shutdown inhibited)Updating : bash-4.2.47-4.fc20.x86_64/bin/sh: error importing function definition for `some-function' | [edited after 1st comment from: @chepner - thanks!] /bin/bash allows hyphens in function names, /bin/sh (Bourne shell) does not. Here, the offending "some-function" had been exported by bash, and bash called yum, which called /bin/sh, which reported the error above. Fix: rename the shell functions so they do not contain hyphens. man bash says that bash identifiers may consist "only of alphanumeric characters and underscores". The /bin/sh error is much more explicit: some-function () { :; } sh: `some-function': not a valid identifier | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41195/"
]
} |
157,558 | It's not clear to me from the manpage for crontab. Is extra white space allowed between the fields? e.g., if I have this: 1 7 * * * /scripts/foo5 17 * * 6 /script/bar31 6 * * 0 /scripts/bofh is it safe to reformat it nicely like this: 1 7 * * * /scripts/foo 5 17 * * 6 /script/bar31 6 * * 0 /scripts/bofh ? | Yes, extra space is allowed and you can nicely line up your fields for readability. From man 5 crontab : Blank lines and leading spaces and tabs are ignored. and An environment setting is of the form, name = value where the spaces around the equal-sign (=) are optional, and any subsequent non-leading spaces in value will be part of the value assigned to name. For the fields themselves the man page says: The fields may be separated by spaces or tabs. That should be clear: multiple spaces are allowed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/157558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84619/"
]
} |
157,581 | I have some text like this: Sentence #1 (n tokens):Blah Blah Blah[... ... ...]( #start first set here ... (other possible parens and text here) ) #end first set here(...)(...)Sentence #2 (n tokens): I want to extract the second set of parens (including everything in between) ,i.e., ( ... (other possible parens here)) Is there a bash way to do this. I tried the simple 's/(\(.*\))/\1/' | This will do it. There's probably a better way, but this is the first approach that came to mind: echo 'Sentence #1 (n tokens):Blah Blah Blah[... ... ...]( ... (other possible parens here) )(...)(...)Sentence #2 (n tokens):' | perl -0777 -nE ' $wanted = 2; $level = 0; $text = ""; for $char (split //) { $level++ if $char eq "("; $text .= $char if $level > 0; if ($char eq ")") { if (--$level == 0) { if (++$n == $wanted) { say $text; exit; } $text=""; } } }' outputs ( ... (other possible parens here) ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85474/"
]
} |
157,628 | I have a console program with an interactive shell, similar to say, the Python interactive shell. Is there an easy way to start this interactive program A and then use another program B to run A? I want to do something like this: $ /usr/bin/A&$ #find PID of A somehow$ somecommand "PID of A" "input string to send to A"output string from A$ What kind of "somecommand" could do this? Is this what "expect" is supposed to facilitate? I read the expect man page but still have no idea what it does. | expect is for a different purpose. It runs commands on a captive program. You, by contrast, are asking for a way to send commands to a process already running in the background. As a bare-bones minimal example of what you want, let's create a FIFO: $ mkfifo in A FIFO is a special file that one process can write to while a different process reads from it. Let's create a process to read from our FIFO file in : $ python <in &[1] 3264 Now, let's send python a command to run from the current shell: $ echo "print 1+2" >in$ 3 The output from python is 3 and appears here on stdout. If we had redirected python's stdout, it could be sent elsewhere. What expect does expect allows you to automate interaction with a captive command. As an example of what expect can do, create a file: #!/usr/bin/expect --spawn pythonexpect ">>>"send "print 1+2\r"expect ">>>" Then, run this file with expect : $ expect myfilespawn pythonPython 2.7.3 (default, Mar 13 2014, 11:03:55) [GCC 4.7.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>> print 1+23>>> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85514/"
]
} |
157,689 | I have explored almost all available similar questions , to no avail. Let me describe the problem in detail: I run some unattended scripts and these can produce standard output and standard error lines, I want to capture them in their precise order as displayed by a terminal emulator and then add a prefix like "STDERR: " and "STDOUT: " to them. I have tried using pipes and even epoll-based approach on them, to no avail. I think solution is in pty usage, although I am no master at that. I have also peeked into the source code of Gnome's VTE , but that has not been much productive. Ideally I would use Go instead of Bash to accomplish this, but I have not been able to. Seems like pipes automatically forbid keeping a correct lines order because of buffering. Has somebody been able to do something similar? Or it is just impossible? I think that if a terminal emulator can do it, then it's not - maybe by creating a small C program handling the PTY(s) differently? Ideally I would use asynchronous input to read these 2 streams (STDOUT and STDERR) and then re-print them second my needs, but order of input is crucial! NOTE: I am aware of stderred but it does not work for me with Bash scripts and cannot be easily edited to add a prefix (since it basically wraps plenty of syscalls). Update: added below two gists Example program that generates mixed stdout/stderr Expected output from program above (sub-second random delays can be added in the sample script I provided for a proof of consistent results) Update: solution to this question would also solve this other question , as @Gilles pointed out. However I have come to the conclusion that it's not possible to do what asked here and there. When using 2>&1 both streams are correctly merged at the pty/pipe level, but to use the streams separately and in correct order one should indeed use the approach of stderred that involves syscall hooking and can be seen as dirty in many ways. I will be eager to update this question if somebody can disprove the above. | You might use coprocesses. Simple wrapper that feeds both outputs of a given command to two sed instances (one for stderr the other for stdout ), which do the tagging. #!/bin/bashexec 3>&1coproc SEDo ( sed "s/^/STDOUT: /" >&3 )exec 4>&2-coproc SEDe ( sed "s/^/STDERR: /" >&4 )eval $@ 2>&${SEDe[1]} 1>&${SEDo[1]}eval exec "${SEDo[1]}>&-"eval exec "${SEDe[1]}>&-" Note several things: It is a magic incantation for many people (including me) - for a reason (see the linked answer below). There is no guarantee it won't occasionally swap couple of lines - it all depends on scheduling of the coprocesses. Actually, it is almost guaranteed that at some point in time it will. That said, if keeping the order strictly the same, you have to process the data from both stderr and stdin in the same process, otherwise the kernel scheduler can (and will) make a mess of it. If I understand the problem correctly, it means that you would need to instruct the shell to redirect both streams to one process (which can be done AFAIK). The trouble starts when that process starts deciding what to act upon first - it would have to poll both data sources and at some point get into state where it would be processing one stream and data arrive to both streams before it finishes. And that is exactly where it breaks down. 
It also means that wrapping the output syscalls, as stderred does, is probably the only way to achieve your desired outcome (and even then you might have a problem once something becomes multithreaded on a multiprocessor system). As far as coprocesses go, be sure to read Stéphane's excellent answer in How do you use the command coproc in Bash? for in-depth insight. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85565/"
]
} |
157,693 | I would like to add the output of a Python script into my zsh prompt but I am not sure how to do it? Is this done by what is called "PROMPT EXPANSION" in the man pages? Please someone set me on the right track, i.e. post some helpful links I could not find with Google. | There are two principal ways: Parameter expansion by enabling PROMPT_SUBST The psvar array 1. Parameter expansion in prompt If PROMPT_SUBST is enabled setopt PROMPT_SUBST the prompt is subjected to parameter expansion, command substitution and arithmetic expansion before it is evaluated. That way, the output of a script can be included via command substitution. For example: PROMPT='Look at this: $(python yourscript.py) >' If the output contains escape sequences ( %~ , %M , %F{red} etc.) they will be evaluated before the prompt is printed. 2. The psvar array One of the first nine values of the array psvar can be set to the output of the script. It can then be recalled by using %Xv , where X is a number between 1 and 9 (defaults to 1 if X is left out). psvar[5]=$(python yourscript.py)PROMPT='Look at this: %5v >' In order to refresh the value every time before the prompt is printed, the hook function precmd has to be set: precmd() { psvar[5]=$(python yourscript.py)} If there already is a precmd function, or if more than one function is to be used, it is a good idea to use add-zsh-hook . # load add-zsh-hook, need to be done only onceautoload -Uz add-zsh-hookpyscript() { psvar[5]=$(python yourscript.py)}add-zsh-hook precmd pyscript This adds pyscript to the list of functions that need to be run before printing the prompt. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47584/"
]
} |
157,697 | I am using Fedora 20 on two machines. Having read about the Shellshock vulnerability, just now at 1100ish UTC on September 26th 2014, in UK, after a yum update bash to protect against it, I tried this recommended testmodes: env x='() { :;}; echo vulnerable' bash -c "echo this is a test" and, in both user and su modes, got this result on one machine: bash: warning: x: ignoring function definition attemptbash: error importing function definition for 'x'this is a test On the other I got simply: this is a test Please: have I succeeded on both machines, or should I worry? In response to @terdon's comment I got this result: [Harry@localhost]~% env X='() { (a)=>\' bash -c "echo echo vuln"; [[ "$(cat echo)" == "vuln" ]] && echo "still vulnerable :("echo vulncat: echo: No such file or directory[Harry@localhost]~% Not sure what it means, though. Just to make it clear, I am puzzled by the warning, and also the differences between the two machines. I have had another close look at the warning message. It might be that, on the machine without the error message, I entered the command with "copy and paste": on the other I typed it in and got the warning, and I now see that the warning quotes the final 'x' as `x' (note the "back tick"). That machine has an american keyboard that I cannot yet change to UK layout, but there is another question entirely. Pursuing this on the 'net this LinuxQuestions.org thread discusses it and it appears that both are safe. | You have done it right. Your systems are secure and not vulnerable to this exploit. If your system were not secure, the output of the command would be: vulnerable this is a test but since your output is bash: warning: x: ignoring function definition attempt bash: error importing function definition for 'x' this is a test you are safe. If you did this yesterday, please consider running yum update bash today too, as the fix from yesterday is not as good as the one released today. EDIT (as OP requested more information) I can also reassure you about the newer vulnerability. Your system already has the new fix installed. If you had gotten the output echo vuln still vulnerable :( you'd still be vulnerable. Now, I cannot give you an answer as to how the exploit exactly works, by which I mean that I cannot tell you what exactly happens and what the differences are between the first and the second exploit. But I can give you a simplified answer to how the exploit works. env X='() { (a)=>\' bash -c "echo echo vuln"; [[ "$(cat echo)" == "vuln" ]] && echo "still vulnerable :(" does nothing more than save a bit of executable code in an environment variable that will be executed every time you start a bash shell. And a bash shell is started easily and often, not only by yourself but also by a lot of programs that need a bash to do their work, CGI for example. If you would like to have a deeper read about this exploit, here is a link to Red Hat's security blog: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18182/"
]
} |
157,698 | Redirecting output from a script STDOUT + STDERR to Logfile 1 and a grep to Logfile 2 ./run_test.sh 2>&1 | tee -a /var/log/log1.log | (grep 'START|END') > /var/log/myscripts.log How can I make this work? I tried the different syntax, but it did not works. The output will be redirected only to the first log. The second log is empty. ./run.sh 2>&1 | tee -a ~/log1.log | grep 'Start' > /var/log/myscripts.log./run.sh 2>&1 | tee -a ~/log1.log | egrep 'Start' > /var/log/myscripts.log./run.sh 2>&1 | tee -a ~/log1.log | grep -E 'Start' > /var/log/myscripts.log log1.log contains the output. myscripts.log is empty. | You have done it right. Your systems are secure and not vulnerable from this exploit. If your system would not be secure, the output of the command would be: vulnerable this is a test but since your output is bash: warning: x: ignoring function definition attempt bash: error importing function definition for 'x' this is a test you are safe. If you did this yesterday, please consider running yum update bash today, too as the fix from yesterday is not as good as the one released today. EDIT (as OP requested more information) I can also calm you on the newer vulnerablility. Your system already has the new fix installed. If you'd had the output echo vuln still vulnerable :( you'd be still vulnerable. Now I cannot give you answer to how the exploit exactly works, I mean by that that I cannot tell you what exactly happens and what the differences are between the first and the second exploit. But I can give you a simplified answer to how the expoit works. env X='() { (a)=>\' bash -c "echo echo vuln"; [[ "$(cat echo)" == "vuln" ]] && echo "still vulnerable :(" "Does nothing else" than saving a bit of executable code in an environmental variable that will be executed every time you start a bash-shell. And a bash-shell is started easily/often. Not only by yourself but also a lot of programs need a bash to fulfill theyr work. Like CGI for example. If you would like to have some deeper read about this exploit, here is a link to red hats security blog: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85580/"
]
} |
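The stored response above duplicates the previous record's Shellshock answer and does not address the pipeline question. Two likely reasons the second log stayed empty, offered as assumptions rather than a diagnosis: plain grep treats the | in 'START|END' as a literal character (alternation needs grep -E or egrep), and writing into /var/log normally requires elevated permissions; the pattern also has to match the case the script actually prints. A sketch of the intended pipeline:

    # stdout+stderr of the test run appended to log1, START/END lines also kept separately
    ./run_test.sh 2>&1 \
      | tee -a /var/log/log1.log \
      | grep -E 'START|END' >> /var/log/myscripts.log

If the output is watched live, GNU grep's --line-buffered option keeps matches from sitting in a buffer until the pipeline ends.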
157,730 | I am trying to install ap-hotspot on Ubuntu-14.04 When I enter the command: sudo add-apt-repository ppa:nilarimogard/webupd8 It gives me this message "Cannot add PPA: 'ppa"nilarimogard/webupd8' Please check that the PPA name and format is correct" How do I proceed? Since I am using college proxy to access the Internet,so I tried sudo -E add-apt-repository ppa:nilarimogard/webupd8 but it didn't help. But I am able to run sudo apt-get update so there is no problem with the internet connection.I also tried reinstalling ca-cerficates by using the command sudo apt-get install --reinstall ca-certificates it also didn't solve the problem.I also tried from Ubuntu Software Center, but there I am also unable to add a PPA repository. please help me in resolving this problem... | Since you can't add the repository, you can always add them from a terminal using the command line. Browse to the list of the repositories at the WebUpd8 Website . Copy down the address of the master repository, which is Master Repository . You want to add this one because it contains all the others. sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup sudo gedit /etc/apt/sources.list Visit Master Repository in a Web Browser. Find the Dropdown Arrow that reads: Technical Details about this PPA Click Your Ubuntu Version in the List Labeled Choose Your Version Add the resulting output into the file in Step 2. Save the File sudo apt-get update . This update command should now fetch the Private Keys of the new Repository. If you receive the error as you stated in your comment, then this repository has no private key. You may want to contact the PPA maintainer at that point who will either give you the key, or tell you to ignore the Warning. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157730",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85607/"
]
} |
157,738 | Say I have multiple files x1 , x2 , x3 , x4 , all with common header date, time, year, age . How can I merge them to one singe file X in shell scripting? File x1 : date time year age101014 1344 2012 52111012 1200 2010 49 File x2 : date time year age140112 1100 2011 54230113 0500 2005 46 Similiary for other files x3 and x4 . The output should be: date time year age101014 1344 2012 52111012 1200 2010 49140112 1100 2011 54230113 0500 2005 46 and the similar data from x3 and x4 . | Since you can't add the repository, you can always add them from a terminal using the command line. Browse to the list of the repositories at the WebUpd8 Website . Copy down the address of the master repository, which is Master Repository . You want to add this one because it contains all the others. sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup sudo gedit /etc/apt/sources.list Visit Master Repository in a Web Browser. Find the Dropdown Arrow that reads: Technical Details about this PPA Click Your Ubuntu Version in the List Labeled Choose Your Version Add the resulting output into the file in Step 2. Save the File sudo apt-get update . This update command should now fetch the Private Keys of the new Repository. If you receive the error as you stated in your comment, then this repository has no private key. You may want to contact the PPA maintainer at that point who will either give you the key, or tell you to ignore the Warning. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85614/"
]
} |
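The stored response above repeats the previous record's PPA answer instead of addressing the merge question. For files x1..x4 that share the header shown in the question, a sketch that keeps the header once and appends every data row (the tail variant assumes GNU coreutils):

    head -n 1 x1 > X                       # header from the first file
    tail -q -n +2 x1 x2 x3 x4 >> X         # data rows from all files (GNU tail)

    # or, portably, with awk: skip the header line of every file except the first
    awk 'FNR == 1 && NR != 1 { next } { print }' x1 x2 x3 x4 > X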
157,743 | I am running a Fedora desktop behind a corporate proxy that is blocking yum traffic (specifically *.gz and *.bz2 ). I have access to a separate RedHat machine via ssh which can download anything it likes. When I do yum update and other yum commands: Is it possible to route that traffic to the RedHat machine to do the downloads for me? I don't have root access on the RedHat machine but I can login and use wget to download files. If so, how? | My solution was similar to @slm's but I used SOCKS instead because it is simpler and required no proxy installation on the server or client. Run all commands on the computer with restricted acccess. in yum.conf set the proxy as follows proxy=socks5h://localhost:1080 from a terminal type ssh -D 1080 YOUR_USER@YOUR_SERVER_WITH_FULL_WEB_ACCESS press enter and type your password. now, in a separate terminal (not the ssh one) type yum update | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/157743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83122/"
]
} |
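Condensing the accepted steps above into one copy-pasteable block; YOUR_USER and YOUR_SERVER are placeholders, and -f -N simply keep the tunnel in the background without opening a remote shell:

    # on the restricted desktop; YOUR_SERVER is the host with unrestricted web access
    ssh -f -N -D 1080 YOUR_USER@YOUR_SERVER                      # SOCKS5 proxy on localhost:1080
    echo 'proxy=socks5h://localhost:1080' | sudo tee -a /etc/yum.conf
    sudo yum update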
157,763 | cd - can move to the last visited directory. Can we visit more history other than the last one? | The command you are looking for is pushd and popd . You could view a practical working example of pushd and popd from here . mkdir /tmp/dir1mkdir /tmp/dir2mkdir /tmp/dir3mkdir /tmp/dir4cd /tmp/dir1pushd .cd /tmp/dir2pushd .cd /tmp/dir3pushd .cd /tmp/dir4pushd .dirs/tmp/dir4 /tmp/dir4 /tmp/dir3 /tmp/dir2 /tmp/dir1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/157763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
157,767 | As I understand it, generally it is considered safe to let anyone provide information that will be stored in an environmental variable. The shellshock vulnerability is an issue here because it means that code at the end of a function definition inside an environmental variable will be executed when a new instance of bash launches and you obviously don't want anyone to run any code they please on your server. Function definitions themselves are apparently not a security risk though and are allowed because they have to be explicitly called for their code to be executed. My question is why can't a malicious user simply define a function, containing their malicious code as a common command like ls and then hope that the script (or whatever is being run) will use this command at some point? An example of what I have in mind: $ export ls='() { echo "doing bad things..."; }'$ bash -c lsdoing bad things... | It is a security risk. That's generally why you can't do it when switching to another context (remote control of a system, changing users, etc). If you have the ability to create any environment variable you want, there are any number of potential ways you can execute arbitrary code. Take $LD_PRELOAD as an example. If you have the ability to set that variable, you can replace library functions, and stick your code in there. You can set the $DISPLAY variable, and redirect the X display the program connects to so that you get control of the app. This is why things like sudo strip the environment of all variables. sudo only allows a few select variables through. And of those variables, it sanitizes them (it passes through the $TERM variable, but it won't if it contains unusual characters). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82317/"
]
} |
157,783 | The following commands seem to be roughly equivalent: read varnamevarname=$(head -1)varname=$(sed 1q) One difference is that read is a shell builtin while head and sed aren't. Besides that, is there any difference in behavior between the three? My motivation is to better understand the nuances of the shell and key utilities like head,sed . For example, if using head is an easy replacement for read , then why does read exist as a builtin? | Neither efficiency nor builtinness is the biggest difference. All of them will return different output for certain input. head -n1 will provide a trailing newline only if the input has one. sed 1q will always provide a trailing newline, but otherwise preserve the input. read will never provide a trailing newline, and will interpret backslash sequences. Additionally, read has additional options, such as splitting, timeouts, and input history, some of which are standard and others vary between shells. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64170/"
]
} |
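The behavioral differences listed in the answer are easy to make visible; the sketch below uses od -c so the presence or absence of the trailing newline shows up as bytes:

    # trailing newline: head mirrors its input exactly, read always strips it
    printf 'abc'   | head -n 1 | od -c                                  # a b c  (no newline, as in the input)
    printf 'abc\n' | { IFS= read -r v; printf '%s' "$v"; } | od -c      # a b c  (newline removed by read)

    # backslash handling: without -r, read consumes the backslash
    printf 'one\\two\n' | { read v;    printf '%s\n' "$v"; }            # onetwo
    printf 'one\\two\n' | { read -r v; printf '%s\n' "$v"; }            # one\two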
157,787 | We are running Debian Etch, Lenny and Squeeze because upgrades have never been done in this shop; we have over 150 systems running various Debian versions. In light of the "shell shock" of this week, I assume I need to upgrade bash. I do not know Debian so I am concerned. Can I merely execute apt-get install bash on all of my Debian systems and get the correct Bash package while my repository is pointed at a Squeeze entry. If not, what other course of action do I have? | You have the option to just upgrade bash. To do so use the following apt-get command: apt-get update Then after the update fetches all of the available updates run: apt-get install --only-upgrade bash To get updates on older releases, Squeeze for example, you will probably need to add the Squeeze-LTS repo to your sources.list. To add this repository, edit /etc/apt/sources.list and add the following line to the end of the file. deb http://ftp.us.debian.org/debian squeeze-lts main non-free contrib To check a particular system for the vulnerabilities (or see if the upgrade works) you can check the bash versions that you are using and see if the version is affected (it probably is) or there are numerous shell test scripts available on the web. EDIT 1 To upgrade bash on Lenny or Etch, take a look at Ilya Sheershoff's answer below for how to compile bash from source and manually upgrade the version of bash that your release is using. EDIT 2 Here is an example sources.list file from a Squeeze server I successfully upgraded: deb http://ftp.us.debian.org/debian/ squeeze maindeb-src http://ftp.us.debian.org/debian/ squeeze maindeb http://security.debian.org/ squeeze/updates maindeb-src http://security.debian.org/ squeeze/updates main# squeeze-updates, previously known as 'volatile'deb http://ftp.us.debian.org/debian/ squeeze-updates maindeb-src http://ftp.us.debian.org/debian/ squeeze-updates main# Other - Adding the lsb source for security updatesdeb http://http.debian.net/debian/ squeeze-lts main contrib non-freedeb-src http://http.debian.net/debian/ squeeze-lts main contrib non-free | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85648/"
]
} |
157,794 | I've updated my systems to the latest versions of bash (Fedora: bash-4.2.48-2.fc19.x86_64 and CentOS: bash-4.1.2-15.el6_5.2.x86_64 ) Is merely updating enough to avoid the exploit or do I need to then close all terminals, restart all services, or restart the systems? | From RedHat FAQ : (vulnerability CVE-2014-6271 in Bash.) Do I need to reboot or restart services after installing this update? No, once the new bash package is installed, you do not need to rebootor restart any services. This issue only affects the Bash shell duringstartup, not already running shells. Upgrading the package will ensureall new shells that are started are using the fixed version. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34796/"
]
} |
157,805 | I upgraded my old Debian 6.0 (Squeeze) server, but still the vulnerability seems to be there: $ env x='() { :;}; echo vulnerable' bash -c 'echo hello'vulnerablehello How do I upgrade Bash to a newer version on Debian 6.0 (Squeeze)? | To get updates on older releases you will probably need to add the Debian 6.0 (Squeeze) LTS repository to your sources.list . To add this repository, edit /etc/apt/sources.list and add the following line to the end of the file. deb http://ftp.us.debian.org/debian squeeze-lts main non-free contrib Then run: apt-get update You should see some new sources in the list of repositories now as the update is running. Now just: apt-get install --only-upgrade bash Here is a listing of my sources.list file from a Squeeze server I just upgraded: deb http://ftp.us.debian.org/debian/ squeeze maindeb-src http://ftp.us.debian.org/debian/ squeeze maindeb http://security.debian.org/ squeeze/updates maindeb-src http://security.debian.org/ squeeze/updates main# squeeze-updates, previously known as 'volatile'deb http://ftp.us.debian.org/debian/ squeeze-updates maindeb-src http://ftp.us.debian.org/debian/ squeeze-updates main# Other - Adding the lsb source for security updatesdeb http://http.debian.net/debian/ squeeze-lts main contrib non-freedeb-src http://http.debian.net/debian/ squeeze-lts main contrib non-free | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
157,823 | I'm looking to list all ports a PID is currently listening on. How would you recommend I get this kind of data about a process? | You can use ss from the iproute2 package (which is similar to netstat ): ss -l -p -n | grep "pid=1234," or (for older iproute2 version): ss -l -p -n | grep ",1234," Replace 1234 with the PID of the program. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/157823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
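A small follow-up sketch that wraps the ss invocation from the answer above in a function and keeps the header row; the function name is made up, and lsof is mentioned only as an optional alternative.
ports_of() {    # usage: ports_of <pid>
    ss -tulpn | awk -v p="pid=$1," 'NR == 1 || index($0, p)'
}
ports_of 1234
# alternative, if lsof is installed:
lsof -Pan -p 1234 -i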
157,868 | I've got a server for personal use running FreeBSD 10, and it doesn't have Bash installed and never did. However, it comes with its own POSIX-compliant shell "sh". Do I have to worry about the Shellshock bug on my server? I tried running this infamous shell script and didn't get the "vulnerable" echo, but I don't know if that ensures that I am safe: env x='() { :;}; echo vulnerable' sh -c 'echo hello' | Yes, you are safe (at least against Shellshock). The "shellshock" bug is particular to bash and doesn't affect the Bourne (traditional sh ), or Almquist (FreeBSD sh , ash , Debian's dash (and sh in newer releases)) or any other non-bash shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84887/"
]
} |
157,878 | Recently, I have been playing around a lot with color in the terminal and, therefore, with escape sequences, too. I've read the relevant parts of the Bash manpage along with numerous helpful pages on the Net. I've got most of what I want working; a nice color Bash prompt, for example. That said, I am still somewhat confused over when I should use (or need to use) the "non-printing escape sequence" characters. Those would be \[ and \] . If I don't use them in PS1 when defining my prompt then my prompt most definitely does not display properly. If I do use them, everything is fine. Good. But, outside of PS1, they don't seem to operate the same way. For example, to make scripts more readable I have defined a variable $RGB_PURPLE which is set via a simple function c8_rgb() . The end result is that the variable contains the value \[\e[01;38;05;129m\] which turns on a bold purple foreground color. When I use this variable in PS1, it does what I expect. If I use it via printf or echo -e it "half" works. The command printf "${RGB_PURPLE}TEST${COLOR_CLR}\n" (where COLOR_CLR is the escape sequence to reset text properties) results in the following display: \[\]TEST\[\] where everything except the first \[ and final \] are displayed in purple. Why the difference? Why are these brackets printed instead of being processed by the terminal? I would have expected them to be treated the same when printed as part of the prompt as when printed by other means. I don't understand the change. It seems, empirically, that these characters must be used inside the prompt definition, while they shouldn't be used in pretty much every other case. This makes it difficult to use a common function, like my c8_rgb() function mentioned above, to handle escape sequence generation and output since the function cannot know whether its result will be in a prompt config or someplace else. And a quick related question: are echo -e and printf essentially the same with regards to outputting escape sequences? I typically use printf, but is there any reason to favor one over the other? Can anyone explain this apparent subtle difference? Are there any other oddities I should be aware of when using escape sequences (usually just for color) in the terminal? Thanks! | The "non-printing escape sequence" is needed when using non-printing characters in $PS1 because bash needs to know the position of the cursor so that the screen can be updated correctly when you edit the command line. Bash does that by counting the number of characters in the $PS1 prompt and then that's the column number the cursor is in. However, if you place non-printing sequences in $PS1, then that count is wrong and the line can be messed up if you edit the command line. Hence the introduction of the \[ and \] markers to indicate that the enclosed bytes should not be counted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62287/"
]
} |
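A minimal sketch of one way to resolve the dilemma raised in the question above: emit the same escape sequence either bare (for printf/echo -e) or wrapped in \[ \] (for PS1). The helper name and the colour index are illustrative only, not part of the answer.
fg256() {                        # usage: fg256 <colour-index> [prompt]
    local seq="\e[01;38;05;${1}m"
    if [ "$2" = prompt ]; then
        printf '%s' "\[$seq\]"   # wrapped: only meaningful inside PS1
    else
        printf '%s' "$seq"       # bare: for printf / echo -e output
    fi
}
PS1="$(fg256 129 prompt)"'\u@\h \W \[\e[0m\]\$ '
printf "$(fg256 129)purple text\e[0m\n"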
157,880 | I have root access to a Linux server (CentOS 5.10). I want to see the email server's settings such as whether SMPT is working, wheter there is an email server, port number, does it require SSL, what authentication method is required, the list of email addresses, if possible the passwords for the email addresses. And where should I look for documentation? Here's netstat -ntlp Active Internet connections (only servers)Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program nametcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN 4796/javatcp 0 0 0.0.0.0:8009 0.0.0.0:* LISTEN 4796/javatcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 21409/mysqldtcp 0 0 0.0.0.0:970 0.0.0.0:* LISTEN 3332/rpc.statdtcp 0 0 0.0.0.0:44 0.0.0.0:* LISTEN 6765/sshdtcp 0 0 0.0.0.0:10991 0.0.0.0:* LISTEN 4796/javatcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 3271/portmaptcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4700/httpdtcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 4796/javatcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 4768/postgrestcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 4338/sendmailtcp 0 0 0.0.0.0:30847 0.0.0.0:* LISTEN 4796/java | The "non-printing escape sequence" is needed when using non-printing characters in $PS1 because bash needs to know the position of the cursor so that the screen can be updated correctly when you edit the command line. Bash does that by counting the number of characters in the $PS1 prompt and then that's the column number the cursor is in. However, if you place non-printing sequences in $PS1, then that count is wrong and the line can be messed up if you edit the command line. Hence the introduction of the \[ and \] markers to indicate that the enclosed bytes should not be counted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2880/"
]
} |
157,892 | I understand that a package has two components: config and data files. During package upgrade (i.e. security upgrade) data files can be overwritten, but config files should always stay the same. Also config files are usually in /etc and data in /usr . Sometimes, however, the distinction is blurred. In my case, I have modified the icon file for Icedove (Thunderbird): /usr/share/applications/icedove.desktop Now, every time there is a Icedove (Thunderbird) update, my changes get overwritten with the default file (even if it has not changed between updates). Is there any way to prevent this particular file from being overwritten? Setting it to immutable with chattr +i icedove.desktop is not a good idea, as it produces error during package upgrade. | You want the dpkg-divert utility. dpkg-divert --divert /usr/share/applications/icedove.desktop.packaged --rename /usr/share/applications/icedove.desktop | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
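A sketch of the full round trip, with the path taken from the question; the edited desktop file (~/my-icedove.desktop here) is a placeholder for whatever customised copy you keep.
sudo dpkg-divert --divert /usr/share/applications/icedove.desktop.packaged \
     --rename /usr/share/applications/icedove.desktop
sudo cp ~/my-icedove.desktop /usr/share/applications/icedove.desktop   # your edited version
# to undo later: remove your copy first, then drop the diversion (the packaged file moves back)
sudo rm /usr/share/applications/icedove.desktop
sudo dpkg-divert --remove --rename /usr/share/applications/icedove.desktop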
157,904 | I have a web server where I have run out of space and it's causing issues with the WordPress sites I am running on them. I know that I have a lot of large .png files (the fact that it's PNG is a mistake in itself, but let's not get into that). I would like to get the list of PNG or JPEG files that are on the server and sort them by decreasing size. I know I can use ls -SlahR , but the sorting is on a per-folder basis. I then came up with find . -name "*.png" | xargs -i -n1 ls -lah {} which is OK except that (a) it doesn't sort the lines and (b) it shows the file permissions and ownerships which I couldn't care less about. So is there something better? Something that would produce [size] [path_to_file]? | You could use a combination of find , du and sort like the following: find <directory> -iname "*.png" -type f -print0 | xargs -0 -n1 du -b | sort -n -r This searches for all regular files in <directory> ending with .png (case-insensitive). The result is then passed to xargs which calls du with each single file, getting its size in bytes (due to -b ) and passed to sort , which sorts the result numerically ( -n ) by the file size in decreasing order ( -r ). The -print0 is used to separate the results by \0 instead of \n , so you can have paths with strange characters like spaces and newlines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48297/"
]
} |
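A variant of the answer's pipeline that covers JPEGs as well and trims the list; the choice of 20 entries and the human-readable formatting (GNU numfmt) are illustrative, not required.
find . -type f \( -iname '*.png' -o -iname '*.jpg' -o -iname '*.jpeg' \) -print0 \
    | xargs -0 du -b \
    | sort -nr \
    | head -n 20 \
    | numfmt --field=1 --to=iec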
157,911 | I have two different setups I like to use in terminal and vim. One uses a light background and a somewhat fancy airline statusline in vim. The other uses a dark background and a more barebones vim appearance. I'm either wavering between the two out of indecision or just for a little variety once in a while. What's a smart way to easily switch between these two configurations at will? Right now I've basically got two slightly different .bash_profile s and .vimrc s. When I want to go dark I manually source the dark profile and I've defined a bash alias to start vim with the alternate vimrc. I'm sure there's a better way, and I'd be interested in suggestions. Update :I went with the excellent suggestion of setting a THEME environment variable to reference in the config files. Works like a charm. Also found this gem (not in the Ruby sense) which lets me swith the iTerm profile to a dark one at the same time. A warning: defining the bash function as a one-liner like that was giving me a syntax error so I had to break it across multiple lines. it2prof() { echo -e "\033]50;SetProfile=$1\a"}alias dark="export THEME=dark && it2prof black && . ~/.bash_profile"alias light="unset THEME && it2prof white && . ~/.bash_profile" Better still, it turns out iTerm2 has a bunch of escape codes available to changing settings on the fly. Another update :The iTerm2 docs warn that the escape sequences may not work in tmux and screen, and indeed they don't. To get them working you need to tell the multiplexer to send the escape sequence to the underlying terminal rather than trying to interpret it. It's a little hairy, but this is now working for me in tmux, in screen, and in a normal shell session: darken() { if [ -n "$ITERM_PROFILE" ]; then export THEME=dark it2prof black reload_profile fi}lighten() { if [ -n "$ITERM_PROFILE" ]; then unset THEME it2prof white reload_profile fi}reload_profile() { if [ -f ~/.bash_profile ]; then . ~/.bash_profile fi}it2prof() { if [[ "$TERM" =~ "screen" ]]; then scrn_prof "$1" else # send escape sequence to change iTerm2 profile echo -e "\033]50;SetProfile=$1\007" fi}scrn_prof() { if [ -n "$TMUX" ]; then # tell tmux to send escape sequence to underlying terminal echo -e "\033Ptmux;\033\033]50;SetProfile=$1\007\033\\" else # tell gnu screen to send escape sequence to underlying terminal echo -e "\033P\033]50;SetProfile=$1\007\033\\" fi} | Use an environment variable. This way, you can set THEME=dark or THEME=light in a shell, and all programs started by that shell will use the desired scheme. In bash or any other shell: case $THEME in light) PS1='\[\e05m\]fancy stuff\[\e0m\]';; *) PS1='\w\$ ';;esac In your .vimrc : if $THEME == "light" …else …endif | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47044/"
]
} |
157,954 | Is there a simple command to find out to which Wi-Fi AP I am connected? I have several networks configured in my /etc/wpa_supplicant.d/wpa_supplicant.conf , with assigned priorities, and depending where I am I connect to the best available Wi-Fi/SSID. Connection happens automatically, so I don't know to which SSID I am connected. Now I need to check (inside a script) whether I am connected at home, or somewhere else. I know that I can list all available SSIDs with iwlist wlan0 scan But how can I print the SSID to which I am connected at the moment? I am using Debian . | How about iwgetid ? The iwgetid(8) man page says: iwgetid is used to find out the NWID, ESSID or AP/Cell Address of the wireless network that is currently used. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
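For the "am I at home?" check mentioned in the question, a minimal sketch; the SSID string is a placeholder, and iwgetid's -r (raw output) flag is assumed to be available from the wireless-tools package.
#!/bin/sh
home_ssid="MyHomeSSID"          # placeholder
if [ "$(iwgetid -r)" = "$home_ssid" ]; then
    echo "connected to the home AP"
else
    echo "not at home (different SSID or not associated)"
fi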
157,957 | The Learning Bash Book mentions that a subshell will inherit only environment variables and file descriptors, etc., and that it will not inherit variables that are not exported: $ var=15$ (echo $var)15$ ./file # this file include the same command echo $var$ As I know the shell will create two subshells for () and for ./file , but why in the () case does the subshell identify the var variable although it is not exported and in the ./file case it did not identify it? # Strace for () clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f24558b1a10) = 25617# Strace for ./fileclone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f24558b1a10) = 25631 I tried to use strace to figure out how this happens and surprisingly I found that bash will use the same arguments for the clone system call, so this means that both the forked process in () and ./file should have the same process address space of the parent, so why in the () case is the varible visible to the subshell and the same does not happen for the ./file case, although the same arguments is based on the clone system call? | How about iwgetid ? The iwgetid(8) man page says: iwgetid is used to find out the NWID, ESSID or AP/Cell Address of the wireless network that is currently used. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85790/"
]
} |
157,997 | I had a question on a job interview: How can you execute (run) the program with the user user1 without sudo privileges and without access to the root account:
$ whoami
user1
$ ls -l ~/binary_program
-rw-r--r-- 1 root root 126160 Jan 17 18:57 /home/user1/binary_program | Since you have read permission:
$ cp ~/binary_program my_binary
$ chmod +x my_binary
$ ./my_binary
Of course this will not auto-magically grant you escalated privileges. You would still be executing that binary as a regular user. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/157997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
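As an aside not covered in the answer above, dynamically linked ELF binaries can often also be started by handing them to the dynamic loader directly, which only needs read access to the file; the loader path below is typical for 64-bit glibc systems and differs per distribution.
/lib64/ld-linux-x86-64.so.2 /home/user1/binary_program
# 32-bit systems usually have /lib/ld-linux.so.2 instead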
158,038 | I'm having trouble getting aliases to expand on my hosting account when I run a command like: ssh user@server "bash -c \"alias\"" My .bashrc file is: echo .bashrc# .bashrcshopt -s expand_aliases# Source global definitions (commenting this out does nothing)if [ -f /etc/bashrc ]; then . /etc/bashrcfi# User specific aliases and functionsalias php="php55"alias composer="php ~/bin/composer.phar" When I run the above ssh command, I do see ".bashrc" echo'd. But if I try to run aliases, I get nothing. I could try "bash -ic", but this is actually in a script that I can't easily change, and I want to know why this isn't working. Output of ssh user@server "bash -c \"shopt\"" .bashrcautocd offcdable_vars offcdspell offcheckhash offcheckjobs offcheckwinsize offcmdhist oncompat31 offcompat32 offcompat40 offdirspell offdotglob offexecfail offexpand_aliases offextdebug offextglob offextquote onfailglob offforce_fignore onglobstar offgnu_errfmt offhistappend offhistreedit offhistverify offhostcomplete onhuponexit offinteractive_comments onlithist offlogin_shell offmailwarn offno_empty_cmd_completion offnocaseglob offnocasematch offnullglob offprogcomp onpromptvars onrestricted_shell offshift_verbose offsourcepath onxpg_echo off Output of ssh user@server "bash -c \"echo $SHELL\"" .bashrc/bin/bash | From the bash(1) man page: Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt (see the description of shopt under SHELL BUILTIN COMMANDS below). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17017/"
]
} |
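Two commonly suggested workarounds, sketched with the names from the question; -i forces an interactive shell, which both reads ~/.bashrc and enables alias expansion, while the second form defines the aliases by hand inside the non-interactive shell before listing them.
ssh user@server 'bash -ic "alias"'
ssh user@server 'bash -c "shopt -s expand_aliases; . ~/.bashrc; alias"'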
158,041 | I am thinking of something like Contents-<arch>.gz on Debian. A network service would also be okay. Does it exist? Simple elaboration: For example, we need a binary named exampletool , which we know well from other distributions or operating systems. We want to install that, for example, with zypper. But zypper can only install a package. To find out in which package we can find the required exampletool binary, we practically need to do a search, and ideally a fast, indexed search in the file lists of the packages that are not currently installed but are available in the repositories. On Debian, there is an index file in the package repositories named Contents-amd64.gz , in which we can find the required package with a single zgrep command. I am looking for some similar, single-command solution for OpenSUSE, too. If there is none, a web service would also be okay for the same functionality. | To search all available packages to find a particular file, you can use the option wp or se --provides --match-exact , as an example: zypper se --provides --match-exact hg You will see output similar to the following:
Loading repository data...
Reading installed packages...

S | Name      | Summary                  | Type
--+-----------+--------------------------+--------
  | mercurial | Scalable Distributed SCM | package
From that point you can install the package through a standard zypper install: zypper in mercurial It should be noted that zypper wp is obsolete and may no longer be available. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52236/"
]
} |
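Two further lookups that exist on many openSUSE installs, hedged on version and repository metadata: the command-not-found helper (cnf) names the owning package for a binary, and zypper's --file-list search scans package file lists where the repositories actually ship them.
cnf exampletool
zypper se --file-list exampletool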
158,053 | How can I tell checkinstall only create deb package file, but not install? with checkinstall --install=no , it fails at the end, for not having permission to do something. Does it really need root to create a deb file without installation? $ checkinstall --install=nocheckinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran This software is released under the GNU GPL.********************************************* Debian package creation selected ********************************************This package will be built according to these values: 0 - Maintainer: [ tim@admin ]1 - Summary: [ wine 1.6.2 built from source Oct 3, 2014 ]2 - Name: [ wine ]3 - Version: [ 1.6.2 ]4 - Release: [ 1 ]5 - License: [ GPL ]6 - Group: [ checkinstall ]7 - Architecture: [ i386 ]8 - Source location: [ wine-1.6.2 ]9 - Alternate source location: [ ]10 - Requires: [ ]11 - Provides: [ wine ]12 - Conflicts: [ ]13 - Replaces: [ ]Enter a number to change any of them or press ENTER to continue: Installing with make install...========================= Installation results ===========================make[1]: Entering directory `/tmp/wine-1.6.2/tools'make[1]: `makedep' is up to date.make[1]: Leaving directory `/tmp/wine-1.6.2/tools'make[1]: Entering directory `/tmp/wine-1.6.2/libs/port'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/libs/port'make[1]: Entering directory `/tmp/wine-1.6.2/libs/wine'version=`(GIT_DIR=../../.git git describe HEAD 2>/dev/null || echo "wine-1.6.2") | sed -n -e '$s/\(.*\)/const char wine_build[] = "\1";/p'` && (echo $version | cmp -s - version.c) || echo $version >version.c || (rm -f version.c && exit 1)make[1]: Leaving directory `/tmp/wine-1.6.2/libs/wine'make[1]: Entering directory `/tmp/wine-1.6.2/libs/wpp'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/libs/wpp'make[1]: Entering directory `/tmp/wine-1.6.2/tools'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools'make[1]: Entering directory `/tmp/wine-1.6.2/tools/widl'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/widl'make[1]: Entering directory `/tmp/wine-1.6.2/tools/winebuild'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/winebuild'make[1]: Entering directory `/tmp/wine-1.6.2/tools/winedump'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/winedump'make[1]: Entering directory `/tmp/wine-1.6.2/tools/winegcc'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/winegcc'make[1]: Entering directory `/tmp/wine-1.6.2/tools/wmc'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/wmc'make[1]: Entering directory `/tmp/wine-1.6.2/tools/wrc'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/wrc'make[1]: Entering directory `/tmp/wine-1.6.2/include'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/include'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/adsiid'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/adsiid'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: `libdinput.def' is up to date.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: `libdinput.def.a' is up to date.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: Entering directory 
`/tmp/wine-1.6.2/dlls/dxerr8'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dxerr8'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dxerr9'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dxerr9'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dxguid'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dxguid'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/strmbase'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/strmbase'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/strmiids'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/strmiids'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/uuid'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/uuid'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/winecrt0'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/winecrt0'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/acledit'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/acledit'./tools/mkinstalldirs -m 755 /usr/local/lib/winemkdir /usr/local/lib/winemkdir: cannot create directory `/usr/local/lib/wine': Permission deniedmake: *** [/usr/local/lib/wine] Error 1**** Installation failed. Aborting package creation.Cleaning up...OKBye. with fakeroot checkinstall , also fail due to permission problem. | Try using checkinstall --install=no --fstrans=yes . It enables file system translation so package won't touch your actual filesystem. Thus it doesn't require root privileges to store files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
158,077 | I have setup a kvm win7 guest on a debian host. It's been up and running for a while now, but I had not done much with it. What I had done (for sure) was to download putty . Today I wanted to try and let putty access the host so I had to update the firewall rules. After that I had no internet connection from the guest anymore. Reverting the changes did not help either. So I wonder why I do not have Internet access from the guest anymore. I honestly can not figure out why. I append here the following info that will most probably be of interest: current iptable rules: http://pastebin.com/pTnfs5sr kvm guest xml: http://pastebin.com/U0GhW0px qemu cmd that runs the guest: http://pastebin.com/htZ4R0FE ifconfig : http://pastebin.com/uGVN29VZ Ifconfig shows 2 virtual interfaces: virbr0 which I expect and vnet0 which I do not know how it is setup and why. It does get destroyed and reappears along with virbr0 everytime I run virst net-destroy default and start it again. thank you in advance for your help. EDIT: appending the kernel routing table as well: Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 <external IP of Host> 0.0.0.0 UG 0 0 0 eth0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0195.251.61.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 | So to get internet connection to the guest I had to add the following nat table -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE-A POSTROUTING -s 192.168.122.0/24 -o eth0 -j SNAT --to-source <the host's ip> so I masquerade all outgoing packets that leave the host with ip from the guest subnet. filter table -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT Also I allow packets coming from virbr0 to the internet. And then allow replies to be forwarded back to virbr0 (established, related). filter table again: -A INPUT -s 192.168.122.0/24 -i virbr0 -j ACCEPT finally, allow guest to reach services run on host. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29529/"
]
} |
158,194 | What if 'kill -9' does not work? or How to kill a script which starts new processes? doesn't help me in anyway. I have a python script which starts automatically with another process id using the same port when killed using sudo kill -9 <pid> . $ lsof -i :3002COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEpython 13242 ubuntu 3u IPv4 64592 0t0 TCP localhost:3002 (LISTEN)$ sudo kill -9 13242$ lsof -i :3002COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEpython 16106 ubuntu 3u IPv4 74792 0t0 TCP localhost:3002 (LISTEN)$ sudo kill 16106$ lsof -i :3002COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEpython 16294 ubuntu 3u IPv4 75677 0t0 TCP localhost:3002 (LISTEN) It's not a Zombie process. $ ps -Al4 S 0 16289 1 0 80 0 - 12901 poll_s ? 00:00:00 sudo4 S 1000 16293 16289 0 80 0 - 1100 wait ? 00:00:00 sh0 S 1000 16294 16293 0 80 0 - 34632 poll_s ? 00:00:00 python I have even tried sudo pkill -f <processname> with no luck. It doesn't want to die. Update: It's parent process is sh whose parent is sudo as mentioned in the above table. I am not sure if it is safe to kill these abruptly. Also this is a shared ubuntu server. | Starts automatically with another process ID means that it is a different process. Thus there is a parent process, which monitors its children, and if one dies, it gets respawned by the parent. If you want to stop the service completely, find out how to stop the parent process. Killing it with SIGKILL is of course one of the options, but probably not The Right One TM , since the service monitor might need to do some cleanup to shut down properly. To find the monitor process, you might need to inspect the whole process list, since the actual listeners might dissociate themselves from their parent (usually by the fork() + setsid() combo). In this case, I find the output of ps faux (from procps at least, might vary for other implementations) rather handy - it lists all processes in a hierarchical tree. Unless there has been a PID wrap (see also wikipedia ), the monitor PID should be smaller than PID of any of the listeners (unless of course you hit a PID-wraparound). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/158194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68924/"
]
} |
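A sketch for walking from the listener up to whatever keeps respawning it, using the port from the question; the rest is generic lsof/ps usage.
pid=$(lsof -t -i :3002 | head -n 1)           # -t prints only the PID
ps -o pid,ppid,user,etime,cmd -p "$pid"       # the listener itself
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
ps -o pid,ppid,user,etime,cmd -p "$ppid"      # the candidate monitor to shut down cleanly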
158,234 | I have a large file composed of text fields separated by semicolons in the form of a large table. It has been sorted.I have a smaller file composed of the same text fields. At some point, someone concatenated this file with others and then did a sort to form the large file described above.I would like to subtract the lines of the small file from the big one (i.e. for each line in the small file, if a matching string exists in the big file, delete that line in the big file). The file looks roughly like this GenericClass1; 1; 2; NA; 3; 4;GenericClass1; 5; 6; NA; 7; 8;GenericClass2; 1; 5; NA; 3; 8;GenericClass2; 2; 6; NA; 4; 1; etc Is there a quick classy way to do this or do I have to use awk? | You can use grep . Give it the small file as input and tell it to find non-matching lines: grep -vxFf file.txt bigfile.txt > newbigfile.txt The options used are: -F, --fixed-strings Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. (-F is specified by POSIX.) -f FILE, --file=FILE Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.) -v, --invert-match Invert the sense of matching, to select non-matching lines. (-v is specified by POSIX.) -x, --line-regexp Select only those matches that exactly match the whole line. (-x is specified by POSIX.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/158234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79386/"
]
} |
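Because the large file is already sorted, comm is another option worth knowing; it assumes the small file is sorted the same way (done on the fly below) and, like grep -x, compares whole lines.
comm -23 bigfile.txt <(sort smallfile.txt) > newbigfile.txt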
158,244 | I have a machine with both glibc i686 and x86_64, and a very annoying problem with glibc. Is it normal to have two libraries of the same name installed on one computer? How can I know which library is executed? Until recently, I believed that x86_64 was i686. Well, I must be mistaken but why? [root@machin ~]# yum info glibc Loaded plugins: rhnplugin, security This system is not registered with RHN. RHN support will be disabled. Excluding Packages in global exclude list Finished Installed Packages Name : glibc Arch : i686 Version : 2.5 Release : 42 Size : 12 M Repo : installed Summary : The GNU libc libraries. License : LGPL Description: The glibc package contains standard libraries which are used by : multiple programs on the system. In order to save disk space and : memory, as well as to make upgrading easier, common system code is : kept in one place and shared between programs. This particular package : contains the most important sets of shared libraries: the standard C : library and the standard math library. Without these two libraries, a : Linux system will not function. Name : glibc Arch : x86_64 Version : 2.5 Release : 42 Size : 11 M Repo : installed Summary : The GNU libc libraries. License : LGPL Description: The glibc package contains standard libraries which are used by : multiple programs on the system. In order to save disk space and : memory, as well as to make upgrading easier, common system code is : kept in one place and shared between programs. This particular package : contains the most important sets of shared libraries: the standard C : library and the standard math library. Without these two libraries, a : Linux system will not function. [root@machin ~]# yum info glibc-common Loaded plugins: rhnplugin, security This system is not registered with RHN. RHN support will be disabled. Excluding Packages in global exclude list Finished Installed Packages Name : glibc-common Arch : x86_64 Version : 2.5 Release : 42 Size : 64 M Repo : installed Summary : Common binaries and locale data for glibc License : LGPL Description: The glibc-common package includes common binaries for the GNU libc : libraries, as well as national language (locale) support. | Technically, i686 is actually a 32-bit instruction set (part of the x86 family line), while x86_64 is a 64-bit instruction set (also referred to as amd64). From the sound of it, you have a 64-bit machine that has 32-bit libraries for backwards compatibility. That should be totally fine. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50325/"
]
} |
158,258 | So at work I have two web servers that I am able to ssh into. Both are RHEL 6.5 When I log into one, it shows this: [username@ldvweb01 /]$ When I log into the other one it shows: -bash-4.1$ I find it way more elegant when it shows the first one. How do I switch between the two? Can someone explain this to me? After running this echo $PS1 these are the results -bash-4.1$ echo $PS1\s-\v\$ and [appadmin@ldvcatweb01 /]$ echo $PS1[\u@\h \W]\$ After checking for the differences between both home directories. I found that there was no .bashrc or .bash_profile in the home directory. So I copied the ones from the previous server. # .bash_profile# Get the aliases and functionsif [ -f ~/.bashrc ]; then . ~/.bashrcfi# User specific environment and startup programsPATH=$PATH:$HOME/binexport PATH and # .bashrc# Source global definitionsif [ -f /etc/bashrc ]; then . /etc/bashrcfi# User specific aliases and functions I am guess that it just pulls the default bash settings from /etc Thanks for all your help. | The bash prompt is configured by the PS1 environment variable. You can get the prompt you desire by adding export PS1="[\u@\h \W]\$ " to your .bashrc file (located in your user's home directory). The full list of special characters that you can use for your prompt can be found in the official Bash documentation . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86048/"
]
} |
158,284 | I would like to know how to stop a running process after appending it with & . For example, I would like to install software foo . Now, assume, foo has many dependancies, it takes an hour to finish. So, I do: yum install foo & . But I would like to stop that on-going process either by making it foreground (the actual premise of my question) so I can interrupt it, or through other methods if necessary. Ctrl + C does not seem to stop this. | If the terminal you launched the command from is still open, you can get it back by running fg . If it is not, identify the process ID by running ps aux | grep yum or just pgrep yum and then use kill PID . Or, if you know you only have one yum instance, run pkill yum . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77324/"
]
} |
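If the command was started from a terminal that is still open, plain job control is often the shortest route; a rough sequence:
jobs          # list background jobs and their job numbers
fg %1         # bring job 1 to the foreground, after which Ctrl+C works again
kill %1       # or signal it directly without foregrounding it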
158,289 | I am trying to find the largest file in a directory recursively. If there is a subdirectory inside of that directory the function needs to go inside that directory and check to see if the largest file is there. Once the largest file is found the output is displayed with the relative path name and the name and size of the largest file. EX:
dude@shell2 (~...assignment/solutions) % bash maxfile.sh ~/testclass/asn
dude.h.gch: 9481628
This is what I have:
#!/bin/sh
clear
recursiveS() {
 for d in *; do
  if [ -d $d ]; then
   (cd $d; echo $(pwd)/$line; du -a; recursiveS;)
  fi
 done
}
recursiveS
I have been stuck for a while now. I cannot implement this by pipelining a number of existing Unix tools. Any ideas would be nice! | use find (here assuming GNU find ) to output file names with the file size. sort. print out the largest one. find . -type f -printf "%s\t%p\n" | sort -n | tail -1 That assumes file paths don't contain newline characters. Using a loop in bash with the GNU implementation of stat :
shopt -s globstar
max_s=0
for f in **; do
  if [[ -f "$f" && ! -L "$f" ]]; then
    size=$( stat -c %s -- "$f" )
    if (( size > max_s )); then
      max_s=$size
      max_f=$f
    fi
  fi
done
echo "$max_s $max_f"
This will be significantly slower than the find solution. That also assumes that file names don't end in newline characters and will skip hidden files and not descend into hidden directories. If there's a file called - in the current directory, the size of the file open on stdin will be considered. Beware that versions of bash prior to 4.3 followed symbolic links when descending the directory tree. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158289",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86067/"
]
} |
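If the assignment's "name: size" output shape is wanted, a small awk step on top of the find pipeline from the answer produces it; the directory argument is the one from the example.
find ~/testclass/asn -type f -printf "%s\t%p\n" | sort -n | tail -1 \
    | awk -F'\t' '{ print $2 ": " $1 }'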
158,369 | Is it possible to get at least "some" autocompletion in a dash shell like in bash ? At least for the existing filenames in a path. | No, dash doesn't have completion. Otherwise it would be called bash. Dash was designed to execute shell scripts fast and with a minimum of memory, it wasn't intended to be used interactively. The best way to get completion in dash is to run exec zsh or exec fish . Or, if you want to stick with a shell that doesn't use much memory, use a BusyBox sh build that includes completion. If you want to stick with dash, you can do what people did before completion existed: use wildcards. For example, instead of typing a prefix of a file name and then pressing Tab , type a prefix of a file name and then type * . The prefix needs to be unique, otherwise multiple file names will be interpolated. To list “completions”, call ls or run echo on the wildcard pattern. If you use the same file name (or other string) multiple times, store it in a variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
158,370 | I am trying to make a monoalphabetic substitution decoder in bash: I have a set of characters that form an alternative alphabet like real alphabet -> ABCDE ...my alphabet -> DGHJK ... I want sed (or another tool, but I think sed can handle it) to replace "A" with "D", "B" with "G" and so on. Is it possible, without using the "-e" argument in sed (and typing all the alphabet) to perform such a substitution? | No, dash doesn't have completion. Otherwise it would be called bash. Dash was designed to execute shell scripts fast and with a minimum of memory, it wasn't intended to be used interactively. The best way to get completion in dash is to run exec zsh or exec fish . Or, if you want to stick with a shell that doesn't use much memory, use a BusyBox sh build that includes completion. If you want to stick with dash, you can do what people did before completion existed: use wildcards. For example, instead of typing a prefix of a file name and then pressing Tab , type a prefix of a file name and then type * . The prefix needs to be unique, otherwise multiple file names will be interpolated. To list “completions”, call ls or run echo on the wildcard pattern. If you use the same file name (or other string) multiple times, store it in a variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85956/"
]
} |
158,377 | We host a Magento webshop hosted by Amazon EC2, and we have speed issues. We're currently looking at moving to another provider. We found one hosting provider where we have a test account now. Speed is better. They use memcached, and this made me wonder how that will improve things for us, if we use that on our servers. I've read that memcached caches database tables, but I understand that it is installed on the webserver, not the database server. Are there any downsides or risks when using Memcached? Will this have unforseen effects on other applications, besides improved speed? How about RAM - will we need more memory for it to work properly? Now we have 1715MB RAM (strange number but this is what it reports). | No, dash doesn't have completion. Otherwise it would be called bash. Dash was designed to execute shell scripts fast and with a minimum of memory, it wasn't intended to be used interactively. The best way to get completion in dash is to run exec zsh or exec fish . Or, if you want to stick with a shell that doesn't use much memory, use a BusyBox sh build that includes completion. If you want to stick with dash, you can do what people did before completion existed: use wildcards. For example, instead of typing a prefix of a file name and then pressing Tab , type a prefix of a file name and then type * . The prefix needs to be unique, otherwise multiple file names will be interpolated. To list “completions”, call ls or run echo on the wildcard pattern. If you use the same file name (or other string) multiple times, store it in a variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34756/"
]
} |
158,380 | I need to redirect some link with or without trailing slashes: www.domain.con/foo → www.domain.com/redirect (working) www.domain.com/foo/ → www.domain.com/redirect (not working) I tried this rule in .htaccess : RewriteRule (.*)/foo/$ http://www.domain.com/redirect$1 [L,R=301] | No, dash doesn't have completion. Otherwise it would be called bash. Dash was designed to execute shell scripts fast and with a minimum of memory, it wasn't intended to be used interactively. The best way to get completion in dash is to run exec zsh or exec fish . Or, if you want to stick with a shell that doesn't use much memory, use a BusyBox sh build that includes completion. If you want to stick with dash, you can do what people did before completion existed: use wildcards. For example, instead of typing a prefix of a file name and then pressing Tab , type a prefix of a file name and then type * . The prefix needs to be unique, otherwise multiple file names will be interpolated. To list “completions”, call ls or run echo on the wildcard pattern. If you use the same file name (or other string) multiple times, store it in a variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64116/"
]
} |
158,385 | I've followed this guide http://centoshelp.org/resources/scripts-tools/a-basic-understanding-of-screen-on-centos/ Now, seems like I am having trouble running screen with non-root user, because everytime I run screen it only shows [remote detached] what does it mean? | No, dash doesn't have completion. Otherwise it would be called bash. Dash was designed to execute shell scripts fast and with a minimum of memory, it wasn't intended to be used interactively. The best way to get completion in dash is to run exec zsh or exec fish . Or, if you want to stick with a shell that doesn't use much memory, use a BusyBox sh build that includes completion. If you want to stick with dash, you can do what people did before completion existed: use wildcards. For example, instead of typing a prefix of a file name and then pressing Tab , type a prefix of a file name and then type * . The prefix needs to be unique, otherwise multiple file names will be interpolated. To list “completions”, call ls or run echo on the wildcard pattern. If you use the same file name (or other string) multiple times, store it in a variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79493/"
]
} |
158,389 | I'd like to check whether a line printed by a command contains an error message, but I'd like to also print all the output from the command (for make logs). Is there some way to get all the output of a command (unmodified) and an exit code based on the contents of that output? The closest workaround I could think about was my_command | grep -C 99999999 '^Error' . This is similar but distinct to this question , since I care about the exit code and don't want colouring. | Use tee and redirect it to stderr my_command | tee /dev/stderr | grep -q '^Error' It will save grep exit status and duplicate all the output to stderr which is visible in console. If you need it in stdout you can redirect it there later like this: ( my_command | tee /dev/stderr | grep -q '^Error' ) 2>&1 Note that grep will output nothing, but it will be on tee . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
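A usage sketch in the make-log setting the question describes — fail the step when an Error line shows up while still streaming the full output (to stderr here, as in the answer):
if my_command | tee /dev/stderr | grep -q '^Error'; then
    echo "error lines detected" >&2
    exit 1
fi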
158,395 | I have already followed this guide to disable middle mouse button paste on my Ubuntu 12.04. Works like a charm. Now I am trying to achieve the same on my Linux Mint 17. When I try to sudo apt-get build-dep libgtk2.0-0 it gives me the following output: Reading package lists... DoneBuilding dependency tree Reading state information... DonePicking 'gtk+2.0' as source package instead of 'libgtk2.0-0'E: Unable to find a source package for gtk+2.0 For me it looks like apt-get is somehow "resolving" 'libgtk2.0-0' to 'gtk+2.0' , but then does not find any package named like that. EDIT: although I am now able to compile the program (see my answer), I still do not know what Picking 'gtk+2.0' as source package instead of 'libgtk2.0-0' is supposed to mean. Any insight on this would be appreciated, thanks! | As others have already noted, make sure that for every deb … entry in /etc/apt/sources.list and /etc/apt/sources.list.d/* , you have a matching deb-src … entry. The rest of the line must be identical. The deb entry is for binary packages (i.e. ready to install), the deb-src is for source packages (i.e. ready to compile). The reason why the two kinds of packages are separated is that they are managed very differently: binary packages have a dependency tracking mechanism and a currently-installed list, whereas source packages are only tracked so that they can be downloaded conveniently. Note that when discussing package repositories, the word source means two unrelated things: a source as in a location to download packages from, and a source package as opposed to a binary package. libgtk2.0-0 is the name of a binary package. It is built from a source package called gtk+2.0 . The reason source and binary package names don't always match is that building a source package can produce multiple binary packages; for example, gtk+2.0 is the source for 14 packages as it is split into two libraries ( libgtk2.0 , libgail ), corresponding packages to build programs using these libraries ( …-dev ), documentation for developers ( …-doc ), companion programs ( libgtk2.0-bin ), etc. You can see the name of the source package corresponding to a binary package by checking the Source: … line in the output of dpkg -s BINARY_PACKAGE_NAME (if the package is installed) or apt-cache show BINARY_PACKAGE_NAME . You can list the binary packages produced by a source package with aptitude search '?source-package(^SOURCE_PACKAGE_NAME$) . The command apt-get source downloads a source package. If you give it an argument which isn't a known source package, it looks it up in the database of installable binary packages and tries to download the corresponding source package. The command apt-get build-dep follows the same approach to deduce the name of a source package, then queries the source package database to obtain a list of binary packages (the list in the Build-Dep: field), and installs those binary packages. The Software Sources GUI has a checkbox “enable repositories with source code” for official repositories, make sure that it's ticked. If you add third-party repositories manually, make sure that you add both deb-src and deb lines. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80453/"
]
} |
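Two quick lookups for the binary/source mapping discussed above, using the package names from the question:
apt-cache show libgtk2.0-0 | grep '^Source:'     # source package behind a binary package
apt-cache showsrc gtk+2.0 | grep '^Binary:'      # binary packages built from a source package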
158,400 | In /etc/shadow file there are encrypted password. Encrypted password is no longer crypt(3) or md5 "type 1" format. ( according to this previous answer )Now I have a $6$somesalt$someveryverylongencryptedpasswd as entry. I can no longer use openssl passwd -1 -salt salt hello-world $1$salt$pJUW3ztI6C1N/anHwD6MB0 to generate encrypted passwd. Any equivalent like (non existing) .. ? openssl passwd -6 -salt salt hello-world | Python: python -c 'import crypt; print crypt.crypt("password", "$6$saltsalt$")' (for python 3 and greater it will be print(crypt.crypt(..., ...)) ) Perl: perl -e 'print crypt("password","\$6\$saltsalt\$") . "\n"' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/158400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79818/"
]
} |
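Two more ways that exist on many current systems, hedged on tool versions: OpenSSL 1.1.1 and later accept a -6 switch for SHA-512 crypt, and mkpasswd ships with the whois package on Debian-family distributions.
openssl passwd -6 -salt somesalt 'hello-world'
mkpasswd -m sha-512 -S somesalt 'hello-world'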
158,408 | I have 20 tab delimited files with the same number of rows. I want to select every 4th column of each file, pasted together to a new file. In the end, the new file will have 20 columns with each column come from 20 different files. How can I do this with Unix/Linux command(s)? Input, 20 of this same format. I want the 4th column denoted here as A1 for file 1: chr1 1734966 1735009 A1 0 0 0 0 0 1 0chr1 2074087 2083457 A1 0 1 0 0 0 0 0chr1 2788495 2788535 A1 0 0 0 0 0 0 0chr1 2821745 2822495 A1 0 0 0 0 0 1 0chr1 2821939 2822679 A1 1 0 0 0 0 0 0... Output file, with 20 columns, each column coming from one of the 20 files' 4th column: A1 A2 A3 ... A20A1 A2 A3 ... A20A1 A2 A3 ... A20A1 A2 A3 ... A20A1 A2 A3 ... A20... | with paste under bash you can do: paste <(cut -f 4 1.txt) <(cut -f 4 2.txt) .... <(cut -f 4 20.txt) With a python script and any number of files ( python scriptname.py column_nr file1 file2 ... filen ): #! /usr/bin/env python# invoke with column nr to extract as first parameter followed by# filenames. The files should all have the same number of rowsimport syscol = int(sys.argv[1])res = {}for file_name in sys.argv[2:]: for line_nr, line in enumerate(open(file_name)): res.setdefault(line_nr, []).append(line.strip().split('\t')[col-1])for line_nr in sorted(res): print '\t'.join(res[line_nr]) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86155/"
]
} |
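A loop-based sketch that avoids typing twenty process substitutions by hand; the input names file1.txt … file20.txt and the temporary .col4 suffix are assumptions.
for f in file{1..20}.txt; do
    cut -f 4 "$f" > "$f.col4"
done
paste file{1..20}.txt.col4 > merged.txt
rm -f file{1..20}.txt.col4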
158,416 | I have been given a long password by my company for my Ubuntu system. This password is cumbersome to enter when authenticating with sudo repeatedly. Can I authenticate with sudo using a password other than the one associated with my user account, or enable sudo with no password authentication altogether? | You could tie sudo authentication to the knowledge of a secret key managed by ssh-agent . This can be achieved via PAM and the pam_ssh_agent_auth module. You can generate a separate keypair to use exclusively for sudo authentication. The password will be the passphrase used to encrypt the private key. To configure the pam_ssh_agent_auth module add the following to /etc/pam.d/sudo before any other auth or include directives: auth sufficient pam_ssh_agent_auth.so file=/etc/security/authorized_keys You will also need to tell sudo not to drop the SSH_AUTH_SOCK environment variable by adding the following to /etc/sudoers (via visudo ): Defaults env_keep += "SSH_AUTH_SOCK" Now add the public portion of the key you want to act as the authentication token to /etc/security/authorized_keys . You'd probably also want to add the -t switch to ssh-add with suitable short lifetime when adding the key to have ssh-agent mimic the default sudo behavior of prompting for password confirmation if a certain time has passed since it was last entered, or even use the -c switch to trigger password confirmation each time the key is used for authentication. Note that the default in Ubuntu is to use GNOME Keyring for SSH key management, which as far as I know doesn't currently allow key timeout to be set . You can disable SSH key management in GNOME Keyring completely by adding the following to ~/.config/autostart/gnome-keyring-ssh.desktop : [Desktop Entry]Type=ApplicationName=SSH Key AgentComment=GNOME Keyring: SSH AgentExec=/usr/bin/gnome-keyring-daemon --start --components=sshOnlyShowIn=GNOME;Unity;MATE;X-GNOME-Autostart-Phase=InitializationX-GNOME-AutoRestart=falseX-GNOME-Autostart-Notify=trueX-GNOME-Autostart-enabled=falseX-GNOME-Bugzilla-Bugzilla=GNOMEX-GNOME-Bugzilla-Product=gnome-keyringX-GNOME-Bugzilla-Component=generalX-GNOME-Bugzilla-Version=3.10.1NoDisplay=trueX-Ubuntu-Gettext-Domain=gnome-keyring which overrides /etc/xdg/autostart/gnome-keyring-ssh.desktop , the key difference being the line: X-GNOME-Autostart-enabled=false | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86159/"
]
} |
158,434 | This is what I want to accomplish: I want to open a gnome terminal with five tabs in it I want to run a set of commands (5 – 10 commands) in each tab automatically First tab: shall set clear-case view and after that execute one or more commands Second tab: shall login into a server and execute some commands Third tab: shall only execute some commands gnome-terminal --geometry=260x25-0+0 --tab -e "csh -c \"ct setview myViewName; cal\"" –tab --tab --tab (works ok, view is set but no command executed after that) I have tried to do it this way instead and running this in the script below: gnome-terminal --geometry 125x18-0-26 --tab -t "some title" -e /home/ekido/Desktop/MyScripts/myScript#!/usr/bin/expectexec gnome-terminal --geometry 125x49-0+81 –tabspawn ssh usert@serverexpect "password"send "*******\r"expect "user@server100:~>"send “some command\r"expect "user@server100:~>"send “some command"interact If I remove the exec gnome-terminal --geometry 125x49-0+81 –tab rows from the example and call a script from some other file, it works fine -- I get logged in to the server and all commands executed. Can anyone help me solve this? To write a script that I call for every tab is not an option, since I will have 5 terminals with 5-7 tabs in each in the end, and that means it would be 25 to 30 scripts to write (cost more than it helps in my problem). | This seems to work on my machine: gnome-terminal --geometry=260x25-0+0 --tab -e "bash -c 'date; read -n1'" --tab -e "bash -c 'echo meow; read -n1' " --tab --tab Please note, as soon as the processes executed by -e are done running, they will terminate. In this case, bash is loaded, runs whatever commands you pass to it, and immediately exists. I put in the read statements to wait for user input. This way those tabs won't close until you press a key, just so you can see it in this example. Without them, it would look as if only two tabs opened, because the other two would execute and close too quickly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86173/"
]
} |
158,445 | I am wondering if someone might point me in the right direction. I've got little experience working with the Linux command line and recently due to various factors in work I've been required to gain knowledge. Basically I have two php scripts that reside in a directory on my server. For the purposes of the application these scripts must be running continuously. Currently I implement that this way: nohup sh -c 'while true; do php get_tweets.php; done' >/dev/null & and nohup sh -c 'while true; do php parse_tweets.php; done' >/dev/null & However, I've noticed that despite the infinte loop the scripts to stop periodically and I'm forced to restart them. I'm not sure why but they do. That has made me look into the prospect of a CRON job that checks if they are running and if they are not, run them/restart them. Would anyone be able to provide me with some information on how to go about this? | I'd like to expand on Davidann's answer since you are new to the concept of a cron job. Every UNIX or Linux system has a crontab stored somewhere. The crontab is a plain text file. Consider the following: (From the Gentoo Wiki on Cron ) #Mins Hours Days Months Day of the week10 3 1 1 * /bin/echo "I don't really like cron"30 16 * 1,2 * /bin/echo "I like cron a little"* * * 1-12/2 * /bin/echo "I really like cron" This crontab should echo "I really like cron" every minute of every hour of every day every other month. Obviously you would only do that if you really liked cron. The crontab will also echo "I like cron a little" at 16:30 every day in January and February. It will also echo "I don't really like cron" at 3:10 on the January 1st. Being new to cron, you probably want to comment the starred columns so that you know what each column is used for. Every Cron implementation that I know of has always been this order. Now merging Davidann's answer with my commented file: #Mins Hours Days Months Day of week * * * * * lockfile -r 0 /tmp/the.lock && php parse_tweets.php; rm -f /tmp/the.lock * * * * * lockfile -r 0 /tmp/the.lock && php get_tweets.php; rm -f /tmp/the.lock Putting no value in each column defaults to: Every Minute Of Every Hour Of Every Day Of Every Month All Week Long, --> Every Minute All Year Long. As Davidann states using a lockfile ensures that only one copy of the php interpreter runs, php parse_tweets.php is the command to "run" the file, and the last bit of the line will delete the lock file to get ready for the next run. I don't like deleting a file every minute, but if this is the behavior you need, this is very acceptable. Writing and Rewriting to disk is just personal preference | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83650/"
]
} |
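The same lockfile idea from the answer above can be moved into a tiny wrapper so the crontab stays readable; a sketch assuming procmail's lockfile utility is installed, with an illustrative script path and PHP path:
#!/bin/sh
# /usr/local/bin/run-once.sh (hypothetical): start the worker only if no other copy holds the lock
LOCK=/tmp/get_tweets.lock
lockfile -r 0 "$LOCK" || exit 0   # lock already held, another copy is running
trap 'rm -f "$LOCK"' EXIT         # always release the lock when the script ends
php /path/to/get_tweets.php
# crontab entry, checked every minute:
# * * * * * /usr/local/bin/run-once.sh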
158,448 | I am trying to understand permissions in detail. I was reading about setuid and it's uses. However, this particular case confuses me. I have made a small script and now I have set the suid bit for the script as below. chmod u+s ramesh I see the permissions set as below. -rwsrw-r-- 1 ramesh ramesh 29 Sep 30 10:09 ramesh Now, I believe with setuid any user could execute the script. Now, I did the command chmod u-x ramesh It gives me the permission as, -rwSrw-r-- 1 ramesh ramesh 29 Sep 30 10:09 ramesh Now, I understand the S denotes setuid with no executable bit. That is, no one can execute this file. So my question is, what practical purposes do the setting of S bit have? I am trying to understand from an example perspective for setting this bit. | Now, I believe with setuid any user could execute the script. Not quite. To make the script executable by every user, you just need to set a+rx permissions: chmod a+rx script setuid means that the script is always executed with the owner's permissions , that is, if you have the following binary: martin@dogmeat ~ % touch dangerousmartin@dogmeat ~ % sudo chown root:root dangerous martin@dogmeat ~ % sudo chmod a+rx,u+s dangerous martin@dogmeat ~ % ll dangerous -rwsrwxr-x 1 root root 0 Sep 30 17:23 dangerous* This binary will always run as root, regardless of the user that is executing it. Obviously this is dangerous and you have to be extremely careful with setuid, especially when you are writing setuid applications. Also, you shouldn't be using setuid on scripts at all because it's inherently unsafe on Linux. Now, I understand the S denotes setuid with no executable bit. That is, no one can execute this file. So my question is, what practical purposes do the setting of S bit have? I am trying to understand from an example perspective for setting this bit. I don't think that there is a practical purpose, IMO it's just a possible combination of the permission bits. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47538/"
]
} |
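A quick way to reproduce the s versus S distinction from the answer above on a throwaway file (a sketch assuming GNU coreutils stat):
touch demo
chmod 644 demo
chmod u+s demo && stat -c '%A %n' demo   # -rwSr--r-- demo  (setuid set, execute bit absent: capital S)
chmod u+x demo && stat -c '%A %n' demo   # -rwsr--r-- demo  (setuid plus execute bit: lowercase s)
rm demo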
158,489 | On Arch Linux running as a guest OS on VMWare Fusion, I noticed the system time of Arch falls behind when I sleep the host OS and never gets back in sync. It appears systemd-timesyncd is loaded but inactive. [root@arch1 ~]# systemctl status systemd-timesyncd * systemd-timesyncd.service - Network Time Synchronization Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled) Active: inactive (dead) since Tue 2014-09-30 11:04:42 PDT; 3min 7s ago start condition failed at Tue 2014-09-30 11:04:42 PDT; 3min 7s ago ConditionVirtualization=no was not met Docs: man:systemd-timesyncd.service(8) Main PID: 17582 (code=exited, status=0/SUCCESS) Status: "Idle." Update: The answers below explain how to get the systemd-timesyncd.service running under a VM, but it turns out that doesn't solve the time sync problem (which is probably why systemd-timesyncd is disabled under VMs). The Arch wiki page Installing Arch Linux in VMWare explains how to perform Time Synchronization between guest and host OS. | Just create a configuration file that unsets that parameter. mkdir -p /etc/systemd/system/systemd-timesyncd.service.d; echo -e "[Unit]\nConditionVirtualization=" > /etc/systemd/system/systemd-timesyncd.service.d/allow_virt.conf; systemctl daemon-reload; systemctl start systemd-timesyncd.service This technique is described in the systemd.unit man page: Along with a unit file foo.service, a directory foo.service.d/ may exist. All files with the suffix ".conf" from this directory will be parsed after the file itself is parsed. This is useful to alter or add configuration settings to a unit, without having to modify their unit files. Make sure that the file that is included has the appropriate section headers before any directive. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/71995/"
]
} |
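Two standard systemd commands can confirm that the drop-in from the answer above is actually being picked up; a small sketch:
systemctl cat systemd-timesyncd.service   # prints the unit file followed by any *.conf drop-ins
systemd-delta --type=extended             # lists units extended by drop-in files
systemctl status systemd-timesyncd.service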
158,494 | When attempting to launch system-config-users from command line, I get the following warning, and the tool does not open. I'm using CentOS 7 with Mate 1.8.1. WARNING **: Error enumerating actions: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files Error checking for authorization org.freedesktop.policykit.exec: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files yum list polkit* Installed Packagespolkit.x86_64 0.112-5.el7 @anacondapolkit-devel.x86_64 0.112-5.el7 @base polkit-docs.noarch 0.112-5.el7 @base polkit-gnome.x86_64 0.105-6.el7 @epel polkit-pkla-compat.x86_64 0.1-4.el7 @anaconda What is missing from my system to cause this error? | I just had the same return when installing deluged on arch, I typed: systemctl start deluged I tried with sudo and it worked fine. Seems to be a group permissions issue. All I did was enable permissions for my user account and then typed: sudo systemctl start deluged worked like a charm. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/158494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20107/"
]
} |
158,495 | I want to update my aliases from a script. I have some new aliases in ~/updateFiles/newAliases : alias ga='git add -A' I also have this update script ~/updateFiles/updater : #!/bin/bashcp newAliases ~/.bash_aliasessource ~/.bash_aliases But it does not appear to work. How do I get the calling environment to source the new aliases? This does not work: $ cd ~/updateFiles$ ./updater$ ga-su: ga: command not found This does work: $ cd ~/updateFiles$ ./updater$ source ~/.bash_aliases$ ga$ | The difference is the scope and syntactically is very subtle: $ ./updater is the equivalent of $ /bin/bash ./updater It runs the script (if it is marked as executable and on a filesystem mounted with the exec option - the latter form works even if one of these conditions is not met). That means it spawns new shell instance and feeds it content of the script. Thus any aliases defined therein are limited to the duration of the interpreting shell which is only till the end of the script. $ . updater$ . ./updater$ source updater$ source ./updater mean all the same and tell the current shell to execute contents of that file as if you typed it on the command line. That means that any aliases, functions, environment variables, shell option settings and so on will be available in the shell afterwards. That is also why you sometimes see shell init files ( ~/.bashrc in the case of Bash) that look like this: #!/bin/bashfor n in ~/etc/bash/*; do . $ndone where ~/etc/bash can look like: ~/etc/bash/|-- bash.10.env|-- bash.20.aliases`-- bash.30.func (the names are quite self-explanatory). Whenever you add some files to such init directory, all you need to do to apply the changes is . ~/.bashrc . For which you can have an alias, of course. This can be also extended - for example by having specialised initialisation depending on the hostname (or phase of moon by using pom from bsg-games). One big caveat for these setups: be sure to make the the init files "re-entrant" in the sense it doesn't matter how many times you source them in one shell - for example variables you may want to preserve should be defined conditionally: VAR=${VAR:-"value"} instead of unconditionally: VAR="value" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57734/"
]
} |
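Since the updater script only has an effect when sourced, a guard can make the wrong invocation fail loudly instead of silently doing nothing; a bash-specific sketch of the top of ~/updateFiles/updater:
#!/bin/bash
# refuse to do anything unless this file is being sourced (relies on BASH_SOURCE, so bash only)
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    echo "This script must be sourced:  . $0" >&2
    exit 1
fi
cp newAliases ~/.bash_aliases
source ~/.bash_aliases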
158,501 | I have extracted a large (3.9GB) tar.bz2 file using the following command: tar -xjvf archive.tar.bz2 The extract proceeds fines but exits printing: bzip2: (stdin): trailing garbage after EOF ignored Is there a problem with the archive/extract? Has the integrity of my data been compromised? | The difference is the scope and syntactically is very subtle: $ ./updater is the equivalent of $ /bin/bash ./updater It runs the script (if it is marked as executable and on a filesystem mounted with the exec option - the latter form works even if one of these conditions is not met). That means it spawns new shell instance and feeds it content of the script. Thus any aliases defined therein are limited to the duration of the interpreting shell which is only till the end of the script. $ . updater$ . ./updater$ source updater$ source ./updater mean all the same and tell the current shell to execute contents of that file as if you typed it on the command line. That means that any aliases, functions, environment variables, shell option settings and so on will be available in the shell afterwards. That is also why you sometimes see shell init files ( ~/.bashrc in the case of Bash) that look like this: #!/bin/bashfor n in ~/etc/bash/*; do . $ndone where ~/etc/bash can look like: ~/etc/bash/|-- bash.10.env|-- bash.20.aliases`-- bash.30.func (the names are quite self-explanatory). Whenever you add some files to such init directory, all you need to do to apply the changes is . ~/.bashrc . For which you can have an alias, of course. This can be also extended - for example by having specialised initialisation depending on the hostname (or phase of moon by using pom from bsg-games). One big caveat for these setups: be sure to make the the init files "re-entrant" in the sense it doesn't matter how many times you source them in one shell - for example variables you may want to preserve should be defined conditionally: VAR=${VAR:-"value"} instead of unconditionally: VAR="value" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86214/"
]
} |
158,564 | This is my code #!/bin/bash showword() { echo $1 } echo This is a sample message | xargs -d' ' -t -n1 -P2 showword So I have a function showword which echoes whatever string you pass as a parameter to the function. Then I have xargs trying to call the function and pass one word at a time to the function, and run 2 copies of the function in parallel. The thing that is not working is xargs doesn't recognize the function. How can I achieve what I am trying to do, how can I make xargs work with the function showword ? | Try exporting the function, then calling it in a subshell: showword() { echo $1 } export -f showword echo This is a sample message | xargs -d' ' -t -n1 -P2 bash -c 'showword "$@"' _ This causes xargs to execute bash -c 'showword "$@"' _ This, then bash -c 'showword "$@"' _ is, then bash -c 'showword "$@"' _ a, and so on. The arguments passed to the bash command are, well, passed into the bash environment, but starting from 0. So, inside the function, $0 is “_” and $1 is “This”; then $0 is “_” and $1 is “is”; then $0 is “_” and $1 is “a”; and so on. See Bash -c with positional parameters . Note that export -f works only in Bash, and -P n ( --max-procs=max-procs ) works only in GNU xargs . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84978/"
]
} |
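A variant sketch of the exported-function trick, handing several words to each bash instance instead of one (assumes GNU xargs and that showword was exported as in the answer):
echo This is a sample message | xargs -d' ' -n3 -P2 bash -c 'for w in "$@"; do showword "$w"; done' _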
158,584 | A former coworker did something to top that whenever it runs as root the data is sorted by MEM usage instead of the default CPU usage. According to multiple searches, the man page and even the options within the top console itself (O), just pressing k it should be sorted by CPU, but instead when I hit k it asks me for a pid to kill. | You can change the sort field in the interactive top window with the < and > keys. I'm not sure what operating system you're running but at least on my GNU top, k is supposed to kill, not reset. Presumably, your friend changed the sort field and hit Shift + W to save to ~/.toprc . Just use the keys I mentioned to choose the sort field you want and then, when it's set up as you like it, hit Shift + W again and it should save that state and open that way next time. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27807/"
]
} |
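When a one-shot, non-interactive listing is enough, the sort can also be done outside top; a sketch assuming a GNU/Linux procps ps:
ps aux --sort=-%mem | head -n 15   # top 15 processes by resident memory usage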
158,597 | What are good program or script solutions to run specific applications depending on whether the laptop is running on battery or charging? In order to conserve battery, I automatically want to stop certain applications from running when on battery (Dropbox, backup engine, etc...) and restart them when back on charging. | You can change the sort field in the interactive top window with the < and > keys. I'm not sure what operating system you're running but at least on my GNU top, k is supposed to kill, not reset. Presumably, your friend changed the sort field and hit Shift + W to save to ~/.toprc . Just use the keys I mentioned to choose the sort field you want and then, when it's set up as you like it, hit Shift + W again and it should save that state and open that way next time. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28643/"
]
} |
158,613 | Debian on external USB SSD drive. There was some error in dmesg log file: ...[ 3.320718] EXT4-fs (sdb2): INFO: recovery required on readonly filesystem [ 3.320721] EXT4-fs (sdb2): write access will be enabled during recovery [ 5.366367] EXT4-fs (sdb2): orphan cleanup on readonly fs [ 5.366375] EXT4-fs (sdb2): ext4_orphan_cleanup: deleting unreferenced inode 6072 [ 5.366426] EXT4-fs (sdb2): ext4_orphan_cleanup: deleting unreferenced inode 6071 [ 5.366442] EXT4-fs (sdb2): 2 orphan inodes deleted [ 5.366444] EXT4-fs (sdb2): recovery complete... The system boots and works normally. Is it possible to repair this fully, and what is the proper way? | You can instruct the filesystem to perform an immediate fsck upon being mounted like so: Method #1: Using /forcefsck You can usually schedule a check at the next reboot like so: $ sudo touch /forcefsck $ sudo reboot Method #2: Using shutdown You can also tell the shutdown command to do so as well, via the -F switch: $ sudo shutdown -rF now NOTE: The first method is the most universal way to achieve this! Method #3: Using tune2fs You can also make use of tune2fs , which can set the parameters on the filesystem itself to force a check the next time a mount is attempted. $ sudo tune2fs -l /dev/sda1 Mount count: 3 Maximum mount count: 25 So you have to place the "Mount count" higher than 25 with the following command: $ sudo tune2fs -C 26 /dev/sda1 Check the value changed with tune2fs -l and then reboot! NOTE: Of the 3 options I'd use tune2fs given it can deal with force checking any filesystem, whether it's the primary one ( / ) or some other. Additional notes You'll typically see the "Maximum mount count:" and "Check interval:" parameters associated with a partition that's been formatted as ext2/3/4. Often times they're configured like so: $ tune2fs -l /dev/sda5 | grep -E "Mount count|Maximum mount|interval" Mount count: 178 Maximum mount count: -1 Check interval: 0 (<none>) When the parameters are set this way, the device will never perform an fsck during mounting. This is fairly typical with most distros. There are 2 forces that drive a check: either the number of mounts or an elapsed time. The "Check interval" is the time-based one. You can say every 2 weeks to that argument, 2w. See the tune2fs man page for more info. NOTE: Also make sure to understand that tune2fs is a filesystem command, not a device command. So it doesn't work with just any old device ( /dev/sda ); unless there's an ext2/3/4 filesystem there, the command tune2fs is meaningless. It has to be used against a partition that's been formatted with one of those types of filesystems. References Linux Force fsck on the Next Reboot or Boot Sequence | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487383/"
]
} |
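Building on Method #3 above, the periodic checks described under "Additional notes" can be switched back on explicitly; a sketch (substitute your own partition for /dev/sdb2):
sudo tune2fs -c 30 -i 2w /dev/sdb2   # fsck after every 30 mounts or every 2 weeks, whichever comes first
sudo tune2fs -l /dev/sdb2 | grep -E 'Maximum mount count|Check interval'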
158,616 | I have a lot of files with names like: index.php?dir=IOP%2FFOO%2FBAR%2F&file=ScanImage001.jpg After replacing index.php? with nothing %2F with ' / ', dir= with noting and file= with nothing: IOP/FOO/BAR/ScanImage001.jpg Now I want to rename first file to the second as well as create directories. How can I do that? | You can instruct the filesystem to perform an immediate fsck upon being mounted like so: Method #1: Using /forcefsck You can usually schedule a check at the next reboot like so: $ sudo touch /forcefsck$ sudo reboot Method #2: Using shutdown You can also tell the shutdown command to do so as well, via the -F switch: $ sudo shutdown -rF now NOTE: The first method is the most universal way to achieve this! Method #3: Using tune2fs You can also make use of tune2fs , which can set the parameters on the filesystem itself to force a check the next time a mount is attempted. $ sudo tune2fs -l /dev/sda1Mount count: 3Maximum mount count: 25 So you have to place the "Mount count" higher than 25 with the following command: $ sudo tune2fs -C 26 /dev/sda1 Check the value changed with tune2fs -l and then reboot! NOTE: Of the 3 options I'd use tune2fs given it can deal with force checking any filesystem whether it's the primary's ( / ) or some other. Additional notes You'll typically see the "Maximum mount count:" and "check interval:" parameters associated with a partition that's been formatted as ext2/3/4. Often times they're configured like so: $ tune2fs -l /dev/sda5 | grep -E "Mount count|Maximum mount|interval"Mount count: 178Maximum mount count: -1Check interval: 0 (<none>) When the parameters are set this way, the device will never perform an fsck during mounting. This is fairly typical with most distros. There are 2 forces that drive a check. Either number of mounts or an elapse time. The "Check interval" is the time based one. You can say every 2 weeks to that argument, 2w. See the tune2fs man page for more info. NOTE: Also make sure to understand that tune2fs is a filesystem command, not a device command. So it doesn't work with just any old device, /dev/sda , unless there's an ext2/3/4 filesystem there, the command tune2fs is meaningless, it has to be used against a partition that's been formatted with one of those types of filessystems. References Linux Force fsck on the Next Reboot or Boot Sequence | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86305/"
]
} |
158,638 | I have a svn local copy, I want to make a search on the string some_string. $ grep some_string * -r lang/en:some_string=Some string lang/.svn/en:some_string=Some string But if I go a little further, it seems that first level hidden directories are excluded: $ cd lang && grep some_string * -r en:some_string=Some string How can I remove the hidden svn directories from my output, and not only from the first depth level? $ grep some_string * -r --which_option_here? lang/en:some_string=Some string | Use the --exclude-dir option, e.g.: grep -r --exclude-dir='.*' some_string From man grep : --exclude-dir=DIR Exclude directories matching the pattern DIR from recursive searches. Note however, that the --exclude-dir option is available only in GNU grep. If your grep doesn't support it, you may need to use another tool such as find , for example: find . \( -name .svn -prune \) -o -name "*" -exec grep -H "some_string" {} 2>/dev/null \; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50325/"
]
} |
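If only the .svn directories should be skipped rather than every hidden directory, the same GNU grep option takes the literal name; a one-line sketch:
grep -r --exclude-dir=.svn some_string .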
158,678 | I can successfully mount an ext4 partition, the problem is that all the files on the partition are owned by the user with userid 1000. On one machine, my userid is 1000, but on another it's 1010. My username is the same on both machines, but I realise that the filesystem stores userids, not usernames. I could correct the file ownership with something like the following: find /mnt/example -exec chown -h 1010 {} \; But then I would have to correct the file ownerships again back to 1000 when I mount this external drive on another machine. What I would like is to give mount an option saying map userid 1000 to 1010, so that I don't have to actually modify any files. Is there a way to do this? | Take a look at the bindfs package. bindfs is a FUSE filesystem that allows for various manipulations of file permissions, file ownership etc. on top of existing file systems. You are looking specifically for the --map option of bindfs: --map=user1/user2:@group1/@group2:..., -o map=... Given a mapping user1/user2, all files owned by user1 are shown as owned by user2. When user2 creates files, they are chowned to user1 in the underlying directory. When files are chowned to user2, they are chowned to user1 in the underlying directory. Works similarly for groups. A single user or group may appear no more than once on the left and once on the right of a slash in the list of mappings. Currently, the options --force-user, --force-group, --mirror, --create-for-*, --chown-* and --chgrp-* override the corresponding behavior of this option. Requires mounting as root. So to map your files with user id 1001 in /mnt/wrong to /mnt/correct with user id 1234, run this command: sudo bindfs --map=1001/1234 /mnt/wrong /mnt/correct | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/158678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9041/"
]
} |
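To make the mapping permanent across reboots, bindfs mounts can also be declared in /etc/fstab; a sketch assuming the installed bindfs version ships the fuse.bindfs mount helper (check its man page):
# /etc/fstab
/mnt/wrong  /mnt/correct  fuse.bindfs  map=1001/1234  0  0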
158,727 | I have been studying the Linux kernel behaviour for quite some time now, and it's always been clear to me that: When a process dies, all its children are given back to the init process (PID 1) until they eventually die. However, recently, someone with much more experience than me with the kernel told me that: When a process exits, all its children also die (unless you use NOHUP in which case they get back to init ). Now, even though I don't believe this, I still wrote a simple program to make sure of it. I know I should not rely on time ( sleep ) for tests since it all depends on process scheduling, yet for this simple case, I think that's fairly enough. #include <stdio.h> #include <stdlib.h> #include <unistd.h> int main(void){ printf("Father process spawned (%d).\n", getpid()); sleep(5); if(fork() == 0){ printf("Child process spawned (%d => %d).\n", getppid(), getpid()); sleep(15); printf("Child process exiting (%d => %d).\n", getppid(), getpid()); exit(0); } sleep(5); fprintf(stdout, "Father process exiting (%d).\n", getpid()); return EXIT_SUCCESS;} Here is the program's output, with the associated ps result every time printf talks: $ ./test & Father process spawned (435). $ ps -ef | grep test myuser 435 392 tty1 ./test Child process spawned (435 => 436). $ ps -ef | grep test myuser 435 392 tty1 ./test myuser 436 435 tty1 ./test Father process exiting (435). $ ps -ef | grep test myuser 436 1 tty1 ./test Child process exiting (436). Now, as you can see, this behaves quite as I would have expected it to. The orphan process (436) is given back to init (1) until it dies. However, is there any UNIX-based system on which this behaviour does not apply by default? Is there any system on which the death of a process immediately triggers the death of all its children? | When a process exits, all its children also die (unless you use NOHUP in which case they get back to init). This is wrong. Dead wrong. The person saying that was either mistaken, or confused a particular situation with the general case. There are two ways in which the death of a process can indirectly cause the death of its children. They are related to what happens when a terminal is closed. When a terminal disappears (historically because the serial line was cut due to a modem hangup, nowadays usually because the user closed the terminal emulator window), a SIGHUP signal is sent to the controlling process running in that terminal — typically, the initial shell started in that terminal. Shells normally react to this by exiting. Before exiting, shells intended for interactive use send HUP to each job that they started. Starting a job from a shell with nohup breaks that second source of HUP signals because the job will then ignore the signal and thus not be told to die when the terminal disappears. Other ways to break the propagation of HUP signals from the shell to the jobs include using the shell's disown builtin if it has one (the job is removed from the shell's list of jobs), and double forking (the shell launches a child which launches a child of its own and exits immediately; the shell has no knowledge of its grandchild). Again, the jobs started in the terminal die not because their parent process (the shell) dies, but because their parent process decides to kill them when it is told to kill them. And the initial shell in the terminal dies not because its parent process dies, but because its terminal disappears (which may or may not coincidentally be because the terminal is provided by a terminal emulator which is the shell's parent process). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41892/"
]
} |
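A short sketch of the mechanisms named in the answer above, using bash syntax; long_running_command is a placeholder:
long_running_command &    # an ordinary background job: an interactive shell will send it HUP when the terminal goes away
disown %1                 # remove it from the shell's job list so no HUP is forwarded to it
nohup long_running_command >/dev/null 2>&1 &   # or decide up front: the job ignores SIGHUP entirely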