source_id | question | response | metadata
---|---|---|---|
176,215 | Exactly what is the difference between devfs and sysfs ? Both seem to maintain a list of hardwares attached to the system . Then why the need for 2 separate fs even arose? As far as I can get /sys maintains somewhat "raw" list of devices(like "ser0"). Udev acts on those devices, gets various informations and applies various rules to present them as recognizable names which are then mapped onto /dev (like "camera"). Is this the only reason? And then we mount the corresponding devices from the /dev fs(can't we do that from the /sys fs) into the /media fs. I have read the answer at Difference between /dev and /sys/class? . But I cannot get the sys fs part where it states that Sysfs contain the hierarchy of devices, as they are attached to the computer Are the files in /sys not device node files? Then what type of files are they? | The /sys filesystem (sysfs) contains files that provide information about devices: whether it's powered on, the vendor name and model, what bus the device is plugged into, etc. It's of interest to applications that manage devices. The /dev filesystem contains files that allow programs to access the devices themselves: write data to a serial port, read a hard disk, etc. It's of interest to applications that access devices. A metaphor is that /sys provides access to the packaging, while /dev provides access to the content of the box. The files in /sys are not device nodes, but symbolic links and regular files. Those regular files are special in that reading or writing to them invokes file-specific functions in the kernel, like device nodes. The difference is that files in /sys work this way because of the filesystem they are on, whereas device nodes work this way due to their device node characteristics (the file type indicating a (block or character) device, and the device major and minor number indicating which device it is). The reason for /dev existing independently of /sys is partly historical: /dev dates back to the dawn of Unix, while /sys is a much more recent invention. If Linux was designed today with no historical background, /dev/sda might be /sys/block/sda/content . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96209/"
]
} |
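A minimal shell sketch of the /sys-versus-/dev distinction described in the answer above. It assumes a Linux machine with a disk that shows up as `sda`; device and attribute names vary between systems.

```bash
# /sys: metadata about the device, exposed as regular files and symlinks
cat /sys/block/sda/size            # capacity, in 512-byte sectors
cat /sys/block/sda/device/model    # model string reported by the drive
ls -l /sys/block/sda               # a symlink into the kernel's device hierarchy

# /dev: the device's content, exposed through a device node
ls -l /dev/sda                     # file type "b" (block device) plus major/minor numbers
sudo dd if=/dev/sda of=/tmp/first-sector.bin bs=512 count=1   # read the raw bytes
```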
176,216 | On this SO thread and a few other threads I have seen the following commands for redirecting stdout and stderr to a file. Are they all equivalent? Is there any difference between them? command1 >> logfile 2>&1command &> logfilecommand >& logfile | Since you have tagged zsh , let me tell you that all the 3 redirections works exactly the same way. As you might have read in both the duplicate posts (the one in the comment and the one in your post), they all redirect stderr to stdout which inturn is redirected to the file 'logfile' (ie, the logfile will contain both the output and errors). But their behaviour changes a LOT depending on the shell you are in. All the three styles of redirections works well in the same way in bash and zsh But: Only >& works in csh or tcsh [soum@server ~]$ ./test.sh > logfile 2>&1Ambiguous output redirect.[soum@server ~]$ ./test.sh &> logfileInvalid null command.[soum@server ~]$ ./test.sh >& logfile[soum@server ~]$ echo $SHELL/bin/tcsh[soum@server ~]$ In ksh only 2>&1 works. $ ./test.sh >& logfile-ksh: logfile: bad file unit number$ ./test.sh &> logfile[1] 23039$ 1 2 3 4 5 6 logfile test.shls: cannot access ttr: No such file or directory[1] + Done(2) ./test.sh &> logfile I hate ksh . While >& just gave an error, the &> backgrounded a part of the command and emptied the logfile (if non-empty). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
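The three redirection spellings from the question, side by side, as a bash sketch (the script name is a placeholder). One practical difference the answer does not dwell on: `>>` appends to the log, while `&>` and `>&` truncate it.

```bash
#!/usr/bin/env bash
./test.sh >> logfile 2>&1   # portable Bourne/POSIX form: append stdout, then duplicate fd 1 onto fd 2
./test.sh &> logfile        # bash/zsh shorthand; truncates logfile first
./test.sh >& logfile        # csh-style spelling, also accepted by bash and zsh (and the only form tcsh accepts)
```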
176,217 | Having read both What is meant by mounting a device in Linux? and understanding "mount" as a concept in the OS , I have a problem where it is stated that All accessible storage must have an associated location in this single directory tree. This is unlike Windows where (in the most common syntax for file paths) there is one directory tree per storage component (drive). Mounting is the act of associating a storage device to a particular location in the directory tree. But there is already an accessible location for say a cdrom drive under /dev/cdrom which obviously comes in the directory hierarchy. So why the need for creating a separate "mount point" under /media/cdrom? Why accessing directly from /dev/cdrom is made impossible? I heard that that the device node files are just like ordinary files. And reading and writing to them is just like ordinary files. So does this mean that the filesystem in the cdrom is not available if we access it from /dev/cdrom. And the filesystem hierarchy(inside the cdrom) "comes alive" when we "mount" it? | You can read or write /dev/cdrom (eg, using dd or cat ) but when you do that you are just reading or writing the raw bytes of the device. That can be useful in various circumstances (like cloning a partition), but generally we want to see the directories and files stored on the device. When you mount a device you're basically telling the kernel to use a layer of software (the filesystem driver) to translate those raw bytes into an actual filesystem. Thus mounting a device associates the filesystem on that device to the directory hierarchy. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96209/"
]
} |
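A short illustration of the raw-bytes-versus-filesystem point made above, assuming the drive is reachable as /dev/cdrom and /media/cdrom can be created:

```bash
# Raw access: just a stream of bytes, no files or directories
sudo dd if=/dev/cdrom of=/tmp/disc.iso bs=2048          # clone the disc into an image

# Mounted access: the filesystem driver interprets those bytes
sudo mkdir -p /media/cdrom
sudo mount /dev/cdrom /media/cdrom                      # filesystem type is usually auto-detected
ls /media/cdrom                                         # now the directory hierarchy "comes alive"
sudo umount /media/cdrom
```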
176,307 | I'm pretty sure that all Red Hat and Debian based distributions follow the convention of shipping the kernel configuration in /boot/config-* , but what of other distributions? Or, if this convention is extremely common, which distributions don't follow it? | Debian and derivatives (Ubuntu, Linux Mint, β¦) The configuration for the kernel /boot/vmlinuz- VERSION is stored in /boot/config- VERSION . The two files ship in the same package, linux- VERSION or kernel- VERSION . Arch Linux, Gentoo (if enabled) The configuration for the running kernel is stored in the kernel binary and can be retrieved with zcat /proc/config.gz . This file exists when the CONFIG_IKCONFIG option is set when compiling the kernel - and so can be true (or not) regardless of distribution, though the default kernel configuration for the two named does enable it. Incidentally, arch linux's default configuration does not name the kernel (or its initramfs image) by version even in /boot - the files there are named only for their corresponding packages. For example, a typical arch linux boot kernel is named /boot/vmlinuz- linux where linux is the package one installs for the default kernel. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41033/"
]
} |
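Two quick ways to read the configuration of the running kernel, matching the two conventions described above (the /proc path only exists when CONFIG_IKCONFIG and CONFIG_IKCONFIG_PROC were enabled at build time):

```bash
# Debian/Red Hat style: config file shipped alongside the kernel image
grep CONFIG_IKCONFIG "/boot/config-$(uname -r)"

# Arch/Gentoo style: config embedded in the kernel itself
zcat /proc/config.gz | grep CONFIG_IKCONFIG
```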
176,322 | bash and fish scripts are not compatible, but I would like to have a file that defines some some environment variables to be initialized by both bash and fish. My proposed solution is defining a ~/.env file that would contain the list of environment variables like so: PATH="$HOME/bin:$PATH"FOO="bar" I could then just source it in bash and make a script that converts it to fish format and sources that in fish. I was thinking that there may be a better solution than this, so I'm asking for better way of sharing environment variables between bash fish. Note: I'm using OS X. Here is an example .env file that I would like both fish and bash to handle using ridiculous-fish's syntax (assume ~/bin and ~/bin2 are empty directories): setenv _PATH "$PATH"setenv PATH "$HOME/bin"setenv PATH "$PATH:$HOME/bin2"setenv PATH "$PATH:$_PATH" | bash has special syntax for setting environment variables, while fish uses a builtin. I would suggest writing your .env file like so: setenv VAR1 val1setenv VAR2 val2 and then defining setenv appropriately in the respective shells. In bash (e.g. .bashrc): function setenv() { export "$1=$2"; }. ~/.env In fish (e.g. config.fish): function setenv; set -gx $argv; endsource ~/.env Note that PATH will require some special handling, since it's an array in fish but a colon delimited string in bash. If you prefer to write setenv PATH "$HOME/bin:$PATH" in .env, you could write fish's setenv like so: function setenv if [ $argv[1] = PATH ] # Replace colons and spaces with newlines set -gx PATH (echo $argv[2] | tr ': ' \n) else set -gx $argv end end This will mishandle elements in PATH that contain spaces, colons, or newlines. The awkwardness in PATH is due to mixing up colon-delimited strings with true arrays. The preferred way to append to PATH in fish is simply set PATH $PATH ~/bin . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9191/"
]
} |
176,324 | What is a file descriptor? Why do we need them? | A file descriptor is a number that represents an open file in a process. It's a way for the program to remember which file it's manipulating. Opening a file looks for a free number and assigns it to the file in that process's file descriptor table; closing the file removes the entry from the process's descriptor table. There is no relation between file descriptor n in a process and the file descriptor with the same number in another process. βEvery file has three of them (stdin, stdout, stderr)β is nonsense. Processes have file descriptors, not files. Processes can and often do have more than three file descriptors, and can have fewer. Stdin, stdout and stderr are the names for file descriptors 0, 1 and 2 because they have a conventional meaning: stdin (standard input) is where the program is supposed to read user input (if it wants to), stdout (standard output) is where the program is supposed to write the data that it produces (if it wants to), and stderr (standard error) is for error messages. Stdin and stdout are of use in programs intended to be used on the command lines and especially in pipelines; I invite you to read what is meant by connecting STDOUT and STDIN? and (more advanced) How can a command have more than one output? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96047/"
]
} |
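A small bash session showing that descriptors are per-process integers and that 0, 1 and 2 are only conventions (the /proc listing is Linux-specific):

```bash
exec 3> /tmp/fd-demo.log      # open a file on descriptor 3 in the current shell process
echo "written via fd 3" >&3   # write through that descriptor
ls -l /proc/$$/fd             # list this shell's open descriptors: 0, 1, 2, 3, ...
exec 3>&-                     # close descriptor 3 again
```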
176,326 | The following works in my shell (zsh): > FOO='ls'> $FOOfile1 file2 but the following doesn't: > FOO='emacs -nw'> $FOOzsh: command not found: emacs -nw even though invoking emacs -nw directly opens Emacs perfectly well. Why? | Because there's no command called emacs -nw . There's a command called emacs to which you can pass a -nw option. To store commands, you generally use functions : foo() emacs -nw "$@"foo ... To store several arguments, you generally use arrays: foo=(emacs -nw)$foo ... To store a string containing several words separated by spaces and have it split on spaces, you can do: foo='emacs -nw'${(s: :)foo} ... You could rely on word splitting performed on IFS (IFS contains space, tab, newline and nul by default): foo='emacs -nw'$=foo ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
176,350 | I would like to generate a number of arrays that differ only by year. In the loop, I create the arrays with brace expansion and a variable. I tried the following code without success: LIST={JF,JFE,RFS,JBF,JFI,JMCB}for year in {1998..2000} {2009..2011}do declare -a 'y$year=('"$LIST"'-$year)' echo "${y$year[@]}"done The outcome should be the list of the following entries: y1998: JF-1998 JFE-1998 RFS-1998 JBF-1998 JFI-1998 JMCB-1998y1999: JF-1999 JFE-1999 RFS-1999 JBF-1999 JFI-1999 JMCB-1999...y2011: JF-2011 JFE-2011 RFS-2011 JBF-2011 JFI-2011 JMCB-2011 I don't need to print them, just use them in a loop as well as pass them as arguments. Because of the second, eval in the loop is not sufficient. | Deferring brace expansion is really a case for eval , particularly if you want to stringify things β ordinary parameter expansion doesn't do the right thing at the right time. This should do what it seems you wanted: LIST={JF,JFE,RFS,JBF,JFI,JMCB}for year in {1998..2000} {2009..2011}do eval "y$year=($LIST-$year)" tmp="y$year[@]" echo "${!tmp}"done You can't indirect into an array, so it's necessary to stringify the array inside an eval too if you want to print it out. If you don't, you can take out everything after the ; . tmp is used for indirect expansion : with tmp set to "y$year[@]" , where $year is replaced with its value, the expansion ${!tmp} gives the contents of this iteration's array (what ${y1998[@]} , etc, would have expanded to). The above will output: JF-1998 JFE-1998 RFS-1998 JBF-1998 JFI-1998 JMCB-1998JF-1999 JFE-1999 RFS-1999 JBF-1999 JFI-1999 JMCB-1999JF-2000 JFE-2000 RFS-2000 JBF-2000 JFI-2000 JMCB-2000JF-2009 JFE-2009 RFS-2009 JBF-2009 JFI-2009 JMCB-2009JF-2010 JFE-2010 RFS-2010 JBF-2010 JFI-2010 JMCB-2010JF-2011 JFE-2011 RFS-2011 JBF-2011 JFI-2011 JMCB-2011 and also create arrays y1998 ... y2011 . The declare s aren't strictly necessary, although they do let you skip an eval if you're aiming for that and don't need to stringify. I suggest that this probably isn't the way you really want to achieve your underlying goal, though, whatever that is. Nested loops aren't evil, and if any bit of it's hardcoded you can abstract that out. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94483/"
]
} |
176,387 | I want to move large file created by external process as soon as it's closed. Is this test command correct? if lsof "/file/name"then # file is open, don't touch it!else if [ 1 -eq $? ] then # file is closed mv /file/name /other/file/name else # lsof failed for some other reason fifi EDIT: the file represents a dataset and I have to wait until it's complete to move it so another program can act on it. That's why I need to know if the external process is done with the file. | From the lsof man page Lsof returns a one (1) if any error was detected, including the failure to locate command names, file names, Internet addresses or files, login names, NFS files, PIDs, PGIDs, or UIDs it was asked to list. If the -V option is specified, lsof will indicate the search items it failed to list. So that would suggest that your lsof failed for some other reason clause would never be executed. Have you tried just moving the file while your external process still has it open? If the destination directory is on the same filesystem, then there should be no problems with doing that unless you need to access it under the original path from a third process as the underlying inode will remain the same. Otherwise I think mv will fail anyway. If you really need to wait until your external process is finished with the file, you are better to use a command that blocks instead of repeatedly polling. On Linux, you can use inotifywait for this. Eg: inotifywait -e close_write /path/to/file If you must use lsof (maybe for portability), you could try something like: until err_str=$(lsof /path/to/file 2>&1 >/dev/null); do if [ -n "$err_str" ]; then # lsof printed an error string, file may or may not be open echo "lsof: $err_str" >&2 # tricky to decide what to do here, you may want to retry a number of times, # but for this example just break break fi # lsof returned 1 but didn't print an error string, assume the file is open sleep 1doneif [ -z "$err_str" ]; then # file has been closed, move it mv /path/to/file /destination/pathfi Update As noted by @JohnWHSmith below, the safest design would always use an lsof loop as above as it is possible that more than one process would have the file open for writing (an example case may be a poorly written indexing daemon that opens files with the read/write flag when it should really be read only). inotifywait can still be used instead of sleep though, just replace the sleep line with inotifywait -e close /path/to/file . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43182/"
]
} |
176,408 | I'm reading a relatively old text on Linux partitions and file systems (the LPIC 1 Certification Bible ). It says: Some versions of the Linux boot loaders cannot access a kernel that is outside the first 1024 cylinders on a disk. By putting the /boot partition at the beginning of the drive you can be assured of not having a problem when accessing the kernel at boot. This problem shows itself most often in cases of dual booting Linux along with another operating system that is on the first partition. Why would a boot loader have " no access to the kernel outside the first 1024 cylinders on a disk "? Also, what does " putting the /boot partition at the beginning of the drive " mean? | This is a limitation imposed by having a very old BIOS and bootloader rather than Linux itself. The BIOS would only be able to access the first 1024 cylinders of the disk (see here for more information on what cylinders/heads/sectors are). This limitation would extend to bootloaders which, due to their simple nature, would not have their own disk drivers and would use the BIOS services to access the disk. This meant that both the bootloader and the kernel (since it is the bootloader's job to load it) would have to be within the first 1024 cylinders on systems with this limitation. A simple way to do this was to create a separate /boot partition containing the kernel and put it at the beginning of the drive (usually just by making it the first partition). This means that it would physically reside within the first 1024 cylinders, provided of course that the partition wasn't too large. The limitation is no longer an issue because it only applies to old BIOSes. Also, many modern bootloaders (eg GRUB) have their own disk drivers and so do not need to rely on BIOS services. Modern bootloaders may use /boot for other purposes, but it is no longer required to be both on a separate partition and within the first 1024 cylinders (although there are many cases where it is necessary to have /boot on a separate partition). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90796/"
]
} |
176,409 | screen command is a nice program for us to run process background, but I found Ctrl + a w do not show screen windows list in xshell(Xmanager component) and Ctrl + a k do not kill this screen terminal for me. However Ctrl + a d to dettach session works! So what's wrong with Ctrl +a w to list sessions? More serious, How do I know whether I am in screen window or normal bash window? Many times I try to dettach screen session, I got logout after ctrl+a d . Very embarrassing isn't it? So is there any hints to show me whether am I in a screen window or just normal tty terminal? | This is a limitation imposed by having a very old BIOS and bootloader rather than Linux itself. The BIOS would only be able to access the first 1024 cylinders of the disk (see here for more information on what cylinders/heads/sectors are). This limitation would extend to bootloaders which, due to their simple nature, would not have their own disk drivers and would use the BIOS services to access the disk. This meant that both the bootloader and the kernel (since it is the bootloader's job to load it) would have to be within the first 1024 cylinders on systems with this limitation. A simple way to do this was to create a separate /boot partition containing the kernel and put it at the beginning of the drive (usually just by making it the first partition). This means that it would physically reside within the first 1024 cylinders, provided of course that the partition wasn't too large. The limitation is no longer an issue because it only applies to old BIOSes. Also, many modern bootloaders (eg GRUB) have their own disk drivers and so do not need to rely on BIOS services. Modern bootloaders may use /boot for other purposes, but it is no longer required to be both on a separate partition and within the first 1024 cylinders (although there are many cases where it is necessary to have /boot on a separate partition). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88545/"
]
} |
176,423 | On a Lenovo Laptop with Ubuntu 14.04 I am not able to display the current screen's stuff on a Monitor connected with HDMI. The HDMI cables are both plugged in to the laptop and the monitor, and the monitor is switched on. Going to Systems Settings -> Displays and clicking on 'Detect Displays' only the standard laptop screen is shown. The external monitor is not shown. How can I fix this problem in order to see the current screen on both, the laptop screen and on the monitor screen? Also, it is unimportant if the screen can play the laptop's sound. I only want the visible screen output shown on the external monitor as well, which works fine when starting the laptop with the Windows OS (without any change to hardware and/or cables)... Additional information: xrandr only shows the standard monitor; the full output of xrandr is xrandr: Failed to get size of gamma for output defaultScreen 0: minimum 1600 x 900, current 1600 x 900, maximum 1600 x 900default connected primary 1600x900+0+0 0mm x 0mm 1600x900 77.0* The HDMI connection works flawless when running the laptop with Windows (dual-boot) Output of line of lspci : 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA controller])Subsystem: Lenovo Device 3977Flags: bus master, fast devsel, latency 0, IRQ 7Memory at c0000000 (64-bit, non-prefetchable) [size=4M]Memory at b0000000 (64-bit, prefetchable) [size=256M]I/O ports at 3000 [size=64]Expansion ROM at <unassigned> [disabled]Capabilities: <access denied> Output of sudo lshw -C display : *-display UNCLAIMED description: VGA compatible controller product: 3rd Gen Core processor Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list configuration: latency=0 resources: memory:c0000000-c03fffff memory:b0000000-bfffffff ioport:3000(size=64) I also tried to remove and re-install the package xserver-xorg-video-intel - but it did not change anything (after reboot). I followed the steps given here for a Samsung LS22B150NS monitor with a resolution of 1920 x 1080 pixels. But I got an error xrandr: cannot find output "VGA1" : alex:~$ cvt 1920 1080# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHzModeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsyncalex:~$ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsyncxrandr: Failed to get size of gamma for output defaultalex:~$ xrandr --addmode VGA1 1920x1080_60.00xrandr: Failed to get size of gamma for output defaultxrandr: cannot find output "VGA1" | This is a limitation imposed by having a very old BIOS and bootloader rather than Linux itself. The BIOS would only be able to access the first 1024 cylinders of the disk (see here for more information on what cylinders/heads/sectors are). This limitation would extend to bootloaders which, due to their simple nature, would not have their own disk drivers and would use the BIOS services to access the disk. This meant that both the bootloader and the kernel (since it is the bootloader's job to load it) would have to be within the first 1024 cylinders on systems with this limitation. A simple way to do this was to create a separate /boot partition containing the kernel and put it at the beginning of the drive (usually just by making it the first partition). 
This means that it would physically reside within the first 1024 cylinders, provided of course that the partition wasn't too large. The limitation is no longer an issue because it only applies to old BIOSes. Also, many modern bootloaders (eg GRUB) have their own disk drivers and so do not need to rely on BIOS services. Modern bootloaders may use /boot for other purposes, but it is no longer required to be both on a separate partition and within the first 1024 cylinders (although there are many cases where it is necessary to have /boot on a separate partition). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31853/"
]
} |
176,426 | I have tried to do this and I came up with this grep -E '\<[0-9]{4}"-"[0-9]{2}"-"[0-9]{2}\>' It doesn't work and the reason for that is the "-" and multiple grep things, so I tried dividing them with a pipe like this grep -E '\<[0-9]{4}-|[0-9]{2}-|[0-9]{2}\>' But it still matches lines like 4444 , or similar. Anyone know how to achieve what I want? | you are overquoting... grep -E '\<[0-9]{4}-[0-9]{2}-[0-9]{2}\>' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
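A quick sanity check that the corrected pattern matches dates but not bare numbers (the sample input is made up for the test):

```bash
printf '2014-12-29\n4444\n20141229\nno date here\n' |
    grep -E '\<[0-9]{4}-[0-9]{2}-[0-9]{2}\>'
# prints only: 2014-12-29
```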
176,452 | I am trying to enable a HDMI connection to a monitor connected with a HDMI cable to my Lenovo laptop using the following commands. > xrandrxrandr: Failed to get size of gamma for output defaultScreen 0: minimum 1600 x 900, current 1600 x 900, maximum 1600 x 900default connected primary 1600x900+0+0 0mm x 0mm 1600x900 77.0* > cvt 1920 1080# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHzModeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync> xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsyncxrandr: Failed to get size of gamma for output default> xrandr --addmode VGA1 1920x1080_60.00xrandr: Failed to get size of gamma for output defaultxrandr: cannot find output "VGA1" Is there something wrong with the commands? Is there something wrong with xrandr ? Maybe I need to install additional packages? | First, you must know the name of your output devices. To do that execute this on the command line: xrandr --listmonitors You will get something like this: Monitors: 2 0: +*HDMI-0 1920/510x1080/290+0+0 HDMI-0 1: +VGA-0 768/203x1024/271+1920+0 VGA-0 Then you run xrandr with the right name. In my case: xrandr --addmode VGA-0 1656x900_60.00 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31853/"
]
} |
176,454 | In Linux (currently using ext4 filesystem), how can one check quickly if the contents of a file has been modified without reading any of its contents? Is the stat command a recommended approach? I currently do $ stat --format "%Y" hello.txt and later I can check if the same command yields the same output. If it does, I conclude that hello.txt has not changed. My feeling is that one wants to throw in more parameters to be even more sure. For example, would adding the file size, file name, etc, provide an even better "fingerprint" of the file? On this topic, I recall that a TrueCrypt volume I once had was always ignored by my incremental backup program, possibly because TrueCrypt made sure to leave no meta data changes behind. I suppose it is indeed possible to change all the data returned by stat , hence it cannot be guaranteed to pick up on every possible modification of the file? | If you want to detect whether a file has been modified through normal means (editing it in some application, checking out a new version from a revision control systems, rebuilding it, etc.), check whether its modification time (mtime) has changed from the last check. That's what stat -c %Y reports. The modification time can be set by the touch command. If you want to detect whether the file has changed in any way (including the use of touch , extracting an archive, etc.), check whether its inode change time ( ctime ) has changed from the last check. That's what stat -c %Z reports. The ctime cannot be spoofed except by the system administrator (and even then, only through indirect means: by changing the system clock, or by accessing the disk directly, bypassing the filesystem). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45220/"
]
} |
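A short GNU coreutils session illustrating the difference: touch can forge the mtime, but the ctime still records that something changed.

```bash
f=/tmp/stat-demo; echo hello > "$f"
stat -c 'mtime=%Y ctime=%Z' "$f"    # note both timestamps
touch -d '2000-01-01' "$f"          # forge the modification time
stat -c 'mtime=%Y ctime=%Z' "$f"    # mtime now says year 2000, but ctime moved to "now"
```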
176,475 | It seems like it should be simple to symlink one file to a new file in a subdirectory.... ....without moving subdirectories. But something about the syntax is perplexing and counter to what I would expect. Here's a test case: mkdir tempcd tempmkdir deployecho "Contents of the build file!" > deploy/resources.build.phpln -s deploy/resources.build.php deploy/resources.phpcat deploy/resources.php #bad symlink This just creates a broken symlink!I am running this in a build environment setup script, so I want to avoid changing the current working directory if at all possible. ln -s deploy/resources.build.php resources.phpcat deploy/resources.php Also doesn't work because it creates the symlink in the temp directory instead of the deploy subdirectory. cd deployln -s resources.build.php resources.phpcd .. This works, but I'd prefer to know how to do it without changing directories. Using a full path like: /home/whatever/src/project/temp/stuff/temp/deploy/resources.build.php Works, but is unweildy and somewhat impractical, especially in a build environment where all the project stuff might be different between builds, and the like. How can I create a symlink between two files in a subdirectory, without moving into that subdirectory and out of it, and while giving the new file "alias" a new name? | But something about the syntax is perplexing and counter to what I would expect. The arguments for ln , in the form that you're using it, are: ln [OPTION]... [-T] TARGET LINK_NAME (1st form) The perplexing, unintuitive thing is that when you're creating a symlink, the target argument for ln isn't expected to be a path to a file, but rather the contents of the symlink to be created. If you think about it for a moment, it's obvious that it has to be that way. Consider: $ echo foo >foo$ ln -s foo bar1$ ln -s $PWD/foo bar2$ cat bar1foo$ cat bar2foo$ ls -l bar1 bar2lrwxrwxrwx 1 matt matt 3 Dec 29 16:29 bar1 -> foolrwxrwxrwx 1 matt matt 29 Dec 29 16:29 bar2 -> /home/matt/testdir/foo In that example I create 2 symlinks, named "bar1" and "bar2", that point to the same file. ls shows that the symlinks themselves have different contents, though - one contains an absolute path, and one contains a relative path. Because of this, one would continue working even if it were moved to another directory, and the other wouldn't: $ mv bar2 /tmp$ cat /tmp/bar2foo$ mv bar1 /tmp$ cat /tmp/bar1cat: /tmp/bar1: No such file or directory So, considering that we must be able to make both relative and absolute symlinks, and even to create broken symlinks that will become un-broken if the target file is later created, the target argument has to be interpreted as freeform text, rather than the path to an already-existing file. If you want to create a file named deploy/resources.php that links to deploy/resources.build.php, you need to decide if you want to create an absolute symlink (which is resilient against the symlink being moved, but breaks if the target is moved), or a relative symlink (which will keep working as long as both the symlink and the target are moved together and maintain the same relative paths). To create an absolute symlink, you could do: $ ln -s $PWD/deploy/resources.build.php deploy/resources.php To create a relative one, you would first figure out the relative path from the source to the target. 
In this case, since the source and target are in the same directory relative to one another, you can just do: $ ln -s resources.build.php deploy/resources.php If they weren't in the same directory, you would need to instead do something like: $ ln -s ../foo/f bar/b In that case, even though foo and bar are both in your current directory, you need to include a ../ into the ln target because it describes how to find f from the directory containing b . That's an extremely long explanation, but hopefully it helps you to understand the ln syntax a little better. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3932/"
]
} |
176,477 | Very new to UNIX but not new to programming. Using Terminal on MacBook. For the purposes of managing and searching word lists for crossword construction, I'm trying to get handy with the Grep command and its variations. Seems pretty straightforward but getting hung up early on with what I thought should be a simple case. When I enter grep "^COW" masternospaces.txt I get what I want: a list of all the words starting with COW. But when I enter grep "COW$" masternospaces.txt I expect to get a list of words ending with COW (there are many such words), and nothing is returned at all. The file is a plain text file, with every line just a word (or a word phrase with no spaces) in all caps. Any idea what could be happening here? | As @steeldriver mentionned, the problem is likely to be caused by a different line ending style than what grep is expecting. To check the line endings You can use hexdump to check exactly how your line endings are formatted. I suggest you use my favorite format : hexdump -e '"%08_ad (0x%08_ax) "8/1 "%02x "" "8/1 "%02x "' -e '" "8/1 "%_p""|"8/1 "%_p""\n"' masternospaces.txt With the output, check the line endings : 0a -> LF , 0d -> CR . A very quick example would give something like this : $ hexdump -e '"%08_ad (0x%08_ax) "8/1 "%02x "" "8/1 "%02x "' -e '" "8/1 "%_p""|"8/1 "%_p""\n"' masternospaces.txt00000000 (0x00000000) 4e 6f 20 43 4f 57 20 65 6e 64 69 6e 67 0d 0a 45 No COW e|nding..E00000016 (0x00000010) 6e 64 69 6e 67 20 69 6e 20 43 4f 57 0d 0a nding in| COW.. Note the line endings in dos format : 0d 0a . To change the line endings You can see here or here for various methods of changing line endings using various tools, but for a one-time thing, you could always use vi/vim : vim masternospaces.txt:set fileformat=unix:wq To grep without changing anything If you just want grep to match no matter the line ending, you could always specify line endings like this : grep 'COW[[:cntrl:]]*$' masternospaces.txt If a blank line is shown, you can check that you indeed matched something by using the -v option of cat : grep 'COW[[:cntrl:]]*$' masternospaces.txt | cat -v My personal favorite You could also both grep and standardize the output using sed : sed -n '/COW^M*$/{;s/^M//g;p;};' masternospaces.txt where ^M is obtained by typing Ctrl-V Ctrl-M on your keyboard. Hope this helps! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96388/"
]
} |
176,489 | I want to install Android NDK on my CentOS 6.5 machine. But when I ran the program, it says it needs glibc 2.14 to be able to run. My CentOS 6.5 only has Glibc 2.12 installed. So I tried to update glibc by: $ sudo yum update glibc But after that I found the glibc version is still 2.12, not 2.14. $ ldd --versionldd (GNU libc) 2.12 I think glibc 2.14 may not be available on CentOS repositories. So how can I update it to glibc 2.14 on CentOS 6.5? | You cannot update glibc on Centos 6 safely. However you can install 2.14 alongside 2.12 easily, then use it to compile projects etc. Here is how: mkdir ~/glibc_install; cd ~/glibc_install wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gztar zxvf glibc-2.14.tar.gzcd glibc-2.14mkdir buildcd build../configure --prefix=/opt/glibc-2.14make -j4sudo make installexport LD_LIBRARY_PATH="/opt/glibc-2.14/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33477/"
]
} |
176,557 | [I had to change the example to make it clear that there are subdirectories.] Let's say I want to recreate a subset of my hierarchy. for arguments sake, let's say I want to backup files in filelist.conf # cat rsync-listab*bb* and # find .../abc./abc/file-in-abc./abd./abd/file-in-abd./aca./bba./bbc./bca./rsync-list I would have hoped that rsync -arv --include-from=rsync-list --exclude='*' . /somewhere-else would recreate abc, abd, bba, and bbc. the problem is that it does not descend into the ab* directories, so it does not do abc/file-in-abc and abd/file-in-abd. so, in this sense, the ab* is not really a wildcard that is expanded into abc and abd and then rsynced. | The manpage lists these five options: --exclude=PATTERN exclude files matching PATTERN--exclude-from=FILE read exclude patterns from FILE--include=PATTERN don't exclude files matching PATTERN--include-from=FILE read include patterns from FILE--files-from=FILE read list of source-file names from FILE --files-from is for exact filenames, and --include-from is for patterns, so you might want to try that instead. Using include-from , you don't need to specify + , but you do need to exclude everything else. For example, given: $ ls -v1 sourceimage1.tiff...image700.tiff$ cat includesimage7*.tiff Then I can sync only image7*.tiff using: rsync -aP --include-from=includes --exclude='*' source/ target The manpage also says, in the INCLUDE/EXCLUDE PATTERN RULES section: a β*β matches any path component, but it stops at slashes. use β**β to match anything, including slashes. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70646/"
]
} |
176,616 | A few months ago, meld started behaving oddly. Common lines are almost unreadable, and shown as dark grey text on a black background. Oddly enough, running it as root is fine (with kdesudo meld ), although the theme is less pretty. How can I specify the text's colour options for meld? I'm using: Arch Linux KDE 4.14.3 (also seen in 4.14.2) meld 3.12.2 (also seen in 3.12.1) gtk3 3.14.6 (also seen in 3.14.5) Troubleshooting KDE system settings meld uses GTK3, so I fiddled with System Settings > Common Appearance and Behaviour > Application Appearance > GTK > Select a GTK3 Theme. This change was reflected in meld, but none of the three options I selected changed the text. (The available options were Default, Emacs, and oxygen-gtk; the latter is used in the screenshot above.) Manually modifying config files I looked in ~ for files with gtk in their name. ~/.gtkrc-2.0~/.gtkrc-2.0-kde4~/.config/gtk-2.0~/.config/gtk-3.0~/.kde4/share/config/gtkrc~/.kde4/share/config/gtkrc-2.0 Interestingly, there is nothing with gtk in its name in /root . Hence, I tried deleting some of the ~ files, to see if I could get the same effect for my user. I presume all the gtkrc-2.0 files are irrelevant to meld. Firstly, I deleted ~/.config/gtk-3.0 , but this had no effect, and was recreated when I opened meld. The only other option appeared to be ~/.kde4/share/config/gtkrc , so deleted this and started meld, which was unaffected. However, the file was not recreated, and it contains some possibly pertinent lines (e.g. text[ACTIVE] = { 1.000, 1.000, 1.000 } ). I'm unsure if the (missing) file was loaded at all. I tried kbuildsycoca4 ; kquitapp plasma-desktop ; sleep 2 ; kstart plasma-desktop , but this had no effect. Do I need to manually reload the gtkrc? And why is this file not being affected/rewritten by the system settings? (Also, FWIW, I removed ~/.gtkrc-2.0-kde4 , which was actually a symlink to ~/.gtkrc-2.0 , and I also removed the target itself, but that didn't help. Again, I didn't reload gtk (I'm not sure if this is necessary, or possible), and the files weren't re-created when I tried running meld again.) Possibly pertinent environment variables $ export | grep -i gtkdeclare -x GTK2_RC_FILES="/etc/gtk-2.0/gtkrc:/home/sparhawk/.gtkrc-2.0:/home/sparhawk/.kde4/share/config/gtkrc-2.0"declare -x GTK_IM_MODULE="xim"declare -x GTK_MODULES="canberra-gtk-module"declare -x GTK_RC_FILES="/etc/gtk/gtkrc:/home/sparhawk/.gtkrc:/home/sparhawk/.kde4/share/config/gtkrc" (Disclosure: I've previously asked this question on the KDE forums , but didn't come to a solution.) | At least from Meld 3.16.4 support different color schemes. See Meld > Preferences : (possibly this change was introduced in earlier versions) Note : It is also possible to force a specific theme for Meld by CLI: GTK_THEME=Adwaita:dark meld | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
176,666 | Is it enough to see getfacl giving no error, or do I have to check some other place to see whether or not ACLs are supported by the file systems? | If you're talking about a mounted filesystem, I don't know of any intrinsic way to tell whether ACL are possible. Note that βare ACL supported?β isn't a very precise question since there are several types of ACL around (Solaris/Linux/not-POSIX-after-all, NFSv4, OSX, β¦). Note that getfacl is useless as a test since it will happily report Unix permissions if that's all there is: you need to try setting an ACL to test. Still on mounted filesystem, you can check for the presence of acl in the mount options (which you can find in /proc/mount ). Note that this isn't enough: you also need to take the kernel version and the filesystem type in consideration. Some filesystem types always have ACL available, regardless of mount options; this is the case for tmpfs, xfs and zfs. Some filesystems have ACL unless explicitly excluded; this is the case for ext4 since kernel 2.6.39 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
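Since getfacl proves nothing, the practical test mentioned above is to try to set an ACL. A hedged sketch (the directory path is a placeholder, and it assumes GNU mktemp and the acl utilities are installed):

```bash
dir=/path/to/filesystem/to/test                     # placeholder: a directory on the filesystem in question
probe=$(mktemp --tmpdir="$dir" acl-probe.XXXXXX)
if setfacl -m u:nobody:r "$probe" 2>/dev/null; then
    echo "ACLs can be set on $dir"
else
    echo "ACLs not available (or not enabled) on $dir"
fi
rm -f "$probe"
```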
176,671 | I want to enable core dump generation by default upon reboot. Executing: ulimit -c unlimited in a terminal seems to work until the computer is rebooted. | Think I figured out something that works. I used a program called LaunchControl to create a file called enable core dumps.plist at /System/Library/LaunchDaemons with the following contents: <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"><plist version="1.0"><dict> <key>GroupName</key> <string>wheel</string> <key>InitGroups</key> <true/> <key>Label</key> <string>core dumps launchctl</string> <key>ProgramArguments</key> <array> <string>launchctl</string> <string>limit</string> <string>core</string> <string>unlimited</string> <string>unlimited</string> </array> <key>RunAtLoad</key> <true/> <key>UserName</key> <string>root</string></dict></plist> with these permissions: $ ls -al enable\ core\ dumps.plist -rw-r--r-- 1 root wheel 582 Dec 30 15:38 enable core dumps.plist and this seemed to do the trick: $ launchctl limit core core unlimited unlimited $ ulimit -a corecore file size (blocks, -c) unlimited...<output snipped>... I created a little test program that just crashes: $ ./a.out Segmentation fault: 11 (core dumped) And, voila, a core dump was generated: $ # ls -al /cores/total 895856drwxrwxr-t@ 3 root admin 102 Dec 30 15:55 .drwxr-xr-x 31 root wheel 1122 Oct 18 10:32 ..-r-------- 1 root admin 458678272 Dec 30 15:55 core.426 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96507/"
]
} |
176,673 | In my script, I have several layers of statusing: remote is available (ping) remote NFS service is active remote NFS is exporting a certain directory remote NFS is mounted (mount) For (2) and (3), I believe rcpinfo is the best bet. For (2) though, I cannot figure out how to narrow my query to the NFS service without starting a subshell (which is not acceptable for this application). For (3), I'm not sure this information is even available remotely (without ssh ing in, of course). I'm working on RHEL 6 and have no access to programs that are not included in the standard distribution. | For 3) you probably want to use showmount -e remote_nfs_server which shows if remote_nfs_server has exported anything. And for 2) if you don't want to use a shubshell and know if the remote server runs NFSv3 or NFSv4 and if TCP or UDP, you could query for that specifically with rpcinfo: rpcinfo -u remote_nfs_server nfs 3 for NFSv3 via UDP and rpcinfo -t remote_nfs_server nfs 4 for NFSv4 via TCP For 4) you may want to look at Check if folder is a mounted remote filesystem Further information: Remote procedure call tools | Managing NFS and NIS NFS Diagnostic Tools | Managing NFS and NIS | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28980/"
]
} |
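A rough sketch tying the four checks from the question together (the hostname, export path and mount point are placeholders, and the showmount grep is only an approximate match against its export-list output format):

```bash
#!/usr/bin/env bash
server=nfs.example.com       # placeholder
export_dir=/srv/data         # placeholder
mount_point=/mnt/data        # placeholder

ping -c1 -W2 "$server" >/dev/null 2>&1        || { echo "1) host unreachable"; exit 1; }
rpcinfo -t "$server" nfs 3 >/dev/null 2>&1    || { echo "2) NFS service not responding"; exit 2; }
showmount -e "$server" 2>/dev/null | grep -q "^$export_dir[[:space:]]" \
                                              || { echo "3) $export_dir not exported"; exit 3; }
mountpoint -q "$mount_point"                  || { echo "4) $mount_point not mounted"; exit 4; }
echo "all NFS checks passed"
```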
176,687 | Is there a way to set the storage size for the VM on creating it? I will be using Vagrant, but not sure if this is something that needs to be done in VirtualBox or a setting I can include in the Vagrantfile (I checked docs but there doesn't seem to be any indication) | The vagrant-disksize plugin makes this easy. Create a debian-9 vm with a 20gb hard drive. minimally: Vagrant.configure("2") do |config| config.vm.box = "debian/stretch64" config.disksize.size = "20GB"end or, using auto-install logic for the plugin: Vagrant.configure("2") do |config| required_plugins = %w( vagrant-vbguest vagrant-disksize ) _retry = false required_plugins.each do |plugin| unless Vagrant.has_plugin? plugin system "vagrant plugin install #{plugin}" _retry=true end end if (_retry) exec "vagrant " + ARGV.join(' ') end config.vm.box = "debian/stretch64" config.disksize.size = "20GB"end | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96261/"
]
} |
176,688 | I have two scripts, one that is used on a folder with csv files and another on a folder with .tar.gz files. In the second script, I first unzip the files before doing anything with them. Now I want to combine both scripts into one script where a user can specify which folder he/she wants to work with. When the script is run, the user has two options, to either use folderA or folderB. Sort of like a menu. ScriptA #!/bin/bashFOLDER=/path/to/folderDATE_LOG=`date "+%Y-%m-%d-%H:%M:%S"`LOG_FILE=/home/kamil/Desktop/Script/log_$DATE_LOG.txt# Getting the pattern and headers of files from FOLDERcd "$FOLDER" for file in *.csv; do echo -n "${file}" "|" | sed -r 's/(.*)_[0-9]{8}_[0-9][0-9]-[0-9][0-9].[0-9][0-9].csv/\1/' head -1 "$file" done | tee $LOG_FILE ScriptB #!/bin/bashFOLDER_HISTO=/path/to/folder/WithTAR.GZDATE_LOG=`date "+%Y-%m-%d-%H:%M:%S"`LOG_FILE=/home/kamil/Desktop/Script/log_$DATE_LOG.txt# Getting the pattern and headers of files from FOLDER_HISTOcd "$FOLDER_HISTO" for zip_file in *.tar.gz; do file=`tar -xvf $zip_file` echo -n "${file}" "|" | sed -r 's/(.*)_[0-9]{8}_[0-9][0-9]-[0-9][0-9].[0-9][0-9].csv/\1/' head -1 "$file" done | tee $LOG_FILE | The vagrant-disksize plugin makes this easy. Create a debian-9 vm with a 20gb hard drive. minimally: Vagrant.configure("2") do |config| config.vm.box = "debian/stretch64" config.disksize.size = "20GB"end or, using auto-install logic for the plugin: Vagrant.configure("2") do |config| required_plugins = %w( vagrant-vbguest vagrant-disksize ) _retry = false required_plugins.each do |plugin| unless Vagrant.has_plugin? plugin system "vagrant plugin install #{plugin}" _retry=true end end if (_retry) exec "vagrant " + ARGV.join(' ') end config.vm.box = "debian/stretch64" config.disksize.size = "20GB"end | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96518/"
]
} |
176,693 | I'm trying to create some simple pass/fail test scripts, but I am having some challenges as outlined below. My intent is to get the full results of a command (such as ping) into a results.txt file to keep, but also parse the results.txt file for different checks to raise if issues are detected: #!/bin/bashping -c 20 google.com > results.txtpacketloss = `awk '/packet loss/{x=$6} END{print x}' results.txt`echo "$packetloss" >> debug.txt# if packetloss > 0, add to an error.txt to fail# if avg ping time > 5ms, add to an error.txt to fail The packetloss variable is not getting the awk information from the results.txt file (sending to a debug file to review). I was wondering if there is something about shell scripting that would prevent this and an associated workaround? Manually running awk on results.txt returns '0%' which is the expected result. | Spaces are not allowed around = ! So: #!/bin/bashping -c 20 google.com > results.txtpacketloss=$(awk '/packet loss/{print $6}' results.txt)echo "$packetloss" >> debug.txt Or even shorter: ping -c 20 google.com | awk '/packet loss/{sub(/%/, "");print $6 >> "debug.txt"}' NOTE: There is no need to assign the x variable; you can print $6 directly. AWK itself can create new files with its output The backquote ` is used in the old-style command substitution, for example : foo=`command` The foo=$(command) syntax is recommended instead. Backslash handling inside $() is less surprising, and $() is easier to nest. Check http://mywiki.wooledge.org/BashFAQ/082 Extra solution using Perl : ping -c 20 google.com | perl -lne '/(\d+)%\s+packet\s+loss/ and print $1' >> debug.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46930/"
]
} |
176,696 | Qt4 applications use the gtk theme by default, but Qt5 applications need to be started using -style gtk , or they don't look like gtk applications. Is there a way to make Qt5 applications use the gtk style by default? There is qtconfig-qt4 (and style is set to gtk), but no qtconfig-qt5 package. I'm on Linux Mint 17.1 βRebeccaβ Cinnamon. | I found the solution after reading https://wiki.archlinux.org/index.php/Uniform_Look_for_Qt_and_GTK_Applications : Qt5 decides the style to use based on what desktop environment is used. If it doesn't recognize the desktop environment, it falls back to a generic style. To force a specific style, you can set the QT_STYLE_OVERRIDE environment variable. Specifically, set it to gtk if you want to use the gtk theme. Qt5 applications also support the -style flag, which you can use to launch a Qt5 application with a specific style. So I added this line to my $HOME/.profile export QT_STYLE_OVERRIDE=gtk | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9108/"
]
} |
176,710 | I decided to install ArchLinux and the tutorial I found wanted to create the partitions in advance. So I made the ext4 and linux-swap partitions and saw in gParted my old friends - the Windows partitions, the one called SYSTEM and another one, which Windows creates itself, named System Reserved. So i deleted both. And now the computer doesn't know there is an OS on my computer. I receive the pxe-mof message. What should I do? | I found the solution after reading https://wiki.archlinux.org/index.php/Uniform_Look_for_Qt_and_GTK_Applications : Qt5 decides the style to use based on what desktop environment is used. If it doesn't recognize the desktop environment, it falls back to a generic style. To force a specific style, you can set the QT_STYLE_OVERRIDE environment variable. Specifically, set it to gtk if you want to use the gtk theme. Qt5 applications also support the -style flag, which you can use to launch a Qt5 application with a specific style. So I added this line to my $HOME/.profile export QT_STYLE_OVERRIDE=gtk | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33936/"
]
} |
176,717 | In a CentOS 7 server, I type in firewall-cmd --list-all , and it gives me the following: public (default, active) interfaces: enp3s0 sources: services: dhcpv6-client https ssh ports: masquerade: no forward-ports: icmp-blocks: rich rules: What is the dhcpv6-client service? What does it do? And what are the implications of removing it? I read the wikipedia page for dhcpv6 , but it does not tell me specifically what this service on CentOS 7 Firewalld does. This server is accessible via https and email via mydomain.com , but it is a private server that can only be accessed via https by a list of known ip addresses. In addition, this server can receive email from a list of known email addresses. Is the dhcpv6-client service required to reconcile the domain addresses from the known ip https requests and for exchanging the email with known email addresses? | This is needed if you are using DHCP v6 due to the slightly different way that DHCP works in v4 and v6. In DHCP v4 the client establishes the connection with the server and because of the default rules to allow 'established' connections back through the firewall, the returning DHCP response is allowed through. However, in DHCP v6, the initial client request is sent to a statically assigned multicast address while the response has the DHCP server's unicast address as the source (see RFC 3315 ). As the source is now different to the initial request's destination, the 'established' rule will not allow it through and consequently DHCP v6 will fail. To combat this, a new firewalld rule was created called dhcpv6-client which allows incoming DHCP v6 responses to pass - this is the dhcpv6-client rule. If you're not running DHCP v6 on your network or you are using static IP addressing, then you can disable it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
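If the machine uses only static or IPv4 addressing, removing the service comes down to a couple of standard firewall-cmd calls (shown here for the default zone):

```bash
firewall-cmd --remove-service=dhcpv6-client              # runtime change
firewall-cmd --permanent --remove-service=dhcpv6-client  # survives reload/reboot
firewall-cmd --reload
firewall-cmd --list-services                             # verify dhcpv6-client is gone
```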
176,749 | When I'm checking for some process, I usually write ps aux | grep myprocess And sometimes I get the output of eimantas 11998 0.0 0.0 8816 740 pts/0 S+ 07:45 0:00 grep myprocess if the process is not running. Now I really wonder why grep is in the list of processes if it filters out the output of the ps command after ps has run? | This behavior is totally normal, it's due to how bash manages pipe usage. pipe is implemented by bash using the pipe syscall. After that call, bash forks and replaces the standard input ( file descriptor 0 ) with the input from the right process ( grep ). The main bash process creates another fork and passes the output descriptor of the fifo in place of the standard input ( file description 1 ) and launches the left command. The ps utility is launched after the grep command, so you can see it on the output. If you are not convinced by it, you can use set -x to enable command tracing. For instance: + ps aux+ grep --color=auto grep+ grep --color=auto systemdalexises 1094 0.0 0.8 6212 2196 pts/0 S+ 09:30 0:00 grep --color=auto systemd For more explanation you can check this part of basic c shell: http://www.cs.loyola.edu/~jglenn/702/S2005/Examples/dup2.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78/"
]
} |
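Two common ways to keep the grep process itself out of the listing (the process name is a placeholder):

```bash
# The character class makes the pattern "[m]yprocess", which matches the
# running program "myprocess" but not grep's own command line.
ps aux | grep '[m]yprocess'

# Or avoid the pipeline altogether:
pgrep -l myprocess
```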
176,751 | I have two files with the below data; I need the difference between two files. I tried with diff but it also shows line which are common in the two files: (22372 Dec 4 15:36 /opt/apache-tomcat-6.0.36/webapps/new/new.txt) . First file: (multiple data exists in the same way in file 1) 22677 Dec 4 15:36 /opt/apache-tomcat-6.0.36/webapps/new/abc.txt22372 Dec 4 15:36 /opt/apache-tomcat-6.0.36/webapps/new/new.txt Second file: (multiple data exists in the same way in file 2). 22372 Dec 4 15:36 /opt/apache-tomcat-6.0.36/webapps/new/new.txt22677 Dec 3 15:36 /opt/apache-tomcat-6.0.36/webapps/new/abc.txt12344 Dec 10 15:36 /opt/apache-tomcat-6.0.36/webapps/abc/.../test.txt I need the below output: 22677 Dec 3 15:36 /opt/apache-tomcat-6.0.36/webapps/new/abc.txt12344 Dec 10 15:36 /opt/apache-tomcat-6.0.36/webapps/abc/.../test.txt | This seems like a perfect opportunity to use comm. From the GNU coreutils manual page (v8.30): With no options, produce three-column output. Column one containslines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files. -1 suppress column 1 (lines unique to FILE1) -2 suppress column 2 (lines unique to FILE2) -3 suppress column 3 (lines that appear in both files) Using this information, we can remove the lines unique file1 as well as the lines present in both files. $ comm -1 -3 <(sort file1) <(sort file2)12344 Dec 10 15:36 /opt/apache-tomcat-6.0.36/webapps/abc/.../test.txt22677 Dec 3 15:36 /opt/apache-tomcat-6.0.36/webapps/new/abc.txt -1 and -3 removes all lines unique to file 1 and all lines common to both. Because of the sort, it will change the order of the output but that doesn't seem to be a consideration based on the question. If the input is already sorted, you can skip the sorts yielding $ comm -1 -3 file1 file2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96435/"
]
} |
176,758 | We have this setup on our server:
Ubuntu 14.04 (used to be 12.04)
10TB RAID-6 system
LVM (one VG, two LV)
ext4 partition (~2TB)
Btrfs partition (~8TB)
After a reboot last month the system was slow. First we thought it was because the RAID was resyncing (one drive was not added and was reactivated). But after that finally finished (12-13 days) access to Btrfs is still noticeably slow. ext4 access seems normal. The sysadmin who set this up (and who left this summer) already used autodefrag,noatime on the Btrfs mount. What can we do to speed Btrfs up again? | I can't comment, so sorry if this is not a good fit as an answer. You should check your drives. I think 12 days of resync time for 10 TB is too long; it should be more like 12-24 hours. Look at the different drives with smartctl to check whether one of them has many errors: for i in a b c d e f g h i j k l; do echo $i ; smartctl -x /dev/sd$i | grep occurred | head -1 ; done I have seen a working RAID slow down because of this. And IIRC Btrfs needs more disk access than ext4 for directory lookups, which could explain the access speed difference. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96451/"
]
} |
176,775 | I used to create a Thunar custom action as presented here in order to extract audio without transcoding from selected (multiple) video files . The command to be added in Thunar custom actions list was: xfce4-terminal -e "parallel avconv -i '{}' -map 0:1 -c:a copy '{}.m4a' -- %F" now I get only this: parallel: Warning: Input is read from the terminal. Only experts do this on purpose. Press CTRL-D to exit. Why is that and how to adapt such commands so that they would work in Thunar custom actions? | I can't comment, sorry if this is not good as answer. You should check your drives. I think 12 days of rsync time for 10Tb is too long, should be more like 12-24 hours. Look at the different drives with smartctl to check if one has many errors: for i in a b c d e f g h i j k l; do echo $i ; smartctl -x /dev/sd$i | grep occurred | head -1 ; done I have seen working RAID slow down because of this. And IIRC Btrfs needs more disc access than ext4 for directory look ups, which could explain access speed difference. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
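The GNU parallel warning quoted in this question appears when parallel is given no ::: argument list and nothing on stdin, so it falls back to reading arguments from the terminal. A hedged sketch of the custom action, assuming GNU parallel's ::: syntax and Thunar's %F placeholder for the selected files; the only intended change from the question's command is replacing -- %F with ::: %F :
    xfce4-terminal -e "parallel avconv -i '{}' -map 0:1 -c:a copy '{}.m4a' ::: %F"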
176,787 | Say, I have a file, and multiple regexes have to be searched in it and the number of matches for each regex has to be counted. Thus, I cannot combine the patterns: grep -Po '{regex_1}|{regex_2}|...|{regex_n}' file | wc -l ... as the number of occurences for each regex is required. I obviously could: occurences[i]=$(grep -Po "${regex[i]}" file | wc -l) ... but unfortunately, the files encountered may be quite large (> 1 GB) and there are many of patterns (in the range of thousands) to check for, making the process quite slow, as multiple reads of the same file would be involved. Is there a way to do this in a fast manner? | Probably awk would be fastest shell tool here. You could try: awk "/$regex1/ { ++r1 } /$regex2/ { ++r2 }"' END { print "regex1:",r1 "\nregex2:",r2 }' <infile Of course if you need to use perl regular expressions like your question, then really perl is the only answer. However, awk does use extended expressions (like grep -E ) as opposed to basic ones. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
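Because the question mentions thousands of patterns, writing one /regex/ block per pattern quickly becomes unmanageable. A sketch that loads the patterns from a file instead (one extended regular expression per line; patterns.txt is a placeholder name), still reading the large file only once:
    awk 'NR==FNR { pat[++n] = $0; next }
         { for (i = 1; i <= n; i++) if ($0 ~ pat[i]) ++cnt[i] }
         END { for (i = 1; i <= n; i++) print pat[i] ": " cnt[i]+0 }' patterns.txt file
Note these are ERE matches, not the PCRE syntax of grep -P from the question.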
176,790 | In FHS-2.3, we have /media that holds mount points for removable media such as CD-ROMs and we have /mnt that holds temporarily mounted filesystems. On the other hand, we have /run/media and /run/mount . For me, the CDs and USBs are mounted on /run/media. I don't see any clear distinction between them( /media , /mnt , /run/mount ) . What are their differences? I have seen similar trend (mount on /run/media) in fedora 20 - GNOME 3.10.4 and ubuntu 14.04.1 (installed on virtual box) with GNOME 3.10.4. But when I plugged in a USB flash (with auto-mounter script) on a system with Centos 6 and GNOME 2.28.2 it was mounted on /media | FHS v2.3 was released ten years ago. Some things have changed since then (including the introduction of /run 1 ). About three years ago, the Linux Foundation decided to update the standard and invited all interested parties to participate. You can view the v. 3.0 drafts here and the section that describes /run here . The distinction between /media and /mnt is pretty clear in the FHS (see Purpose and Rationale ), so I won't go over it again. Same for the purpose of /run - see links. The Gnome story is yet another thing. Gnome uses underneath an application called udisks (replaced later by udisks2 ) to automount drives/devices. For quite a long time, udisks default mounts were under /media . In 2012 the devs decide to move the mounts to /run/media (i.e. a private directory). So the different behaviour you're experiencing there is caused by the different versions of udisks that each DE is using. 1: see What's this /run directory doing on my system and where does it come from ? What is this new /run filesystem ? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90796/"
]
} |
176,839 | I'm new to Linux and so far,I have come to understand that if we open a new process through a gnome-terminal eg: gedit & "& for running it in background", then if we close the terminal, gedit exits as well. So, to avoid this ,we disown the process from the parent terminal by giving the process id of gedit. But, I've come across one peculiarity; If we open a gnome-terminal from within a gnome-terminal(parent) with gnome-terminal & and now, if we close the parent terminal, the child terminal doesn't close even if I haven't disowned it. Why is it? And if there is an exception, what is the reason for making it an exception and where can I find the configuration file(if accessible) where this exception has been mentioned? | When you close a GNOME Terminal window, the shell process (or the process of whatever command you instructed Terminal to run) is sent the SIGHUP signal. A process can catch SIGHUP, which means a specified function gets called, and most shells do catch it. Bash will react to a SIGHUP by sending SIGHUP to each of its background processes (except those that have been disowned). Looking at your example with gedit (with some help from pstree and strace ): When the GNOME Terminal window is closed, gedit is sent SIGHUP by the shell, and since it doesn't catch SIGHUP (and doesn't ignore it), gedit will immediately exit. βgnome-terminal(31486)βbash(31494)ββgedit(31530)[31486] getpgid(0x7b06) = 31494[31486] kill(-31494, SIGHUP) = 0[31494] --- SIGHUP {si_signo=SIGHUP, si_code=SI_USER, si_pid=31486, si_uid=0} ---[31494] kill(-31530, SIGHUP) = 0[31530] --- SIGHUP {si_signo=SIGHUP, si_code=SI_USER, si_pid=31494, si_uid=0} ---[31530] +++ killed by SIGHUP +++[31494] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=31530, si_status=SIGHUP} ---[31486] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=31494, si_status=SIGHUP} --- But when you type the gnome-terminal command and there is an existing GNOME Terminal window, by default GNOME Terminal will do something a bit unusual: it will call the factory org.gnome.Terminal.Factory via D-Bus, and immediately exit; there is no background job for the shell to see for more than a fraction of a second. As a result of that factory call, the new window you get after typing gnome-terminal is managed by a new thread of the same GNOME Terminal process that is managing your existing window. Your first shell is unaware of the process id of the second shell, and cannot automatically kill it. βgnome-terminal(9063)ββ¬βbash(39548) β ββbash(39651) ββ{gnome-terminal}(9068) ββ{gnome-terminal}(9070) On the other hand, if you type gnome-terminal --disable-factory & , it won't call the factory, and process-wise it will behave just like gedit did in your example. βgnome-terminal(39817)ββbash(39825)ββgnome-terminal(39867)ββbash(39874) β ββ{gnome-terminal}(39868) β ββ{gnome-terminal}(39819) Closing the first terminal window will close both the first and the second terminal windows. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96181/"
]
} |
176,843 | I am struggling with setting up an encrypted NAS for some days now. The basic plan is to have btrfs on lvm on luks on raid1 with lvmcache in writeback mode thrown in for the root partition to reduce disk access. TL;DR: After setting up the partitions and filesystems GRUB fails to install with: grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..grub-install: error: embedding is not possible, but this is required for RAID and LVM install. Partitions Following the Arch Wiki I start by setting up the partitions: gdisk output for /dev/sda and /dev/sdb: Disk /dev/sda: 976773168 sectors, 465.8 GiBLogical sector size: 512 bytesDisk identifier (GUID): 9EFA6587-E34F-4AC1-8B56-5262480A6C6APartition table holds up to 128 entriesFirst usable sector is 34, last usable sector is 976773134Partitions will be aligned on 2048-sector boundariesTotal free space is 2014 sectors (1007.0 KiB)Number Start (sector) End (sector) Size Code Name 1 2048 4095 1024.0 KiB EF02 BIOS boot partition 2 4096 976773134 465.8 GiB 8300 Linux filesystem Note the BIOS boot partition that is apparently required by GRUB when installing in BIOS/GPT mode. MDADM As I have two disk I want them in a RAID1 array: mdadm --create --level=1 --raid-devices=2 /dev/md0 /dev/sda2 /dev/sdb2root@archiso ~ # mdadm --detail --scanARRAY /dev/md0 metadata=1.2 name=archiso:0 UUID=bdfc3fea:f4a0ee6d:6ac08012:59ea384broot@archiso ~ # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 488253440 blocks super 1.2 [2/2] [UU] [>....................] resync = 2.0% (9832384/488253440) finish=96.6min speed=82460K/sec bitmap: 4/4 pages [16KB], 65536KB chunkunused devices: <none> LUKS Next I setup a LUKS volume on top of the RAID : root@archiso ~ # cryptsetup luksFormat /dev/md0 WARNING!========This will overwrite data on /dev/md0 irrevocably.Are you sure? (Type uppercase yes): YESEnter passphrase: Verify passphrase:root@archiso ~ # cryptsetup luksOpen /dev/md0 md0-cryptEnter passphrase for /dev/md0: LVM Btrfs snapshots could be used instead of LVM , but as of writing this there is no way to add a SSD caching device to Btrfs . So I opted to use LVM and add the SSD via lvmcache later: (Creating the volume group in one step:) root@archiso ~ # vgcreate vg0 /dev/mapper/md0-crypt Physical volume "/dev/mapper/md0-crypt" successfully created Volume group "vg0" successfully createdroot@archiso ~ # lvcreate -L 100M -C y vg0 -n boot Logical volume "boot" created.root@archiso ~ # lvcreate -L 20G vg0 -n root Logical volume "root" created.root@archiso ~ # lvcreate -L 10G vg0 -n var Logical volume "var" created.root@archiso ~ # lvcreate -L 6G -C y vg0 -n swap Logical volume "swap" created.root@archiso ~ # lvcreate -l +100%FREE vg0 -n home Logical volume "home" created Resulting in the following layout: root@archiso ~ # lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert boot vg0 -wc-a----- 100.00m home vg0 -wi-a----- 429.53g root vg0 -wi-a----- 20.00g swap vg0 -wc-a----- 6.00g var vg0 -wi-a----- 10.00g Btrfs/Filesystems Creating the filesystems: root@archiso ~ # mkfs.ext4 /dev/vg0/bootroot@archiso ~ # mkfs.btrfs /dev/vg0/homeroot@archiso ~ # mkfs.btrfs /dev/vg0/rootroot@archiso ~ # mkfs.btrfs /dev/vg0/var ( ext4 was chosen for boot because btrfs complains about the small partition size.) 
Mounting the filesystems: root@archiso ~ # swapon /dev/vg0/swaproot@archiso ~ # mount /dev/vg0/root /mnt/arch -o compress=lzoroot@archiso ~ # mount /dev/vg0/home /mnt/arch/home -o compress=lzoroot@archiso ~ # mount /dev/vg0/var /mnt/arch/var -o compress=lzoroot@archiso ~ # mount /dev/vg0/boot /mnt/arch/boot Installing Arch Actually I just copy the system from a previous backup: root@archiso ~ # rsync -Pa /mnt/bkp/sda/* /mnt/arch ( coffee break ) Setting up mdadm.conf and fstab root@archiso ~ # genfstab -U /mnt/arch > /mnt/arch/etc/fstabroot@archiso ~ # cat /mnt/arch/etc/fstab # /dev/mapper/vg0-rootUUID=62ebf0c9-bb37-4b4e-87dd-eb8a4ace6a69 / btrfs rw,relatime,compress=lzo,space_cache 0 0# /dev/mapper/vg0-homeUUID=53113e11-b663-452f-b4da-1443e470b065 /home btrfs rw,relatime,compress=lzo,space_cache 0 0# /dev/mapper/vg0-varUUID=869ffe10-7a1c-4254-9612-25633c7ae619 /var btrfs rw,relatime,compress=lzo,space_cache 0 0# /dev/mapper/vg0-bootUUID=d121a9df-8c03-4ad9-a6e0-b68739b1a358 /boot ext4 rw,relatime,data=ordered 0 2# /dev/mapper/vg0-swapUUID=29035eeb-540d-4437-861b-c30597bb7c16 none swap defaults 0 0root@archiso ~ # mdadm --detail --scan >> /mnt/arch/etc/mdadm.confroot@archiso ~ # cat /mnt/arch/etc/mdadm.conf[...]ARRAY /dev/md0 metadata=1.2 name=archiso:0 UUID=bdfc3fea:f4a0ee6d:6ac08012:59ea384b Chrooting into the system root@archiso ~ # arch-chroot /mnt/arch /bin/bash[root@archiso /]# mkinitcpio.conf These hooks were added: mdadm_udev encrypt lvm2 btrfs [root@archiso /]# mkinitcpio -p linux Configuring GRUB Now for the interesting (and failing) part, I chose GRUB as my bootloader as it should support all of the contraptions that I use. References: https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#LVM_on_LUKS http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption/ Changed parts in /etc/default/grub : GRUB_CMDLINE_LINUX="cryptdevice=/dev/md0:vg0"GRUB_ENABLE_CRYPTODISK=y Installing grub: [root@archiso /]# grub-install --target=i386-pc --recheck /dev/sda Installing for i386-pc platform. /run/lvm/lvmetad.socket: connect failed: No such file or directory WARNING: Failed to connect to lvmetad. Falling back to internal scanning. /run/lvm/lvmetad.socket: connect failed: No such file or directory WARNING: Failed to connect to lvmetad. Falling back to internal scanning. /run/lvm/lvmetad.socket: connect failed: No such file or directory WARNING: Failed to connect to lvmetad. Falling back to internal scanning.grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..grub-install: error: embedding is not possible, but this is required for RAID and LVM install. ( --debug output available here ) Frankly ... I have no idea what's the problem here. In BIOS/GPT mode GRUB should embed it's core.img into the ef02/BIOS boot partition shouldn't it? Edit https://bbs.archlinux.org/viewtopic.php?id=144254 doesn't apply here: [root@archiso /]# btrfs fi show --all-devicesLabel: none uuid: 62ebf0c9-bb37-4b4e-87dd-eb8a4ace6a69 Total devices 1 FS bytes used 965.77MiB devid 1 size 20.00GiB used 3.04GiB path /dev/mapper/vg0-rootLabel: none uuid: 869ffe10-7a1c-4254-9612-25633c7ae619 Total devices 1 FS bytes used 339.15MiB devid 1 size 10.00GiB used 3.04GiB path /dev/mapper/vg0-varLabel: none uuid: 53113e11-b663-452f-b4da-1443e470b065 Total devices 1 FS bytes used 384.00KiB devid 1 size 429.53GiB used 2.04GiB path /dev/mapper/vg0-homeBtrfs v3.17.3 | Hmm ... 
apparently this line was the clue: grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet.. Previously I had installed btrfs directly on /dev/sda and /dev/sdb . That's why both of them had a FSTYPE and LABEL attached (as shown in lsblk ). Solution: I have now wiped both /dev/sda and /dev/sdb with hdparm (Secure Erase). There is probably a better way to unset those flags ... but this worked for me. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79930/"
]
} |
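On the closing remark above that there is probably a better way to unset those flags: a gentler option than a Secure Erase is usually to clear just the stale filesystem signatures from the whole-disk devices with util-linux's wipefs before partitioning (still destructive for those signatures, so double-check the device names first):
    wipefs --all /dev/sda
    wipefs --all /dev/sdb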
176,870 | Is xorg.conf.d no longer used by Arch Linux? If so, does anyone know where the configuration files that once lived under said directory now reside? | The default X config files live in /usr/share/X11/xorg.conf.d in Arch. You can still put your own in /etc/X11/xorg.conf.d if you want. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43029/"
]
} |
176,873 | Previously I used source command like this: source file_name But what I'm trying to do is this: echo something | source Which doesn't work. | Since source (or . ) takes a file as argument, you could try process substitution: source <(echo something) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/176873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22884/"
]
} |
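If process substitution is not available (it is a bash/zsh/ksh feature), an alternative with the same effect for the example above is to let the current shell evaluate the generated text directly:
    eval "$(echo something)"
As with source, only feed it text you trust, because it runs in the current shell.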
176,897 | I have a bash script like this export pipedargument="| sort -n"ls $pipedargument But it gives the error ls: |: No such file or directoryls: sort: No such file or directory It seems to be treating the contents of "| sort -n" as just an argument passed to ls . How can I escape it so that it's treated as a regular piped command? I'm trying to conditionally set the $pipedargument . I guess I could just conditionally execute different versions of the command but still wondering if there's a way to make this work like above? | You are right that you cannot use | that way. The reason is that the shell has already looked for pipelines and separated them into commands before it does the variable substitution. Hence, | is treated as just another character. One possible work-around is to place the pipe character literally: $ cmd="sort -n"$ ls | $cmd In the case that you don't want a pipeline, you can use cat as a "nop" or placeholder: $ cmd=cat$ ls | $cmd This method avoids the subtleties of eval . See also here . A better approach: arrays A more sophisticated approach would use bash arrays in place of plain strings: $ cmd=(sort -n)$ ls | "${cmd[@]}" The advantage of arrays becomes important as soon as you need the command cmd to contain quoted arguments. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47695/"
]
} |
176,904 | How do I print the ps header when using a pipe in Linux? This is the normal ps output:
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 17:00 ? 00:00:02 /usr/lib/systemd/systemd
but the ps command below gives output without the header:
$ ps -ef | grep systemd
root 1 0 0 17:00 ? 00:00:02 /usr/lib/systemd/systemd
How do I print the ps header for the second command? Thanks. | My first instinct is to do ps -ef | grep UID && ps -ef | grep systemd , but that will also print the grep commands, like so:
$ ps -ef | grep UID && ps -ef | grep systemd
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 17:00 ? 00:00:02 /usr/lib/systemd/systemd
user PID PPID C 23:30 ? 00:00:00 grep systemd
user PID PPID C 23:30 ? 00:00:00 grep UID
I don't see how you can print only the header, because any time you execute this, the regex will match the grep itself. (A workaround is sketched just after this entry.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96661/"
]
} |
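One way that does appear to keep the header while avoiding the self-match problem described above is to read the header line off the pipe first and let grep filter the rest (plain bash; adjust the pattern as needed):
    ps -ef | { IFS= read -r header; printf '%s\n' "$header"; grep systemd; }
    # or, in a single grep, use a character class so the pattern cannot match its own command line:
    ps -ef | grep -E '^UID|[s]ystemd'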
176,917 | Many times I accidentally run the cat command on files that have contents up to few thousand lines. I try to kill the cat command with Ctrl + C or Ctrl + Z , but both only take effect after the total output of cat is displayed in the terminal, so I have to wait till cat gets completely executed. Is there a better solution that avoids waiting? Because sometimes the files are up to size of 100MBs, and it gets irritating to wait for it. I am using tcsh . | If the file(s) in question contain lots of data, sending the signal can actually get to cat before it finishes. What you really observe is the finite speed of your terminal - cat sends the data to the terminal and it takes some time for the terminal to display all of it. Remember that it usually has to redraw the whole output window for each line of output (i.e. move the contents of the window one line up and print the next line at the bottom). While there are techniques and algorithms to make this faster than if it was done the straightforward way, it still takes some time. Thus, if you want to get rid of the output as quickly as possible, hide the terminal window , because then (usually) no actual redrawing takes place. In a graphical environment this can mean either minimizing the window or switching to a different virtual desktop, on the Linux virtual console just switch to another one (( Ctrl +) Alt + F x ). Also notice that if you ran this over a slow network link (SSH over a GSM connection, for example), you would definitely see much less output before cat is killed by the signal, because the speed of the terminal redrawing wouldn't be the bottleneck any more. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/176917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
176,997 | $ whoamiadmin$ sudo -S -u otheruser whoamiotheruser$ sudo -S -u otheruser /bin/bash -l -c 'echo $HOME'/home/admin Why isn't $HOME being set to /home/otheruser even though bash is invoked as a login shell? Specifically, /home/otheruser/.bashrc isn't being sourced.Also, /home/otheruser/.profile isn't being sourced. - ( /home/otheruser/.bash_profile doesn't exist) | To invoke a login shell using sudo just use -i . When command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command. Example (login shell): sudo -i Example (with a specified user): sudo -i -u user Example (with a command): sudo -i -u user whoami Example (print user's $HOME ): sudo -i -u user echo \$HOME Note: The backslash character ensures that the dollar sign reaches the target user's shell and is not interpreted in the calling user's shell. I have just checked the last example with strace which tells you exactly what's happening. The output bellow shows that the shell is being called with --login and with the specified command, just as in your explicit call to bash, but in addition sudo can do its own work like setting the $HOME . # strace -f -e process sudo -S -i -u user echo \$HOMEexecve("/usr/bin/sudo", ["sudo", "-S", "-i", "-u", "user", "echo", "$HOME"], [/* 42 vars */]) = 0...[pid 12270] execve("/bin/bash", ["-bash", "--login", "-c", "echo \\$HOME"], [/* 16 vars */]) = 0... I noticed that you are using -S and I don't think it is generally a good technique. If you want to run commands as a different user without performing authentication from the keyboard, you might want to use SSH instead. It works for localhost as well as for other hosts and provides public key authentication that works without any interactive input. ssh user@localhost echo \$HOME Note: You don't need any special options with SSH as the SSH server always creates a login shell to be accessed by the SSH client. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/176997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49146/"
]
} |
177,014 | I used mount to show mounted drives, I don't want to see the not so interesting ones (i.e. non-physical). So I used to have a script mnt that did: mount | grep -Ev 'type (proc|sysfs|tmpfs|devpts) ' under Ubuntu 8.04 and showed me ext3 and reiserfs mount points only. That line is actually commented out and now I use (for Ubuntu 12.04): mount | grep -Ev 'type (proc|sysfs|tmpfs|devpts|debugfs|rpc_pipefs|nfsd|securityfs|fusectl|devtmpfs) ' to only show my ext4 and zfs partitions (I dropped using reiserfs ). Now I am preparing for Ubuntu 14.04 and the script has to be extended again (cgroup,pstore). Is there a better way to do this without having to extend the script? I am only interested in physical discs that are mounted and mounted network drives ( nfs , cifs ). | The -t option for mount also works when displaying mount points and takes a comma separated list of filesystem types: mount -t ext3,ext4,cifs,nfs,nfs4,zfs I am not sure if that is a better solution. If you start using (e.g. btrfs ) and forget to add that to the list you will not see it and maybe not miss it. I'd rather actively filter out any new "uninteresting" filesystem when they pop up, even though that list is getting long. You can actively try to only grep the interesting mount points similar to what @Graeme proposed, but since you are interested in NFS/CIFS mounts as well (which don't start with / ), you should do: mount | grep -E --color=never '^(/|[[:alnum:]\.-]*:/)' ( the --color is necessary to suppress coloring of the initial / on the lines found). As Graeme pointed out name based mounting of NFS shares should be allowed as well. The pattern either selects lines starting with a / or any combination of "a-zA-Z0-9." followed by :/ (for NFS mounts). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96750/"
]
} |
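On systems with a reasonably recent util-linux, findmnt offers the same kind of whitelist filtering as mount -t with tidier output, so it may be a simpler drop-in for the script above:
    findmnt -t ext4,zfs,nfs,nfs4,cifs
    findmnt --real    # inverse approach: hide pseudo-filesystems (only on newer findmnt versions)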
177,030 | What does it mean if a Linux container (LXC container) is called "unprivileged"? | Unprivileged LXC containers are the ones making use of user namespaces ( userns ). I.e. of a kernel feature that allows to map a range of UIDs on the host into a namespace inside of which a user with UID 0 can exist again. Contrary to my initial perception of unprivileged LXC containers for a while, this does not mean that the container has to be owned by an unprivileged host user. That is only one possibility. Relevant is: that a range of subordinate UIDs and GIDs is defined for the host user ( usermod [-v|-w|--add-sub-uids|--add-sub-gids] ) ... and that this range is mapped in the container configuration ( lxc.id_map = ... ) So even root can own unprivileged containers, since the effective UIDs of container processes on the host will end up inside the range defined by the mapping. However, for root you have to define the subordinate IDs first. Unlike users created via adduser , root will not have a range of subordinate IDs defined by default. Also keep in mind that the full range you give is at your disposal, so you could have 3 containers with the following configuration lines (only UID mapping shown): lxc.id_map = u 0 100000 100000 lxc.id_map = u 0 200000 100000 lxc.id_map = u 0 300000 100000 NB: as per a comment recent versions call this lxc.idmap ! assuming that root owns the subordinate UIDs between 100000 and 400000. All documentation I found suggests to use 65536 subordinate IDs per container, some use 100000 to make it more human-readbable, though. In other words: You don't have to assign the same range to each container. With over 4 billion (~ 2^32 ) possible subordinate IDs that means you can be generous when dealing the subordinate ranges to your host users. Unprivileged container owned and run by root To rub that in again. An unprivileged LXC guest does not require to be run by an unprivileged user on the host. Configuring your container with a subordinate UID/GID mapping like this: lxc.id_map = u 0 100000 100000lxc.id_map = g 0 100000 100000 where the user root on the host owns that given subordinate ID range, will allow you to confine guests even better. However, there is one important additional advantage in such a scenario (and yes, I have verified that it works): you can auto-start your container at system startup. Usually when scouring the web for information about LXC you will be told that it is not possible to autostart an unprivileged LXC guest. However, that is only true by default for those containers which are not in the system-wide storage for containers (usually something like /var/lib/lxc ). If they are (which usually means they were created by root and are started by root), it's a whole different story. lxc.start.auto = 1 will do the job quite nicely, once you put it into your container config. Getting permissions and configuration right I struggled with this myself a bit, so I'm adding a section here. In addition to the configuration snippet included via lxc.include which usually goes by the name /usr/share/lxc/config/$distro.common.conf (where $distro is the name of a distro), you should check if there is also a /usr/share/lxc/config/$distro.userns.conf on your system and include that as well. 
E.g.: lxc.include = /usr/share/lxc/config/ubuntu.common.conflxc.include = /usr/share/lxc/config/ubuntu.userns.conf Furthermore add the subordinate ID mappings: lxc.id_map = u 0 100000 65535lxc.id_map = g 0 100000 65535 which means that the host UID 100000 is root inside the user namespace of the LXC guest. Now make sure that the permissions are correct. If the name of your guest would be stored in the environment variable $lxcguest you'd run the following: # Directory for the containerchown root:root $(lxc-config lxc.lxcpath)/$lxcguestchmod ug=rwX,o=rX $(lxc-config lxc.lxcpath)/$lxcguest# Container configchown root:root $(lxc-config lxc.lxcpath)/$lxcguest/configchmod u=rw,go=r $(lxc-config lxc.lxcpath)/$lxcguest/config# Container rootfschown 100000:100000 $(lxc-config lxc.lxcpath)/$lxcguest/rootfschmod u=rwX,go=rX $(lxc-config lxc.lxcpath)/$lxcguest/rootfs This should allow you to run the container after your first attempt may have given some permission-related errors. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
177,138 | Help for a simple script #!/bin/basharray1=(prova1prova2slack64)a="slack64"b="ab"if [ $a = $b ]then echo "$a = $b : a is equal to b"else echo "$a = $b: a is not equal to b"fi This script simply doesn't work, I want a script which check if slack64 is present in a list(i use an array),and simply give me, yes is present,or no.I don't know how to compare an array with a single variable. | Use a different kind of array: rather than an integer-indexed array, use an associative array, so the key (index) is what you will be checking for. bash-4.0 or later is required for this. declare -A array1=( [prova1]=1 [prova2]=1 [slack64]=1)a=slack64[[ -n "${array1[$a]}" ]] && printf '%s is in array\n' "$a" In the above we don't really care about the values, they need only be non-empty for this. You can "invert" an indexed array into a new associative array by exchanging the key and value: declare -a array1=( prova1 prova2 slack64)declare -A map # required: declare explicit associative arrayfor key in "${!array1[@]}"; do map[${array1[$key]}]="$key"; done # see belowa=slack64[[ -n "${map[$a]}" ]] && printf '%s is in array\n' "$a" This can pay off if you have large arrays which are frequently searched, since the implementation of associative arrays will perform better than array-traversing loops. It won't suit every use case though, since it cannot handle duplicates (though you can use the value as a counter, instead of just 1 as above), and it cannot handle an empty index. Breaking out the complex line above, to explain the "inversion": for key in "${!a[@]}" # expand the array indexes to a list of wordsdo map[${a[$key]}]="$key" # exchange the value ${a[$key]} with the index $keydone | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/177138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
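For completeness, a fallback that does not need bash 4 associative arrays: a plain linear scan over the indexed array, which is perfectly adequate for small lists like the one in the question:
    in_array() {
        local needle=$1 item
        shift
        for item in "$@"; do
            [[ $item == "$needle" ]] && return 0
        done
        return 1
    }
    array1=(prova1 prova2 slack64)
    in_array slack64 "${array1[@]}" && echo "slack64 is in the array" || echo "not found"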
177,157 | I have created a line of text that looks like this: INSERT INTO radcheck(id, username, attribute, op, value) VALUES (,,00:23:32:c2:a9:e8,Auth-Type,:=,Accept); I want to use sed to make it look like this: INSERT INTO radcheck(id, username, attribute, op, value) VALUES (,'','00:23:32:c2:a9:e8','Auth-Type',':=','Accept'); This makes way more sense in context and I have gotten a little farther with it over the last (apparently) 17 hours: #!/bin/bashssh [email protected] brmacs >>MACS.txt mv MACS.txt /etc/persistent scp [email protected]:/etc/persistent/MACS.txt MACS.txtsed -i "1d" MACS.txthead -c 58 MACS.txt >>shortmacs.txttail -c 18 shortmacs.txt >>usermac.txtsed 's/"//g' usermac.txt >>usermacrdy.txtsed -i 's/^/INSERT INTO `radcheck`(`id`, `username`, `attribute`, `op`, `value`) VALUES (,'',/' usermacrdy.txtsed "s/$/','Auth-Type',':=','Accept');/" usermacrdy.txt > sqlquery.txtsed -i "s/,,/\,\'\',\'/" sqlquery.txtrm -f MACS.txtrm -f shortmacs.txtrm -f usermac.txtrm -f usermacrdy.txt WORKS!!! The head and tail cut the MAC address out of the original text file xfer'd over from the UBNT CPE device and then I pass it through sed to build the SQL syntax around the MAC address. After all that I found out the id portion of the query is not needed for success so now I am in slightly the same boat with: sed -i 's/^/INSERT INTO `radcheck`(`username`, `attribute`, `op`, `value`) VALUES (/' usermacrdy.txtsed "s/$/','Auth-Type',':=','Accept');/" usermacrdy.txt > sqlquery.txtsed -i "s/\,\'\',\'//" sqlquery.txt | There are four ways to include the single quote that you need. One cannot escape a single-quotes string within a single-quoted string. However, one can end the quoted string, insert an escaped single-quote, and then start a new single-quoted string.Thus, to put a single quote in the middle of 'ab' , use: 'a'\''b' . Or, using the sed command that you need: $ sed -r 's/,([^ ),]+)/,'\''\1'\''/g; s/,,/,'\'\'',/g' fileINSERT INTO radcheck(id, username, attribute, op, value) VALUES (,'','00:23:32:c2:a9:e8','Auth-Type',':=','Accept'); The second way is to use a double-quoted string, in which case the single-quote can be inserted easily: $ sed -r "s/,([^ ),]+)/,'\1'/g; s/,,/,'',/g" fileINSERT INTO radcheck(id, username, attribute, op, value) VALUES (,'','00:23:32:c2:a9:e8','Auth-Type',':=','Accept'); This issue with double-quoted strings is that the shell does processing on them. Here, though, there are no shell-active characters, so it is easy. The third method is to use a hex escape as PM2Ring demonstrates. The fourth way, suggested in the comments by Jonathan Leffler, is to place the sed commands in a separate file: $ cat script.sed s/,([^ ),]+)/,'\1'/gs/,,/,'',/g$ sed -rf script.sed fileINSERT INTO radcheck(id, username, attribute, op, value) VALUES (,'','00:23:32:c2:a9:e8','Auth-Type',':=','Accept'); This way has the strong advantage that sed reads the commands directly without any interference from the shell. Consequently, this completely avoids the need to escape shell-active characters and allows the commands to be entered in pure sed syntax. How the sed solution works The trick is to put single quotes around the comma-separated strings that you want but not around the others. Based on the single example that you gave, here is one approach: s/,([^ ),]+)/,'\1'/g This looks for one or more non-space, non-comma, and non-close-parens characters which follow a comma. These characters are placed inside single quotes. 
s/,,/,'',/g This looks for consecutive commas and places a two single-quotes between them. OSX and other BSD platforms To avoid extra backslashes, the above sed expressions use extended regular expressions. With GNU, these are invoked as -r but, with BSD, they are invoked with -E . Also, some non-GNU sed do not accept multiple commands separated with semicolons. Thus, on OSX, try: sed -E -e "s/,([^ ),]+)/,'\1'/g" -e "s/,,/,'',/g" file Addendum: Matching a MAC address From the comments, we have the following input; $ cat file3 INSERT INTO radcheck(username, attribute, op, value) VALUES (00:23:32:c2:a9:e8,'Auth-Type',':=','Accept'); And, we want to put single-quotes around the MAC address that follows the open-parens. To do that: $ sed -r "s/\(([[:xdigit:]:]+)/('\1'/" file3 INSERT INTO radcheck(username, attribute, op, value) VALUES ('00:23:32:c2:a9:e8','Auth-Type',':=','Accept'); In any locale, [:xdigit:] will match any hexadecimal digit. Thus, ([[:xdigit:]:]+) will match a MAC address (hex digit or colon). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96854/"
]
} |
177,205 | In a tutorial, I'm prompted "If you are running Squeeze, follow these instructions..." and "If you are running Wheezy, follow these other instructions..." When I run uname , I get the following information: Linux dragon-debian 3.2.0-4-686-pae #1 SMP Debian 3.2.63-2+deb7u2 i686 GNU/Linux Is that information enough to know if I'm using Squeeze or Wheezy, or do I get that from somewhere else? | Commands to try: • cat /etc/*-release • cat /proc/version • lsb_release -a - this shows "certain LSB (Linux Standard Base) and distribution-specific information". For a shell script to get the details on different platforms, there's this related question. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/177205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
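For the Squeeze-versus-Wheezy question specifically, the release file is the most direct check on Debian (Squeeze reports 6.0.x, Wheezy 7.x), and the +deb7u2 suffix in the quoted uname output already hints at Debian 7:
    cat /etc/debian_version    # e.g. 7.7 on Wheezy, 6.0.10 on Squeeze
    cat /etc/os-release        # present from Wheezy onwards, not on Squeeze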
177,224 | I have the following working code: largest_prime=1for number_under_test in {1..100}do is_prime=true factors='' for ((divider = 2; divider < number_under_test-1; divider++)); do remainder=$(($number_under_test % $divider)) [ $remainder == 0 ] && [ is_prime ] && is_prime=false && factors+=$divider' ' done [ $is_prime == true ] && echo "${number_under_test} is prime!" || echo "${number_under_test} is NOT prime (factors= $factors)" [ $is_prime == true ] && largest_prime=$number_under_testdoneprintf "\nLargest Prime= $largest_prime\n" This code runs quickly is 0.194 seconds. However I found the && is_prime= false a bit hard to read and it could look (to the untrained eye) as if it was being tested rather than being set which is what it does.So I tried changed the && into an if...then and this works - but is 75 times slower at 14.48 seconds. It's most noticeable on the higher numbers. largest_prime=1for number_under_test in {1..100}do is_prime=true factors='' for ((divider = 2; divider < number_under_test-1; divider++)); do remainder=$(($number_under_test % $divider)) if ([ $remainder == 0 ] && [ $is_prime == true ]); then is_prime=false factors+=$divider' ' fi done [ $is_prime == true ] && echo "${number_under_test} is prime!" || echo "${number_under_test} is NOT prime (factors= $factors)" [ $is_prime == true ] && largest_prime=$number_under_testdone printf "\nLargest Prime= $largest_prime\n" Is there any was to have the clarity of the block without the slowness? Update (1/4/2015 10:40am EST) Great feedback! I am now using the following. Any other feedback ? largest_prime=1separator=' 'for number_under_test in {1..100}; { is_prime=true factors='' for ((divider = 2; divider < (number_under_test/2)+1; divider++)) { remainder=$(($number_under_test % $divider)) if [ $remainder == 0 ]; then is_prime=false factors+=$divider' ' fi } if $is_prime; then printf "\n${number_under_test} IS prime\n\n" largest_prime=$number_under_test else printf "${number_under_test} is NOT prime, factors are: " printf "$factors\n" fi}printf "\nLargest Prime= $largest_prime\n" | That's because you're spawning a sub-shell every time: if ([ $remainder == 0 ] && [ $is_prime == true ]); then Just remove the parentheses if [ $remainder == 0 ] && [ $is_prime == true ]; then If you want to group commands, there's syntax to do that in the current shell: if { [ $remainder == 0 ] && [ $is_prime == true ]; }; then (the trailing semicolon is required, see the manual ) Note that [ is_prime ] is not the same as [ $is_prime == true ] : you could write that as simply $is_prime (with no brackets) which would invoke the bash built-in true or false command. [ is_prime ] is a test with one argument, the string "is_prime" -- when [ is given a single argument, the result is success if the argument is non-empty, and that literal string is always non-empty, hence always "true". For readability, I would change the very long line [ $is_prime == true ] && echo "${number_under_test} is prime!" || echo "${number_under_test} is NOT prime (factors= $factors)" [ $is_prime == true ] && largest_prime=$number_under_test to if [ $is_prime == true ]; then echo "${number_under_test} is prime!"else echo "${number_under_test} is NOT prime (factors= $factors)" # removed extraneous [ $is_prime == true ] test that you probably # didn't notice off the edge of the screen largest_prime=$number_under_testfi Don't underestimate whitespace to improve clarity. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/177224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
177,238 | Is there a (relatively) simple way to test if an executable not only exists, but is valid? By valid, I mean that an x86_64 Mach-O (OS X) executable will not run on an ARM Raspberry Pi. However, simply running tool-osx || tool-rpi works on OS X, where the executable runs, but does not fall back to tool-rpi when the x86_64 fails. How can I fall back to another executable when one is invalid for the processor architecture? | Rather than testing for a valid executable, it's probably best to test what the current architecture is, then select the proper executable based on that. For example: if [ $(uname -m) == 'armv6l' ]; then tool-rpielse tool-osxfi However, if testing the executable is what you really want to do, GNU file can tell you the architecture of an executable: user@host:~$ file $(whereis cat)ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0x4e89fd8f129f0a508afa325b0f0f703fde610971, stripped | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47864/"
]
} |
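Tying the two suggestions above together, a small dispatch sketch that picks the binary from the platform up front instead of probing the executable afterwards (tool-osx and tool-rpi are the names from the question; the armv6l/armv7l values are typical Raspberry Pi machine strings and may need adjusting):
    case "$(uname -s)-$(uname -m)" in
        Darwin-x86_64)             tool=tool-osx ;;
        Linux-armv6l|Linux-armv7l) tool=tool-rpi ;;
        *) echo "unsupported platform" >&2; exit 1 ;;
    esac
    "$tool" "$@"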
177,279 | I'm attempting to download a year's worth of data from an NOAA FTP Server using wget (or ncftpget). However, it takes way longer than it should due to FTP's overhead (I think). For instance, this command time wget -nv -m ftp://ftp:[email protected]/pub/data/noaa/2015 -O /weather/noaa/2015 Or similarly, via ncftpget ncftpget -R -T -v ftp.ncdc.noaa.gov /weather/noaa/ /pub/data/noaa/2015 Yields a result of. 53 minutes to transfer 30M! FINISHED --2015-01-03 16:21:41--Total wall clock time: 53m 32sDownloaded: 12615 files, 30M in 7m 8s (72.6 KB/s)real 53m32.447suser 0m2.858ssys 0m8.744s When I watch this transfer, each individual file transfers quite quickly (500kb/sec) but the process of downloading 12,000 relatively small files incurs an enormous amount of overhead and slows the entire process down. My Questions: Am I assessing the situation correctly? I realize it's hard to tell without knowing the servers but does FTP really suck this much when transferring tons of small files? Are there any tweaks to wget or ncftpget to enable them to play nicer with the remote FTP server? Or perhaps some kind of parallelism? | Here's how I ended up doing solving this using the advice from others. The NOAA in this case has an FTP and an HTTP resource for this, so what I wrote a script that does the following: ncftpls to get a list of files sed to complete the filepaths to a full list of http files aria2c to quickly download them all Example script: # generate file listncftpls ftp://path/to/ftp/resources > /tmp/remote_files.txt# append the full path, use httpsed -i -e 's/^/http:\/\/www1\.website\.gov\/pub\/data\//' /tmp/remote_files.txt# download using aria2caria2c -i /tmp/remote_files.txt -d /filestore/2015 This runs much faster and is probably kinder to the NOAA's servers. There's probably even a clever way to get rid of that middle step, but I haven't found it yet. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92603/"
]
} |
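If installing aria2 is not an option, lftp can mirror an FTP tree with several parallel connections, which attacks the same per-file overhead (a sketch; paths copied from the question, connection count picked arbitrarily):
    lftp -e 'mirror --parallel=8 /pub/data/noaa/2015 /weather/noaa/2015; quit' ftp.ncdc.noaa.gov
For many small files --parallel is the relevant knob; --use-pget-n only helps with large individual files.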
177,291 | What is the best way to renew a gpg key pair when it got expired and what is the reason for the method? The key pair is already signed by many users and available on public servers. Should the new key be a subkey of the expired private key? Should it be signed by the old (I could try to edit the key and change the date of expiration to tomorrow)? Should the new key sign the old? | Private keys never expire. Only public keys do. Otherwise, the world would never notice the expiration as (hopefully) the world never sees the private keys. For the important part, there is only one way, so that saves a discussion about pros and cons. You have to extend the validity of the main key: gpg --edit-key 0x12345678gpg> expire...gpg> save You have to make a decision about extending validity of vs. replacing the subkey(s). Replacing them gives you limited forward security (limited to rather large time frames). If that is important to you then you should have (separate) subkeys for both encryption and signing (the default is one for encryption only). gpg --edit-key 0x12345678gpg> key 1gpg> expire...gpg> key 1gpg> key 2gpg> expire...gpg> save You need key 1 twice for selecting and deselecting because you can extend the validity of only one key at a time. You could also decide to extend the validity unless you have some reason to assume the key has been compromised. Not throwing the whole certificate away in case of compromise makes sense only if you have an offline main key (which IMHO is the only reasonable way to use OpenPGP anyway). The users of your certificate have to get its updated version anyway (either for the new key signatures or for the new key(s)). Replacing makes the key a bit bigger but that is not a problem. If you use smartcards (or plan to do so) then having more (encryption) keys creates a certain inconvenience (a card with the new key cannot decrypt old data). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/177291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26440/"
]
} |
177,309 | may I output both stdout and stderr on console screen and store one of them into a log file? I write a test shell script: #!/bin/shecho OUT! >&1echo ERR! >&2 I can output both of them on screen just by run the script: $./test OUT!ERR! I can output stderr and catch stdout into log file by: $./test | tee 1>logERR!$cat log OUT! I can output nothing but catch all stdout and stderro into log file by: $./test 2>&1| tee 1>log$cat log OUT!ERR! I can output both of stdout and stderr and catch all of them into a log file by: $./test 2>&1 | tee logOUT!ERR!$cat log OUT!ERR! I can output both can catch stdout into log file by: $./test | tee 2>&1 logERR!OUT!$cat log OUT! My questions are: how to just output stdout and catch stderr into file?(I tried ./test|tee 2>log , but doesn't work) how to just output both and catch stderr into file? | how to just output stdout and catch stderr into file?(I tried ./test|tee 2>log, but doesn't work) $ ./test 2>logOUT!$ cat logERR! how to just output both and catch stderr into file? $ ./test 2>&1 >/dev/tty | tee logOUT!ERR!$ cat logERR! If this expression was to be part of a larger pipeline, then you may want to avoid the use of /dev/tty . One way to do that is to swap stdout and stderr. To do this swap, we need to create a third file handle like so: $ exec 3>&1; ./test 2>&1 1>&3 | tee log; exec 3>&-OUT!ERR!$ cat logERR! The first statement, exec 3>&1 , assigns file handle 3 to the current stdout (whatever that might be). Then, ./test 2>&1 1>&3 | tee log pipes stderr to the tee command while sending stdout to file handle 3. Finally, for good housekeeping, exec 3>&- closes file handle 3. Additional notes and comments Regarding: I can output stderr and catch stdout into log file by: $./test | tee 1>logERR!$cat log OUT! That can be simplified to: $ ./test >logERR!$ cat logOUT! Also, regarding: I can output nothing but catch all stdout and stderro into log file by: $ ./test 2>&1| tee 1>log$ cat log OUT!ERR! That can be simplified to: $ ./test >log 2>&1$ cat logOUT!ERR! Or, with bash , but not POSIX shell, a still simpler form is possible: $ ./test &>log$ cat logOUT!ERR! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41350/"
]
} |
177,316 | I have yum install mysql in CentOS 6.4 . Now, when I use mysql command, it throws the following error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' When I check the directory there is no mysql folder. How should I solve this problem? It's worth mentioning when I do service mysql start (or with mysqld ) it errors unrecognized service . I should mention that I have changed the path in my.cnf but nothing happens. The problem is that no *.sock file exists at all. Update : Results of checking mysql process with mysqladmin -u root -p status returns: mysqladmin: connect to server at 'localhost' failederror: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists! Also when I directly want to run /etc/init.d/mysqld it errors that it cannot resolve the ip (localhost.localdomain). Second Update: I have run the command [root@localhost /]# rpm -qa | grep mysql and it returns: mysql-libs-5.1.73-3.el6_5.x86_64mysql-libs-5.1.73-3.el6_5.i686mysql-5.1.73-3.el6_5.x86_64mysql-server-5.1.73-3.el6_5.i686 | how to just output stdout and catch stderr into file?(I tried ./test|tee 2>log, but doesn't work) $ ./test 2>logOUT!$ cat logERR! how to just output both and catch stderr into file? $ ./test 2>&1 >/dev/tty | tee logOUT!ERR!$ cat logERR! If this expression was to be part of a larger pipeline, then you may want to avoid the use of /dev/tty . One way to do that is to swap stdout and stderr. To do this swap, we need to create a third file handle like so: $ exec 3>&1; ./test 2>&1 1>&3 | tee log; exec 3>&-OUT!ERR!$ cat logERR! The first statement, exec 3>&1 , assigns file handle 3 to the current stdout (whatever that might be). Then, ./test 2>&1 1>&3 | tee log pipes stderr to the tee command while sending stdout to file handle 3. Finally, for good housekeeping, exec 3>&- closes file handle 3. Additional notes and comments Regarding: I can output stderr and catch stdout into log file by: $./test | tee 1>logERR!$cat log OUT! That can be simplified to: $ ./test >logERR!$ cat logOUT! Also, regarding: I can output nothing but catch all stdout and stderro into log file by: $ ./test 2>&1| tee 1>log$ cat log OUT!ERR! That can be simplified to: $ ./test >log 2>&1$ cat logOUT!ERR! Or, with bash , but not POSIX shell, a still simpler form is possible: $ ./test &>log$ cat logOUT!ERR! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95333/"
]
} |
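Two things worth checking for the socket error in this entry, as a hedged checklist for a stock CentOS 6 mysql-server install: the init script is named mysqld rather than mysql, and the "cannot resolve localhost.localdomain" message points at /etc/hosts:
    grep localhost /etc/hosts    # should contain a line like: 127.0.0.1 localhost localhost.localdomain
    service mysqld start         # CentOS 6 ships /etc/init.d/mysqld, not "mysql"
    chkconfig mysqld on
    mysqladmin -u root status    # once the daemon runs, the socket path set in my.cnf should exist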
177,343 | It's very weird that after switching to zsh from bash, I can't access root. I normally use 'su' to login as root after I login as a normal user (username is normalusername ) with less privileges. And it was always nice. But after switching root shell from bash to zsh, when I try to login via su, I got: normalusername@(none):~$ suPassword: (enter the correct password)Cannot execute zsh: No such file or directory When I access root directly via ssh from my Mac, I got: localhost:~ myname$ ssh -lroot [email protected]'s password: (enter the correct/incorrect password)Permission denied, please try again. No matter whether I entered the correct password, it comes again and again. So I intentionally entered a wrong password with "su" from a normal user, I got: normalusername@(none):~$ suPassword: (entered a wrong password and pressed enter)(pressed enter)su: Authentication failure After entering the incorrect password I didn't see anything, I typed enter twice and then got the su: Authentication failure result. I tried for many times and the conclusion is: If I enter the correct password, it will tell me that "Cannot execute zsh: No such file or directory" If I enter the incorrect password, it will not showing up anything until I hit enter for 2-6 times. It seems to be an indication that I didn't just forget the "correct password". But how can I access root anyway? The entry in /etc/passwd is root:x:0:0:root:/root:zsh | try: cd /usr/bin; su .Since you didn't give an absolute path to zsh. su is checking the PWD . changing to the directory zsh exists in will work on some systems.an example: % suPassword:su: zsh: No such file or directory% cd /usr/local/bin% suPassword:# print $OSTYPEfreebsd10.0# | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
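Beyond the cd workaround above, the durable fix is to record root's shell as an absolute path, since the /etc/passwd line quoted in the question ends in plain zsh:
    # as root (for instance from the working "cd /usr/bin; su" session):
    chsh -s /bin/zsh root
    grep '^root:' /etc/passwd    # should now end in /bin/zsh (use the path reported by: command -v zsh)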
177,421 | I'm trying to have a small function that I then call. To keep it simple I just want to validate that 12 is divisible by 2, 3 and 4. The code is:
divisible_by () {
    under_test=12
    from=2
    to=4
    divisible=0
    for ((check=from; check<=to; check++)) {
        echo "check= $check"
        if [ $under_test % $check -ne 0 ]; then
            divisible=1
        fi
    }
    return $divisible
}
x=divisible_by
if [ $x -eq 0 ]; then
    echo "true"
else
    echo "false"
fi
Currently I get:
./5_divisible_by_1_to_10.sh: line 16: [: divisible_by: integer expression expected
false
I'm also finding it hard to pass the number in as a parameter but maybe it's related. | divisible_by () {
    under_test=12
    from=2
    to=4
    for ((check=from; check<=to; check++)) {
        echo "check=$check"
        ((under_test % check == 0)) || return
    }
    true
}
if divisible_by; then
    echo true
else
    echo false
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
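Since the question also asks how to pass the number in as a parameter, here is the same idea with the number under test (and optionally the range) taken from the arguments, as a sketch that keeps the answer's return-status style:
    divisible_by_all() {
        local n=$1 from=${2:-2} to=${3:-4} d
        for ((d = from; d <= to; d++)); do
            (( n % d == 0 )) || return 1
        done
    }
    divisible_by_all 12 && echo true || echo false    # true
    divisible_by_all 14 && echo true || echo false    # false (14 is not divisible by 3 or 4)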
177,460 | When trying to install the package kodi-standalone-service however after I get prompted for editing the PKGBUILD then I get this error. makepkg: invalid option '--asroot'Unable to read PKGBUILD I can't seem to find anything on this with google. This also happens in the ISL package as well. (BTW I'm using Yaourt to install this) | Pacman 4.2 was officially released just a few days ago. One of its changes is the removal of the option --asroot from makepkg , which means that you cannot build packages as the root user anymore. The recommended solution now is to build packages with a non-root account. You can read more about this change in Allan's post (he is a developer of pacman). There are also some discussions about the motivation behind this change in the arch-general mailing list. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14013/"
]
} |
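A minimal sketch of the workflow this change expects, building with an ordinary account and keeping root only for the install step (the builder user name and package path are placeholders):
    useradd -m builder            # once, as root
    su - builder
    # fetch the PKGBUILD for the AUR package as that user (yaourt, cower, or by hand), then:
    makepkg -s                    # -s pulls build dependencies via sudo pacman, so it needs sudo rights;
                                  # without them, install the makedepends as root first and run plain makepkg
    exit
    pacman -U /home/builder/kodi-standalone-service/*.pkg.tar.xz    # install the result as root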
177,462 | I am very confused as to why I cannot connect to my fedora server from my desktop through filezilla. I think I have added the service to the firewall (though when I use firewall-cmd --list-services it does not show...) and the server is started, I can telnet into it from the 192 IP locally but from my desktop it just says Status: Connecting to 192.168.1.113:21...Error: Connection timed outError: Could not connect to server is my configuration incorrect? [root@localhost ~]# cat /etc/redhat-releaseFedora release 21 (Twenty One)[root@localhost /]# netstat -ntulpActive Internet connections (only servers)Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program nametcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 981/smbdtcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 981/smbdtcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 838/sshdtcp6 0 0 :::443 :::* LISTEN 1452/httpdtcp6 0 0 :::445 :::* LISTEN 981/smbdtcp6 0 0 :::9090 :::* LISTEN 1/systemdtcp6 0 0 :::139 :::* LISTEN 981/smbdtcp6 0 0 :::80 :::* LISTEN 1452/httpdtcp6 0 0 :::21 :::* LISTEN 1559/vsftpdtcp6 0 0 :::22 :::* LISTEN 838/sshdudp 0 0 0.0.0.0:68 0.0.0.0:* 958/dhclientudp 0 0 192.168.1.255:137 0.0.0.0:* 840/nmbdudp 0 0 192.168.1.113:137 0.0.0.0:* 840/nmbdudp 0 0 0.0.0.0:137 0.0.0.0:* 840/nmbdudp 0 0 192.168.1.255:138 0.0.0.0:* 840/nmbdudp 0 0 192.168.1.113:138 0.0.0.0:* 840/nmbdudp 0 0 0.0.0.0:138 0.0.0.0:* 840/nmbdudp 0 0 0.0.0.0:11320 0.0.0.0:* 958/dhclientudp6 0 0 :::18941 :::* 958/dhclient vsftpd.conf # Example config file /etc/vsftpd/vsftpd.conf## The default compiled in settings are fairly paranoid. This sample file# loosens things up a bit, to make the ftp daemon more usable.# Please see vsftpd.conf.5 for all compiled in defaults.## READ THIS: This example file is NOT an exhaustive list of vsftpd options.# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's# capabilities.## Allow anonymous FTP? (Beware - allowed by default if you comment this out).anonymous_enable=YES## Uncomment this to allow local users to log in.# When SELinux is enforcing check for SE bool ftp_home_dirlocal_enable=YES## Uncomment this to enable any form of FTP write command.write_enable=YES## Default umask for local users is 077. You may wish to change this to 022,# if your users expect that (022 is used by most other ftpd's)local_umask=022## Uncomment this to allow the anonymous FTP user to upload files. This only# has an effect if the above global write enable is activated. Also, you will# obviously need to create a directory writable by the FTP user.# When SELinux is enforcing check for SE bool allow_ftpd_anon_write, allow_ftpd_full_access#anon_upload_enable=YES## Uncomment this if you want the anonymous FTP user to be able to create# new directories.#anon_mkdir_write_enable=YES## Activate directory messages - messages given to remote users when they# go into a certain directory.dirmessage_enable=YES## Activate logging of uploads/downloads.xferlog_enable=YES## Make sure PORT transfer connections originate from port 20 (ftp-data).connect_from_port_20=YES## If you want, you can arrange for uploaded anonymous files to be owned by# a different user. Note! Using "root" for uploaded files is not# recommended!#chown_uploads=YES#chown_username=whoever## You may override where the log file goes if you like. 
The default is shown# below.#xferlog_file=/var/log/xferlog## If you want, you can have your log file in standard ftpd xferlog format.# Note that the default log file location is /var/log/xferlog in this case.xferlog_std_format=YES#log_ftp_protocol=YES## You may change the default value for timing out an idle session.#idle_session_timeout=600## You may change the default value for timing out a data connection.#data_connection_timeout=120## It is recommended that you define on your system a unique user which the# ftp server can use as a totally isolated and unprivileged user.#nopriv_user=ftpsecure## Enable this and the server will recognise asynchronous ABOR requests. Not# recommended for security (the code is non-trivial). Not enabling it,# however, may confuse older FTP clients.#async_abor_enable=YES## By default the server will pretend to allow ASCII mode but in fact ignore# the request. Turn on the below options to have the server actually do ASCII# mangling on files when in ASCII mode.# Beware that on some FTP servers, ASCII support allows a denial of service# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd# predicted this attack and has always been safe, reporting the size of the# raw file.# ASCII mangling is a horrible feature of the protocol.#ascii_upload_enable=YES#ascii_download_enable=YES## You may fully customise the login banner string:#ftpd_banner=Welcome to blah FTP service.## You may specify a file of disallowed anonymous e-mail addresses. Apparently# useful for combatting certain DoS attacks.#deny_email_enable=YES# (default follows)#banned_email_file=/etc/vsftpd/banned_emails## You may specify an explicit list of local users to chroot() to their home# directory. If chroot_local_user is YES, then this list becomes a list of# users to NOT chroot().# (Warning! chroot'ing can be very dangerous. If using chroot, make sure that# the user does not have write access to the top level directory within the# chroot)#chroot_local_user=YES#chroot_list_enable=YES# (default follows)#chroot_list_file=/etc/vsftpd/chroot_list## You may activate the "-R" option to the builtin ls. This is disabled by# default to avoid remote users being able to cause excessive I/O on large# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume# the presence of the "-R" option, so there is a strong case for enabling it.#ls_recurse_enable=YES## When "listen" directive is enabled, vsftpd runs in standalone mode and# listens on IPv4 sockets. This directive cannot be used in conjunction# with the listen_ipv6 directive.listen=NO## This directive enables listening on IPv6 sockets. By default, listening# on the IPv6 "any" address (::) will accept connections from both IPv6# and IPv4 clients. It is not necessary to listen on *both* IPv4 and IPv6# sockets. If you want that (perhaps because you want to listen on specific# addresses) then you must run two copies of vsftpd with two configuration# files.# Make sure, that one of the listen options is commented !!listen_ipv6=YESpam_service_name=vsftpduserlist_enable=YEStcp_wrappers=YES | Pacman 4.2 was officially released just a few days ago. One of its changes is the removal of the option --asroot from makepkg , which means that you cannot build packages as the root user anymore. The recommended solution now is to build packages with a non-root account. You can read more about this change in Allan's post (he is a developer of pacman). 
There are also some discussions about the motivation behind this change in the arch-general mailing list. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95684/"
]
} |
177,469 | I'm using OpenSUSE 13.2 with GNOME 3.14 Desktop Environment. My interface language is en_US while my locale is ar_SY.My locale contain some wrong data about months names (May and June) and I want to repair that manually. Where can I find the locale files in order to change the values? Not the locale but the data with in the locale, for instance: In Syria, May is Ψ£ΩΨ§Ψ± but it's written as ΩΩΨ§Ψ±Ψ§Ω in the locale ar_SY and I want to fix that. | Pacman 4.2 was officially released just a few days ago. One of its changes is the removal of the option --asroot from makepkg , which means that you cannot build packages as the root user anymore. The recommended solution now is to build packages with a non-root account. You can read more about this change in Allan's post (he is a developer of pacman). There are also some discussions about the motivation behind this change in the arch-general mailing list. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96213/"
]
} |
177,491 | I would like to iterate through a file containing dates (named dates.txt) in the following format: 2009 08 03 08 092009 08 03 09 102009 08 03 10 112009 08 03 11 122009 08 03 12 132009 08 03 13 14 And pass each field of each line as a positional parameter to another script. ie: the other script is executed by putting the following at the command line: $ . the_script 2009 08 03 08 09 I have tried for i in $(./dates.txt);do echo ${i};done but get: ./dates: line 1: 2009: command not found./dates: line 2: 2009: command not found./dates: line 3: 2009: command not found./dates: line 4: 2009: command not found./dates: line 5: 2009: command not found./dates: line 6: 2009: command not found from this i can tell it is working through each line but perhaps getting hung up on each field? Because of the above, I have yet been able to figure out how to pass the read lines as positional parameters to the other script. Not sure where to work it in either? Help, please! | Perhaps you mean something like this? while read -d $'\n' i; do echo $i;done <./dateswhile read -d $'\n' i; do . the_script $i;done <./dates | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96234/"
]
} |
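A variant of the same loop that splits each line of dates.txt into its five fields before handing them to the other script (the field names here are arbitrary):
while read -r y m d h1 h2; do
    . the_script "$y" "$m" "$d" "$h1" "$h2"    # sourcing with arguments sets $1..$5 for the_script (bash)
done < dates.txt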
177,513 | Is there a grep-like utility that will enable me to do grep searches with logic operators. I want to be able to nest and combine the logical constructs freely. For example, stuff like this should be possible: grep (term1 && term2) || (term1 && (term3 xor term4)) * I realize this can be done with vanilla grep and additional bash scripting, but my goal here is to avoid having to do that. | There are a lot of ways to use grep with logical operators. Using multiple -e options matches anything that matches any of the patterns, giving the OR operation. Example: grep -e pattern1 -e pattern2 filename In extended regular expressions ( grep -E ), you can use | to combine multiple patterns with the OR operation. Example: grep -E 'pattern1|pattern2' filename grep -v can simulate the NOT operation. There is no AND operator in grep , but you can brute-force simulate AND by using multiple patterns with | . Example: grep -E 'pattern1.*pattern2|pattern2.*pattern1' filename The above example will match all the lines that contain both pattern1 and pattern2 in either order. This gets very ugly if there are more patterns to combine. A simpler pipe-based AND is sketched after this entry. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/177513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
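A common, simpler way to get the AND case without the pattern1.*pattern2|pattern2.*pattern1 construction is to chain two greps through a pipe (patterns are placeholders):
# lines containing both pattern1 and pattern2, in either order
grep 'pattern1' filename | grep 'pattern2'
# lines containing pattern1 but not pattern2 (AND NOT)
grep 'pattern1' filename | grep -v 'pattern2'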
177,552 | I tried my luck with grep and sed but somehow I don't manage to get it right. I have a log file which is about 8 GB in size. I need to analyze a 15 minute time period of suspicious activity. I located the part of the log file that I need to look at and I am trying to extract those lines and save it into a separate file. How would I do that on a regular CentOS machine? My last try was this but it didn't work. I am at loss when it comes to sed and those type of commands. sed -n '2762818,2853648w /var/log/output.txt' /var/log/logfile | sed -n '2762818,2853648p' /var/log/logfile > /var/log/output.txt p is for print | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89041/"
]
} |
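Because the log file is around 8 GB, it may be worth telling sed to stop reading once the last wanted line has been printed; a hedged variant of the accepted command for GNU sed:
# q quits at line 2853648, so sed never scans the rest of the 8 GB file
sed -n '2762818,2853648p;2853648q' /var/log/logfile > /var/log/output.txt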
177,572 | Used to be able to right click on the tab and change the title. Not sure how to do this anymore. Just upgraded to Fedora 21. EDIT: I have switched from gnome-terminal to ROXterm | Create a function in ~/.bashrc : function set-title() { if [[ -z "$ORIG" ]]; then ORIG=$PS1 fi TITLE="\[\e]2;$*\a\]" PS1=${ORIG}${TITLE}} Then use your new command to set the terminal title. It works with spaces in the name too set-title my new tab title It is possible to subsequently use set-title again (original PS1 is preserved as ORIG ). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/177572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37688/"
]
} |
177,603 | This is strangely difficult to figure out. I do not want to delete what is to the left of the cursor, only from the cursor to the end of the line, and the following 3 lines in ONE command. I know I could do d$ and 3dd | Actually, 3dd would also delete what is to the left of the cursor, because it removes whole lines. You want 4D : D deletes from the cursor to the end of the line, and the count of 4 extends the deletion over the following three lines as well. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
177,610 | I have postfix installed on a development box and I used the parameters from this other posting to configure postfix to work on localhost only. But the other posting does not explain how to send emails or view received emails from the command line. I have higher level code for sending/receiving smtp email, but I want to be able to do it from the command line first in order to validate the postfix is working before I start testing the higher level code. I have made several tries and seem to be sending emails, but I cannot find the emails that have been sent. How can I confirm that the emails have been sent and also read the emails from the command line? EDIT#1: I typed MAIL=/home/root/Maildir in the terminal then hit return, then typed mail and hit return. I did this in the root account and again in the username account. This showed a list of old emails in the root account, so I logged into the username account and typed the following to send an email from username to root : sendmail root@localhost <<EOFsubject:This is a testfrom:username@localhostBody message here...EOF The preceding code resulted in another command prompt with no error. But when I logged back into root and typed mail again to check mail, the new email was not listed along with the old emails. Also, main.cf is as follows: queue_directory = /var/spool/postfixcommand_directory = /usr/sbindaemon_directory = /usr/libexec/postfixdata_directory = /var/lib/postfixmail_owner = postfixmyorigin = localhostinet_interfaces = localhostinet_protocols = allunknown_local_recipient_reject_code = 550mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128relayhost = alias_maps = hash:/etc/aliasesalias_database = hash:/etc/aliaseshome_mailbox = Maildir/mailbox_command = debug_peer_level = 2debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5sendmail_path = /usr/sbin/sendmail.postfixnewaliases_path = /usr/bin/newaliases.postfixmailq_path = /usr/bin/mailq.postfixsetgid_group = postdrophtml_directory = nomanpage_directory = /usr/share/mansample_directory = /usr/share/doc/postfix-2.10.1/samplesreadme_directory = /usr/share/doc/postfix-2.10.1/README_FILES What am I doing wrong? EDIT#2: After IanMcGowan's suggestions, I checked to see that mailx was already installed. I then used this tutorial to test sending and receiving emails using the mailx commands, but I am not able to read the newly send emails either. I think it is a configuration problem. I am using email addresses like root@localhost and username@localhost . 
telnet localhost 25 results in: Trying 127.0.0.1...Connected to localhost.Escape character is '^]'.220 localhost.localdomain ESMTP Postfix nano /var/log/maillog contains: Jan 5 12:09:40 localhost postfix/postfix-script[6162]: starting the Postfix mail systemJan 5 12:09:40 localhost postfix/master[6164]: daemon started -- version 2.10.1, configuration /etc/postfixJan 5 12:46:00 localhost postfix/postfix-script[3036]: starting the Postfix mail systemJan 5 12:46:00 localhost postfix/master[3047]: daemon started -- version 2.10.1, configuration /etc/postfixJan 5 13:12:02 localhost postfix/smtpd[4642]: connect from localhost.localdomain[127.0.0.1]Jan 5 13:12:02 localhost postfix/smtpd[4642]: DB1249A618: client=localhost.localdomain[127.0.0.1]Jan 5 13:12:02 localhost postfix/cleanup[4645]: DB1249A618: message-id=<1738078707.0.1420492322780.JavaMail.username@localhost.localdomain>Jan 5 13:12:02 localhost postfix/qmgr[3058]: DB1249A618: from=<[email protected]>, size=632, nrcpt=1 (queue active)Jan 5 13:12:02 localhost postfix/smtpd[4642]: disconnect from localhost.localdomain[127.0.0.1]Jan 5 13:12:02 localhost postfix/local[4646]: DB1249A618: to=<[email protected]>, orig_to=<root@localhost>, relay=local, delay=0.11, delays=0.06/0.02/0/0.03, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 13:12:02 localhost postfix/qmgr[3058]: DB1249A618: removedJan 5 14:29:20 localhost postfix/pickup[5207]: 7F4439A616: uid=1000 from=<username>Jan 5 14:29:20 localhost postfix/cleanup[5266]: 7F4439A616: message-id=<[email protected]>Jan 5 14:29:20 localhost postfix/qmgr[3058]: 7F4439A616: from=<[email protected]>, size=334, nrcpt=1 (queue active)Jan 5 14:29:20 localhost postfix/local[5271]: 7F4439A616: to=<[email protected]>, orig_to=<root@localhost>, relay=local, delay=0.13, delays=0.1/0.01/0/0.02, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 14:29:20 localhost postfix/qmgr[3058]: 7F4439A616: removedJan 5 14:57:10 localhost postfix/pickup[5207]: A21B49A618: uid=0 from=<root>Jan 5 14:57:10 localhost postfix/cleanup[5529]: A21B49A618: message-id=<[email protected]>Jan 5 14:57:10 localhost postfix/qmgr[3058]: A21B49A618: from=<[email protected]>, size=534, nrcpt=1 (queue active)Jan 5 14:57:10 localhost postfix/local[5531]: A21B49A618: to=<[email protected]>, orig_to=<root>, relay=local, delay=0.38, delays=0.34/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 14:57:10 localhost postfix/qmgr[3058]: A21B49A618: removedJan 5 15:47:38 localhost postfix/pickup[5207]: F312D9A618: uid=0 from=<root>Jan 5 15:47:39 localhost postfix/cleanup[5975]: F312D9A618: message-id=<[email protected]>Jan 5 15:47:39 localhost postfix/qmgr[3058]: F312D9A618: from=<[email protected]>, size=458, nrcpt=1 (queue active)Jan 5 15:47:39 localhost postfix/local[5977]: F312D9A618: to=<[email protected]>, orig_to=<username@localhost>, relay=local, delay=0.12, delays=0.09/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 15:47:39 localhost postfix/qmgr[3058]: F312D9A618: removedJan 5 15:48:20 localhost postfix/pickup[5207]: A826C9A618: uid=1000 from=<username>Jan 5 15:48:20 localhost postfix/cleanup[5975]: A826C9A618: message-id=<[email protected]>Jan 5 15:48:20 localhost postfix/qmgr[3058]: A826C9A618: from=<[email protected]>, size=461, nrcpt=1 (queue active)Jan 5 15:48:20 localhost postfix/local[5977]: A826C9A618: to=<[email protected]>, orig_to=<username@localhost>, relay=local, delay=0.11, delays=0.08/0/0/0.03, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 15:48:20 localhost postfix/qmgr[3058]: A826C9A618: 
removedJan 5 15:48:29 localhost postfix/pickup[5207]: 54AA19A618: uid=1000 from=<username>Jan 5 15:48:29 localhost postfix/cleanup[5975]: 54AA19A618: message-id=<[email protected]>Jan 5 15:48:29 localhost postfix/qmgr[3058]: 54AA19A618: from=<[email protected]>, size=461, nrcpt=1 (queue active)Jan 5 15:48:29 localhost postfix/local[5977]: 54AA19A618: to=<[email protected]>, orig_to=<root@localhost>, relay=local, delay=0.11, delays=0.09/0/0/0.02, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 15:48:29 localhost postfix/qmgr[3058]: 54AA19A618: removedJan 5 15:52:03 localhost postfix/pickup[5207]: C756E9A618: uid=0 from=<root>Jan 5 15:52:03 localhost postfix/cleanup[6074]: C756E9A618: message-id=<[email protected]>Jan 5 15:52:03 localhost postfix/qmgr[3058]: C756E9A618: from=<[email protected]>, size=491, nrcpt=1 (queue active)Jan 5 15:52:03 localhost postfix/local[6076]: C756E9A618: to=<[email protected]>, orig_to=<root@localhost>, relay=local, delay=0.13, delays=0.09/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to maildir)Jan 5 15:52:03 localhost postfix/qmgr[3058]: C756E9A618: removedJan 5 16:02:36 localhost postfix/smtpd[6213]: connect from localhost.localdomain[127.0.0.1]Jan 5 16:04:26 localhost postfix/smtpd[6213]: disconnect from localhost.localdomain[127.0.0.1] The logs say delivered to maildir . Am I using the wrong syntax to access maildir contents? If so, what is the correct syntax? Or is the problem in main.cf above? EDIT#3 I typed nano /var/spool/mail/root and was able to view the old emails that show up when I log in as root and type mail or mailx . But the new emails are not located there. These emails are automated and seem to be relics from before postfix was configured to use /Maildir structure. | Actually, 3dd deletes what is to the left of the cursor. You want 4D . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
177,621 | I'm wondering how you determine the end of a script/file. I'm especially interested in old Unix versions (like V6). Is there a '\0' after the last written character? | Userland programs under even older Unixes did not see "pad" bytes at the end of a file. I know that MS-DOS or CP/M would fill disk blocks with Ctrl-Z characters, so not only did a file reading algorithm have to check for end-of-disk-blocks, it also had to check for padding bytes. Unixes never did that sort of thing. Programs read bytes until the end-of-file condition happens, which for the read(2) system call means returning 0. Unfortunately a long-running system call can be interrupted, which causes read() to return the error code (-1), and the global symbol errno evaluates to EINTR, so Unixes also traditionally introduce some goofiness into reading certain devices. There's also a file system aspect to this all: Unix filesystems would put data into disk blocks, and keep a file-size-in-bytes value in the inode. Some other OSes only kept file size in blocks. If data was smaller than a block, the problem bubbled up into userland, with pad bytes or other nonsense. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96047/"
]
} |
177,635 | I want to use box-cutter/debian77 but when I try vagrant box add box-cutter/debian77 https://atlas.hashicorp.com/box-cutter/debian77 I get Downloading box from URL: https://atlas.hashicorp.com/box-cutter/debian77Extracting box...e: 0/s, Estimated time remaining: --:--:--)The box failed to unpackage properly. Please verify that the boxfile you're trying to add is not corrupted and try again. Theoutput from attempting to unpackage (if any):bsdtar: Error opening archive: Unrecognized archive format what is wrong? I have instlalled all archivers (unar, bsdtar, bzip2,...) Or where can I download the image manually? I use Vagrant 1.4.3 on Ubuntu 14.10. the same command works on a friend using arch and on debian wheezy too | Userland programs under even older Unixes did not see "pad" bytes at the end of a file. I know that MS-DOS or CP/M would fill disk blocks with Ctrl-Z characters, so not only did a file reading algorithm have to check for end-of-disk-blocks, it also had to check for padding bytes. Unixes never did that sort of thing. Programs read bytes until the end-of-file condition happens, which for the read(2) system call means returning 0. Unfortuately a long-running system call can be interrupted, which causes read() to return the error code (-1), and the global symbol errno evaluates to EINTR, so Unixes also traditioally introduce some goofiness into reading certain devices. There's also a file system aspect to this all: Unix filesystems would put data into disk blocks, and keep a file-size-in-bytes value in the inode. Some other OSes only kept file size in blocks. If data was smaller than a block, the problem bubbled up into userland, with pad bytes or other nonsense. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
177,644 | After accidentally holding down ctrl+alt+t, my tmux sessions are now automatically named with annoyingly high numbers: llama@llama:~$ tmux ls124: 1 windows (created Mon Jan 5 16:45:55 2015) [80x24] (attached) How can I reset this number to 1 ? I've tried tmux rename-session 'ing my session to a lower number, but after closing it and opening a new session, the numbering resumes from the original number. Is there any way to fix this without restarting tmux? | No, this is not currently possible. The only thing you can do about this without restarting the server is to override the name manually when creating a new session by issuing tmux new -s 5 , for example: $ tmux new -d -P10:$ tmux ls10: 1 windows (created Wed Jan 7 15:50:29 2015) [107x89]$ tmux new -s 5 -d -P5:$ tmux ls10: 1 windows (created Wed Jan 7 15:50:29 2015) [107x89]5: 1 windows (created Wed Jan 7 15:50:40 2015) [107x89]$ tmux new -s 5 -d -Pduplicate session: 5 The automatic session number is governed by the global variable u_int next_session_id in session.c which cannot be accessed from the command line, as grepping the source code reveals. tmux new-session calls session_create() in session.c (line 88) and next_session_id is incremented whenever you create a new session. The argument of -s flag to new-session (short new ) sets name , otherwise next_session_id is used. if (name != NULL) { s->name = xstrdup(name); s->id = next_session_id++; } else { s->name = NULL; do { s->id = next_session_id++; free(s->name); xasprintf(&s->name, "%u", s->id); } while (RB_FIND(sessions, &sessions, s) != NULL); } | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45600/"
]
} |
177,651 | If I do $ cat > file.txt text Ctrl - D Ctrl - D Question 1: If I don't press enter, why do I have to press Ctrl - D twice? If I do $ cat > file.txt pa bam pshhh Ctrl - Z [2]+ Stopped cat > file.txt$ cat file.txt$ cat > file.txt pa bam pshhh Ctrl - Z [2]+ Stopped cat > file.txt$ cat file.txtpa bam pshhh Why is the second time the file with 1 line? | In Unix, most objects you can read and write - ordinary files, pipes, terminals, raw disk drives - are all made to resemble files. A program like cat reads from its standard input like this: n = read(0, buffer, 512); which asks for 512 bytes. n is the number of bytes actually read, or -1 if there's an error. If you did this repeatedly with an ordinary file, you'd get a bunch of 512-byte reads, then a somewhat shorter read at the tail end of the file, then 0 if you tried to read past the end of the file. So, cat will run until n is <= 0. Reading from a terminal is slightly different. After you type in a line, terminated by the Enter key, read returns just that line. There are a few special characters you can type. One is Ctrl-D . When you type this, the operating system sends all of the current line that you've typed (but not the Ctrl-D itself) to the program doing the read. And here's the serendipitous thing: if Ctrl-D is the first character on the line, the program is sent a line of length 0 - just like the program would see if it just got to the end of an ordinary file. cat doesn't need to do anything differently , whether it's reading from an ordinary file or a terminal. Another special character is Ctrl-Z . When you type it, anywhere in a line, the operating system discards whatever you've typed up until that point and sends a SIGTSTP signal to the program, which normally stops (pauses) it and returns control to the shell. So in your example $ cat > file.txtpa bam pshhh<Ctrl+Z>[2]+ Stopped cat > file.txt you typed some characters that were discarded, then cat was stopped without having written anything to its output file. $ cat > file.txtpa bam pshhh<Ctrl+Z>[2]+ Stopped cat > file.txt you typed in one line, which cat read and wrote to its output file, and then the Ctrl-Z stopped cat . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/177651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96175/"
]
} |
177,740 | I read about pipes and streams and Iβm still confused on how itβs implemented. A program is started and it reads data from βstandard inputβ stream (stdin), which is where the keyboard sends data to. My question is, how is that different from a pipe? Piping allows me to have a process that sends data to a pipe, and another process is reading data from it. When the keyboard is pressed, data is sent to stdin and a program is reading data from this same stream. A "read" operation is executed as soon as data is sent to this stream, just like a pipe. Are these streams piped? | Unix terminal i/o has traditionally been implemented as some sort of queue . Older kernels used clists. V8 Unix used streams . In most cases, clists and streams are used to implement a portion of the link between a user process (specifically, a file descriptor) and a character device driver (for example, a serial port or a pty). Pipes are also a queue, but they link user processes (specifically, a pair of file descriptors). There are a variety of implementations of pipes, including sockets; a special type of file; and even STREAMS (STREAMS is a derivative of V8 streams.) So, streams and pipes are both implementations of a queue, but they are used in different situations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27153/"
]
} |
177,777 | I have the following output: 2015/1/7 82015/1/8 492015/1/9 402015/1/10 3372015/1/11 112015/1/12 32015/1/13 92015/1/14 1022015/1/15 622015/1/16 102015/1/17 302015/1/18 302015/1/19 12015/1/20 32015/1/21 232015/1/22 122015/1/24 62015/1/25 32015/1/27 22015/1/28 162015/1/29 12015/2/1 122015/2/2 22015/2/3 12015/2/4 102015/2/5 132015/2/6 22015/2/9 22015/2/10 252015/2/11 12015/2/12 62015/2/13 122015/2/14 22015/2/16 82015/2/17 82015/2/20 12015/2/23 12015/2/27 12015/3/2 32015/3/3 2 And I'd like to draw a histogram 2015/1/7 ===2015/1/8 ===========2015/1/9 ==========2015/1/10 ====================================================================2015/1/11 ===2015/1/11 =... Do you know if there is a bash command that would let me do that? | Try this in perl : perl -lane 'print $F[0], "\t", "=" x ($F[1] / 5)' file EXPLANATIONS: -a is an explicit split() in @F array, we get the values with $F[n] x is to tell perl to print a character N times ($F[1] / 5) : here we get the number and divide it by 5 for a pretty print output (simple arithmetic) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6107/"
]
} |
177,792 | I have a file named variables.f90 , having many lines defining different variables, as follows :: integer::n_monomer=6800real*8::rx=5.0d0#... randomly integer and real numbers definedreal*8::mu_nano=8.0d0............. and I dont know what will be the value of mu_nano , it can be any real number.Now I want to modify the above statement such that, its value is incremented by 1 using bash script as follows :: real*8::mu_nano=9.0d0 | Try this in perl : perl -lane 'print $F[0], "\t", "=" x ($F[1] / 5)' file EXPLANATIONS: -a is an explicit split() in @F array, we get the values with $F[n] x is to tell perl to print a character N times ($F[1] / 5) : here we get the number and divide it by 5 for a pretty print output (simple arithmetic) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81858/"
]
} |
177,843 | I have a JSON output that contains a list of objects stored in a variable. (I may not be phrasing that right) [ { "item1": "value1", "item2": "value2", "sub items": [ { "subitem": "subvalue" } ] }, { "item1": "value1_2", "item2": "value2_2", "sub items_2": [ { "subitem_2": "subvalue_2" } ] }] I need all the values for item2 in a array for a bash script to be run on ubuntu 14.04.1. I have found a bunch of ways to get the entire result into an array but not just the items I need | Using jq : $ cat json[ { "item1": "value1", "item2": "value2", "sub items": [ { "subitem": "subvalue" } ] }, { "item1": "value1_2", "item2": "value2_2", "sub items_2": [ { "subitem_2": "subvalue_2" } ] }] CODE: arr=( $(jq -r '.[].item2' json) )printf '%s\n' "${arr[@]}" OUTPUT: value2value2_2 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/177843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97293/"
]
} |
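One caveat: arr=( $(jq ...) ) word-splits the output, so a value containing spaces would become several array elements. If that matters, a bash 4 sketch using mapfile keeps one element per output line:
mapfile -t arr < <(jq -r '.[].item2' json)
printf '%s\n' "${arr[@]}"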
177,846 | Specifically, in smartctl output, how is LifeTime(hours) calculated? I'm assuming it's one of the following: The difference (in hours) between the time of the test and the manufacture date of the drive. The difference (in hours) between the time of the test and the first powered-on date of the drive. The difference (in hours) between the time of the test (in terms of "drive running hours") and the total number of "drive running hours". *By "drive running hours", I mean a running total of the number of hours a drive has been powered on. (Analogy: Airplane engines don't have odometers like cars. Rather, they usually show the number of hours the engines have been running. I'm using "drive running hours" to mean something similar, but for hard drives) Example smartctl output: === START OF READ SMART DATA SECTION ===SMART Self-test log structure revision number 1Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error# 1 Short offline Completed without error 00% 22057 -# 2 Short offline Completed without error 00% 22057 -# 3 Extended offline Completed without error 00% 22029 -# 4 Extended offline Completed without error 00% 21958 - | If I remember correctly this can vary from drive to drive. Most brands:Once testing is done at the manufacturer the firmware is loaded which will begin monitoring the first time the drive is started by the user. The firmware does not monitor actual times. It works exactly like the hour meter on a plane. The only difference being some brands might do testing with the firmware active, so a brand new drive might show 1-2 hours where others will show 0 (Unless the test takes over an hour.) If you run smartctl -A /dev/sdX , replacing x with your drive, you can see the attributes that your HDD is reporting. There is a Powered On Time attribute which is where this value comes from. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65991/"
]
} |
177,849 | What is the meaning of the directory specified by '//'? It can be accessed by typing in 'cd //' at the comand prompt. I have tried this on mac 10.9.5 and Centos 6. It shows the contents for the root directory. In the prompt it shows '//' for the directory. Is this simply a glitch in the prompt code? I use \w to show the working directory. | If I remember correctly this can vary from drive to drive. Most brands:Once testing is done at the manufacturer the firmware is loaded which will begin monitoring the first time the drive is started by the user. The firmware does not monitor actual times. It works exactly like the hour meter on a plane. The only difference being some brands might do testing with the firmware active, so a brand new drive might show 1-2 hours where others will show 0 (Unless the test takes over an hour.) If you run smartctl -A /dev/sdX , replacing x with your drive, you can see the attributes that your HDD is reporting. There is a Powered On Time attribute which is where this value comes from. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96123/"
]
} |
177,877 | I am looking at a co-workers shell code and I saw this: date 2&>$0 I know what date does, but what's 2&>$0 doing?He's out for a while, so I can't ask him what this part was about. | Summary Under bash , if that command is in a script, the script file will be overwritten with an error message. Example Consider the script: $ cat test.shdate 2&>$0 Now, run the script: $ bash test.shtest.sh: line 2: unexpected EOF while looking for matching ``'test.sh: line 3: syntax error: unexpected end of file Observe the new contents of the script: $ cat test.shdate: invalid date `2' Explanation The command, date 2&>$0 , is interpreted as follows: The date command is run with argument 2 All output, both stdout and stderr, from the date command is redirected to the file $0 . $0 is the name of the current script. The symbol > indicates redirection of, by default, stdout. As a bash extension, the symbol &> is a shortcut indication redirection of both stdout and stderr. Consequently, both stdout and stderr are redirected to the file $0 . Once the script file is overwritten, it is no longer a valid script and bash will complain about the malformed commands. Difference between bash and POSIX shells With a simple POSIX shell, such as dash , the shortcut &> is not supported. Hence, the command date 2&>$0 will redirect only stdout to the file $0 . In this case, that means that the script file is overwritten with an empty file while the date error message will appear on the terminal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
177,902 | I have changed my OS X Yosemite shell to Zsh and configured with "oh my zsh" plugins, recently i installed proxychains-ng to proxy command line tools, but i found zsh completion does not work on the command after proxychains4, like proxychains4 wget [hit tab], will not come up with wget's optionsproxychains4 gi[tab], will not come up with "git" And zsh does not work on command after alias either, alias proxy="http_proxy=http://127.0.0.1:12345"proxy brew[hit tab], will not come up with brew's subcommands there will be no completions for command and it's option. Any idea? thank you. | By default, zsh expands alias before doing completion. It's possible that your configuration disables this; you can reenable it explicitly by unsetting the complete_aliases option . unsetopt complete_aliases For an external command like proxychains4 , you can declare that its arguments are themselves a command and its arguments by making its completion _precommand . This isn't easy to find in the documentation, but you can observe the configuration for similar commands such as nohup by running echo $_comps[nohup] . This is with the βnewβ completion system (after running compinit ). compdef _precommand proxychains4 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98353/"
]
} |
177,933 | I am using CentOS with Citrix XenServer. [root@xen01 shm]# uname -aLinux xen01 2.6.32.43-0.4.1.xs1.8.0.855.170800xen #1 SMP Mon Jul 21 05:12:35 EDT 2014 i686 i686 i386 GNU/Linux[root@xen01 shm]# lsb_release -aLSB Version: :core-4.0-ia32:core-4.0-noarchDistributor ID: XenServerDescription: XenServer release 6.2.0-70446c (xenenterprise)Release: 6.2.0-70446cCodename: xenenterprise I installed apcupsd package, from http://sourceforge.net/projects/apcupsd/files/rpms%20-%20Stable/3.14.10/apcupsd-3.14.10-1.el5.i386.rpm/download But there was a new version the past year, and seems that RPM wasn't updated to 3.14.12. I found this version however: https://admin.fedoraproject.org/updates/FEDORA-EPEL-2014-4191/apcupsd-3.14.12-1.el6 I would like to know what exactly EL5 and EL6 means in term of packages. The latter fails because of dependencies, but am I able to instal EL6 packages? | EL5 stands for Enterprise Linux 5 (Red Hat Enterprise Linux version or CentOS version) and EL6 accordingly for Enterprise Linux 6. You can find the current release you are running in: cat /etc/redhat-release or cat /etc/centos-release Depending on the version your system is running with you are able to install the packages. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21575/"
]
} |
177,935 | I tried the following commands variable='one|name'echo $variable The output is one|name whereas echo one|name gives an error No command 'name' found . This is reasonable because bash treats | as a pipe and tries to execute command name with one as input. But why does echo $variable print one|name ? After Parameter and Variable expansion, shouldn't it be equivalent to echo one|name ? Version: GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu) | No, it shouldn't, because of the way bash processes the command. When you type echo one|name , bash parses the command and treats | as a pipe token, so | creates a pipeline. When you type echo $variable , because token parsing occurs before variable expansion, bash parses the command into two parts, echo and $variable . After that, it performs variable expansion, expanding $variable to one|name . In this case, one|name is a string; | is part of the string and cannot be treated as a pipe token (the token recognition phase is already over by then). The only way it can become special is if the IFS variable contains | ; then bash will use | as a delimiter to perform field splitting: $ variable='one|name'$ IFS='|'$ echo $variableone name | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37803/"
]
} |
177,943 | CentOS Linux release 7.0.1406 (Core) / Linux 3.10.0-123.13.2.el7.x86_64 Last week, I noticed that when I tried to restart, there was an option to Install Updates & Restart . I do not recall manually installing any updates. Because this computer is for work, I would rather not upgrade software where a previous version is crucial for development... Or somehow make a mistake and take a day to fix it. PS: If needed, how do I rollback to a point before Update A was installed? | I found out that in Centos 7 yum-cron has nothing to do with the "Install Updates & Restart" prompt. I don't need or want automatic updates too. After some tricky research I discovered this feature is provided by a gnome package "packagekit". Three solutions: uninstall packagekit altogether (my favourite) disable packagekit from running (see systemctl) find PackageKit.conf (in /etc/PackageKit/ on my system) find WritePreparedUpdates= in the file (last line on my system) set WritePreparedUpdates=false restart in all three cases (just to be on the safe side...) More at: http://www.itsprite.com/linuxhow-to-disable-packagekit-on-centos-fedora-or-rhel/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
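For solution 2 ("disable packagekit from running"), a possible systemd sketch; the unit name packagekit.service is what Fedora/CentOS 7 normally use, so check it with systemctl list-units first:
systemctl stop packagekit.service
systemctl mask packagekit.service    # masking keeps it from being started again, even as a dependency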
177,957 | I want to store the value of 2^500 in the variable DELTA . I'm doing export DELTA=$(echo "scale=2; 2^500" | bc) but this does not set DELTA to 3273390607896141870013189696827599152216642046043064789483291368096133796404674554883270092325904157150886684127560071009217256545885393053328527589376 . Instead, it sets it to 32733906078961418700131896968275991522166420460430647894832913680961\33796404674554883270092325904157150886684127560071009217256545885393\053328527589376 I tried the answers in this question (3 years old), using export DELTA=$(echo "scale=2; 2^500" | bc | tr '\n' ' ') or export DELTA=$(echo "scale=2; print 2^500" | bc | tr '\n' ' ') but none of them work for setting the variable, only to echo it. Any idea? | echo "scale=2; 2^500" | bc | tr -d '\n\\' Output: 3273390607896141870013189696827599152216642046043064789483291368096133796404674554883270092325904157150886684127560071009217256545885393053328527589376 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36325/"
]
} |
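Wrapping the same pipeline back into the original assignment from the question (same tr invocation as in the answer):
export DELTA=$(echo "2^500" | bc | tr -d '\n\\')
echo "$DELTA"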
177,976 | Is it possible to use terminal vim with xdg-open? I don't have a GUI text editor because I only use vim through the terminal. (I don't care very much for gvim either.) Is it possible to tell xdg-open to open a terminal, then open vim with the selected file? Thanks. | In either your .bashrc or .zshrc, depending on whether you use bash or zsh respectively, export these two environment variables: export EDITOR=vimexport VISUAL=vim Additionally, you might want to associate vim with the mimetype of text files: xdg-mime default vim.desktop text/plain Now you'll have to create a vim.desktop file in /usr/share/applications , which should execute the terminal emulator you want, opening vim. A minimal example of such a .desktop file is sketched after this entry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/177976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58523/"
]
} |
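A minimal, untested sketch of what that vim.desktop file could look like; with Terminal=true the desktop environment launches its configured terminal emulator for you:
[Desktop Entry]
Type=Application
Name=Vim
Exec=vim %F
Terminal=true
MimeType=text/plain;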
177,998 | I'm using Debian Jessie (testing). I have a bluetooth mouse (Microsoft Sculpt Comfort) and I can pair it and use it ok, but after some time of inactivity (around 10 minutes) it stops working, I have to manually touch the set discoverable button on the mouse and re-pair it on the command line. The same mouse I tried on OS X and it works, so it's not a hardware issue Kernel 3.14.12-1 (2014-07-11) I pair the mouse with this command: sudo hidd --connect 30:59:B7:72:A5:A7 When paired correctly, this is the /var/log/syslog output Jan 7 15:22:42 desktop hidd: New HID device 30:59:B7:72:A5:A7 (Microsoft Bluetooth Mouse )Jan 7 15:22:42 desktop kernel: [103877.102083] hid-generic 0005:045E:07A2.0009: unknown main item tag 0x0Jan 7 15:22:42 desktop kernel: [103877.102481] input: Microsoft Bluetooth Mouse as /devices/pci0000:00/0000:00:02.0/usb2/2-3/2-3:1.0/bluetooth/hci0/hci0:42/0005:045E:07A2.0009/input/input51Jan 7 15:22:42 desktop kernel: [103877.102884] hid-generic 0005:045E:07A2.0009: input,hidraw3: BLUETOOTH HID v1.29 Mouse [Microsoft Bluetooth Mouse ] on 00:15:83:c8:52:19 After some idle time, this is printed on the same log file: Jan 7 15:34:34 desktop acpid: input device has been disconnected, fd 20 If I click a mouse button or move it, this gets printed: Jan 7 15:49:55 desktop bluetoothd[650]: Refusing input device connect: No such file or directory (2)Jan 7 15:49:56 desktop bluetoothd[650]: Refusing connection from 30:59:B7:72:A5:A7: unknown device Which seems to indicate that the mouse is still working and trying to tell the OS to re-connect, but it cannot. This is the udevadm info -p response: P: /devices/pci0000:00/0000:00:02.0/usb2/2-3/2-3:1.0/bluetooth/hci0/hci0:42/0005:045E:07A2.0004/input/input22E: ABS=100000000E: DEVPATH=/devices/pci0000:00/0000:00:02.0/usb2/2-3/2-3:1.0/bluetooth/hci0/hci0:42/0005:045E:07A2.0004/input/input22E: EV=10001fE: ID_FOR_SEAT=input-pci-0000_00_02_0-usb-0_3_1_0E: ID_INPUT=1E: ID_INPUT_KEY=1E: ID_INPUT_KEYBOARD=1E: ID_INPUT_MOUSE=1E: ID_PATH=pci-0000:00:02.0-usb-0:3:1.0E: ID_PATH_TAG=pci-0000_00_02_0-usb-0_3_1_0E: KEY=4837fff072ff32d bf54444600000000 1f0001 30f908b17c007 ffe77bfad9415fff febeffdff3cfffff fffffffffffffffeE: MODALIAS=input:b0005v045Ep07A2e0129-e0,1,2,3,4,14, k71,72,73,74,75,77,79,7A,7B,7C,7D,7E,7F,80,81,82,83,84,85,86,87,88,89,8A,8B,8C,8E,90,96,98,9B,9C,9E,9F,A1,A3,A4,A5,A6,A7,A8,A9,AB,AC,AD,AE,B0, B1,B2,B5,B6,B7,B8,B9,BA,BB,BC,BD,BE,BF,C0,C1,C2,CE,CF,D0,D1,D2,D4,D8,D9,DB,DF,E4,E7,E8,E9,EA,EB,F0,F1,100,110,111,112,113,114,161,162,166,16A,1 6E,172,174,176,178,179,17A,17B,17C,17D,17F,180,182,183,185,188,189,18C,18D,18E,18F,190,191,192,193,195,198,199,19A,1A0,1A1,1A2,1A3,1A4,1A5,1A6, 1A7,1A8,1A9,1AA,1AB,1AC,1AD,1AE,1B0,1B1,1B7,1BA,r0,1,6,7,8,9,a20,m4,lsfwE: MSC=10E: NAME="Microsoft Bluetooth Mouse "E: PHYS="00:15:83:c8:52:19"E: PRODUCT=5/45e/7a2/129E: PROP=0E: REL=3c3E: SUBSYSTEM=inputE: TAGS=:seat:E: UNIQ="30:59:b7:72:a5:a7"E: USEC_INITIALIZED=55796705 | There are 3 solutions for this problem. Maybe even combining 2 of them could fix your issue. Solution 1 Edit the file /etc/bluetooth/input.conf and set the parameter IdleTimeout=0 inside the [General] block. 
root@nwdesktop:~# vim /etc/bluetooth/input.conf# Configuration file for the input service# This section contains options which are not specific to any# particular interface[General]# Set idle timeout (in minutes) before the connection will# be disconnect (defaults to 0 for no timeout)IdleTimeout=0 Restart the bluetooth service: root@nwdesktop:~# /etc/init.d/bluetooth restart * Stopping bluetooth [ OK ] * Starting bluetooth [ OK ] This will prevent disconnections due to timeout from your bluetooth mice and keyboards. Solution 2 Create an udev rule that will avoid your mouse to autosuspend root@nwdesktop:~# vi /etc/udev/rules.d/91-local.rulesACTION=="add", SUBSYSTEM=="bluetooth", ATTR{product}=="Microsoft Bluetooth Mouse ", ATTR{power/control}="on"root@nwdesktop:~# # udevadm control --reload-rules Solution 3 This one does not makes me proud, but... Create a script with your hidd connect command: user@nwdesktop:~# vi /home/user/recconect.sh#!/bin/bashsudo hidd --connect 30:59:B7:72:A5:A7 Now, add to your crontab: root@nwdesktop:~# vi /etc/crontab*/10 * * * * root /home/user/recconect.sh Cheers. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/177998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97041/"
]
} |
178,032 | When I issue cat , the terminal hangs waiting for stdin input. However, when less is issued, I get Missing filename ("less --help" for help) . It is known that both less and cat accepts stdin input. What is the difference? How is this reflected in the man pages? | less runs the following code when it's not given any filename arguments: if (isatty(fd0)){ error("Missing filename (\"less --help\" for help)", NULL_PARG); quit(QUIT_OK);}return (edit("-")); It's complaining when standard input is a terminal. If standard input is an ordinary file or pipe, it's OK with that. It presumably does this because it needs to read responses from the terminal at the end of each page, and there'd be no way to distinguish the data that is being paged and the responses. This isn't mentioned in the man page. Maybe it should be. cat doesn't page its output, and doesn't read responses from the terminal. It doesn't have any restrictions as far as stdin being a terminal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/178032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74615/"
]
} |
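The behaviour described above is easy to confirm from a shell (the file name is just an example):
less /etc/hostname     # filename given: works even though stdin is a terminal
echo hello | less      # stdin is a pipe, no filename: works, pages the piped data
less                   # stdin is a terminal and no filename: "Missing filename" error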
178,051 | I have this folder structure: ├── foo1│   ├── bar1.txt│   └── bar2.txt├── foo2│   ├── bar3.txt│   └── bar4 with a space.txt└── foo3    └── qux1        ├── bar5.txt        └── bar6.txt that I would like to flatten into this, with an underscore between each folder level: ├── foo1_bar1.txt├── foo1_bar2.txt├── foo2_bar3.txt├── foo2_bar4 with a space.txt├── foo3_qux1.bar6.txt└── foo3_qux1_bar5.txt I've looked around and I haven't found any solution that works, mostly I think because my problem has two particularities: there might be more than one folder level inside the root one and also because some files might have spaces. Any idea how to accomplish this in bash? Thanks! Edit : Running glenn jackman's proposed answer I get this: There are two underscores for the first level folder. Any idea how to either avoid this or just rename it so that there is just one underscore? Thanks. | find */ -type f -exec bash -c 'file=${1#./}; echo mv "$file" "${file//\//_}"' _ '{}' \; remove echo if you're satisfied it's working. Don't worry that the echo'ed commands don't show quotes, the script will handle files with spaces properly. If you want to remove the now empty subdirectories: find */ -depth -type d -exec echo rmdir '{}' \; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/178051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98453/"
]
} |
178,069 | How do I specify command arguments in sudoers? As a background, aws command is actually a gateway to a whole bunch of sub-systems and I want to restrict the user to only run aws s3 cp ...any other args... When I try the following in /etc/sudoers Cmnd_Alias AWSS3_CMD = /usr/local/bin/aws s3 cp, /usr/local/aws/bin/aws s3 cpgbt1 ALL=(gbt-ops) NOPASSWD: AWSS3_CMD The shell unfortunately prompts for password $ sudo -u gbt-ops aws s3 cp helloworld s3://my-bucket/hw.1gbt-ops's password: If I remove the command args in Cmnd_Alias, then it flows as desired (without password prompt), but the authorizations are way too broad. So, what is the right way of restricting to only certain types of command invocations. Cmnd_Alias AWSS3_CMD = /usr/local/bin/aws, /usr/local/aws/bin/aws Then $ sudo -u gbt-ops aws s3 cp helloworld s3://my-bucket/hw.1...happy Thanks a lot. | You haven't used any wildcards, but have provided two arguments. Therefore sudo looks for commands exactly as written (excepting path-lookup) (from man 5 sudoers ): If a Cmnd has associated command line arguments, then the arguments in the Cmnd must match exactly those given by the user on the command line (or match the wildcards if there are any). Try something like: Cmnd_Alias AWSS3_CMD = /usr/local/bin/aws s3 cp *, /usr/local/aws/bin/aws s3 cp * Note that: Wildcards in command line arguments should be used with care. Because command line arguments are matched as a single, concatenated string, a wildcard such as '?' or '*' can match multiple words. So, only one wildcard is needed per command. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/178069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91672/"
]
} |
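After editing /etc/sudoers it can be worth checking both the file's syntax and what sudo will actually allow the user to run (as root):
visudo -c           # syntax-check the sudoers file(s)
sudo -l -U gbt1     # list the commands gbt1 may run, with the Cmnd_Alias expanded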
178,070 | How do I paste from the PRIMARY selection (eg. mouse-selected text) with a keyboard shortcut? Shift+Insert inconsistently pastes from PRIMARY or CLIPBOARD, depending on the application. Background: Ctrl+C copies selected text to CLIPBOARD while mouse-selection copies to PRIMARY. Paste from CLIPBOARD with Ctrl+V and paste from PRIMARY with mouse-middle-click . In a terminal emulator (gnome-terminal), paste from CLIPBOARD with Ctrl+Shift+V . (Paste from PRIMARY with mouse-middle-click still.) I want to paste from PRIMARY with a keyboard shortcut. In gnome-terminal, this is Shift+Insert , but in gedit and Firefox, Shift+Insert pastes from CLIPBOARD. I want a shortcut that consistently pastes from CLIPBOARD and a different short cut that consistently pastes from PRIMARY. I'm running Ubuntu 14.04 with xmonad and Firefox 34.0 | All the apps you've mentioned are gtk+ apps so it's quite easy to answer Why ... Because in all gtk+ apps ( except one ), Shift + Insert pastes from CLIPBOARD - i.e. it's equivalent to Ctrl + V . The shortcut is hardcoded in gtkentry.c (line 2022) and gtktextview.c (line 1819): gtk_binding_entry_add_signal (binding_set, GDK_KEY_Insert, GDK_SHIFT_MASK, "paste-clipboard", 0); It is also documented in the GTK+ 3 Reference Manual under GtkEntry : The βpaste-clipboardβ signalvoiduser_function (GtkEntry *entry, gpointer user_data)The ::paste-clipboard signal is a keybinding signal which gets emittedto paste the contents of the clipboard into the text view.The default bindings for this signal are Ctrl-v and Shift-Insert. As far as I know this was done for consistency with other DE's (see KDE 's Qt key bindings in QTextEdit Class ) and Windows OS 1 . The only exception is gnome-terminal . After long debates, the devs have decided (for consistency with other terminals) that, in gnome-terminal , Shift + Insert should paste from PRIMARY and Ctrl + Shift + V should paste from CLIPBOARD (although you have the options to customize some shortcuts). As to How do you paste selection with a keyboard shortcut... there's no straightforward way. The easiest way is to assign a shortcut to a script that runs xdotool click 2 (simulates clicking the middle-mouse button). While this works (and it should work with all or most DE's and toolkits), it only works if the mouse cursor is actually over the text entry box, otherwise it fails. Another relatively easy way is via Gnome Accessibility, if it's available on your system. It also requires the presence of a numpad. Go to Universal Access >> Pointing & Clicking and enable Mouse Keys . Make sure NumLock is off. You can then use the numpad keys to move the cursor and click. To simulate a middle-mouse button click, press (and release) * (asterisk) then press 5 (here's a short guide ). This solution seems to always work in a gtk+ environment. The downside is that it requires Gnome Accessibility and a numpad. Also, you cannot customize the shortcut. An interesting solution was proposed on gnome-bugzilla (bug 643391) . (Update 2018: issue has now been moved here .) It requires patching some source files and setting configuration options in ~/.config/gtk-3.0/gtk.css (or ~/.gtkrc-2.0 for gtk+ 2 apps). I haven't tried it personally but the feedback is positive. Ideally, you would patch the source files and define a "paste-selection" signal then bind Shift + Insert to "paste-selection" instead of "paste-clipboard" . Andy's code (attached in the bug report linked above) could serve as a guide on how to do that. 
Even then, it would only affect gtk+ apps (I'm not a KDE/Qt guy so I have no idea how to alter Qt apps behavior). 1: (not to mention IBM's CUA) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/178070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18510/"
]
} |
178,077 | I understand what mounting is in Linux, and I understand device files. However I do not understand WHY we need to mount. For example, as explained in the accepted answer of this question , using this command: mount /dev/cdrom /media/cdrom we are mounting the CDROM device to /media/cdrom and eventually are able to access the files of CDROM with the following command ls /media/cdrom which will list the content of the CDROM. Why not skip mounting altogether, and do the following? ls /dev/cdrom And have the content of the CDROM Listed. I expect one of the answers to be: " This is how Linux is designed ". But if so, then why was it designed that way? Why not access the /dev/cdrom directory directly? What's the real purpose of mounting? | One reason is that block level access is a bit lower level than ls would be able to work with. /dev/cdrom , or dev/sda1 may be your CD ROM drive and partition 1 of your hard drive, respectively, but they aren't implementing ISO 9660 / ext4 - they're just RAW pointers to those devices known as Device Files . One of the things mount determines is HOW to use that raw access - what file system logic / driver / kernel modules are going to manage the reads/writes, or translate ls /mnt/cdrom into which blocks need to be read, and how to interpret the content of those blocks into things like file.txt . Other times, this low level access can be good enough; I've just read from and written to serial ports, usb devices, tty terminals, and other relatively simple devices. I would never try to manually read/write from /dev/sda1 to, say, edit a text file, because I'd basically have to reimplement ext4 logic, which may include, among other things: look up the file inodes, find the storage blocks, read the full block, make my change(s), write the full blocks, then update the inode (perhaps), or instead write this all to the journal - much too difficult. One way to see this for yourself is just to try it: [root@ArchHP dev]# cd /dev/sda1bash: cd: /dev/sda1: Not a directory /dev is a directory, and you can cd and ls all you like. /dev/sda1 is not a directory; it's a special type of file that is what the kernel offers up as a 'handle' to that device. See the wikipedia entry on Device Files for a more in depth treatment. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/178077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95326/"
]
} |
178,078 | I'm doing a data transfer, the old file system relies deeply on a directory which now is on different path. This is a git directory which stores code online. I have no rights to move it or rename it. So what I can do is rsync this directory to the same old path. But the directory name also changed. Is there a easy way that I can rsync a directory to a target directory with a different name? | If you want to use rsync to recursively make the dest directory an exact copy of the src directory: rsync -a src/ dest The rsync man page explains how this works: A trailing slash on the source [...] avoid[s] creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name" [...] | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/178078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
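A small sketch of the trailing-slash behaviour described in the answer above (directory names are made up):
rsync -a src/ dest     # dest ends up holding src's contents directly:  dest/file.txt
rsync -a src  dest     # dest ends up holding the directory by name:    dest/src/file.txt
# So to mirror an old directory under a new name, sync its contents into the new path:
rsync -a /data/old_name/ /data/new_name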
178,093 | Question: how can we report the REAL memory usage (without the cache!) using nmon or vmstat or svmon on AIX 6? nmon: vmstat: svmon: Like on Linux, we can use the free command, but it's not available in AIX:
[user@notebook ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:          7797       4344       3453          0        219       2745
-/+ buffers/cache:       1379       6417
Swap:         2047          0       2047
[user@notebook ~]$ free -m | grep cache: | awk '{print $3}'
1379
[user@notebook ~]$ | Short version: look at the in use clnt + pers pages in the svmon -G output (unit is 4k pages) if you want to know all file cache, or look at vmstat -v and look at "file pages" for file cache excluding executables (same unit). You should check out the following article if you want a good overview of what's going on: Overview of AIX page replacement . For an extremely short summary, memory in AIX is classified in two ways: Working memory vs permanent memory. Working memory is process (stack, heap, shared memory) and kernel memory. If that sort of memory needs to be paged out, it goes to swap. Permanent memory is file cache. If that needs to be paged out, it goes back out to the filesystem where it came from (for dirty pages, clean pages just get recycled). This is subdivided into non-client (or persistent) pages for JFS filesystems, and client pages for JFS2, NFS, and possibly others. Computational vs non-computational pages. Computational pages are again process and kernel data, plus process text data (i.e. pages that cache the executable/code). Non-computational are the other ones: file cache that's not executable (or shared library). svmon -G (btw, svmon -G -O unit=MB is a bit friendlier) gives you the work versus permanent pages. The work column is, well, work memory. You get the permanent memory by adding up the pers (JFS) and clnt (JFS2) columns. In your case, you've got about 730MB of permanent pages, that are backed by your filesystems (186151*4k pages). Now the topas top-right "widget" FileSystemCache (numperm) shows something slightly different, and you'd get that same data with vmstat -v : that's only non-computational permanent pages. i.e. same thing as above, but excluding pages for executables. In your case, that's about 350MB (2.2% of 16G). Either way, that's really not much cache. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/178093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98473/"
]
} |
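To put the numbers from the answer above on the command line (a sketch; the awk field position assumes vmstat -v prints the value before the "file pages" label, and both tools count 4 KiB pages):
svmon -G -O unit=MB                    # work vs pers+clnt columns, already converted to MB
vmstat -v | grep 'file pages'          # non-computational permanent pages (file cache minus executables)
vmstat -v | awk '/file pages/ {printf "%.0f MB file cache\n", $1*4/1024}'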
178,118 | I have a dedicated server with one network card in it. However, I got two IP addresses. When I use the simple command sudo ip addr add 188.40.90.88 dev eth0 it fails to see it as a separate IP. I've googled along trying to find a fix, but I can't really find out what packages I need to set up a switch, and how to do it. My dedicated server runs with the following specifications: 16GB DDR3 RAM (Intel i7) OS: ubuntu 14.01 These are the two most important ones, I believe; I've got no idea what network card my dedicated server has, but I know it supports IEEE 802.1q , which I found out on the Ubuntu website. | I'm not quite sure exactly what you're trying to accomplish. I am assuming that your question could be re-titled "How to set up two IPs on a single network interface." Each network interface on your machine is given an identifier. Typically, you start with eth0 and work your way up (eth1, eth2, eth3). These are all physically different network cards. You can also have virtual cards on top of each of your physical cards. This is how you would set up multiple IPs on the same NIC. To set this up, you can use the following example, changing the addresses to suit your needs ( /etc/network/interfaces ):
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0 eth0:0
allow-hotplug eth0 eth0:0
#eth0
iface eth0 inet static
address 123.123.123.123
netmask 255.255.255.0
gateway 123.123.123.1
#eth0:0 (LAN)
iface eth0:0 inet static
address 212.212.212.212
netmask 255.255.128.0
gateway 212.212.212.1
The tricky part could be the netmask. Try 255.255.255.0 if you aren't sure. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/178118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85043/"
]
} |
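For a quick test before committing the change to /etc/network/interfaces, a second address can usually be added on the fly (a sketch; the /24 prefix is an assumption — use the netmask your provider gave you):
sudo ip addr add 188.40.90.88/24 dev eth0
ip addr show eth0                            # both addresses should now be listed
sudo ip addr del 188.40.90.88/24 dev eth0    # undo the test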
178,127 | If you tar a directory recursively, it just uses the order from the os's readdir . But in some cases it's nice to tar the files sorted. What's a good way to tar a directory sorted alphabetically? Note, for the purpose of this question, gnu-tar on a typical Linux system is fine. | For a GNU tar : --sort=ORDER Specify the directory sorting order when reading directories. ORDER may be one of the following:`none' No directory sorting is performed. This is the default.`name' Sort the directory entries on name. The operating system may deliver directory entries in a more or less random order, and sorting them makes archive creation reproducible.`inode' Sort the directory entries on inode number. Sorting directories on inode number may reduce the amount of disk seek operations when creating an archive for some file systems. You'll probably also want to look at --preserve-order . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/178127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63928/"
]
} |
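A sketch of both routes (archive and directory names are made up; --sort needs a reasonably recent GNU tar):
tar --sort=name -czf archive.tar.gz somedir/
# On older GNU tars without --sort, feeding a pre-sorted file list works too:
find somedir -print0 | sort -z | tar --null --no-recursion -T - -czf archive.tar.gz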
178,145 | I am using LXDE (Openbox). Every time I need to re-size my window, I have to position my mouse carefully to grab the thin window frame until my mouse cursor changes and I can re-size the window. I remember seeing in other window managers a "resize corner" in the bottom right corner of the window, which can easily be grabbed to resize the window diagonally. Does anything like this exist in LXDE (Openbox) ?How can I add it ?Can it be configured in ~/.local/share/themes/theme/openbox-3/themerc ? | Yes, something like it currently exists, but it is not the same resize grip in the corner that is used in gtk themes. Something like that would need to be coded into openbox. Add a Handle In your themerc for the theme you are using, you can set window.handle.width to a number of pixels and handle will appear below the window 1 . The handle includes diagonal resizing tools in the left and right corners. Unfortunately, this method does take up a bit more real estate than the gtk style corner grip. For example, in your themerc, this would create a 6 pixel wide handle: window.handle.width: 6 Specifies the size of the window handle. The window handle is the piece of decorations on the bottom of windows. A value of 0 means that no handle is shown. To activate changes made to the theme, run openbox --reconfigure . Change Border Width You may also change the border.width setting in your themerc to make the window border wider. This increases the area you can drag, but also increases the visual border of the window, so again you are sacrificing screen real estate which sucks. Drag with Alt + Right-Click You can position the cursor anywhere over the window, hold the Alt key and hold the right mouse button to resize the nearest window edge. This includes corners. The only downside here is that this requires a two hand operation. You may be able to create a custom key binding that is preferred. The mouse binding for resizing is in ~/.config/openbox/rc.xml or whichever the appropriate rc file is in the ~/.config/openbox/ directory and looks like this: <mousebind button="A-Right" action="Drag"> <action name="Resize"/> </mousebind> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/178145",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
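If a keyboard shortcut is acceptable, Openbox can also start an interactive resize of the focused window. A sketch for the <keyboard> section of ~/.config/openbox/rc.xml (the Super+R choice is arbitrary), followed by openbox --reconfigure:
<keybind key="W-r">
  <action name="Resize"/>
</keybind>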
178,160 | I have an encrypted share folder on my synology NAS DS413 (which uses ecryptfs). I can manually mount the encrypted folder and read the decrypted files without issue, using synologies GUI. For some reason, I have never been able to mount the encrypted folder using my passphrase . But I can always do it by using the private key generated during ecryptfs setup. So I have since been doing some research on decrypting the encrypted files without a synology (for example if this thing catches fire or is stolen and I need to restore from backup). I've read several threads and howto's on decrypting synology/ecryptfs encrypted shares using linux and encryptfs-utils. But the howto always tells you to provide the passphrase and never mention the use of the key for decryption. So my question is how do I decrypt using the key (which works to mount and decrypt with synology's software)? The key I have is 80 bytes and is binary. The first 16 bytes are integers only and the remaining bytes appear to be random hex. Thanks for any tips! | Short answer: Use the passphrase $1$5YN01o9y to reveal your actual passphrase from the keyfile with ecryptfs-unwrap-passphrase (the backslashes escape the $ letters): printf "%s" "\$1\$5YN01o9y" | ecryptfs-unwrap-passphrase keyfile.key - Then use your passphrase with one of the instructions you probably already know, like AlexP's answer here or Robert Castle's article . Or do it all in a single line: mount -t ecryptfs -o key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=no,ecryptfs_enable_filename_crypto=yes,passwd=$(printf "%s" "\$1\$5YN01o9y" | ecryptfs-unwrap-passphrase /path/to/keyfile.key -) /path/to/encrypted/folder /path/to/mountpoint I just tested the whole decryption process with a keyfile and can confirm its working: Created a new encrypted shared folder in DSM 6.2 and downloaded the keyfile. Shut down the NAS, removed a drive, connected it to a Ubuntu x64 18.04.2 machine and mounted the raid and volume group there. Installed ecryptfs-utils and successfully got access to the decrypted data using the mount command mentioned above with the downloaded keyfile. Credits: I found that $1$5YN01o9y -passphrase in a post in a German Synology forum . The user that probably actually found out the secret in 2014 is known there as Bastian (b666m) . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/178160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98532/"
]
} |
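A rough outline of the "remove a drive and mount it elsewhere" recovery path mentioned in the answer above (a sketch only; the volume group and logical volume names are assumptions — check mdadm --detail --scan, vgs and lvs on your own system):
sudo mdadm --assemble --scan               # assemble the Synology RAID array(s) from the member disk
sudo vgchange -ay                          # activate the LVM volume group, if the volume uses LVM
sudo mount -o ro /dev/vg1000/lv /mnt/syno  # mount the data volume (read-only for safety)
# then point the ecryptfs mount command above at the encrypted share folder inside /mnt/syno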
178,162 | Can somebody explain to me why a number with a leading 0 gives this funny behaviour? #!/bin/bashNUM=016 SUM=$((NUM + 1)) echo "$NUM + 1 = $SUM" Will print: 016 + 1 = 15 | The misunderstanding is that the numbers don't mean what you expect. A leading zero denotes a number with base 8. I.e. 016 is the same as 8#16 . If you want to keep the leading zero then you need 10#016 . > num=016> echo $((num))14> echo $((10#$num))16 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/178162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95215/"
]
} |
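Applied to the original script, forcing base 10 (and re-padding the output if the leading zero is meant to be kept) might look like this sketch:
NUM=016
SUM=$((10#$NUM + 1))                      # 17, not 15
printf '%s + 1 = %03d\n' "$NUM" "$SUM"    # prints: 016 + 1 = 017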
178,171 | I have a file that contains data that gets updated over time (mydata). The file looks like this: 1 2 3 With this file, I want to do this (basically handle each number as a separate parameter): myscript.sh 1 2 3 Since the data is not static but instead updated over time, I want to run this command: myscript.sh "$(cat mydata)" But instead I see ./myscript.sh: line 1: 1: command not found What should I do? | "$(cat mydata)" evaluates to a string which contains the content of the file mydata minus any trailing newline. What you want is the list of whitespace-separated words in the file. So, for once, use the command substitution outside of double quotes: myscript.sh $(cat mydata) For extra robustness, turn off globbing, so that if a column in mydata contains one of the characters \[*? it isn't interpreted as a file name pattern. You may also want to set IFS to contain the word separator characters, but the default (ASCII whitespace) should be exactly what you need. (set -f; myscript.sh $(cat mydata)) Alternatively, you could use the read builtin. This reads the first line and ignores the rest (which you could do above by replacing cat by head -n 1 ). read -r one two three rest <mydatamyscript.sh "$one" "$two" "$three" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/178171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6001/"
]
} |
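A quick demonstration of the word splitting involved (myscript.sh here is a stand-in that merely echoes its first three arguments):
printf '1 2 3\n' > mydata
printf '#!/bin/bash\necho "1st=$1 2nd=$2 3rd=$3"\n' > myscript.sh && chmod +x myscript.sh
./myscript.sh $(cat mydata)       # 1st=1 2nd=2 3rd=3    (unquoted: three separate arguments)
./myscript.sh "$(cat mydata)"     # 1st=1 2 3 2nd= 3rd=  (quoted: one single argument)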
178,187 | I'm trying to automatically mount a network drive at startup by editing /etc/fstab , but it doesn't work. If I execute this line, sudo mount.cifs //192.168.0.67/test /home/pi/test -o username=myname,password=123 it works great. But I don't know how to properly write the same in /etc/fstab . | Each line in the /etc/fstab file contains the following fields separated by spaces or tabs: file_system dir type options dump pass A typical mount point added in /etc/fstab would look like the following: # <file system> <dir> <type> <options> <dump> <pass> /dev/sda1 / ext4 defaults,noatime 0 1 You can't simply add a mount statement in the file. Add this line to the end of your /etc/fstab file: //192.168.0.67/test /home/pi/test cifs username=myname,password=123,iocharset=utf8,sec=ntlm 0 0 After /etc/fstab is edited, you can test by mounting the filesystem with mount -a which will check fstab and attempt to mount everything that is present. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/178187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98557/"
]
} |
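Optionally, the plaintext password can be kept out of the world-readable /etc/fstab by using a root-only credentials file, which mount.cifs supports via the credentials= option (a sketch; the file path is an assumption):
# /root/.smbcred  (chmod 600), containing:
#   username=myname
#   password=123
//192.168.0.67/test /home/pi/test cifs credentials=/root/.smbcred,iocharset=utf8,sec=ntlm 0 0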
178,189 | Currently I create a copy of my log files like this: # tar -cvzf /var/www/vhosts/example.com/httpdocs/myfiles.tar.gz /var/www/vhosts/example.com/logs/access_log /var/www/vhosts/example.com/httpdocs/app/tmp/logs/error.log /var/log/mysqld.log /var/log/messages /var/log/httpd/access_log /var/log/httpd/suexec_log /var/log/httpd/error_log /var/log/sw-cp-server/error_log /usr/local/psa/var/log/xferlog /etc/php.ini But this creates a tar file with the directory structure. To avoid the folder structure, it seems like I should cd for each file, so all files will be saved to the tar file without subfolders. (Also, there is a problem with tar.gz files: the tar command doesn't permit updating an archive file if the file is compressed.) But in this case there will be multiple files with the same name, for example 2 files with the name access_log. So I need to change the destination log file name. For example, /var/www/vhosts/example.com/logs/access_log to -var-www-vhosts-example.com-logs-access_log and /var/log/httpd/access_log to -var-log-httpd-access_log Is it possible to archive these files without the directory structure and with the file name changes? Note that the files exist in different folders. | Each line in the /etc/fstab file contains the following fields separated by spaces or tabs: file_system dir type options dump pass A typical mount point added in /etc/fstab would look like the following: # <file system> <dir> <type> <options> <dump> <pass>/dev/sda1 / ext4 defaults,noatime 0 1 You can't simply add a mount statement in the file. Add this line to the end of your /etc/fstab file: //192.168.0.67/test /home/pi/test cifs username=myname,password=123,iocharset=utf8,sec=ntlm 0 0 After the /etc/fstab is edited you can test by mounting the filesystem with mount -a which will check fstab and attempt to mount everything that is present. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/178189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31892/"
]
} |
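For the archiving question above, one possible approach (a sketch, assuming GNU tar; the file list is shortened here) is to let --transform rewrite the stored member names, which both flattens the directory structure and keeps names from different folders unique. tar strips the leading "/" itself, so the stored names come out as var-log-messages and so on rather than with a leading dash:
tar -cvzf /var/www/vhosts/example.com/httpdocs/myfiles.tar.gz \
    --show-transformed-names --transform 's|/|-|g' \
    /var/log/messages /var/log/httpd/access_log /etc/php.ini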
178,217 | I'm struggling with escaping and expression evaluation. I am trying to run a command inside a find ... -exec COMMAND structure, where COMMAND operates on one file and outputs to the stdout. To use this COMMAND normally on the command line, I simply redirect to a new file, like this: COMMAND inputfilename.md > outputfilename.md.conf This works fine. However, if I want to recursively loop over ALL the files in a directory and its subdirectory, using find -exec , I can't seem to figure out how to separate the file name and the path so that I can redirect properly. For example, this produces an error: find ./ *.md -not -path './/.git/*' -exec COMMAND '{} > ~/wiki/newdirectory/{}.cong' \; because {} expands the entire file path. I've tried various combinations of basename and dirname , but I can't seem to get them to evaluate (instead, I just get the string 'basename' etc.). For example, this didn't work, because the text 'basename' just shows up find ./ *.md -not -path './/.git/*' -exec COMMAND '{} > ~/wiki/newdirectory/''basename {}''.conf' \; (returns this error: 'No such file or directory - .//newdirectory/README.md.conf > ~/wiki/newdirectory/basename .//newdirectory/README.md.conf' ). Any help would be appreciated. To summarize, the objective is: Recursively iterate over all files matching *.md in a directory Execute COMMAND inputfile.md > ~/newdirectory/inputfile.md.conf where inputfile is the same string in each iteration. Thanks! | Whatever you do, you'll need to invoke a shell to perform the redirection of the command output to a file whose location depends on the find result. find ./ *.md -not -path './/.git/*' -exec sh -c 'COMMAND "$0" > ~/wiki/newdirectory/"${0##*/}.cong"' {} \; Don't substitute {} inside the shell script. This isn't supported on all systems, and even where it is, that would not work in general, since it would treat the file name as a piece of shell syntax, e.g. a file called ;rm -rf ~;.md would cause you to erase all your files. ${0##*/} uses pure string manipulation to obtain the base name of the file. You could also use $(basename -- "$0") . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/178217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98572/"
]
} |
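A variant of the same idea that matches the .md files with -name and batches many files into each shell invocation (COMMAND and the wiki path are placeholders carried over from the question):
find . -name '*.md' -not -path './.git/*' -exec sh -c '
    for f do
        COMMAND "$f" > ~/wiki/newdirectory/"${f##*/}.conf"
    done' sh {} +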
178,235 | In the cp manpage, it lists the -f/--force option as: if an existing destination file cannot be opened, remove it and try again For the --remove-destination option it says: remove each existing destination file before attempting to open it (contrast with --force) So, the former first checks if it can be opened and, if not, deletes it anyway, while the latter just bypasses that step. I combined each with the -i option, and in both cases, it indicates what the permissions of the files are if the file is write-protected. The latter would seem to be more efficient, especially if recursively copying/overwriting large directories, but why maintain both options? What's the advantage of checking something it's going to override anyway? | There's a distinction between the two (emphasis mine): if an existing destination file cannot be opened, remove it and try again remove each existing destination file before attempting to open it In the first case, if the file can be opened, cp will attempt to replace only the contents. cp is not going to remove the file unnecessarily. This will retain the permissions and ownerships of the original file unless you specify that they're to be copied too. The second case is useful when the contents can't be read (such as dangling symlinks ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/178235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94411/"
]
} |
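A small demonstration of that distinction, watching the destination's inode number (file names are made up):
echo old > dest; echo new > src
ls -i dest                                      # note dest's inode number
cp -f src dest && ls -i dest                    # same inode: dest was opened and overwritten in place
cp --remove-destination src dest && ls -i dest  # new inode: dest was unlinked and recreated first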