source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
252,350 | Passwordless SSH without a user directory? The .ssh folder should be stored in a user directory, as far as I understand. ServerA: Linux without /home/users
ServerB: Linux with /home/users
client: Linux/Mac etc... Cases: client password-less SSH to ServerB, no problem; client password-less SSH to ServerA, no problem; ServerA password-less SSH to ServerB, problem! If there are no actual user directories on ServerA, how can a public key exist for each user without a user directory? Or are there other ways to safely SSH to ServerB from ServerA? | The kill command is a very simple wrapper to the kill system call , which knows only about process IDs (PIDs). pkill and killall are also wrappers to the kill system call (actually, to the libc library which directly invokes the system call), but can determine the PIDs for you, based on things like the process name, the owner of the process, the session id, etc. How pkill and killall work can be seen using ltrace or strace on them. On Linux, they both read through the /proc filesystem, and for each pid (directory) found, traverse the path in a way that identifies a process by its name or other attributes. How this is done is, technically speaking, kernel and system specific. In general, they read from /proc/<PID>/stat which contains the command name as the 2nd field. pkill -f and pgrep examine the /cmdline entry for each PID's proc entry. pkill and pgrep use the readproc library routine, whereas killall does not. I couldn't say if there's a performance difference: you'll have to benchmark that on your own. A minimal sketch of this /proc walk follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/252350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148706/"
]
} |
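To make the /proc walk described in the answer above concrete, here is a minimal, hedged sketch of a toy pgrep in shell. It only looks at the command name in /proc/<PID>/stat and the full command line in /proc/<PID>/cmdline; the real pgrep/pkill also match on user, session, terminal and more, and handle edge cases this sketch ignores.
#!/bin/sh
# toy-pgrep.sh PATTERN -- print PIDs whose command name or command line contains PATTERN
pattern=$1
for dir in /proc/[0-9]*; do
    pid=${dir#/proc/}
    # 2nd field of /proc/<PID>/stat is the command name, in parentheses
    name=$(awk '{print $2}' "$dir/stat" 2>/dev/null)
    # /proc/<PID>/cmdline is NUL-separated; turn the NULs into spaces
    cmdline=$(tr '\0' ' ' < "$dir/cmdline" 2>/dev/null)
    case "$name $cmdline" in
        *"$pattern"*) echo "$pid" ;;
    esac
done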
252,368 | From my experience with modern programming and scripting languages, I believe most programmers are generally accustomed to referring to the first element of an array as index 0 (zero). I'm sure I've heard of languages other than zsh starting array indexing at 1 (one); that's okay, as it is equally convenient.
However, as the previously released and widely used shell scripting languages ksh and bash both use 0, why would someone choose to alter this common convention? There do not seem to be any substantial advantages to using 1 as the first index;
then, the only explanation I can think of for this feature, somewhat "exclusive" among shells, would be "they just did this to show off their cool shell a bit more". I don't know much about either zsh or its history, though, and there is a high chance my trivial theory about this does not make any sense. Is there an explanation for this? Or is it just out of personal taste? | Virtually all shell arrays (Bourne, csh, tcsh, fish, rc, es, yash) start at 1. ksh is the only exception that I know (bash just copied ksh). Most interpreted languages at the time (early 90s): awk , tcl at least, and tools typically used from the shell ( cut -f1-3 , head -n 3 , sort -k1,3 , cal 1 2015 , comm -1 ) start at 1. sed , ed , vi number their lines from 1... zsh takes the best of the Bourne shell and csh. The Bourne shell array $@ starts at 1. zsh is consistent with its handling of $@ (like in Bourne) or $argv (like in csh). See how confusing it is in ksh where ${@:0:1} does not give you the first positional parameter for instance. A shell is a user tool before being a programming language. It makes sense for most users to have the first element in $a[1] . It also means that the number of elements is the same as the last index (in zsh like in most other shells except ksh, arrays are not sparse). a[1] for the first element is consistent with a[-1] for the last. So IMO the question should rather be: Why did David Korn choose to make his arrays start at 0? About your: "However, as the previously released and widely used shell scripting languages ksh and bash both use 0" Note that while bash was released a few months before zsh indeed (June 1989 compared to December 1990), array support was added in their respective 2.0 versions, but for zsh that was released in 1991, while for bash it was released much later, in 1996. The first Unix shell to introduce arrays (unless you want to consider the 1970s Thompson shell with its $1 .. $2 positional parameters) was csh in the late 70s, whose indexes start at one. And its code was freely available, while ksh was proprietary and often not included by default on Unices (sold separately at a hefty price) until the late 80s. While ksh93 code was released as open source circa 2000, ksh88's, to this day, never was (though it's not too difficult to find ksh86a and ksh88d source code on archive.org today if you're interested in archaeology). A short demonstration of the indexing difference follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/252368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93585/"
]
} |
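To illustrate the indexing difference discussed above, a small demonstration (assuming reasonably recent zsh and bash/ksh builds; the array contents are arbitrary):
# zsh: first element at index 1, last at -1, ${#a} is the element count
zsh -c 'a=(x y z); echo $a[1] $a[-1] ${#a}'      # prints: x z 3
# bash (ksh-style arrays): first element at index 0
bash -c 'a=(x y z); echo ${a[0]} ${#a[@]}'       # prints: x 3
# positional parameters start at $1 in every shell
bash -c 'echo "$1"' _ first second               # prints: first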
252,517 | Can nmap list all hosts on the local network that have both SSH and HTTP open? To do so, I can run something like: nmap 192.168.1.1-254 -p22,80 --open However, this lists hosts that have ANY of the listed ports open, whereas I would like hosts that have ALL of the ports open. In addition, the output is quite verbose: # nmap 192.168.1.1-254 -p22,80 --open
Starting Nmap 6.47 ( http://nmap.org ) at 2015-12-31 10:14 EST
Nmap scan report for Wireless_Broadband_Router.home (192.168.1.1)
Host is up (0.0016s latency).
Not shown: 1 closed port
PORT STATE SERVICE
80/tcp open http
Nmap scan report for new-host-2.home (192.168.1.16)
Host is up (0.013s latency).
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
Nmap done: 254 IP addresses (7 hosts up) scanned in 3.78 seconds What I'm looking for is output simply like: 192.168.1.16 as the above host is the only one with ALL the ports open. I certainly can post-process the output, but I don't want to rely on the output format of nmap, I'd rather have nmap do it, if there is a way. | There is not a way to do that within Nmap, but your comment about not wanting "to rely on the output format of nmap" lets me point out that Nmap has two stable output formats for machine-readable parsing. The older one is Grepable output ( -oG ) , which works well for processing with perl, awk, and grep, but is missing some of the more advanced output (like NSE script output, port reasons, traceroute, etc.). The more complete format is XML output ( -oX ) , but it may be overkill for your purposes. You can either save these outputs to files with -oG , -oX , or -oA (both formats plus "normal" text output), or you can send either one straight to stdout: nmap 192.168.1.1-254 -p22,80 --open -oG - | awk '/22\/open.*80\/open/{print $2}' | {
"source": [
"https://unix.stackexchange.com/questions/252517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137535/"
]
} |
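As a hedged follow-up to the Nmap answer above: if you prefer the more complete XML output, something along these lines should list only hosts with both ports open. The XPath assumes Nmap's usual XML layout and that xmlstarlet is installed; adjust to taste.
nmap 192.168.1.1-254 -p22,80 --open -oX scan.xml
xmlstarlet sel -t \
  -m "//host[ports/port[@portid='22']/state[@state='open'] and ports/port[@portid='80']/state[@state='open']]" \
  -v "address/@addr" -n scan.xml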
252,593 | I would like to know how to delete a USB flash drive via the terminal if possible so data can't be recovered. | TL/DR: Make sure you get the right device name, ensure it's not mounted, and do as many random overwrites as you can afford. You can follow it by an erase command designed for flash hardware, if you are on a recent enough distribution. In these checks, always use the drive (like /dev/sdh ) and not the partition name (which would be /dev/sdh1 ) # dmesg|grep sdXX
[3600.000001] sd 6:0:0:0: [sdXX] 125106176 512-byte logical blocks: (64.0 GB/59.6 GiB)
# blkid|grep sdXX
/dev/sdXX1: PARTUUID="88a03bb2-ced8-4bb2-9883-0a51b4d460a8"
# df|grep /dev/sdXX
# shred -vzn8 /dev/sdXX
shred: /dev/sdXX: pass 1/9 (random)...
shred: /dev/sdXX: pass 1/9 (random)...46MiB/3.8GiB 1%
...
shred: /dev/sdXX: pass 9/9 (000000)...3.8GiB/3.8GiB 100%
# blkdiscard -s /dev/sdXX
blkdiscard: /dev/sdXX: BLKSECDISCARD ioctl failed: Operation not supported
# blkdiscard /dev/sdXX
blkdiscard: /dev/sdXX: BLKDISCARD ioctl failed: Operation not supported
# In theory, overwriting with zeros using dd is just fine. However, due to how the internals of a flash drive are built, if you use a single overwrite pass, there may be several layers of data hidden behind the actual blocks that are still storing leftover information. Typically a part of flash storage is faulty, and is marked so during manufacturing. There are also other bits that can go wrong (becoming unchangeable, unsettable, or unclearable); these parts must be marked faulty as well during the drive's lifetime. This information is stored in a reserved space, on the same chips as your data. This is one of the several reasons a 4GB thumb drive is not showing 2^32 bytes capacity. Flash storage is also internally organised in larger blocks, sometimes much larger than the filesystems working on the drive. A typical filesystem block size is 4KB, and the flash segments that can be erased in one go may range from 64KB to even several megabytes. These large blocks can only be erased as a whole, which resets all of the block to a known state (all 1s or all 0s). Afterwards a data write can alter any of the bits (change the default 1s into 0s where needed, or change the default 0s into 1s), but only once . To change any of the bits back into the default, all of the segment needs to be erased again! So, when you want to change a 4KB block (the filesystem is asked to change a single character in the middle of a file), the flash controller would need to read and buffer all 64KB of the old data, erase all of it, and write back the new contents. This would be very slow; erasing segments is the slowest operation. Also, a segment can only be erased a limited number of times (tens of thousands is typical), so if you make too many changes to a single file, that can quickly deteriorate the drive. But this is not how it's done. Intelligent flash controllers simply write the 4KB of new data elsewhere, and make a note to redirect reads to this 4KB of data in the middle of the old block. They need some more space, that we can't see, to store this information about redirects. They also try to make sure that they go through all the accessible segments to store data; this is called wear levelling . This means that typically old data is still on the drive somewhere! If you just cleared all accessible blocks, all the hidden blocks still keep a quite recent version of the data. Whether this is accessible to an attacker you want your data to be protected from is a different question. If you have a recent enough distribution, and the USB drive is programmed to reveal that it is a flash drive, blkdiscard can use the underlying TRIM operation, which is the segment erase that we talked about above. It also has an additional flag to make sure that even the invisible hidden data is fully erased by the hardware: # blkdiscard -s /dev/myusbdevice -s, --secure
Perform a secure discard. A secure discard is the same as a regular discard except that all copies of the discarded blocks that were possibly created by garbage collection must also be erased. This requires support from the device. It won't necessarily work, as I demonstrated above. If you get Operation not supported , either your kernel, your utilities, or the USB gateway chip (which allows the flash controller to look like a drive via USB) does not support passing the TRIM command. (The flash controller must still be able to erase segments on its own). If it is supported by the vendor of your drive, this is the safest way. Another, less safe way to make sure you're allowing less of the old data to linger around somewhere is to overwrite it several times, with random values, if possible. Why random, you ask? Just imagine if the USB drive were made too intelligent, and detected that you wanted to clear a sector, and just made a change in a bitmap that this sector is now free, and will need clearing later. Since this means it can speed up writes of zeros, it makes for a pendrive that appears more efficient, right? Whether your drive is doing this is hard to tell. At the most extreme, the drive could just remember how much from the start you have cleared, and all it needs to store is about 4 bytes of information to do this, and not clear anything from the data you want to disappear. All so that it could look very fast. If you are overwriting the data with random, unpredictable values, these optimizations are impossible. So the drive has to make sure the data ends up stored inside the flash chips. But you still won't be able to rule out that some of the previously used sectors are still there with some old data of yours, which the drive just didn't consider important to erase just yet, since it's not accessible normally. Only the actual TRIM command can guarantee that. To automate overwriting with random values, you may want to look into using shred , like: # shred -vzn8 /dev/myusbdrive The options used: -v for making it show the progress -z to zero it as a final phase -n8 is to do 8 random passes of overwrites If possible, use both blkdiscard and shred . If blkdiscard -s is supported by your drive, it's the optimal solution, but it can't hurt to do a shred beforehand to rule out firmware mistakes. Oh, and always double-triple-check the device that you are trying to clear! dmesg can help to see what was the most recently inserted device, and it's also worth checking the device name you intend to clear with ls -al , even the device node numbers, and the blkid output to see what partitions may be available that you DON'T want to clear. Never use these commands on an internal drive that you want to keep using - blkdiscard will only work on solid state drives, but it's not worth risking your data! There may be other ways to clear data securely as technology progresses. One other way mentioned is the ATA SECURITY ERASE command that can be issued via hdparm commands. In my experience, it is not really supported on flash drives. It was designed for enterprise hard drives, and the feature is not always implemented in the lowest-cost storage devices. The TRIM / DISCARD operation is much newer than the SECURITY ERASE command, and was created in response to the flash features, so it has a much higher chance of being implemented, even in cheap USB drives, but it's still not ubiquitous.
If you want to erase an SD/micro SD card in a USB dongle, and blkdiscard reports it is not supported, you may want to try a different dongle/card reader, and/or do it in a machine with a direct SD/MMC slot. A combined blkdiscard/shred sketch follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/252593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
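Building on the answer above, here is a hedged sketch that ties the steps together: refuse to touch a mounted device, try a secure discard first, and fall back to random overwrites. The device name is a placeholder; double-check it (dmesg, blkid, ls -al) before running.
#!/bin/sh
# wipe.sh /dev/sdX -- wipe the whole drive, not a partition
dev=$1
set -e
# refuse to run if the drive or any of its partitions is mounted
if grep -q "^$dev" /proc/mounts; then
    echo "refusing: $dev appears to be mounted" >&2
    exit 1
fi
# prefer a secure discard; fall back to 8 random passes plus a final zero pass
blkdiscard -s "$dev" || shred -vzn8 "$dev"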
252,625 | I would like to know how to create an NTFS partition on /dev/sdx. I could not figure out if I should use partition type 7, 86 or 87. What is the full list of commands to use? | Create a partition using fdisk: fdisk /dev/sdx Commands: to create the partition: n, p, [enter], [enter] to give a type to the partition: t, 7 (don't select 86 or 87, those are for volume sets) if you want to make it bootable: a to see the changes: p to write the changes: w Create an NTFS filesystem on /dev/sdx1: mkfs.ntfs -f /dev/sdx1 (the -f argument makes the command run fast, skipping both the bad block check and the zeroing of the storage) Mount it wherever you want: mount /dev/sdx1 /mnt/myNtfsDevice A non-interactive sketch follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/252625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
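As a hedged complement to the interactive fdisk steps above, the same result can usually be scripted. The sketch below uses parted and assumes an MBR (msdos) label with the whole of /dev/sdx becoming a single NTFS partition.
# non-interactive variant (destroys everything on /dev/sdx)
parted -s /dev/sdx mklabel msdos
parted -s /dev/sdx mkpart primary ntfs 1MiB 100%
mkfs.ntfs -f /dev/sdx1
mount /dev/sdx1 /mnt/myNtfsDevice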
252,671 | I'd like to try PHP7.0 on Debian Jessie and am trying to install it from sid. However, php7.0 depends on php7.0-common which depends on php-common > 18 while php-common in sid is at 17. Does this mean it's simply impossible to install php7.0 from this distribution at the moment? Why is that? I know that it is possible to install from source as explained e.g. here , I'm just asking about the official packages. Note : the packages in sid have been fixed and it is now (Jan 6, 2016) possible to install from there. | You have unofficial repos with new versions. On Debian, one of the best-known repositories for up-to-date web server software (i386 and amd64 packages) is dotdeb. " Dotdeb is an extra repository providing up-to-date packages for your Debian servers" They have had PHP 7 since the 3rd of December (of 2015), and have had a pre-packaged beta since November. To add the dotdeb repository (instructions from here ): Edit /etc/apt/sources.list and add deb http://packages.dotdeb.org jessie all Fetch the repository key and install it. wget https://www.dotdeb.org/dotdeb.gpg
sudo apt-key add dotdeb.gpg Then do sudo apt-get update And lastly: sudo apt-get install php7.0 To search for PHP 7 related packages: apt-cache search php | grep ^php7 In Ubuntu you already have PPAs for it too. It seems Debian backports do not yet have PHP 7.0. Search here in the near future. | {
"source": [
"https://unix.stackexchange.com/questions/252671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
252,684 | The Docker service is clearly running: $ systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2015-12-28 19:20:50 GMT; 3 days ago
Docs: https://docs.docker.com
Main PID: 1015 (docker)
CGroup: /system.slice/docker.service
└─1015 /usr/bin/docker daemon -H fd:// --exec-opt native.cgroupdriver=cgroupfs
$ ps wuf -u root | grep $(which docker)
root 1015 0.0 0.3 477048 12432 ? Ssl 2015 2:26 /usr/bin/docker daemon -H fd:// --exec-opt native.cgroupdriver=cgroupfs However, Docker itself refuses to talk to it: $ docker info
Cannot connect to the Docker daemon. Is the docker daemon running on this host? I am running the default Docker configuration , that is, I haven't changed any /etc files relating to this service. What could be the problem here? | You need to add yourself to the docker group and activate the group (by logging out and in again or running newgrp docker ) to run docker commands. The error message is simply misleading. | {
"source": [
"https://unix.stackexchange.com/questions/252684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
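For completeness, the group change described above usually amounts to the following commands, assuming the Docker package created a docker group (log out and back in, or use newgrp, so the membership takes effect):
sudo usermod -aG docker "$USER"
newgrp docker      # or log out and log back in
docker info        # should now reach the daemon without "Cannot connect"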
252,745 | I am trying to modify one column of my file, then print the result. awk -F"|" '{ if(NR!=1){$5 = $5+0.1} print $0}' myfile It does what I want, but when printing, only the first line keeps its field separator (the one I don't modify). So I could use print $1"|"$2"|"$3"|"$4"|"$5"|"... but isn't there a solution using $0 ? (For example, if I don't know the number of columns.) I believe I could solve my problem easily with sed , but I am trying to learn awk for now. | @Sukminder has already given the simple answer; I have a couple tidbits of style points and helpful syntax about your example code (like a code review). This started as a comment but it was getting long. OFS is the output field separator, as already mentioned. $0 is the default argument to print —no need to specify it explicitly. And another style point: awk has what's called "patterns", which are like built-in conditional blocks. So you could also just use: awk -F'|' 'BEGIN {OFS = FS} NR != 1 {$5 += 0.1} {print}' myfile A quick demonstration follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/252745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148352/"
]
} |
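A quick, self-contained demonstration of the one-liner above on made-up data (the field values are arbitrary):
printf 'a|b|c|d|1.0\ne|f|g|h|2.0\n' |
  awk -F'|' 'BEGIN {OFS = FS} NR != 1 {$5 += 0.1} {print}'
# a|b|c|d|1.0
# e|f|g|h|2.1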
252,822 | What is the purpose of having both? Aren't they both used for mounting drives? | I recommend visiting the Filesystem Hierarchy Standard. /media is the mount point for removable media . In other words, it is where the system mounts removable media. This directory contains sub-directories used for mounting removable media such as CD-ROMs, floppy disks, etc. /mnt is for temporary mounting . In other words, it is where the user can mount things. This directory is generally used for mounting filesystems temporarily when needed. Ref: http://www.pathname.com/fhs/pub/fhs-2.3.html#MEDIAMOUNTPOINT http://www.pathname.com/fhs/pub/fhs-2.3.html#MNTMOUNTPOINTFORATEMPORARILYMOUNT | {
"source": [
"https://unix.stackexchange.com/questions/252822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
252,980 | I'm trying to find my current logged-in group without having to use newgrp to switch. | I figured I can use the following: id -g To get all the groups I belong to: id -G And to get the actual names, instead of the ids, just pass the flag -n : id -Gn This last command will yield the same result as executing groups | {
"source": [
"https://unix.stackexchange.com/questions/252980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149687/"
]
} |
252,995 | How can mouse support be enabled in an Emacs terminal session started with emacs -nw ? Is there a keyboard shortcut or a flag to do this? If not how can it be done in terminal emulators? I use Guake. | Hit F10 to open the menu and use the arrow keys to navigate to “Options” → “Customize Emacs” → “All Settings Matching…”. Type mouse and Enter . If your Emacs version doesn't have a menu when running in a terminal then run M-x customize . (This means: press Alt + X , type customize and press Enter .) Navigate to the search box, type mouse and press Enter . Mouse support is called “Xterm Mouse mode”. You can find that in the manual . The manual also gives a way to turn it on (for the current session) — M-x xterm-mouse-mode . In the Customize interface, on the setting you want to change, press Enter on “Show Value”. A “Toggle” button appears, press Enter on it. Then press Enter on the “State” box and choose either 0 for “Set for Current Session” or “1” for “Save for Future Sessions”. (You can choose 0 for now and come back there and choose 1 later if you're happy with the setting.) | {
"source": [
"https://unix.stackexchange.com/questions/252995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
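If you would rather edit your init file directly than go through Customize, appending the mode toggle should have the same effect; the init file path is an assumption (yours may be ~/.emacs instead):
echo '(xterm-mouse-mode 1)' >> ~/.emacs.d/init.el
emacs -nw    # mouse clicks should now be reported in the terminal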
253,203 | My question is simple: how do I tell journald to re-read its configuration file without rebooting ? I've made some changes to /etc/systemd/journald.conf and I'd like to see if they are correct and everything works as I expect. I do not want to reboot. | To control running services with systemd, use the systemctl utility . This utility is similar to the service utility provided by SysVinit and Upstart. Among others: systemctl status systemd-journald indicates whether the service is running and additional information if it is. systemctl start systemd-journald starts the service (systemd unit). systemctl stop systemd-journald stops the service. systemctl restart systemd-journald restarts the service. systemctl reload systemd-journald reloads the service's configuration if possible, but will not kill it (so no risk of a service interruption or of disrupting processing in progress, but the service may keep running with a stale configuration). systemctl force-reload systemd-journald reloads the service's configuration if possible, and if not restarts the service (so the service is guaranteed to use the current configuration, but this may interrupt something). systemctl daemon-reload reloads systemd's own configuration. | {
"source": [
"https://unix.stackexchange.com/questions/253203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2229/"
]
} |
253,233 | I have recursively transferred many files and folders with scp , using the command: scp -rp /source/folder [email protected]:/destination/folder After the transfer is completed, do I need to check whether all files got transferred without any corruption, or does scp take care of it (i.e. displays some error message if any of the files is not correctly transferred)? | scp verifies that it copied all the data sent by the other party. The integrity of the transfer is guaranteed by the cryptographic channel protocol. So you don't need to verify the integrity after the transfer. That would be redundant, and very unlikely to catch any hardware error since the data you're comparing against would probably be read from the cache. Verifying data periodically can be useful, but verifying immediately after the transfer is pointless. You do however need to ensure that scp isn't telling you that something went wrong. There should be an error message, but the reliable indicator is that scp returns a nonzero exit code if something went wrong. More precisely, you know that the file was transmitted correctly if scp returns 0 (i.e. the success status code). Checking that the exit status is 0 is necessary when you run any command anyway. If scp returns an error status, or if it's killed by a signal, or if it never dies because the system crashes or loses power while it's running, then you have no guarantees. In particular, since scp copies the file directly to its final name, this means that you can end up with a partial file in case of a system crash. The part that was copied is guaranteed to be correct but the file may be truncated. For better reliability, use rsync instead of scp. Unless instructed otherwise, rsync writes to a temporary file, and moves it into place once it's finished. Thus, if rsync returns a success code, you know the file is present and a correct, complete copy; if rsync hasn't returned a success code then no new file will be present (unless there was an older version of the file, in which case that older version won't be modified). An example rsync invocation with an exit-status check follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/253233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16704/"
]
} |
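A hedged example of the rsync approach recommended above, with an explicit exit-status check; host and paths are placeholders, and -rpt roughly mirrors scp -rp (recursive, preserve permissions and times):
rsync -rpt /source/folder/ user@host:/destination/folder/
if [ $? -eq 0 ]; then
    echo "transfer completed successfully"
else
    echo "transfer failed or was interrupted" >&2
fi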
253,279 | How can I split a word's letters, with each letter in a separate line? For example, given "StackOver" I would like to see S
t
a
c
k
O
v
e
r I'm new to bash so I have no clue where to start. | I would use grep : $ grep -o . <<<"StackOver"
S
t
a
c
k
O
v
e
r or sed : $ sed 's/./&\n/g' <<<"StackOver"
S
t
a
c
k
O
v
e
r And if empty space at the end is an issue: sed 's/\B/&\n/g' <<<"StackOver" All of that assuming GNU/Linux. | {
"source": [
"https://unix.stackexchange.com/questions/253279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150030/"
]
} |
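As a small addition to the answer above, coreutils fold gives the same split without a regular expression, and a plain bash loop avoids external commands entirely:
$ fold -w1 <<<"StackOver"
S
t
a
c
k
O
v
e
r
$ s=StackOver; for ((i=0; i<${#s}; i++)); do printf '%s\n' "${s:i:1}"; done    # same output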
253,290 | I am getting this error: [ 2614.727471] ata1: exception Emask 0x10 SAct 0x0 SErr 0x4000000 action 0xe frozen
[ 2614.727477] ata1: irq_stat 0x00000040, connection status changed
[ 2614.727481] ata1: SError: { DevExch }
[ 2614.727488] ata1: limiting SATA link speed to 1.5 Gbps
[ 2614.727491] ata1: hard resetting link
[ 2615.450561] ata1: SATA link down (SStatus 0 SControl 310)
[ 2615.450577] ata1: EH complete and I DO NOT HAVE ANY SATA disk drives connected. I have an IDE disk!!! my kernel version is recent: 4.2.8-300.fc23.x86_64 , Fedora 23,
motherboard: ASRock supercomputer X58 Why is it telling me I have a link if that is not true? Is there a way to diagnose this? I suppose the IDE interface on my motherboard is somehow mapped to the SATA controller, so the error I am getting does not originate from the disk, but from the controller. Then, why does it tell me that it is resetting the link to 1.5 Gbps??? Maximum IDE speed is 133MB/s. Very weird. And by the way, the disk is working perfectly without any problems. | I would use grep : $ grep -o . <<<"StackOver"
S
t
a
c
k
O
v
e
r or sed : $ sed 's/./&\n/g' <<<"StackOver"
S
t
a
c
k
O
v
e
r And if empty space at the end is an issue: sed 's/\B/&\n/g' <<<"StackOver" All of that assuming GNU/Linux. | {
"source": [
"https://unix.stackexchange.com/questions/253290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150940/"
]
} |
253,306 | if [ -z "$OPTION" ] # if option is not given(empty) then:
then
command1 --defaultOption
else
command1 $OPTION
fi \
2> >( function1 "$DETAILS" ) \
< <( command2 "$OTHER_DETAILS" ) I am seriously puzzled about how directing stderr to a file and feeding a file into stdin interact with an if statement.
Well known things are: 2>filename# Redirect stderr to file "filename."
2>>filename# Redirect and append stderr to file "filename."
command < input-file > output-file
< input-file command > output-file My guess would be: command2 generates a file which is forwarded either to command1's stdin with --defaultOption (if $OPTION is empty, then case) or to command1's stdin with $OPTION (if $OPTION is not empty, else case).
stderr of command1 is redirected to function1 (which as an example might be some sort of progress-bar display). So my questions are: Are the whitespaces between the brackets < < and > > necessary? Is it actually an append (whitespace ignored), or a "double" redirect?
Am I missing an interaction between brackets and braces >( and <( ?
Does it somehow influence the evaluation of the if? Or is only -z $OPTION tested? Can I understand what's going on better if I write the output file of command2 to the disk, then check for the option and read it again in the if statement? command2 "$OTHER_DETAILS" --out=file.txt
if [ -z "$OPTION]
then
command1 --defaultOption --in=file.txt 2>function1
else
command1 "$OPTION" --in=file.txt 2>function1
fi This is part of a script I found over there: http://linuxtv.org/wiki/index.php/V4L_capturing/script (lines 912 through 924) | I would use grep : $ grep -o . <<<"StackOver"
S
t
a
c
k
O
v
e
r or sed : $ sed 's/./&\n/g' <<<"StackOver"
S
t
a
c
k
O
v
e
r And if empty space at the end is an issue: sed 's/\B/&\n/g' <<<"StackOver" All of that assuming GNU/Linux. | {
"source": [
"https://unix.stackexchange.com/questions/253306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150042/"
]
} |
253,376 | Why can't I run this command in my terminal: open index.html Wasn't it supposed to open this file in my browser? Also, why can't I run this command: open index.html -a "Sublime Text" ? The results of these commands are: $ open index.html
Couldn't get a file descriptor referring to the console
$ open index.html -a "Sublime Text" -
open: invalid option -- 'a'
Usage: open [OPTIONS] -- command | The primary purpose of OS X's open command is to open a file in the associated application. The equivalent of that on modern non-OSX unices is xdg-open . xdg-open index.html xdg-open doesn't have an equivalent of OSX's open -a to open a file in a specific application. That's because the normal way to open a file in an application is to simply type the name of the application followed by the name of the file. More precisely, you need to type the name of the executable program that implements the application. sublime_text index.html Linux, like other Unix systems (but not, as far as I know, the non-Unixy parts of OS X) manages software by tracking it with a package manager, and puts individual files where they are used . For example, all executable programs are in a small set of directories and all those directories are listed in the PATH variable ; running sublime_text looks up a file called sublime_text in the directories listed in PATH . OS X needs an extra level of indirection, through open -a , to handle applications which are unpacked in a single directory tree and registered in an application database. Linux doesn't have any application database, but it's organized in such a way that it doesn't need one. If running the command sublime_text in a shell doesn't work for you, then Sublime Text hasn't been installed properly. I've never used it, and apparently it comes as a tar archive, not as a distribution package (e.g. deb or rpm), so it's possible that you need to do an extra installation step. It's really the job of the makers of Sublime Text to make this automatic, but if they haven't done it, you can probably do it yourself by running the command sudo ln -s …/sublime_text /usr/local/bin Replace … by the path where the sublime_text executable is, of course. The open command you encountered is an older name for the openvt command (some Linux distributions only include it under the name openvt ). The openvt command creates a new virtual console , which can only be done by root and isn't used very often in this century since most people only ever work in a graphical window environment. | {
"source": [
"https://unix.stackexchange.com/questions/253376",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150122/"
]
} |
253,422 | For example, while this works: $ echo foo
foo This doesn't: $ /bin/sh -c echo foo Whereas this does: $ /bin/sh -c 'echo foo; echo bar'
foo
bar Is there an explanation? | From man sh -c string If the -c option is present, then commands are read from string.
If there are arguments after the string, they are assigned to the
positional parameters, starting with $0 It means your command should be like this: $ sh -c 'echo "$0"' foo
foo Similarly: $ sh -c 'echo "$0 $1"' foo bar
foo bar That was the first part to understand; the second case is simple and doesn't need explanation, I guess. | {
"source": [
"https://unix.stackexchange.com/questions/253422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67045/"
]
} |
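A common convention that follows from the answer above: pass a throwaway value (often the shell's name) as $0 so that the real arguments start at $1, which reads more naturally and keeps error messages sensible:
$ sh -c 'echo "$1"' sh foo
foo
$ sh -c 'echo "$1 $2"' sh foo bar
foo bar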
253,426 | I'm happy with the default usage of detox for sanitizing filenames, except I don't always want to replace whitespace with a single underscore. Would like to replace whitespace with a single space. Know how to do this? | From man sh -c string If the -c option is present, then commands are read from string.
If there are arguments after the string, they are assigned to the
positional parameters, starting with $0 It means your command should be like this: $ sh -c 'echo "$0"' foo
foo Similarly: $ sh -c 'echo "$0 $1"' foo bar
foo bar That was the first part to understand; the second case is simple and doesn't need explanation, I guess. | {
"source": [
"https://unix.stackexchange.com/questions/253426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150158/"
]
} |
253,524 | Is there any objective reason to prefer one form to the other? Performance, reliability, portability? filename=/some/long/path/to/a_file
parentdir_v1="${filename%/*}"
parentdir_v2="$(dirname "$filename")"
basename_v1="${filename##*/}"
basename_v2="$(basename "$filename")"
echo "$parentdir_v1"
echo "$parentdir_v2"
echo "$basename_v1"
echo "$basename_v2" Produces: /some/long/path/to
/some/long/path/to
a_file
a_file (v1 uses shell parameter expansion, v2 uses external binaries.) | Both have their quirks, unfortunately. Both are required by POSIX, so the difference between them isn't a portability concern¹. The plain way to use the utilities is base=$(basename -- "$filename")
dir=$(dirname -- "$filename") Note the double quotes around variable substitutions, as always, and also the -- after the command, in case the file name begins with a dash (otherwise the commands would interpret the file name as an option). This still fails in one edge case, which is rare but might be forced by a malicious user²: command substitution removes trailing newlines. So if a filename is called foo/bar then base will be set to bar instead of bar . A workaround is to add a non-newline character and strip it after the command substitution: base=$(basename -- "$filename"; echo .); base=${base%.}
dir=$(dirname -- "$filename"; echo .); dir=${dir%.} With parameter substitution, you don't run into edge cases related to expansion of weird characters, but there are a number of difficulties with the slash character. One thing that is not an edge case at all is that computing the directory part requires different code for the case where there is no / . base="${filename##*/}"
case "$filename" in
*/*) dirname="${filename%/*}";;
*) dirname=".";;
esac The edge case is when there's a trailing slash (including the case of the root directory, which is all slashes). The basename and dirname commands strip off trailing slashes before they do their job. There's no way to strip the trailing slashes in one go if you stick to POSIX constructs, but you can do it in two steps. You need to take care of the case when the input consists of nothing but slashes. case "$filename" in
*/*[!/]*)
trail=${filename##*[!/]}; filename=${filename%%"$trail"}
base=${filename##*/}
dir=${filename%/*};;
*[!/]*)
trail=${filename##*[!/]}
base=${filename%%"$trail"}
dir=".";;
*) base="/"; dir="/";;
esac If you happen to know that you aren't in an edge case (e.g. a find result other than the starting point always contains a directory part and has no trailing / ) then parameter expansion string manipulation is straightforward. If you need to cope with all the edge cases, the utilities are easier to use (but slower). Sometimes, you may want to treat foo/ like foo/. rather than like foo . If you're acting on a directory entry then foo/ is supposed to be equivalent to foo/. , not foo ; this makes a difference when foo is a symbolic link to a directory: foo means the symbolic link, foo/ means the target directory. In that case, the basename of a path with a trailing slash is advantageously . , and the path can be its own dirname. case "$filename" in
*/) base="."; dir="$filename";;
*/*) base="${filename##*/}"; dir="${filename%"$base"}";;
*) base="$filename"; dir=".";;
esac The fast and reliable method is to use zsh with its history modifiers (this first strips trailing slashes, like the utilities): dir=$filename:h base=$filename:t ¹ Unless you're using pre-POSIX shells like Solaris 10 and older's /bin/sh (which lacked parameter expansion string manipulation features on machines still in production — but there's always a POSIX shell called sh in the installation, only it's /usr/xpg4/bin/sh , not /bin/sh ). ² For example: submit a file called foo followed by a newline to a file upload service that doesn't protect against this, then delete it and cause the plain foo to be deleted instead | {
"source": [
"https://unix.stackexchange.com/questions/253524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
253,767 | To restart or shut off Linux from the terminal, one can use reboot and poweroff , respectively. However, both of these commands require root privileges. Why is this so? What security risk is posed by not requiring this to have root privileges? The GUI provides a way for any user to shut off or restart, so why do the terminal commands need to be run as root? Speaking of the options from the GUI, if the terminal requires root privileges to shut off or restart the Linux computer, how is the GUI able to present an option that does the same without requiring the entering of a password? | Warning: by the end of this answer you'll probably know more about linux than you wanted to Why reboot and poweroff require root privileges GNU/Linux operating systems are multi-user , as were its UNIX predecessors. The system is a shared resource, and multiple users can use it simultaneously . In the past this usually happened on computer terminals connected to a minicomputer or a mainframe . The popular PDP-11 minicomputer. A bit large, by today's standards :) In modern days, this can happen either remotely over the network (usually via SSH ), on thin clients or on a multiseat configuration , where there are several local users with hardware attached to the same computer. A multi-seat configuration. Photo by Tiago Vignatti In practice, there can be hundreds or thousands of users using the same computer simultaneously. It wouldn't make much sense if any user could power off the computer, and prevent everyone else from using it. What security risk is posed by not requiring this to have root privileges? On a multi-user system, this prevents what is effectively a denial-of-service attack The GUI provides a way for any user to shut off or restart, so why do the terminal commands need to be run as root? Many Linux distributions do not provide a GUI. The desktop Linux distributions that do are usually oriented to a single user pattern, so it makes sense to allow this from the GUI. Possible reasons why the commands still require root privileges: Most users of a desktop-oriented distro will use the GUI, not the command line, so it's not worth the trouble Consistency with accepted UNIX conventions (Arguably misguided) security, as it prevents naive programs or scripts from powering off the system How is the GUI able to present shutdown without root privileges? The actual mechanism will vary depending on the specific desktop manager (GUI). Generally speaking, there are several mechanisms available for this type of task: Running the GUI itself as root (hopefully that shouldn't happen on any proper implementation...) setuid sudo with NOPASSWD Communicating the command to another process that has those privileges, usually done with D-Bus . On popular GUIs, this is usually managed by polkit . In summary Linux is used in very diverse environments - from mainframes, servers and desktops to supercomputers, mobile phones, and microwave ovens . It's hard to keep everyone happy all the time! :) | {
"source": [
"https://unix.stackexchange.com/questions/253767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139098/"
]
} |
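The "sudo with NOPASSWD" mechanism mentioned in the answer above can look roughly like the sudoers fragment below. The group name and binary paths are assumptions (they vary by distribution); edit with visudo.
# /etc/sudoers.d/power -- let members of the "users" group power off or reboot without a password
%users ALL=(root) NOPASSWD: /sbin/poweroff, /sbin/reboot
A member of that group would then run, for example: sudo poweroff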
253,783 | I have written a Linux (bash) shell script using while loop and I kept sleep command to execute the script for every 60 sec and the output is redirected to other file. After few hours I stopped the script and it doesn't stop executing and I deleted the script still it is running and the output file is updated every 60 seconds. I could see sleep command in the process running by Linux. I tried to kill the PID of sleep using kill -9 PID command. No use. It is in my production server.can some one help out. How should we stop the execution of script. | Warning: by the end of this answer you'll probably know more about linux than you wanted to Why reboot and poweroff require root privileges GNU/Linux operating systems are multi-user , as were its UNIX predecessors. The system is a shared resource, and multiple users can use it simultaneously . In the past this usually happened on computer terminals connected to a minicomputer or a mainframe . The popular PDP-11 minicomputer. A bit large, by today's standards :) In modern days, this can happen either remotely over the network (usually via SSH ), on thin clients or on a multiseat configuration , where there are several local users with hardware attached to the same computer. A multi-seat configuration. Photo by Tiago Vignatti In practice, there can be hundreds or thousands of users using the same computer simultaneously. It wouldn't make much sense if any user could power off the computer, and prevent everyone else from using it. What security risk is posed by not requiring this to have root privileges? On a multi-user system, this prevents what is effectively a denial-of-service attack The GUI provides a way for any user to shut off or restart, so why do the terminal commands need to be run as root? Many Linux distributions do not provide a GUI. The desktop Linux distributions that do are usually oriented to a single user pattern, so it makes sense to allow this from the GUI. Possible reasons why the commands still require root privileges: Most users of a desktop-oriented distro will use the GUI, not the command line, so it's not worth the trouble Consistency with accepted UNIX conventions (Arguably misguided) security, as it prevents naive programs or scripts from powering off the system How is the GUI able to present shutdown without root privileges? The actual mechanism will vary depending on the specific desktop manager (GUI). Generally speaking, there are several mechanisms available for this type of task: Running the GUI itself as root (hopefully that shouldn't happen on any proper implementation...) setuid sudo with NOPASSWD Communicating the command to another process that has those privileges, usually done with D-Bus . On popular GUIs, this is usually managed by polkit . In summary Linux is used in very diverse environments - from mainframes, servers and desktops to supercomputers, mobile phones, and microwave ovens . It's hard to keep everyone happy all the time! :) | {
"source": [
"https://unix.stackexchange.com/questions/253783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150439/"
]
} |
253,816 | Is there a way to tell the Linux kernel to only use a certain percentage of memory for the buffer cache? I know /proc/sys/vm/drop_caches can be used to clear the cache temporarily, but is there any permanent setting that prevents it from growing to more than e.g. 50% of main memory? The reason I want to do this, is that I have a server running a Ceph OSD which constantly serves data from disk and manages to use up the entire physical memory as buffer cache within a few hours. At the same time, I need to run applications that will allocate a large amount (several 10s of GB) of physical memory. Contrary to popular belief (see the advice given on nearly all questions concerning the buffer cache), the automatic freeing up the memory by discarding clean cache entries is not instantaneous: starting my application can take up to a minute when the buffer cache is full (*), while after clearing the cache (using echo 3 > /proc/sys/vm/drop_caches ) the same application starts nearly instantaneously. (*) During this minute of startup time, the application is faulting in new memory but spends 100% of its time in the kernel, according to Vtune in a function called pageblock_pfn_to_page . This function seems to be related to memory compaction needed to find huge pages, which leads me to believe that actually fragmentation is the problem. | If you do not want an absolute limit but just pressure the kernel to flush out the buffers faster, you should look at vm.vfs_cache_pressure This variable controls the tendency of the kernel to reclaim the memory which is used for caching of VFS caches, versus pagecache and swap. Increasing this value increases the rate at which VFS caches are reclaimed. Ranges from 0 to 200. Move it towards 200 for higher pressure. Default is set at 100. You can also analyze your memory usage using the slabtop command. In your case, the dentry and *_inode_cache values must be high. If you want an absolute limit, you should look up cgroups . Place the Ceph OSD server within a cgroup and limit the maximum memory it can use by setting the memory.limit_in_bytes parameter for the cgroup. memory.memsw.limit_in_bytes sets the maximum amount for the sum of memory and swap usage. If no units are specified, the value is interpreted as bytes. However, it is possible to use suffixes to represent larger units — k or K for kilobytes, m or M for Megabytes, and g or G for Gigabytes. References: [1] - GlusterFS Linux Kernel Tuning [2] - RHEL 6 Resource Management Guide | {
"source": [
"https://unix.stackexchange.com/questions/253816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150451/"
]
} |
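Hedged, concrete versions of the two knobs discussed in the answer above. The cgroup path assumes a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory, and the 32G limit and group name are arbitrary examples.
# raise reclaim pressure on the dentry/inode caches, and persist it
sudo sysctl vm.vfs_cache_pressure=200
echo 'vm.vfs_cache_pressure = 200' | sudo tee /etc/sysctl.d/90-vfs-cache.conf
# cap the memory (page cache included) that the Ceph OSD may use
sudo mkdir /sys/fs/cgroup/memory/ceph-osd
echo 32G | sudo tee /sys/fs/cgroup/memory/ceph-osd/memory.limit_in_bytes
echo "$OSD_PID" | sudo tee /sys/fs/cgroup/memory/ceph-osd/cgroup.procs    # $OSD_PID: the OSD's process id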
253,836 | I want to clean up some files, and make make the way in which they are written more uniform. So, my input looks something like this: $a$h$l )r
^9 ^5 l
\ urd The thing is, some spaces are "unnecessary" and make comparing the files difficult. For this reason, I want to remove all spaces, unless they follow directly after one of the following characters: $ ^ T iN (N being a variable, any character 1 byte long) oN (N being a variable, as above) s sN (N being a variable, as above) @ ! / ( ) =N (N being a variable, as above) %N (N being a variable, as above) So, an example-input might be: :
$ $ $N
$ $ $a
sa s l r
*56 l r
o1 o 2
%%x v Where the wanted output would be: :
$ $ $N
$ $ $a
sa s lr
*56lr
o1 o 2
%%xv For the %%x v case, the space is removed because it's the third character following the initial % , where the second % acts as the variable. I'm using a GNU/Linux operating system. | If you do not want an absolute limit but just pressure the kernel to flush out the buffers faster, you should look at vm.vfs_cache_pressure This variable controls the tendency of the kernel to reclaim the memory which is used for caching of VFS caches, versus pagecache and swap. Increasing this value increases the rate at which VFS caches are reclaimed. Ranges from 0 to 200. Move it towards 200 for higher pressure. Default is set at 100. You can also analyze your memory usage using the slabtop command. In your case, the dentry and *_inode_cache values must be high. If you want an absolute limit, you should look up cgroups . Place the Ceph OSD server within a cgroup and limit the maximum memory it can use by setting the memory.limit_in_bytes parameter for the cgroup. memory.memsw.limit_in_bytes sets the maximum amount for the sum of memory and swap usage. If no units are specified, the value is interpreted as bytes. However, it is possible to use suffixes to represent larger units — k or K for kilobytes, m or M for Megabytes, and g or G for Gigabytes. References: [1] - GlusterFS Linux Kernel Tuning [2] - RHEL 6 Resource Management Guide | {
"source": [
"https://unix.stackexchange.com/questions/253836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147807/"
]
} |
253,892 | I have two (Debian) Linux servers. I am creating a shell script. On the first one I create an array thus: #!/bin/bash
target_array=(
"/home/user/direct/filename -p123 -r"
) That works fine. But when I run this on the other server I get: Syntax error: "(" unexpected As far as I can tell both servers are the same. Can anyone shed some light on why this doesn't work? If I type it into the terminal directly it is fine?? It would appear that when I run it as sh scriptname.sh I get the error, but if I run it as ./scriptname.sh it seems to be ok. What's the difference? | When you use ./scriptname.sh it executes with /bin/bash as in the first line with #! . But when you use sh scriptname.sh it executes sh , not bash . The sh shell has no syntax to create arrays, but Bash has the syntax you used. | {
"source": [
"https://unix.stackexchange.com/questions/253892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
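A short demonstration of the difference explained above (on Debian, sh is typically dash, which rejects the array syntax):
$ bash -c 'a=(1 2 3); echo "${a[0]}"'
1
$ sh -c 'a=(1 2 3); echo "${a[0]}"'
sh: 1: Syntax error: "(" unexpected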
253,903 | I am running a docker server on Arch Linux (kernel 4.3.3-2) with several containers. Since my last reboot, both the docker server and random programs within the containers crash with a message about not being able to create a thread, or (less often) to fork. The specific error message is different depending on the program, but most of them seem to mention the specific error Resource temporarily unavailable . See at the end of this post for some example error messages. Now there are plenty of people who have had this error message, and plenty of responses to them. What’s really frustrating is that everyone seems to be speculating how the issue could be resolved, but no one seems to point out how to identify which of the many possible causes for the problem is present. I have collected these 5 possible causes for the error and how to verify that they are not present on my system: There is a system-wide limit on the number of threads configured in /proc/sys/kernel/threads-max ( source ). In my case this is set to 60613 . Every thread takes some space in the stack. The stack size limit is configured using ulimit -s ( source ). The limit for my shell used to be 8192 , but I have increased it by putting * soft stack 32768 into /etc/security/limits.conf , so it ulimit -s now returns 32768 . I have also increased it for the docker process by putting LimitSTACK=33554432 into /etc/systemd/system/docker.service ( source , and I verified that the limit applies by looking into /proc/<pid of docker>/limits and by running ulimit -s inside a docker container. Every thread takes some memory. A virtual memory limit is configured using ulimit -v . On my system it is set to unlimited , and 80% of my 3 GB of memory are free. There is a limit on the number of processes using ulimit -u . Threads count as processes in this case ( source ). On my system, the limit is set to 30306 , and for the docker daemon and inside docker containers, the limit is 1048576 . The number of currently running threads can be found out by running ls -1d /proc/*/task/* | wc -l or by running ps -elfT | wc -l ( source ). On my system they are between 700 and 800 . There is a limit on the number of open files, which according to some source s is also relevant when creating threads. The limit is configured using ulimit -n . On my system and inside docker, the limit is set to 1048576 . The number of open files can be found out using lsof | wc -l ( source ), on my system it is about 30000 . It looks like before the last reboot I was running kernel 4.2.5-1, now I’m running 4.3.3-2. Downgrading to 4.2.5-1 fixes all the problems. Other posts mentioning the problem are this and this . I have opened a bug report for Arch Linux . What has changed in the kernel that could be causing this? Here are some example error messages: Crash dump was written to: erl_crash.dump
Failed to create aux thread Jan 07 14:37:25 edeltraud docker[30625]: runtime/cgo: pthread_create failed: Resource temporarily unavailable dpkg: unrecoverable fatal error, aborting:
fork failed: Resource temporarily unavailable
E: Sub-process /usr/bin/dpkg returned an error code (2) test -z "/usr/include" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/include"
/bin/sh: fork: retry: Resource temporarily unavailable
/usr/bin/install -c -m 644 popt.h '/tmp/lib32-popt/pkg/lib32-popt/usr/include'
test -z "/usr/share/man/man3" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/share/man/man3"
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: Resource temporarily unavailable
/bin/sh: fork: Resource temporarily unavailable
make[3]: *** [install-man3] Error 254 Jan 07 11:04:39 edeltraud docker[780]: time="2016-01-07T11:04:39.986684617+01:00" level=error msg="Error running container: [8] System error: fork/exec /proc/self/exe: resource temporarily unavailable" [Wed Jan 06 23:20:33.701287 2016] [mpm_event:alert] [pid 217:tid 140325422335744] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread | The problem is caused by the TasksMax systemd attribute. It was introduced in systemd 228 and makes use of the cgroups pid subsystem, which was introduced in the linux kernel 4.3. A task limit of 512 is thus enabled in systemd if kernel 4.3 or newer is running. The feature is announced here and was introduced in this pull request and the default values were set by this pull request . After upgrading my kernel to 4.3, systemctl status docker displays a Tasks line: # systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2016-01-15 19:58:00 CET; 1min 52s ago
Docs: https://docs.docker.com
Main PID: 2770 (docker)
Tasks: 502 (limit: 512)
CGroup: /system.slice/docker.service Setting TasksMax=infinity in the [Service] section of docker.service fixes the problem. docker.service is usually in /usr/share/systemd/system , but it can also be put/copied in /etc/systemd/system to avoid it being overridden by the package manager. A pull request is increasing TasksMax for the docker example systemd files, and an Arch Linux bug report is trying to achieve the same for the package. There is some additional discussion going on on the Arch Linux Forum and in an Arch Linux bug report regarding lxc . DefaultTasksMax can be used in the [Manager] section in /etc/systemd/system.conf (or /etc/systemd/user.conf for user-run services) to control the default value for TasksMax . Systemd also applies a limit for programs run from a login-shell. These default to 4096 per user (will be increased to 12288 ) and are configured as UserTasksMax in the [Login] section of /etc/systemd/logind.conf . | {
"source": [
"https://unix.stackexchange.com/questions/253903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59955/"
]
} |
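A hedged alternative to copying the whole unit file, following the answer above: apply the override via a systemd drop-in, which survives package upgrades (requires a systemd version that knows TasksMax, i.e. 228 or later):
sudo systemctl edit docker    # opens /etc/systemd/system/docker.service.d/override.conf
# add these two lines in the editor:
#   [Service]
#   TasksMax=infinity
sudo systemctl daemon-reload
sudo systemctl restart docker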
253,930 | I'm launching a script from a Jenkins server (on RHEL6) that, among other things, uses SCP with "BatchMode yes" to copy a file from a remote machine. The script runs properly outside of Jenkins, but fails inside. The verbose SCP log shows: debug1: Next authentication method: publickey
debug1: Offering public key: /var/lib/jenkins/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug1: Trying private key: /var/lib/jenkins/.ssh/id_dsa So there's something missing from the Jenkins user that is required to login. It's not a known_hosts entry, or at least the correct host is listed in /var/lib/jenkins/.ssh/known_hosts. What else could it be? Edit : Per request, the command was scp -vvv -o "BatchMode yes" [email protected]:myfile.txt . Here is a more extensive log snippet: Executing: program /usr/bin/ssh host myserver.com, user myuser, command scp -v -f myfile.txt
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to myserver.com [xxx.xxx.xxx.xxx] port 22.
debug1: Connection established.
debug1: identity file /var/lib/jenkins/.ssh/identity type -1
debug1: identity file /var/lib/jenkins/.ssh/identity-cert type -1
debug1: identity file /var/lib/jenkins/.ssh/id_rsa type 1
debug1: identity file /var/lib/jenkins/.ssh/id_rsa-cert type -1
debug1: identity file /var/lib/jenkins/.ssh/id_dsa type -1
debug1: identity file /var/lib/jenkins/.ssh/id_dsa-cert type -1
debug1: identity file /var/lib/jenkins/.ssh/id_ecdsa type -1
debug1: identity file /var/lib/jenkins/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
debug1: match: OpenSSH_6.2 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug3: Wrote 960 bytes for a total of 981
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug3: Wrote 24 bytes for a total of 1005
debug2: dh_gen_key: priv key bits set: 131/256
debug2: bits set: 773/1536
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug3: Wrote 208 bytes for a total of 1213
debug3: check_host_in_hostfile: host myserver.com filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: host myserver.com filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 2
debug3: check_host_in_hostfile: host xxx.xxx.xxx.xxx filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: host xxx.xxx.xxx.xxx filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 3
debug1: Host 'myserver.com' is known and matches the RSA host key.
debug1: Found key in /var/lib/jenkins/.ssh/known_hosts:2
debug2: bits set: 759/1536
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: Wrote 16 bytes for a total of 1229
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug3: Wrote 48 bytes for a total of 1277
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /var/lib/jenkins/.ssh/identity ((nil))
debug2: key: /var/lib/jenkins/.ssh/id_rsa (0x7f38a83ee310)
debug2: key: /var/lib/jenkins/.ssh/id_dsa ((nil))
debug2: key: /var/lib/jenkins/.ssh/id_ecdsa ((nil))
debug3: Wrote 64 bytes for a total of 1341
debug1: Authentications that can continue: publickey,keyboard-interactive
debug3: start over, passed a different list publickey,keyboard-interactive
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey
debug3: authmethod_lookup publickey
debug3: remaining preferred: ,gssapi-with-mic,publickey
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /var/lib/jenkins/.ssh/identity
debug3: no such identity: /var/lib/jenkins/.ssh/identity
debug1: Offering public key: /var/lib/jenkins/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug3: Wrote 368 bytes for a total of 1709
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug2: input_userauth_pk_ok: SHA1 fp 72:a5:45:d3:f2:6d:15:c4:2e:f9:37:34:44:10:2b:b9:59:ee:18:c0
debug3: sign_and_send_pubkey: RSA 72:a5:45:d3:f2:6d:15:c4:2e:f9:37:34:44:10:2b:b9:59:ee:18:c0
debug1: Trying private key: /var/lib/jenkins/.ssh/id_dsa
debug3: no such identity: /var/lib/jenkins/.ssh/id_dsa
debug1: Trying private key: /var/lib/jenkins/.ssh/id_ecdsa
debug3: no such identity: /var/lib/jenkins/.ssh/id_ecdsa
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey,keyboard-interactive). | | {
"source": [
"https://unix.stackexchange.com/questions/253930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77505/"
]
} |
254,367 | In bash scripting: we create variable by just naming it: abc=ok or we can use declare declare abc=ok what's the difference? and why does bash make so many ways to create a variable? | From help -m declare : NAME declare - Set variable values and attributes. SYNOPSIS declare [ -aAfFgilnrtux ] [ -p ] [ name [ = value ] ...] DESCRIPTION Set variable values and attributes. Declare variables and give them attributes. If no NAMEs are given,
display the attributes and values of all variables. Options: -f restrict action or display to function names and definitions -F restrict display to function names only (plus line number and
source file when debugging) -g create global variables when used in a shell function; otherwise
ignored -p display the attributes and value of each NAME Options which set attributes: -a to make NAMEs indexed arrays (if supported) -A to make NAMEs associative arrays (if supported) -i to make NAMEs have the ‘integer’ attribute -l to convert NAMEs to lower case on assignment -n make NAME a reference to the variable named by its value -r to make NAMEs readonly -t to make NAMEs have the ‘trace’ attribute -u to convert NAMEs to upper case on assignment -x to make NAMEs export Using ‘ + ’ instead of ‘ - ’ turns off the given attribute. Variables with the integer attribute have arithmetic evaluation (see
the let command) performed when the variable is assigned a value. When used in a function, declare makes NAMEs local, as with the local command. The ‘ -g ’ option suppresses this behavior. Exit Status: Returns success unless an invalid option is supplied or a variable
assignment error occurs. SEE ALSO bash(1) IMPLEMENTATION GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu) Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> So, declare is used for setting variable values and attributes . Let me show the use of two attributes with a very simple example: $ # First Example:
$ declare -r abc=ok
$ echo $abc
ok
$ abc=not-ok
bash: abc: readonly variable
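(Illustrative addition, not part of the original answer: the attributes that declare has applied can be checked with its -p option.)
$ declare -p abc
declare -r abc="ok"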
$ # Second Example:
$ declare -i x=10
$ echo $x
10
$ x=ok
$ echo $x
0
$ x=15
$ echo $x
15
$ x=15+5
$ echo $x
20 From the above example, I think you should understand the usage of declare variables over normal variables! This type of declaration is useful in functions, loops with scripting. Also visit Typing variables: declare or typeset | {
"source": [
"https://unix.stackexchange.com/questions/254367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106512/"
]
} |
254,494 | I've noticed that { can be used in brace expansion: echo {1..8} or in command grouping: {ls;echo hi} How does bash know the difference? | A simplified reason is the existence of one character: space . Brace expansions do not process (un-quoted) spaces. A {...} list needs (un-quoted) spaces. The more detailed answer is how the shell parses a command line . The first step to parse (understand) a command line is to divide it into parts. These parts (usually called words or tokens) result from dividing a command line at each meta-character from the link : Splits the command into tokens that are separated by the fixed set of meta-characters: SPACE, TAB, NEWLINE, ;, (, ), <, >, |, and &. Types of tokens include words, keywords, I/O redirectors, and semicolons. Meta-characters: space tab enter ; , < > | and & . After splitting, words may be of a type (as understood by the shell): Command pre-asignements: LC=ALL ... Command LC=ALL echo Arguments LC=ALL echo "hello" Redirection LC=ALL echo "hello" >&2 Brace expansion Only if a "brace string" (without spaces or meta-characters) is a single word (as described above) and is not quoted , it is a candidate for "Brace expansion". More checks are performed on the internal structure later. Thus, this: {ls,-l} qualifies as "Brace expansion" to become ls -l , either as first word or argument (in bash, zsh is different). $ {ls,-l} ### executes `ls -l`
$ echo {ls,-l} ### prints `ls -l` But this will not: {ls ,-l} . Bash will split on space and parse the line as two words: {ls and ,-l} which will trigger a command not found (the argument ,-l} is lost): $ {ls ,-l}
bash: {ls: command not found Your line: {ls;echo hi} will not become a "Brace expansion" because of the two meta-characters ; and space . It will be broken into this three parts: {ls new command: echo hi} . Understand that the ; triggers the start of a new command. The command {ls will not be found, and the next command will print hi} : $ {ls;echo hi}
bash: {ls: command not found
hi} If it is placed after some other command, it will anyway start a new command after the ; : $ echo {ls;echo hi}
{ls
hi} List One of the "compound commands" is a "Brace List" (my words): { list; } . As you can see, it is defined with spaces and a closing ; . The spaces and ; are needed because both { and } are "Reserved Words ". And therefore, to be recognized as words, must be surrounded by meta-characters (almost always: space ). As described in the point 2 of the linked page Checks the first token of each command to see if it is .... , {, or (, then the command is actually a compound command. Your example: {ls;echo hi} is not a list. It needs a closing ; and one space (at least) after { . The last } is defined by the closing ; . This is a list { ls;echo hi; } . And this { ls;echo hi;} is also (less commonly used, but valid)(Thanks @choroba for the help). $ { ls;echo hi; }
A-list-of-files
hi But as argument (the shell knows the difference) to a command, it triggers an error: $ echo { ls;echo hi; }
bash: syntax error near unexpected token `}' But be careful in what you believe the shell is parsing: $ echo { ls;echo hi;
{ ls
hi | {
"source": [
"https://unix.stackexchange.com/questions/254494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106512/"
]
} |
254,599 | All the results of my searches end up having something to do with hostname or uname -n . I looked up the manual for both, looking for sneaky options, but no luck. I am trying to find an equivalent of OSX's scutil --get ComputerName on Linux systems. On Mac OS X, the computer name is used as a human-readable identifier for the computer; it's shown in various management screens ( e.g. on inventory management, Bonjour-based remote access, ...) and serves as the default hostname (after filtering to handle spaces etc.). | The closest equivalent to a human-readable (and human-chosen) name for any computer running Linux is the default hostname stored in /etc/hostname . On some (not all) Linux distributions, this name is entered during installation as the computee’s name (but with network hostname constraints, unlike macOS’s computer name). This can be namespaced, i.e. each UTS namespace can have a different hostname. Systems running systemd distinguish three different hostnames, including a “pretty” human-readable name which is supposed to be descriptive in a similar fashion to macOS’s computer name; this can be set and retrieved using hostnamectl ’s --pretty option. The other two hostnames are the static hostname, which is the default hostname described above, and the transient hostname which reflects the current network configuration. Systemd also supports a chassis type ( e.g. “tablet”) and an icon for the host; see systemd-hostnamed.service . | {
"source": [
"https://unix.stackexchange.com/questions/254599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45354/"
]
} |
254,644 | Suppose I have a file called file : $ cat file
Hello
Welcome to
Unix I want to add and Linux at the end of the last line of the file. If I do echo " and Linux" >> file will be added to a new line. But I want last line as Unix and Linux So, in order to work around this, I want to remove newline character at the end of file. Therefore, how do I remove the newline character at the end of file in order to add text to that line? | If all you want to do is add text to the last line, it's very easy with sed. Replace $ (pattern matching at the end of the line) by the text you want to add, only on lines in the range $ (which means the last line). sed '$ s/$/ and Linux/' <file >file.new &&
mv file.new file which on Linux can be shortened to sed -i '$ s/$/ and Linux/' file If you want to remove the last byte in a file, Linux (more precisely GNU coreutils) offers the truncate command, which makes this very easy. truncate -s -1 file A POSIX way to do it is with dd . First determine the file length, then truncate it to one byte less. length=$(wc -c <file)
dd if=/dev/null of=file obs="$((length-1))" seek=1 Note that both of these unconditionally truncate the last byte of the file. You may want to check that it's a newline first: length=$(wc -c <file)
if [ "$length" -ne 0 ] && [ -z "$(tail -c -1 <file)" ]; then
# The file ends with a newline or null
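# (comment added for clarity) /dev/null supplies no data, so dd only seeks one
# (length-1)-byte output block and, without conv=notrunc, truncates the file there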
dd if=/dev/null of=file obs="$((length-1))" seek=1
fi | {
"source": [
"https://unix.stackexchange.com/questions/254644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
254,956 | What is the difference between Docker, LXD, and LXC. Do they offer the same services or different. | No, LXC, Docker, and LXD, are not quite the same. In short: LXC LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host) https://wiki.archlinux.org/index.php/Linux_Containers low level ... https://linuxcontainers.org/ Docker by Docker, Inc a container system making use of LXC containers so you can: Build, Ship, and Run Any App, Anywhere http://www.docker.com LXD by Canonical, Ltd a container system making use of LXC containers so that you can: run LXD on Ubuntu and spin up instances of RHEL, CentOS, SUSE, Debian, Ubuntu and just about any other Linux too, instantly, ... http://www.zdnet.com/article/ubuntu-lxd-not-a-docker-replacement-a-docker-enhancement/ Docker vs LXD Docker specializes in deploying apps LXD specializes in deploying (Linux) Virtual Machines Source: http://linux.softpedia.com/blog/infographic-lxd-machine-containers-from-ubuntu-linux-492602.shtml Originally: https://insights.ubuntu.com/2015/09/23/infographic-lxd-machine-containers-from-ubuntu/ Minor technical note installing LXD includes a command line program coincidentally named lxc http://blog.scottlowe.org/2015/05/06/quick-intro-lxd/ | {
"source": [
"https://unix.stackexchange.com/questions/254956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148778/"
]
} |
255,035 | I have a huge csv file with 10 fields separated by commas. Unfortunately, some lines are malformed and do not contain exactly 10 commas (what causes some problems when I want to read the file into R). How can I filter out only the lines that contain exactly 10 commas? | Another POSIX one: awk -F , 'NF == 11' <file If the line has 10 commas, then there will be 11 fields in this line. So we simply make awk use , as the field delimiter. If the number of fields is 11, the condition NF == 11 is true, awk then performs the default action print $0 . | {
"source": [
"https://unix.stackexchange.com/questions/255035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31337/"
]
} |
255,373 | I have 25GB text file that needs a string replaced on only a few lines. I can use sed successfully but it takes a really long time to run. sed -i 's|old text|new text|g' gigantic_file.sql Is there a quicker way to do this? | You can try: sed -i '/old text/ s//new text/g' gigantic_file.sql From this ref : OPTIMIZING FOR SPEED: If execution speed needs to be increased (due to
large input files or slow processors or hard disks), substitution will
be executed more quickly if the "find" expression is specified before
giving the "s/.../.../" instruction. Here is a comparison over a 10G file. Before: $ time sed -i 's/original/ketan/g' wiki10gb
real 5m14.823s
user 1m42.732s
sys 1m51.123s After: $ time sed -i '/ketan/ s//original/g' wiki10gb
real 4m33.141s
user 1m20.940s
sys 1m44.451s | {
"source": [
"https://unix.stackexchange.com/questions/255373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151547/"
]
} |
255,379 | Was reading this article about automatically logging into a raspberry pi and they say to use this command: 1:2345:respawn:/bin/login -f pi tty1 </dev/tty1 >/dev/tty1 2>&1 After going through the manual I see that -f means no auth and that pi is the user, but what does tty1 </dev/tty1 >/dev/tty1 2>&1 do? I assume tty1 is the terminal to login into or something, but then the following arguments are confusing as well. Why are there angle braces </dev/tty1 > ? Are they doing some weird redirection? I would really appreciate if someone could break it down. I'm not a fan of using commands I'm unfamiliar with. | | {
"source": [
"https://unix.stackexchange.com/questions/255379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102764/"
]
} |
255,480 | I get what I expected when doing this in bash : [ "a" == "a" ] && echo yes It gave me yes . But when I do this in zsh , I get the following: zsh: = not found Why does the same command ( /usr/bin/[ ) behave differently in different shells? | It's not /usr/bin/[ in either of the shells. In Bash, you're using the built-in test / [ command , and similarly in zsh . The difference is that zsh also has an = expansion : =foo expands to the path to the foo executable. That means == is treated as trying to find a command called = in your PATH . Since that command doesn't exist, you get the error zsh: = not found that you saw (and in fact, this same thing would happen even if you actually were using /usr/bin/[ ). You can use == here if you really want. This works as you expected in zsh: [ "a" "==" "a" ] && echo yes because the quoting prevents =word expansion running. You could also disable the equals option with setopt noequals . However, you'd be better off either: Using single = , the POSIX-compatible equality test ; or Better still, using the [[ conditionals with == in both Bash and zsh . In general, [[ is just better and safer all around, including avoiding this kind of issue (and others) by having special parsing rules inside. | {
"source": [
"https://unix.stackexchange.com/questions/255480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141443/"
]
} |
255,484 | I know how to create a bridge using brctl , but I have been advised not to use this anymore, and to use iproute2 or ip instead (since brctl is presumably deprecated). Assuming this is good advice, how do I create a bridge using ip ? For instance, say I wanted to bridge eth0 and eth1 . | You can use the bridge object of the ip command, or the bridge command that is part of the iproute2 package. Basic link manipulation To create a bridge named br0 that has eth0 and eth1 as members: ip link add name br0 type bridge
ip link set dev br0 up
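# (comment added for clarity) the next two commands attach eth0 and eth1 by making br0 their master;
# membership can be verified afterwards with: ip link show master br0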
ip link set dev eth0 master br0
ip link set dev eth1 master br0 To remove an interface from the bridge: ip link set dev eth0 nomaster And finally, to destroy a bridge after no interface is member: ip link del br0 Forwarding manipulation To manipulate other aspects of the bridge like the FDB( Forwarding Database ) I suggest you to take a look at the bridge(8) command . Examples: Show forwarding database on br0 bridge fdb show dev br0 Disable a port( eth0 ) from processing BPDUs . This will make the interface filter any incoming bpdu bridge link set dev eth0 guard on Setting STP Cost to a port( eth1 for example): bridge link set dev eth1 cost 4 To set root guard on eth1: bridge link set dev eth1 root_block on Cost is calculated using some factors, and the link speed is one of them. Using a fix cost and disabling the processing of BPDUs and enabling root_block is somehow simmilar to a guard-root feature from switches. Other features like vepa, veb and hairpin mode can be found on bridge link sub-command list. VLAN rules manipulation The vlan object from the bridge command will allow you to create ingress/egress filters on bridges. To show if there is any vlan ingress/egress filters: bridge vlan show To add rules to a given interface: bridge vlan add dev eth1 <vid, pvid, untagged, self, master> To remove rules. Use the same parameters as vlan add at the end of the command to delete a specific rule. bridge vlan delete dev eth1 Related stuff: bridge(8) manpage How to create a bridge interface | {
"source": [
"https://unix.stackexchange.com/questions/255484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92090/"
]
} |
255,509 | When dual booting Windows 7/10 and Linux Mint/Ubuntu, you may find yourself having to re-pair your Bluetooth devices again and again. This will happen every time you switch OS. Now, how do you prevent this? I'm answering my own question with the following guide, which has been tested on Ubuntu 14.4 and Linux Mint 17.2, 17.3 and now Linux Mint 18. x . | Why does this happen? Basically, when you pair your device, your Bluetooth service generates a unique set of pairing keys. First, your computer stores the Bluetooth device's MAC address and pairing key. Second, your Bluetooth device stores your computer's MAC address and the matching key. This usually works fine, but the MAC address for your Bluetooth port will be the same on both Linux and Windows (it is set on the hardware level). Thus, when you re-pair the device in Windows or Linux and it generates a new key, that key overwrites the previously stored key on the Bluetooth device. Windows overwrites the Linux key and vice versa. Bluetooth LE Devices: These may pair differently. I haven't investigated myself, but this may help Dual Boot Bluetooth LE (low energy) device pairing How to fix Using the instructions below, we'll first pair your Bluetooth devices with Ubuntu/Linux Mint, and then we'll pair Windows. Then we'll go back into our Linux system and copy the Windows-generated pairing key(s) into our Linux system. Pair all devices w/ Mint/Ubuntu Pair all devices w/ Windows Copy your Windows pairing keys in one of two ways: Use psexec -s -i regedit.exe from Windows (harder). You need psexec as normal regedit doesn't have enough permissions to show this values. Go to "Device & Printers" in Control Panel and go to your Bluetooth device's properties. Then, in the Bluetooth section, you can find the unique identifier. Copy that (you will need it later). Note: on newer versions of windows the route to the device's properties is to go through Settings -> Bluetooth & devices -> Devices -> More devices and printer settings Download PsExec from http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx . Unzip the zip you downloaded and open a cmd window with elevated privileges. (Click the Start menu, search for cmd , then right-click the CMD and click "Run as Administrator".) cd into the folder where you unzipped your download. Run psexec -s -i regedit.exe Navigate to find the keys at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BTHPORT\Parameters\Keys .
If there is no CurrentControlSet , try ControlSet001 . You should see a few keys labels with the MAC addresses - write down the MAC address associated with the unique identifier you copied before. Note: If there are no keys visible after pairing, you likely need to add permissions to read Keys\ Use chntpw from your Linux distro (easier). Start in a terminal then: sudo apt-get install chntpw Mount your Windows system drive in read-write mode cd /[WindowsSystemDrive]/Windows/System32/config chntpw -e SYSTEM opens a console Run these commands in that console: > cd CurrentControlSet\Services\BTHPORT\Parameters\Keys
> # if there is no CurrentControlSet, then try ControlSet001
> # on Windows 7, "services" above is lowercased.
> ls
# shows you your Bluetooth port's MAC address
Node has 1 subkeys and 0 values
key name
<aa1122334455>
> cd aa1122334455 # cd into the folder
> ls
# lists the existing devices' MAC addresses
Node has 0 subkeys and 1 values
size type value name [value if type DWORD]
16 REG_BINARY <001f20eb4c9a>
> hex 001f20eb4c9a
=> :00000 XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX ...ignore..chars..
# ^ the XXs are the pairing key Make a note of which Bluetooth device MAC address matches which pairing key. The Mint/Ubuntu one won't need the spaces in-between. Ignore the :00000 . Go back to Linux (if not in Linux) and add our Windows key to our Linux config entries. Just note that the Bluetooth port's MAC address is formatted differently when moving from Windows to Linux - referenced as aa1122334455 in Windows in my example above.
The Linux version will be in all caps and punctuated by ':' after every two characters - for example AA:11:22:33:44:55.
Based on your version of Linux, you can do one of these: Before Mint 18/16.04 you could do this: sudo edit /var/lib/bluetooth/[MAC address of Bluetooth]/linkkeys - [the MAC address of Bluetooth] should be the only folder in that Bluetooth folder. This file should look something like this: [Bluetooth MAC] [Pairing key] [digits in pin] [0]
AA:11:22:33:44:55 XXXXXXXXxxXXxXxXXXXXXxxXXXXXxXxX 5 0
00:1D:D8:3A:33:83 XXXXXXXXxxXXxXxXXXXXXxxXXXXXxXxX 4 0 Change the Linux pairing key to the Windows one, minus the spaces. In Mint 18 (and Ubuntu 16.04) you may have to do this: Switch to root: su - (In more modern versions of Ubuntu, 'sudo -i') cd to your Bluetooth config location /var/lib/bluetooth/[bth port MAC addresses] Here you'll find folders for each device you've paired with. The folder names being the Bluetooth devices' MAC addresses and contain a single file info . In these files, you'll see the link key you need to replace with your Windows ones, like so: [LinkKey]
Key=B99999999FFFFFFFFF999999999FFFFF Once updated, restart your Bluetooth service in one of the following ways, and then it works! Ubuntu, Mint, Arch: sudo systemctl restart bluetooth Alternatively, reboot your machine into Linux. Reboot into Windows - it works! | {
"source": [
"https://unix.stackexchange.com/questions/255509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151465/"
]
} |
255,581 | What does set command without arguments do? As I can see it prints out my environment variables just like env command but in alphabetical order. And further it prints some different information (variables? functions?) like: __git_printf_supports_v=yes
__grub_script_check_program=grub-script-check
...
quote ()
{
local quoted=${1//\'/\'\\\'\'};
printf "'%s'" "$quoted"
}
quote_readline ()
{
local quoted;
_quote_readline_by_ref "$1" ret;
printf %s "$ret"
} What is it and where does it come from? I cannot find information about set command without arguments. Actually I don't have a man page for set in my Linux distribution at all. | set is a shell built-in that displays all shell variables, not only the environment ones, and also shell functions, which is what you are seeing at the end of the list. Variables are displayed with a syntax that allow them to be set when the lines are executed or sourced. From bash manual page : If no options or arguments are supplied, set displays the names and values of all shell variables and functions, sorted according to the current locale, in a format that may be reused as input for setting or resetting the currently-set variables. On different shells, the behavior is not necessarily the same; for example, ksh set doesn't display shell functions. | {
"source": [
"https://unix.stackexchange.com/questions/255581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112424/"
]
} |
256,029 | If I run these commands: dmesg | head -n 10 I presume the OS sends back some kind of signal to dmesg once head has read 10 lines. How does this work? What does head tell the kernel? This is different from a program dying since this is a normal, 'clean' stop. | It depends on the OS buffers and the timing between the 10th and 11th writes of dmesg . After head writes 10 lines, it terminates and dmesg will receive SIGPIPE signal if it continues writing to the pipe. Depending on your OS buffer, dmesg will often write more than 10 lines before head consumes them. To see that head had consumed more than 10 lines, you can use: strace -f sh -c 'dmesg | head -n 10' (Look at the head process, count on number of read system calls.) To see how the writing speed effect: strace -f sh -c "perl -le '$|++;print 1 while 1' | head -n 10" | {
"source": [
"https://unix.stackexchange.com/questions/256029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5352/"
]
} |
256,055 | I have a new clean installation of CentOS 6.7, it is not on a VM, but on a dedicated notebook. During the installation procedure I've configured my WiFi connection and have added it on eth0 pre-configured connection. I have specified all: name of the connection SSID mode: hoc band: automatic channel: pre-configured MTU: automatic checked automatic connection box and available for all users box too In security section I have inserted WPA & WPA2 Personal, then the corresponding password of the router. In IPv4 section: automatic (DHCP) method and checked the completion of this connection with IPv4 addressing. In IPv6 section: ignore method I log in with root user and corresponding password for have all privileges. The WiFi spy on the WiFi key of the notebook is on, the router is on and Internet works with other devices. But if I ping google.com , it says: unknown host google.com while if I ping 8.8.8.8 , it says: Network is unreachable Since I have configured all the connection data and checked the automatic connection box, I expected that the connection would be automatic when I log in. Is there something that I did wrong? Hope in some friendly advice. | | {
"source": [
"https://unix.stackexchange.com/questions/256055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149852/"
]
} |
256,120 | I would like to simplify the output of a script by suppressing the output of secondary commands that are usually successful. However, using -q on them hides the output when they occasionally fail, so I have no way of understanding the error. Additionally, these commands log their output on stderr . Is there a way to suppress a command's output only if it succeeds ? For example (but not limited to) something like this: mycommand | fingerscrossed If all goes well, fingerscrossed catches the output and discards it. Else it echoes it to the standard or error output (whatever). | moreutils ' chronic command does just that: chronic mycommand will swallow mycommand 's output, unless it fails, in which case the output is displayed. | {
"source": [
"https://unix.stackexchange.com/questions/256120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11791/"
]
} |
256,149 | I have found multiple examples of "esac" appearing at the end of a bash case statement but I have not found any clear documentation on it's use. The man page uses it, and even has an index on the word ( https://www.gnu.org/software/bash/manual/bashref.html#index-esac ), but does not define it's use. Is it the required way to end a case statement, best practice, or pure technique? | Like fi for if and done for for , esac is the required way to end a case statement. esac is case spelled backward, rather like fi is if spelled backward. I don't know why the token ending a for block is not rof . | {
"source": [
"https://unix.stackexchange.com/questions/256149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87949/"
]
} |
256,303 | time writes to stderr , so one would assume that adding 2>&1 to the command line should route its output to stdout . But this does not work: test@debian:~$ cat file
one two three four
test@debian:~$ time wc file > wc.out 2>&1
real 0m0.022s
user 0m0.000s
sys 0m0.000s
test@debian:~$ cat wc.out
1 4 19 file Only with parentheses it works: test@debian:~$ (time wc file) > wc.out 2>&1
test@debian:~$ cat wc.out
1 4 19 file
real 0m0.005s
user 0m0.000s
sys 0m0.000s Why are parentheses needed in this case? Why isn't time wc interpreted as one single command? | In ksh , bash and zsh , time is not a command (builtin or not), it's a reserved word in the language like for or while . It's used to time a pipeline 1 . In: time for i in 1 2; do cmd1 "$i"; done | cmd2 > redir You have special syntax that tells the shell to run that pipe line: for i in 1 2; do cmd1 "$i"; done | cmd2 > redir And report timing statistics for it. In: time cmd > output 2> error It's the same, you're timing the cmd > output 2> error command, and the timing statistics still go on the shell's stderr. You need: { time cmd > output 2> error; } 2> timing-output Or: exec 3>&2 2> timing-output
time cmd > output 2> error 3>&-
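# (comment added for clarity) fd 3 holds a copy of the original stderr saved by the first exec;
# the next exec restores stderr from it and then closes fd 3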
exec 2>&3 3>&- For the shell's stderr to be redirected to timing-output before the time construct (again, not command ) is used (here to time cmd > output 2> error 3>&- ). You can also run that time construct in a subshell that has its stderr redirected: (time cmd > output 2> error) 2> timing-output But that subshell is not necessary here, you only need stderr to be redirected at the time that time construct is invoked. Most systems also have a time command. You can invoke that one by disabling the time keyword. All you need to do is quote that keyword somehow as keywords are only recognised as such when literal. 'time' cmd > output 2> error-and-timing-output But beware the format may be different and the stderr of both time and cmd will be merged into error-and-timing-output . Also, the time command, as opposed to the time construct cannot time pipelines or compound commands or functions or shell builtins... If it were a builtin command, it might be able to time function invocations or builtins, but it could not time redirections or pipelines or compound commands. 1 Note that bash has (what can be considered as) a bug whereby time (cmd) 2> file (but not time cmd | (cmd2) 2> file for instance) redirects the timing output to file | {
"source": [
"https://unix.stackexchange.com/questions/256303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
256,494 | I understand that sed is a command to manipulate text file. From my Googling, it seems -i means perform the operation on the file itself, is this correct? What about '1d' ? | In sed : -i option will edit the input file in-place '1d' will remove the first line of the input file Example: % cat file.txt
foo
bar
% sed -i '1d' file.txt
% cat file.txt
bar Note that, most of the time it's a good idea to take a backup while using the -i option so that you have the original file backed up in case of any unexpected change. For example, if you do: sed -i.orig '1d' file.txt the original file will be kept as file.txt.orig and the modified file will be file.txt . | {
"source": [
"https://unix.stackexchange.com/questions/256494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15657/"
]
} |
256,495 | When I try to execute mail from inside a function in a bash script it creates something similar to a fork bomb. To clarify, this creates the issue: #!/bin/bash
mail() {
echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"
}
mail
exit 0 Sometimes you can just kill the command and it'll kill the child processes, but sometimes you'll have to killall -9 . It doesn't care whether the mail were sent or not. The fork bomb is created either way. And it doesn't seem as adding any check for the exit code, such as if ! [ "$?" = 0 ] , helps. But the script below works as intended, either it outputs an error or it sends the mail. #!/bin/bash
echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"
exit 0 Why does this happen? And how would you go about checking the exit code of the mail command? | You're invoking the function mail from within the same function: #!/bin/bash
mail() {
# This actually calls the "mail" function
# and not the "mail" executable
echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"
}
mail
exit 0 This should work: #!/bin/bash
mailfunc() {
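# "mail" below now resolves to the external mail command,
# since no function of that name shadows it any more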
echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"
}
mailfunc
exit 0 Note that function name is no longer invoked from within the function itself. | {
"source": [
"https://unix.stackexchange.com/questions/256495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149009/"
]
} |
256,497 | Throughout the POSIX specification, there's provision ( 1 , 2 , 3 ...) to allow implementations to treat a path starting with two / specially. A POSIX application (an application written to the POSIX specification to be portable to all POSIX compliant systems) cannot assume that //foo/bar is the same as /foo/bar (though they can assume that ///foo/bar is the same as /foo/bar ). Now what are those POSIX systems (historical and still maintained) that treat //foo specially? I believed (I've now been proven wrong ) that POSIX provision was pushed by Microsoft for their Unix variant (XENIX) and possibly Windows POSIX layer (can anyone confirm that?). It is used by Cygwin which also is a POSIX-like layer for Microsoft Windows. Are there any non-Microsoft Windows systems? OpenVMS? On systems where //foo/bar is special, what is it used for? //host/path for network file systems access? Virtual file systems? Do some applications running on Unix-likes —if not the system's API— treat //foo/bar paths specially (in contexts where they otherwise treat /foo/bar as the path on the filesystem)? Edit , I've since asked a question on the austin-group mailing list about the origin of //foo/bar handling in the spec, and the discussion is an interesting read (from an archaeology point of view at least). | This is a compilation and index of the answers given so far. This post is community wiki , it can be edited by anybody with 100+ reputation and nobody gets reputation from it. Feel free to post your own answer and add a link to it in here (or wait for me to do it). Ideally, this answer should just be a summary (with short entries while individual other answers would have the details). Currently actively maintained systems: Cygwin . A POSIX layer for Microsoft Windows. Used for Windows UNC paths . UWIN since 1.3. Another POSIX layer for Windows. Used at least for //host/file network file sharing paths. @OlivierDulac IBM z/OS as mentioned in the POSIX bug tracker , z/OS resolves //pathname requests to MVS datasets , not to network files. Example . Defunct systems @BinaryZebra Apollo Domain/OS (confirmed). Also mentioned at Official Description UNC (Universal Naming Convention) as the possible origin of //host/path notations ( see also , page 2-15). According to Donn Terry , it was HP (which acquired Apollo Computers) that pushed for inclusion of that provision in the POSIX spec for Domain/OS. @jillagre Tektronix Utek ( corroborated ), where //host/path is a path on a distributed file system . @gilles QNX 4 with the FLEET distributed processing system, where //123/ path is a / path on node 123. (Mentioned in the QNX 6 documentation .) @roaima AT&T SysV Release 3 (unverified). //host/path in (discontinued in SVR4) RFS Remote File Sharing system. @Scott SEL/Gould UTX-32 (unverified). Used for //host/path . Applications that treat //foo/bar specially for paths @Prem Perforce where //depot/A/B/C/D refers to a path in a depot . @WChargin Blender . In its configuration you use a // prefix for relative paths (to the blend associated with the data-block) . The Bazel build system uses a // prefix for labels of targets within the Bazel build graph . | {
"source": [
"https://unix.stackexchange.com/questions/256497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
257,014 | I see there is an executable called "[" in /usr/bin . What is its purpose? | In most cases, [ is a shell builtin and is equivalent to test . However, like test , it also exists as a standalone executable: that's the /bin/[ you saw. You can test this with type -a [ (on an Arch Linux system, running bash ): $ type -a [
[ is a shell builtin
[ is /bin/[ So, on my system, I have two [ : my shell's builtin and the executable in /bin . The executable is documented in man test : TEST(1) User Commands TEST(1)
NAME
test - check file types and compare values
SYNOPSIS
test EXPRESSION
test
[ EXPRESSION ]
[ ]
[ OPTION
DESCRIPTION
Exit with the status determined by EXPRESSION.
[ ... ] As you can see in the excerpt of the man page quoted above, test and [ are equivalent. The /bin/[ and /bin/test commands are specified by POSIX which is why you'll find them despite the fact that many shells also provide them as builtins. Their presence ensures that constructs like: [ "$var" -gt 10 ] && echo yes will work even if the shell running them doesn't have a [ builtin. For example, in tcsh : > which [
/sbin/[
> set var = 11
> [ "$var" -gt 10 ] && echo yes
yes | {
"source": [
"https://unix.stackexchange.com/questions/257014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130702/"
]
} |
257,297 | Let me give an example: $ timeout 1 yes "GNU" > file1
$ wc -l file1
11504640 file1 $ for ((sec0=`date +%S`;sec<=$(($sec0+5));sec=`date +%S`)); do echo "GNU" >> file2; done
$ wc -l file2
1953 file2 Here you can see that the command yes writes 11504640 lines in a second while I can write only 1953 lines in 5 seconds using bash's for and echo . As suggested in the comments, there are various tricks to make it more efficient but none come close to matching the speed of yes : $ ( while :; do echo "GNU" >> file3; done) & pid=$! ; sleep 1 ; kill $pid
[1] 3054
$ wc -l file3
19596 file3 $ timeout 1 bash -c 'while true; do echo "GNU" >> file4; done'
$ wc -l file4
18912 file4 These can write up to 20 thousand lines in a second. And they can be further improved to: $ timeout 1 bash -c 'while true; do echo "GNU"; done >> file5'
$ wc -l file5
34517 file5 $ ( while :; do echo "GNU"; done >> file6 ) & pid=$! ; sleep 1 ; kill $pid
[1] 5690
$ wc -l file6
40961 file6 These get us up to 40 thousand lines in a second. Better, but still a far cry from yes which can write about 11 million lines in a second! So, how does yes write to file so quickly? | In a nutshell: yes exhibits similar behavior to most other standard utilities which typically write to a FILE STREAM with output buffered by the libC via stdio . These only do the syscall write() every some 4kb (16kb or 64kb) or whatever the output block BUFSIZ is . echo is a write() per GNU . That's a lot of mode-switching (which is not, apparently, as costly as a context-switch ) . And that's not at all to mention that, besides its initial optimization loop, yes is a very simple, tiny, compiled C loop and your shell loop is in no way comparable to a compiler optimized program. But I was wrong: When I said before that yes used stdio , I only assumed it did because it behaves a lot like those that do. This was not correct - it only emulates their behavior in this way. What it actually does is very like an analog to the thing I did below with the shell: it first loops to conflate its arguments (or y if none) until they might grow no more without exceeding BUFSIZ . A comment from the source immediately preceding the relevant for loop states: /* Buffer data locally once, rather than having the
large overhead of stdio buffering each item. */ yes does its own write() s thereafter. Digression: (As originally included in the question and retained for context to a possibly informative explanation already written here) : I've tried timeout 1 $(while true; do echo "GNU">>file2; done;) but unable to stop loop. The timeout problem you have with the command substitution - I think I get it now and can explain why it doesn't stop. timeout doesn't start because its command-line is never run. Your shell forks a child shell, opens a pipe on its stdout and reads it. It will stop reading when the child quits, and then it will interpret all the child wrote for $IFS mangling and glob expansions, and with the results, it will replace everything from $( to the matching ) . But if the child is an endless loop that never writes to the pipe, then the child never stops looping, and timeout 's command-line is never completed before (as I guess) you do Ctrl + C and kill the child loop. So timeout can never kill the loop which needs to complete before it can start. Other timeout s: ... simply aren't as relevant to your performance issues as the amount of time your shell program must spend switching between user- and kernel-mode to handle output. timeout , though, is not as flexible as a shell might be for this purpose: where shells excel is in their ability to mangle arguments and manage other processes. As is noted elsewhere, simply moving your [fd-num] >> named_file redirection to the loop's output target rather than only directing output there for the command looped over can substantially improve performance because that way at least the open() syscall need only be done the once. This also is done below with the | pipe targeted as output for the inner loops. Direct comparison: You might do like: for cmd in exec\ yes 'while echo y; do :; done'
do set +m
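# (comment added for clarity) set +m switches job-control monitoring off for this run,
# presumably to silence the shell's termination notice when the backgrounded kill fires;
# set -m below turns it back on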
sh -c '{ sleep 1; kill "$$"; }&'"$cmd" | wc -l
set -m
done 256659456
505401 Which is kind of like the command sub relationship described before, but there's no pipe and the child is backgrounded until it kills the parent. In the yes case the parent has actually been replaced since the child was spawned, but the shell calls yes by overlaying its own process with the new one and so the PID remains the same and its zombie child still knows who to kill after all. Bigger buffer: Now let's see about increasing the shell's write() buffer. IFS="
"; set y "" ### sets up the macro expansion
until [ "${512+1}" ] ### gather at least 512 args
do set "$@$@";done ### exponentially expands "$@"
printf %s "$*"| wc -c ### 1 write of 512 concatenated "y\n"'s 1024 I chose that number because output strings any longer than 1kb were getting split out into separate write() 's for me. And so here's the loop again: for cmd in 'exec yes' \
'until [ "${512+:}" ]; do set "$@$@"; done
while printf %s "$*"; do :; done'
do set +m
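# (comment added for clarity) after the -c string, "shyes" becomes $0 inside the inner shell,
# and y plus the empty string seed "$@" for the set "$@$@" doubling loop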
sh -c $'IFS="\n"; { sleep 1; kill "$$"; }&'"$cmd" shyes y ""| wc -l
set -m
done 268627968
15850496 That's 300 times the amount of data written by the shell in the same amount of time for this test than the last. Not too shabby. But it's not yes . Related: As requested, there is a more thorough description than the mere code comments on what is done here at this link . | {
"source": [
"https://unix.stackexchange.com/questions/257297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
257,343 | For a while now, I have had the problem that a gzip process randomly starts on my Kubuntu system, uses up quite a bit of resources and causes my notebook fan to go crazy. The process shows up as gzip -c --rsyncable --best in htop and runs for quite a long time. I have no clue what is causing this, the system is a Kubuntu 14.04 and has no backup plan setup or anything like that. Any idea how I can figure out what is causing this the next time the process appears? I have done a bit of googling already but could not figure it out. I saw some suggestions with the ps command but grepping all lines there did not really point to anything. | Process tree While the process is running try to use ps with the f option to see the process hierarchy: ps axuf Then you should get a tree of processes, meaning you should see what the parent process of the gzip is. If gzip is a direct descendant of init then probably its parent has exited already, as it's very unlikely that init would create the gzip process. Crontabs Additionally you should check your crontab s to see whether there's anything creating it. Do sudo crontab -l -u <user> where user is the user of the gzip process you're seeing (in your case that seems to be root ). If you have any other users on that system which might have done stuff like setting up background services, then check their crontab s too. The fact that gzip runs as root doesn't guarantee that the original process that triggered the gzip was running as root as well. You can see a list of all existing crontab s by doing sudo ls /var/spool/cron/crontabs . Logs Check all the systems logs you have, looking for suspicious entries at the time the process is created. I'm not sure whether Kubuntu names its log files differently, but in standard Ubuntu you should at least check /var/log/syslog . Last choice: a gzip wrapper If none of these lead to any result you could rename your gzip binary and put a little wrapper in place which launches gzip with the passed parameters but also captures the system's state at that moment. | {
"source": [
"https://unix.stackexchange.com/questions/257343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153054/"
]
} |
257,484 | I'm trying to change my terminal emulator from xterm to eterm on Debian Jessie. I can't seem to find the tty config files. I've ran: sudo find / -name tty*.conf but that doesn't yield any results. Where are the config files, and how do I change the default terminal emulator? | In Debian, that is x-terminal-emulator : sudo update-alternatives --config x-terminal-emulator Further reading: Debian Alternatives System Virtual Package: x-terminal-emulator Debian Policy Manual:
Chapter 11 - Customized programs Debian Policy Manual: 11.8.3 Packages providing a terminal emulator | {
"source": [
"https://unix.stackexchange.com/questions/257484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137910/"
]
} |
257,485 | I have a list of IPs and I need to check them for opened ports using nmap .
So far, my script is like this: #!/bin/bash
filename="$1"
port="$2"
echo "STARTING NMAP"
while IFS= read -r line
do
nmap --host-timeout 15s -n $line -p $2 -oN output.txt | grep "Discovered open port" | awk {'print $6'} | awk -F/ {'print $1'} >> total.txt
done <"$filename" It works great but it's slow and I want to check, for example, 100 IPs from the file at once, instead of running them one by one. | | {
"source": [
"https://unix.stackexchange.com/questions/257485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153162/"
]
} |
257,514 | Suppose I have a list of URLs in a text file: google.com/funny
unix.stackexchange.com/questions
isuckatunix.com/ireallydo I want to delete everything that comes after '.com'. Expected Results: google.com
unix.stackexchange.com
isuckatunix.com I tried sed 's/.com*//' file.txt but it deleted .com as well. | To explicitly delete everything that comes after ".com", just tweak your existing sed solution to replace ".com(anything)" with ".com": sed 's/\.com.*/.com/' file.txt I tweaked your regex to escape the first period; otherwise it would have matched something like "thisiscommon.com/something". Note that you may want to further anchor the ".com" pattern with a trailing forward-slash so that you don't accidentally trim something like "sub.com.domain.com/foo": sed 's/\.com\/.*/.com/' file.txt | {
"source": [
"https://unix.stackexchange.com/questions/257514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152598/"
]
} |
257,571 | On my Arch install, /etc/bash.bashrc and /etc/skel/.bashrc contain these lines: # If not running interactively, don't do anything
[[ $- != *i* ]] && return On Debian, /etc/bash.bashrc has: # If not running interactively, don't do anything
[ -z "$PS1" ] && return And /etc/skel/.bashrc : # If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac According to man bash , however, non-interactive shells don't even read these files: When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following commands were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the filename. If I understand correctly, the *.bashrc files will only be read if BASH_ENV is set to point to them. This is something that can't happen by chance and will only occur if someone has explicitly set the variable accordingly. That seems to break the possibility of having scripts source a user's .bashrc automatically by setting BASH_ENV , something that could come in handy. Given that bash will never read these files when run non-interactively unless explicitly told to do so, why do the default *bashrc files disallow it? | This is a question that I was going to post here a few weeks ago. Like terdon , I understood that a .bashrc is only sourced for interactive Bash shells so there should be no need for .bashrc to check if it is running in an interactive shell. Confusingly, all the distributions I use (Ubuntu, RHEL and Cygwin) had some type of check (testing $- or $PS1 ) to ensure the current shell is interactive. I don’t like cargo cult programming so I set about understanding the purpose of this code in my .bashrc . Bash has a special case for remote shells After researching the issue, I discovered that remote shells are treated differently. While non-interactive Bash shells don’t normally run ~/.bashrc commands at start-up, a special case is made when the shell is Invoked by remote shell daemon : Bash attempts to determine when it is being run with its standard input
connected to a network connection, as when executed by the remote shell
daemon, usually rshd , or the secure shell daemon sshd . If Bash
determines it is being run in this fashion, it reads and executes commands
from ~/.bashrc, if that file exists and is readable. It will not do this if
invoked as sh . The --norc option may be used to inhibit this behavior,
and the --rcfile option may be used to force another file to be read, but
neither rshd nor sshd generally invoke the shell with those options or
allow them to be specified. Example Insert the following at the start of a remote .bashrc . (If .bashrc is sourced by .profile or .bash_profile , temporarily disable this while testing): echo bashrc
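# (this unconditional echo is exactly the kind of output that, as explained further below, breaks scp/sftp)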
fun()
{
echo functions work
} Run the following commands locally: $ ssh remote_host 'echo $- $0'
bashrc
hBc bash No i in $- indicates that the shell is non-interactive . No leading - in $0 indicates that the shell is not a login shell . Shell functions defined in the remote .bashrc can also be run: $ ssh remote_host fun
bashrc
functions work I noticed that the ~/.bashrc is only sourced when a command is specified as the argument for ssh . This makes sense: when ssh is used to start a regular login shell, .profile or .bash_profile are run (and .bashrc is only sourced if explicitly done so by one of these files). The main benefit I can see to having .bashrc sourced when running a (non-interactive) remote command is that shell functions can be run. However, most of the commands in a typical .bashrc are only relevant in an interactive shell, e.g., aliases aren’t expanded unless the shell is interactive. Remote file transfers can fail This isn’t usually a problem when rsh or ssh are used to start an interactive login shell or when non-interactive shells are used to run commands. However, it can be a problem for programs such as rcp , scp and sftp that use remote shells for transferring data. It turns out that the remote user’s default shell (like Bash) is implicitly started when using the scp command. There’s no mention of this in the man page – only a mention that scp uses ssh for its data transfer. This has the consequence that if the .bashrc contains any commands that print to standard output, file transfers will fail , e.g, scp fails without error . See also this related Red Hat bug report from 15 years ago, scp breaks when there's an echo command in /etc/bashrc (which was eventually closed as WONTFIX ). Why scp and sftp fail SCP (Secure copy) and SFTP (Secure File Transfer Protocol) have their own protocols for the local and remote ends to exchange information about the file(s) being transferred. Any unexpected text from the remote end is (wrongly) interpreted as part of the protocol and the transfer fails. According to a FAQ from the Snail Book What often happens, though, is that there are statements in either the
system or per-user shell startup files on the server ( .bashrc , .profile , /etc/csh.cshrc , .login , etc.) which output text messages on login,
intended to be read by humans (like fortune , echo "Hi there!" , etc.). Such code should only produce output on interactive logins, when there is a tty attached to standard input. If it does not make this test, it will
insert these text messages where they don't belong: in this case, polluting
the protocol stream between scp2 / sftp and sftp-server . The reason the shell startup files are relevant at all, is that sshd employs the user's shell when starting any programs on the user's behalf (using e.g. /bin/sh -c "command"). This is a Unix tradition, and has
advantages: The user's usual setup (command aliases, environment variables, umask,
etc.) are in effect when remote commands are run. The common practice of setting an account's shell to /bin/false to disable
it will prevent the owner from running any commands, should authentication
still accidentally succeed for some reason. SCP protocol details For those interested in the details of how SCP works, I found interesting information in How the SCP protocol works which includes details on Running scp with talkative shell profiles on the remote side? : For example, this can happen if you add this to your shell profile on the
remote system: echo "" Why it just hangs? That comes from the way how scp in source mode
waits for the confirmation of the first protocol message. If it's not binary
0, it expects that it's a notification of a remote problem and waits for
more characters to form an error message until the new line arrives. Since
you didn't print another new line after the first one, your local scp just
stays in a loop, blocked on read(2) . In the meantime, after the shell
profile was processed on the remote side, scp in sink mode was started,
which also blocks on read(2) , waiting for a binary zero denoting the start
of the data transfer. Conclusion / TLDR Most of the statements in a typical .bashrc are only useful for an interactive shell – not when running remote commands with rsh or ssh . In most such situations, setting shell variables, aliases and defining functions isn’t desired – and printing any text to standard out is actively harmful if transferring files using programs such as scp or sftp . Exiting after verifying that the current shell is non-interactive is the safest behaviour for .bashrc . | {
"source": [
"https://unix.stackexchange.com/questions/257571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
257,590 | I need to use SSH on my machine to access my website and its databases (setting up a symbolic link- but I digress). Following problem I enter the command: ssh-keygen -t dsa To generate public/private dsa key pair. I save it in the default ( /home/user/.ssh/id_dsa ) and enter Enter passphrase twice. Then I get this back: WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0755 for '/home/etc.ssh/id_rsa' are too open. It is recommended that your private key files are NOT accessible by others. This private key will be ignored. bad permissions: ignore key: [then the FILE PATH in VAR/LIB/SOMEWHERE] Now to work round this I then tried sudo chmod 600 ~/.ssh/id_rsa sudo chmod 600 ~/.ssh/id_rsa.pub But shortly after my computer froze up, and on logging back on there was a could not find .ICEauthority error . I got round this problem and deleted the SSH files but want to be able to use the correct permissions to avoid these issues in future. How should I set up ICEauthority, or where should I save the SSH Keys- or what permissions should they have? Would using a virtual machine be best? This is all very new and I am on a very steep learning curve, so any help appreciated. | chmod 600 ~/.ssh/id_rsa; chmod 600 ~/.ssh/id_rsa.pub (i.e. chmod u=rw,go= ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ) are correct. chmod 644 ~/.ssh/id_rsa.pub (i.e. chmod a=r,u+w ~/.ssh/id_rsa.pub ) would also be correct, but chmod 644 ~/.ssh/id_rsa (i.e. chmod a=r,u+w ~/.ssh/id_rsa ) would not be. Your public key can be public, what matters is that your private key is private. Also your .ssh directory itself must be writable only by you: chmod 700 ~/.ssh or chmod u=rwx,go= ~/.ssh . You of course need to be able to read it and access files in it (execute permission). It isn't directly harmful if others can read it, but it isn't useful either. You don't need sudo . Don't use sudo to manipulate your own files, that can only lead to mistakes. The error about .ICEauthority is not related to the chmod commands you show. Either it's a coincidence or you ran some other commands that you aren't showing us. | {
"source": [
"https://unix.stackexchange.com/questions/257590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153233/"
]
} |
257,598 | I can't seem to find any information on this aside from "the CPU's MMU sends a signal" and "the kernel directs it to the offending program, terminating it". I assumed that it probably sends the signal to the shell and the shell handles it by terminating the offending process and printing "Segmentation fault" . So I tested that assumption by writing an extremely minimal shell I call crsh (crap shell). This shell does not do anything except take user input and feed it to the system() method. #include <stdio.h>
#include <stdlib.h>
int main(){
char cmdbuf[1000];
while (1){
printf("Crap Shell> ");
fgets(cmdbuf, 1000, stdin);
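        /* system() passes the line to /bin/sh -c, so a separate shell
           process (not crsh itself) runs the command and reaps it */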
system(cmdbuf);
}
} So I ran this shell in a bare terminal (without bash running underneath). Then I proceeded to run a program that produces a segfault. If my assumptions were correct, this would either a) crash crsh , closing the xterm, b) not print "Segmentation fault" , or c) both. braden@system ~/code/crsh/ $ xterm -e ./crsh
Crap Shell> ./segfault
Segmentation fault
Crap Shell> [still running] Back to square one, I guess. I've just demonstrated that it's not the shell that does this, but the system underneath. How does "Segmentation fault" even get printed? "Who" is doing it? The kernel? Something else? How does the signal and all of its side effects propagate from the hardware to the eventual termination of the program? | All modern CPUs have the capacity to interrupt the currently-executing machine instruction. They save enough state (usually, but not always, on the stack) to make it possible to resume execution later, as if nothing had happened (the interrupted instruction will be restarted from scratch, usually). Then they start executing an interrupt handler , which is just more machine code, but placed at a special location so the CPU knows where it is in advance. Interrupt handlers are always part of the kernel of the operating system: the component that runs with the greatest privilege and is responsible for supervising execution of all the other components. 1,2 Interrupts can be synchronous , meaning that they are triggered by the CPU itself as a direct response to something the currently-executing instruction did, or asynchronous , meaning that they happen at an unpredictable time because of an external event, like data arriving on the network port. Some people reserve the term "interrupt" for asynchronous interrupts, and call synchronous interrupts "traps", "faults", or "exceptions" instead, but those words all have other meanings so I'm going to stick with "synchronous interrupt". Now, most modern operating systems have a notion of processes . At its most basic, this is a mechanism whereby the computer can run more than one program at the same time, but it is also a key aspect of how operating systems configure memory protection , which is is a feature of most (but, alas, still not all ) modern CPUs. It goes along with virtual memory , which is the ability to alter the mapping between memory addresses and actual locations in RAM. Memory protection allows the operating system to give each process its own private chunk of RAM, that only it can access. It also allows the operating system (acting on behalf of some process) to designate regions of RAM as read-only, executable, shared among a group of cooperating processes, etc. There will also be a chunk of memory that is only accessible by the kernel. 3 As long as each process accesses memory only in the ways that the CPU is configured to allow, memory protection is invisible. When a process breaks the rules, the CPU will generate a synchronous interrupt, asking the kernel to sort things out. It regularly happens that the process didn't really break the rules, only the kernel needs to do some work before the process can be allowed to continue. For instance, if a page of a process's memory needs to be "evicted" to the swap file in order to free up space in RAM for something else, the kernel will mark that page inaccessible. The next time the process tries to use it, the CPU will generate a memory-protection interrupt; the kernel will retrieve the page from swap, put it back where it was, mark it accessible again, and resume execution. But suppose that the process really did break the rules. It tried to access a page that has never had any RAM mapped to it, or it tried to execute a page that is marked as not containing machine code, or whatever. The family of operating systems generally known as "Unix" all use signals to deal with this situation. 
4 Signals are similar to interrupts, but they are generated by the kernel and fielded by processes, rather than being generated by the hardware and fielded by the kernel. Processes can define signal handlers in their own code, and tell the kernel where they are. Those signal handlers will then execute, interrupting the normal flow of control, when necessary. Signals all have a number and two names, one of which is a cryptic acronym and the other a slightly less cryptic phrase. The signal that's generated when the a process breaks the memory-protection rules is (by convention) number 11, and its names are SIGSEGV and "Segmentation fault". 5,6 An important difference between signals and interrupts is that there is a default behavior for every signal. If the operating system fails to define handlers for all interrupts, that is a bug in the OS, and the entire computer will crash when the CPU tries to invoke a missing handler. But processes are under no obligation to define signal handlers for all signals. If the kernel generates a signal for a process, and that signal has been left at its default behavior, the kernel will just go ahead and do whatever the default is and not bother the process. Most signals' default behaviors are either "do nothing" or "terminate this process and maybe also produce a core dump." SIGSEGV is one of the latter. So, to recap, we have a process that broke the memory-protection rules. The CPU suspended the process and generated a synchronous interrupt. The kernel fielded that interrupt and generated a SIGSEGV signal for the process. Let's assume the process did not set up a signal handler for SIGSEGV , so the kernel carries out the default behavior, which is to terminate the process. This has all the same effects as the _exit system call: open files are closed, memory is deallocated, etc. Up till this point nothing has printed out any messages that a human can see, and the shell (or, more generally, the parent process of the process that just got terminated) has not been involved at all. SIGSEGV goes to the process that broke the rules, not its parent. The next step in the sequence, though, is to notify the parent process that its child has been terminated. This can happen in several different ways, of which the simplest is when the parent is already waiting for this notification, using one of the wait system calls ( wait , waitpid , wait4 , etc). In that case, the kernel will just cause that system call to return, and supply the parent process with a code number called an exit status . 7 The exit status informs the parent why the child process was terminated; in this case, it will learn that the child was terminated due to the default behavior of a SIGSEGV signal. The parent process may then report the event to a human by printing a message; shell programs almost always do this. Your crsh doesn't include code to do that, but it happens anyway, because the C library routine system runs a full-featured shell, /bin/sh , "under the hood". crsh is the grandparent in this scenario; the parent-process notification is fielded by /bin/sh , which prints its usual message. Then /bin/sh itself exits, since it has nothing more to do, and the C library's implementation of system receives that exit notification. You can see that exit notification in your code, by inspecting the return value of system ; but it won't tell you that the grandchild process died on a segfault, because that was consumed by the intermediate shell process. 
Footnotes Some operating systems don't implement device drivers as part of the kernel; however, all interrupt handlers still have to be part of the kernel, and so does the code that configures memory protection, because the hardware doesn't allow anything but the kernel to do these things. There may be a program called a "hypervisor" or "virtual machine manager" that is even more privileged than the kernel, but for purposes of this answer it can be considered part of the hardware . The kernel is a program , but it is not a process; it is more like a library. All processes execute parts of the kernel's code, from time to time, in addition to their own code. There may be a number of "kernel threads" that only execute kernel code, but they do not concern us here. The one and only OS you are likely to have to deal with anymore that can't be considered an implementation of Unix is, of course, Windows. It does not use signals in this situation. (Indeed, it does not have signals; on Windows the <signal.h> interface is completely faked by the C library.) It uses something called " structured exception handling " instead. Some memory-protection violations generate SIGBUS ("Bus error") instead of SIGSEGV . The line between the two is underspecified and varies from system to system. If you've written a program that defines a handler for SIGSEGV , it is probably a good idea to define the same handler for SIGBUS . "Segmentation fault" was the name of the interrupt generated for memory-protection violations by one of the computers that ran the original Unix , probably the PDP-11 . " Segmentation " is a type of memory protection, but nowadays the term "segmentation fault " refers generically to any sort of memory protection violation. All the other ways the parent process might be notified of a child having terminated, end up with the parent calling wait and receiving an exit status. It's just that something else happens first. | {
"source": [
"https://unix.stackexchange.com/questions/257598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54466/"
]
} |
257,679 | For example, this keeps the gnuplot-x11 graph window open until a key is pressed: gnuplot -e "plot \"file\" ; pause -1 \"text\"" How to keep it open until manually closed? | Use the -p or --persist option: gnuplot --persist -e 'plot sin(x)' This will keep the window open until manually closed. From the man page : -p, --persist lets plot windows survive after main gnuplot program
exits. | {
"source": [
"https://unix.stackexchange.com/questions/257679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94448/"
]
} |
257,819 | I have a file that looks like this toy example. My actual file has 4 million lines, about 10 of which I need to delete. ID Data1 Data2
1 100 100
2 100 200
3 200 100
ID Data1 Data2
4 100 100
ID Data1 Data2
5 200 200 I want to delete the lines that look like the header, except for the first line. Final file: ID Data1 Data2
1 100 100
2 100 200
3 200 100
4 100 100
5 200 200 How can I do this? | header=$(head -n 1 input)
(printf "%s\n" "$header";
grep -vFxe "$header" input
) > output grab the header line from the input file into a variable print the header process the file with grep to omit lines that match the header capture the output from the above two steps into the output file | {
"source": [
"https://unix.stackexchange.com/questions/257819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124543/"
]
} |
257,986 | Please explain the usage of ${#1} below: getlable ()
{
if (( ${#1} == 0 )); then test="-"; else test="${1}"; fi;
} | ${#1} is the length (in number of characters) of $1 which is the first argument to the function. So (( ${#1} == 0 )) is a convoluted way to test whether the first argument is empty (or unset, unset parameters appear as empty when expanded) or not. To test for an empty parameter, the canonical way is: [ -z "$1" ] But there, more likely the intent was to check whether an argument was provided to the function in which case the syntax would be: [ "$#" -eq 0 ] (or (($# == 0)) if you want to make your script ksh/bash/zsh specific). In both cases however, Bourne-like shells have short cuts for that: test=${1:--} # set test to $1, or "-" if $1 is empty or not provided
test=${1--} # set test to $1, or "-" if $1 is not provided Now, if the intent is to pass that to cat or other text utility so that - (meaning stdin) is passed when no argument is provided, then you may not need any of that at all. Instead of: getlable() {
test=${1--}
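    # "--" marks the end of options, so a $test value beginning with "-" is not mistaken for an option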
cat -- "$test"
} Just do: getlable() {
cat -- "$@"
} The list of arguments to the function will be passed as-is to cat . If there's no argument, cat will receive no argument (and then read from stdin as if it had been a single - argument). And if there's one or more arguments they will all be passed as-is to cat . | {
"source": [
"https://unix.stackexchange.com/questions/257986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77910/"
]
} |
257,993 | I want to do the following with sed: every word in the file that contains a run of s characters should be replaced (the whole word) with the target string gggg . echo "duwdbnhb ssssssmnfkejfnei" | sed s'/ssssss*/gggg/g'
duwdbnhb ggggmnfkejfnei should be: duwdbnhb gggg Note - the run could be any number of s characters ( for example ss or sss or ssssss ...) Example: echo "duwdbnhb sssmnfkejfnei" | sed s'/s*/gggg/g'
duwdbnhb gggg example A echo "rf3 f34kf3ein3e ssghdwydgeug swswww ssswjdbuyhb" | sed s'/ss.*/gggg/'
rf3 f34kf3ein3e gggg but should print that: rf3 f34kf3ein3e gggg swswww gggg example B echo "rf3 f34kf3ein3e ssghdwydgeug swswww ssswjdbuyhb" | sed s'/s.*/gggg/'
rf3 f34kf3ein3e gggg but should print that: rf3 f34kf3ein3e gggg gggg gggg | ${#1} is the length (in number of characters) of $1 which is the first argument to the function. So (( ${#1} == 0 )) is a convoluted way to test whether the first argument is empty (or unset, unset parameters appear as empty when expanded) or not. To test for an empty parameter, the canonical way is: [ -z "$1" ] But there, more likely the intent was to check whether an argument was provided to the function in which case the syntax would be: [ "$#" -eq 0 ] (or (($# == 0)) if you want to make your script ksh/bash/zsh specific). In both cases however, Bourne-like shells have short cuts for that: test=${1:--} # set test to $1, or "-" if $1 is empty or not provided
test=${1--} # set test to $1, or "-" if $1 is not provided Now, if the intent is to pass that to cat or other text utility so that - (meaning stdin) is passed when no argument is provided, then you may not need any of that at all. Instead of: getlable() {
test=${1--}
cat -- "$test"
} Just do: getlable() {
cat -- "$@"
} The list of arguments to the function will be passed as-is to cat . If there's no argument, cat will receive no argument (and then read from stdin as if it had been a single - argument). And if there's one or more arguments they will all be passed as-is to cat . | {
"source": [
"https://unix.stackexchange.com/questions/257993",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153544/"
]
} |
258,074 | I'm running Debian Jessie 8.2. I have a bluetooth USB dongle connected to my machine. I run sudo bluetoothctl -a then do the following: [NEW] Controller 5C:F3:70:6B:57:60 debian [default]
Agent registered
[bluetooth]# scan on
Discovery started
[CHG] Controller 5C:F3:70:6B:57:60 Discovering: yes
[bluetooth]# devices
[NEW] Device 08:DF:1F:A7:B1:7B Bose Mini II SoundLink
[bluetooth]# pair 08:DF:1F:A7:B1:7B
Attempting to pair with 08:DF:1F:A7:B1:7B
[CHG] Device 08:DF:1F:A7:B1:7B Connected: yes
[CHG] Device 08:DF:1F:A7:B1:7B UUIDs:
0000110b-0000-1000-8000-00805f9b34fb
0000110c-0000-1000-8000-00805f9b34fb
0000110e-0000-1000-8000-00805f9b34fb
0000111e-0000-1000-8000-00805f9b34fb
00001200-0000-1000-8000-00805f9b34fb
[CHG] Device 08:DF:1F:A7:B1:7B Paired: yes
Pairing successful
[CHG] Device 08:DF:1F:A7:B1:7B Connected: no
[bluetooth]# trust 08:DF:1F:A7:B1:7B
[CHG] Device 08:DF:1F:A7:B1:7B Trusted: yes
Changing 08:DF:1F:A7:B1:7B trust succeeded
[bluetooth]# connect 08:DF:1F:A7:B1:7B
Attempting to connect to 08:DF:1F:A7:B1:7B
Failed to connect: org.bluez.Error.Failed But I can connect to my iPhone this way. Why can't I connect to my Bose Mini II SoundLink speaker? | This may be due to the pulseaudio-module-bluetooth package not being installed. Install it if it missing, then restart pulseaudio. sudo apt install pulseaudio-module-bluetooth
pulseaudio -k
pulseaudio --start If the issue is not due to the missing package, the problem in this case is that PulseAudio is not catching up. A common solution to this problem is to restart PulseAudio. Note that it is perfectly fine to run bluetoothctl as root while PulseAudio runs as user. After restarting PulseAudio, retry to connect. It is not necessary to repeat the pairing. Continue trying second part only if above does not work for you: If restarting PulseAudio does not work, you need to load module-bluetooth-discover. sudo pactl load-module module-bluetooth-discover The same load-module command can be added to /etc/pulse/default.pa .
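As a rough sketch (not part of the original answer), the persistent entry appended to /etc/pulse/default.pa could look like this, using PulseAudio's usual .ifexists guard so start-up does not fail if the module is ever absent:
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif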
If that still does not work, or you are using PulseAudio's system-wide mode, also load the following PulseAudio modules (again these can be loaded via your default.pa or system.pa): module-bluetooth-policy
module-bluez5-device
module-bluez5-discover | {
"source": [
"https://unix.stackexchange.com/questions/258074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148559/"
]
} |
258,284 | Is it possible to get current umask of a process? From /proc/<pid>/... for example? | Beginning with Linux kernel 4.7 ( commit ), the umask is available in /proc/<pid>/status . $ grep '^Umask:' "/proc/$$/status"
Umask: 0022 | {
"source": [
"https://unix.stackexchange.com/questions/258284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73160/"
]
} |
258,310 | I've tried putting parted magic versions on an flash drive using YUMI but every time I get an missing file error stating: This application has raised an unexpected error and must abort. [45] File or directory does not exist. os.debian.52 The flash drive is working and formatted with FAT32 as verified through gparted. YUMI also works successfully when I put Kali linux on it. As an alternative I tried multibootusb, which successfully puts parted magic on the USB drive but then it apparently doesn't do it correctly because after booting parted magic cannot find the SQFS file and is unable to load the GUI. According to this thread it may be a common problem with creating USB utilities. If there's a more appropriate forum for this just let me know. My OS is Ubuntu 15.04. | Beginning with Linux kernel 4.7 ( commit ), the umask is available in /proc/<pid>/status . $ grep '^Umask:' "/proc/$$/status"
Umask: 0022 | {
"source": [
"https://unix.stackexchange.com/questions/258310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153679/"
]
} |
258,341 | I've searched everywhere. Tried echo and print. Tried single and double quotes. But I have parsed data and assigned it to a variable and would like to then evaluate it for if there is a variable within it. I will then replace the variable with a wildcard and search for the file. Example: var="file.$DATE.txt"
### Where it goes wrong- Needs to identify that $DATE is within the $var varaible.
test=$(echo "$var"|grep '\$')
if [[ $test ]]
then
### I would use whatever fix is discovered here as well
test=$(echo $test|sed 's/\$[a-zA-Z]*/\*/')
fi
### (Actually pulling from remote machine to local)
cat $test > /tmp/temporary.file Here is at least one of my many failures: PROMPT> file=blah.$DATE
PROMPT> test=$(echo "$file"|grep '\$')
PROMPT> echo $test
PROMPT>
PROMPT> I know it has something to do with expansion, but have no idea how to work it out. Any help would be appreciated. Thanks! | If you need $date inside the variable var: var='file.$date.txt' That will keep the $ inside the variable: $ echo "$var" | grep '\$'
file.$date.txt | {
"source": [
"https://unix.stackexchange.com/questions/258341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136473/"
]
} |
258,503 | I'm wondering about the security of UNIX signals. SIGKILL will kill the process. So, what happens when a non-root user's process sends a signal to a root user's process? Does the process still carry out the signal handler? I follow the accepted answer (gollum's), and I type man capabilities , and I find a lot of things about the Linux kernel. From man capabilities : NAME
capabilities - overview of Linux capabilities
DESCRIPTION
For the purpose of performing permission checks, traditional UNIX
implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or
root), and unprivileged processes (whose effective UID is nonzero).
Privileged processes bypass all kernel permission checks, while
unprivileged processes are subject to full permission checking based
on the process's credentials (usually: effective UID, effective GID,
and supplementary group list).
Starting with kernel 2.2, Linux divides the privileges traditionally
associated with superuser into distinct units, known as capabilities ,
which can be independently enabled and disabled. Capabilities are a
per-thread attribute. | On Linux it depends on the file capabilities. Take the following simple mykill.c source: #include <stdio.h>
#include <sys/types.h>
#include <signal.h>
#include <stdlib.h>
void exit_usage(const char *prog) {
printf("usage: %s -<signal> <pid>\n", prog);
exit(1);
}
int main(int argc, char **argv) {
pid_t pid;
int sig;
if (argc != 3)
exit_usage(argv[0]);
sig = atoi(argv[1]);
pid = atoi(argv[2]);
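    /* the signal is given as a negative number on the command line (e.g. "-9"),
       so sig is negative here; the check below rejects anything else and the
       sign is flipped again before calling kill() */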
if (sig >= 0 || pid < 2)
exit_usage(argv[0]);
if (kill(pid, -sig) == -1) {
perror("failed");
return 1;
}
printf("successfully sent signal %d to process %d\n", -sig, pid);
return 0;
} build it: gcc -Wall mykill.c -o /tmp/mykill Now as user root start a sleep process in background: root@horny:/root# /bin/sleep 3600 &
[1] 16098 Now as normal user try to kill it: demouser@horny:/home/demouser$ ps aux | grep sleep
root 16098 0.0 0.0 11652 696 pts/20 S 15:06 0:00 sleep 500
demouser@horny:/home/demouser$ /tmp/mykill -9 16098
failed: Operation not permitted Now as root user change the /tmp/mykill caps: root@horny:/root# setcap cap_kill+ep /tmp/mykill And try again as normal user: demouser@horny:/home/demouser$ /tmp/mykill -9 16098
successfully sent signal 9 to process 16098 Finally please delete /tmp/mykill for obvious reasons ;) | {
"source": [
"https://unix.stackexchange.com/questions/258503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106512/"
]
} |
258,512 | Basically, I want to "pluck out" the first occurrence of -inf from the parameter list. (The remaining parameters will be passed along to a different command.) The script I have has the following structure: #!/bin/sh
<CODE>
for POSITIONAL_PARAM in "$@"
do
<CODE>
if [ "$POSITIONAL_PARAM" = '-inf' ]
then
<PLUCK $POSITIONAL_PARAM FROM $@>
break
fi
<CODE>
done
<CODE>
some-other-command "$@"
# end of script Is there a good way to do this? BTW, even though I am mainly interested in answers applicable to /bin/sh , I am also interested in answers applicable only to /bin/bash . | POSIXly: for arg do
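  # rebuild "$@" in place: shift each argument off the front and append it again unless it is -inf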
shift
[ "$arg" = "-inf" ] && continue
set -- "$@" "$arg"
done
printf '%s\n' "$@" The above code even works in pre-POSIX shells, except the original Almquist shell (Read Endnote ). Change the for loop to: for arg
do
...
done guarantee to work in all shells. Another POSIX one: for arg do
shift
case $arg in
(-inf) : ;;
(*) set -- "$@" "$arg" ;;
esac
done With this one, you need to remove the first ( in (pattern) to make it work in pre-POSIX shells. | {
"source": [
"https://unix.stackexchange.com/questions/258512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
258,656 | Bash behaviour I've just migrated from bash to zsh . In bash , I had the following line in ~/.inputrc . "\e\C-?": unix-filename-rubout Hence, Alt + Backspace would delete back to the previous slash, which was useful for editing paths. Separately, bash defaults to making Ctrl + w delete to the previous space , which is useful for deleting whole arguments (presuming they don't have spaces). Hence, there two slightly different actions performed with each key combination. Zsh behaviour In zsh , both Alt + Backspace and Ctrl + w do the same thing. They both delete the previous word, but they are too liberal with what constitutes a word-break, deleting up to the previous - or _ . Is there a way to make zsh behave similarly to bash , with two independent actions ? If it's important, I have oh-my-zsh installed. | A similar question was asked here: zsh: stop backward-kill-word on directory delimiter and a workable solution given: add these settings to your zshrc: autoload -U select-word-style
select-word-style bash | {
"source": [
"https://unix.stackexchange.com/questions/258656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
258,679 | I just noticed that on one of my machines (running Debian Sid) whenever I type ls any file name with spaces has single quotes surrounding it. I immediately checked my aliases, only to find them intact. wyatt@debian630:~/testdir$ ls
'test 1.txt' test1.txt
wyatt@debian630:~/testdir$ alias
alias ls='ls --color=auto'
alias wget='wget --content-disposition'
wyatt@debian630:~/testdir$ (picture) Another test, with files containing single quotes in their names (also answering a request by jimmij): wyatt@debian630:~/testdir$ ls
'test 1.txt' test1.txt 'thishasasinglequotehere'\''.txt'
wyatt@debian630:~/testdir$ touch "'test 1.txt'"
wyatt@debian630:~/testdir$ ls
''\''test 1.txt'\''' test1.txt
'test 1.txt' 'thishasasinglequotehere'\''.txt' (picture) update with new coreutils-8.26 output (which is admittedly much less confusing, but still irritating to have by default). Thanks to Pádraig Brady for this printout: $ ls
"'test 1.txt'" test1.txt
'test 1.txt' "thishasasinglequotehere'.txt"
$ ls -N
'test 1.txt' test1.txt
test 1.txt thishasasinglequotehere'.txt Why is this happening? How do I stop it properly? To be clear, I myself set ls to automatically color output. It just never put quotes around things before. I'm running bash and coreutils 8.25. Any way to fix this without a recompile? EDIT:
It appears the coreutils developers chose to break with the convention and make this the global default. UPDATE - October 2017 - Debian Sid has re-enabled the shell escape quoting by default. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877582 And at the bottom of the reply chain to the previous bug report, "the change was intentional and will remain."
# only way I can stop ls from escaping with backslashes
if [ -t 1 ]; then
/system/bin/ls -C $@ |cat
else
/system/bin/ls $@ |cat
fi
} | Preface : While it may be quite satisfying to upvote an answer such as this and call it a day, please be assured that the GNU coreutils maintainers do not care about SO answer votes, & that if you actually want to encourage them to change , you need to email them as this answer describes. Update 2019 : Sometime this past year the maintainers have doubled-down and now offer to any [email protected] reports about this issue only a boilerplate response pointing to an incredibly long page on their website listing problems people have with this change that they have committed themselves to ignoring . The unceasing pressure from [email protected] reports has clearly had an effect, forcing the generation of this immense & absurd page, and potentially reducing the number of maintainers willing to deal with the problem to only one. When this many people consider a thing a bug, then it's a bug whether maintainers disagree or not. Continuing to email them remains the simplest way to encourage change. " Why is this happening? " Several coreutils maintainers decided they knew better than decades of de facto standards. " How do I stop it properly? " http://www.gnu.org/software/coreutils/coreutils.html : Bug Reports If you think you have found a bug in Coreutils, then please send as
complete a bug report as possible to <[email protected]> , and it
will automatically be entered into the Coreutils bug tracker. Before
reporting bugs please read the FAQ. A very useful and often referenced
guide on how to write bug reports and ask good questions is the
document How To Ask Questions The Smart Way . You can browse previous
postings and search the bug-coreutils archive. Distros that have already reverted this change: Debian coreutils-8.25-2 Including consequently, presumably, Ubuntu and all of the hundreds of Debian-based and Ubuntu-based derivatives Distros unaffected: openSUSE (already used -N) " Any way to fix this without a recompile? " Proponents would have you... get back to the old format by adding -N to their ls alias …on all of your installs, everywhere, for the remainder of eternity. | {
"source": [
"https://unix.stackexchange.com/questions/258679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78861/"
]
} |
258,711 | netstat -s prints out a lot of very detailed protocol statistics like number of TCP reset messages received or number of ICMP "echo request" messages sent or number of packets dropped because of a missing route. When in Linux netstat is considered deprecated at nowadays, then is there an alternative? Statistics provided by ss -s are superficial compared to the ones provided by netstat . | Netstat is considered deprecated at nowadays and other programs included in the net-tools like arp, ifconfig, iptunnel, nameif, netstat , and route. The functionality provided by several of these utilities has been reproduced and improved in the new iproute2 suite, primarily by using its new ip command. Examples for deprecated commands and their replacements: arp → ip n ( ip neighbor ) ifconfig → ip a ( ip addr ), ip link , ip -s ( ip -stats ) iptunnel → ip tunnel iwconfig → iw nameif → ip link , ifrename netstat → ss , ip route (for netstat -r ), ip -s link (for netstat -i ), ip maddr (for netstat -g ) The netstat command reads various /proc files to gather information.
However, this approach scales poorly when there are many connections to display, which makes it slower.
The ss command gets its information directly from kernel space.
The options used with the ss commands are very similar to netstat ,
making it an easy replacement. Statistics provided by ss are superficial but it is considered the better alternative to netstat . [Citation needed] Examples ss | less # get all connections
ss -t # get TCP connections not in listen mode
ss -u # get UDP connections not in listen mode
ss -x # get Unix domain socket connections
ss -at # get all TCP connections (both listening and non-listening)
ss -au # get all UDP connections
ss -tn # TCP without service name resolution
ss -ltn # listening TCP without service name resolution
ss -ltp # listening TCP with PID and name
ss -s # prints statistics
ss -tn -o # TCP connections, show keepalive timer
ss -lt4 # IPv4 (TCP) connections See note in the netstat(8) manpage : NOTES This program is mostly obsolete.
Replacement for netstat is ss .
Replacement for netstat -r is ip route .
Replacement for netstat -i is ip -s link .
Replacement for netstat -g is ip maddr . | {
"source": [
"https://unix.stackexchange.com/questions/258711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
258,727 | What is the difference between below variables assignments? var=23
var =23
var= 23
var = 23 Is there any difference in space around the assignment operator? | That very much depends on the shell. If we only look at the 4 main shell families (Bourne, csh, rc, fish): Bourne family That is the Bourne shell and all its variants and ksh , bash , ash / dash , zsh , yash . var=23 : that's the correct scalar variable assignment syntax: a word that consists of unquoted letters, digits or underscores followed by an unquoted = that appears before a command argument (here it's on its own) var =23 , the var command with =23 as argument (except in zsh where =something is a special operator that expands to the path of the something command. Here, you'd likely to get an error as 23 is unlikely to be a valid command name). var= 23 : an assignment var= followed by a command name 23 . That's meant to execute 23 with var= passed to its environment ( var environment variable with an empty value). var = 23 , var command with = and 23 as argument. Try with echo = 23 for instance. ksh , zsh , bash and yash also support some forms of array / list variables with variation in syntax for both assignment and expansion. ksh93 , zsh and bash also have support for associative arrays with again variation in syntax between the 3. ksh93 also has compound variables and types , reminiscent of the objects and classes of object programming languages. Csh family csh and tcsh . Variable assignments there are with the set var = value syntax for scalar variables, set var = (a b) for arrays, setenv var value for environment variables, @ var=1+1 for assignment and arithmetic evaluation. So: var=23 is just invoking the var=23 command. var =23 is invoking the var command with =23 as argument. var= 23 is invoking the var= command with 23 as argument var = 23 is invoking the var command with = and 23 as arguments. Rc family That's rc , es and akanga . In those shells, variables are arrays and assignments are with var = (foo bar) , with var = foo being short for var = (foo) (an array with one foo element) and var = short for var = () (array with no element, use var = '' or var = ('') for an array with one empty element). In any case, blanks (space or tab) around = are allowed and optional. So in those shells those 4 commands are equivalent and equivalent to var = (23) to assign an array with one element being 23 . Fish In fish , the variable assignment syntax is set var value1 value2 . Like in rc , variables are arrays. So the behaviour would be the same as with csh , except that fish won't let you run a command with a = in its name. If you have such a command, you need to invoke it via sh for instance: sh -c 'exec weird===cmd' . So all var=23 and var= 23 will give you an error, var =23 will call the var command with =23 as argument and var = 23 will call the var command with = and 23 as arguments. | {
"source": [
"https://unix.stackexchange.com/questions/258727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154082/"
]
} |
258,931 | I was working through a tutorial and saw use of both cat myfile.txt and cat < myfile.txt . Is there a difference between these two sequences of commands? It seems both print the contents of a file to the shell. | In the first case, cat opens the file, and in the second case, the shell opens the file, passing it as cat 's standard input. Technically, they could have different effects. For instance, it would be possible to have a shell implementation that was more (or less) privileged than the cat program. For that scenario, one might fail to open the file, while the other could. That is not the usual scenario, but mentioned to point out that the shell and cat are not the same program. | {
"source": [
"https://unix.stackexchange.com/questions/258931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154239/"
]
} |
258,941 | I've a radeon r9 270x with four outputs, two DVI, one HDMI and one DisplayPort output. I'd like to configure the X server such that it has two screens, from a user's point of view it should provide DISPLAY 0.0 and 0.1. I tried with two Monitor, two Device and two Screen sections in /etc/X11/xorg.conf. This works if I don't specify "Screen" explicitely in the Device section but then I end up with a single Screen (DISPLAY=0.0). I tried to explicitely set the Screen number in the screen section (like below) but this didn't work. If I select Screen number 0 for the first Device Section and Screen number 1 for the second Device section then the X server starts, but from /var/log/Xorg.0.log it see that the X server tries to use the DisplayPort and HDMI outputs which are not connected. I I select Screen numbers 2 and 3 in the Device sections then the X server refuses to start. Section "Device"
Identifier "Device0"
Driver "radeon"
# Screen 1 # doesn't work
EndSection Any ideas how to get a dual screen set up with the radeon driver? This is debian unstable, Kernel 4.3 if it matters. | In the first case, cat opens the file, and in the second case, the shell opens the file, passing it as cat 's standard input. Technically, they could have different effects. For instance, it would be possible to have a shell implementation that was more (or less) privileged than the cat program. For that scenario, one might fail to open the file, while the other could. That is not the usual scenario, but mentioned to point out that the shell and cat are not the same program. | {
"source": [
"https://unix.stackexchange.com/questions/258941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99434/"
]
} |
258,942 | I was pointed here from AskUbuntu because my question was about an unsupported Ubuntu derivate, here is the copy-pasted question: I am aware i am asking a duplicate question, but since the questions( q-1 , q-2 ) are unanswered i am still going to ask it. Please, do not flag as duplicate as this implies that no answers are needed, thus leaving yet another question unaswered. I upgraded my fresh install of Netrunner 17 Horizon (Ubuntu-based, screenfetch reports that the OS is Wily), and after the reboot i got no GUI except the splash-screen. Removing the quiet bootflag shows Starting version 225 after the splash-screen vanishes, this message does not disappear and there is no further output. I had this problem a day ago, so i did a clean reinstall and this time i copied the terminal output of the upgrade: The terminal output of apt-get upgrade exceeded the new question character limit (30.000) at least 4 times, so i dropped the output in here > pastebin/Jybu3aQB Upgraded packages: about-distro
bind9-host
binutils
chromium-codecs-ffmpeg-extra
cups-browsed
cups-filters
cups-filters-core-drivers
curl
dkms
dnsutils
dpkg
dpkg-dev
ffmpeg
firefox
firefox-locale-en
firefox-plasma
flashplugin-installer
grub-common
grub-efi-amd64
grub-efi-amd64-bin
grub-efi-amd64-signed
grub2-common
gtk2-engines-qtcurve
initscripts
isc-dhcp-client
isc-dhcp-common
kate5-data
kde-config-gtk-style-preview
kde-l10n-engb
kde-style-oxygen-qt4
kde-style-qtcurve-qt4
kdelibs-bin
kdelibs5-data
kdelibs5-plugins
kdoctools
kio
kpackagelauncherqml
ksnapshot
ksshaskpass
ktexteditor-katepart
kwin
kwrited
libav-tools-links
libavcodec-extra
libavcodec-ffmpeg-extra56
libavdevice-ffmpeg56
libavfilter-ffmpeg5
libavformat-ffmpeg56
libavresample-ffmpeg2
libavutil-ffmpeg54
libbind9-90
libcupsfilters1
libcurl3
libcurl3-gnutls
libdlrestrictions1
libdns-export100
libdns100
libdpkg-perl
libepoxy0
libfontembed1
libirs-export91
libisc-export95
libisc95
libisccc90
libisccfg-export90
libisccfg90
libkcmutils4
libkde3support4
libkdeclarative5
libkdecore5
libkdesu5
libkdeui5
libkdewebkit5
libkdnssd4
libkemoticons4
libkf5iconthemes-bin
libkf5js5
libkf5notifyconfig-data
libkf5notifyconfig5
libkf5parts-plugins
libkf5plotting5
libkf5pty-data
libkf5pty5
libkf5service-bin
libkf5texteditor5-libjs-underscore
libkf5unitconversion-data
libkf5unitconversion5
libkfile4
libkhtml5
libkidletime4
libkio5
libkjsapi4
libkjsembed4
libkmediaplayer4
libknewstuff2-4
libknewstuff3-4
libknotifyconfig4
libkntlm4
libkparts4
libkprintutils4
libkpty4
libkrosscore4
libkrossui4
libktexteditor4
libldb1
liblwres90
libmysqlclient18
libmysqlclient18:i386
libnm-glib-vpn1
libnm-glib4
libnm-util2
libnm0
libnss3
libnss3-nssdb
liboxygenstyle5-5
liboxygenstyleconfig5-5
libperl5.20
libplasma3
libpng12-0
libpng12-0:i386
libpolkit-agent-1-0
libpolkit-backend-1-0
libpolkit-gobject-1-0
libpostproc-ffmpeg53
libpowerdevilui5
libqt5clucene5
libqt5concurrent5
libqt5x11extras5
libqtcurve-utils2
libsmbclient
libsndfile1
libsndfile1:i386
libsolid4
libswresample-ffmpeg1
libswscale-ffmpeg3
libthreadweaver4
libvlc5
libvlccore8
libwbclient0
libxml2
libxml2:i386
libxml2-utils
linux-firmware
linux-libc-dev
mysql-client-core-5.6
mysql-common
mysql-server-core-5.6
nano
netrunner-artwork
netrunner-default-settings
netrunner-desktop-containment
network-manager
openssh-client
openssl
oxideqt-codecs-extra
oxygen-sounds
perl
perl-base
perl-modules
policykit-1
python-apt
python-apt-common
python-ldb
python-libxml2
python-samba
python3-apt
python3-dbus.mainloop.pyqt5
qml-module-org-kde-extensionplugin
qml-module-qtgraphicaleffects
qtcurve
qtcurve-l10n
qtdeclarative5-kf5declarative
qtdeclarative5-kf5solid
rootactions-servicemenu
rsync
samba
samba-common
samba-common-bin
samba-dsdb-modules
samba-libs
samba-vfs-modules
sddm-theme-breeze
smbclient
sysv-rc
sysvinit-utils
thunderbird
thunderbird-locale-en
thunderbird-locale-en-us
thunderbird-plasma
unattended-upgrades
virtualbox
virtualbox-dkms
virtualbox-guest-dkms
virtualbox-guest-utils
virtualbox-guest-x11
virtualbox-qt
vlc
vlc-data
vlc-nox
vlc-plugin-notify
vlc-plugin-pulse
vlc-plugin-samba
wine
xserver-common
xserver-xorg-core I basically have no idea what has gone wrong, google and a search on SE did not reveal anything that i found applicable to my OS, versions, and situation. I am experiencing every symptom of this unanswered question except the screen going black, for me the trouble began after the reboot. I would really appreciate any kind of help, hint, or answer. | In the first case, cat opens the file, and in the second case, the shell opens the file, passing it as cat 's standard input. Technically, they could have different effects. For instance, it would be possible to have a shell implementation that was more (or less) privileged than the cat program. For that scenario, one might fail to open the file, while the other could. That is not the usual scenario, but mentioned to point out that the shell and cat are not the same program. | {
"source": [
"https://unix.stackexchange.com/questions/258942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136880/"
]
} |
259,045 | I'm trying to configure the network interface on embedded linux using ifconfig: ifconfig eth0 192.168.0.101 netmask 255.255.255.0 but I don't know how to add the default gateway as an ifconfig parameter, Any Ideas? | ifconfig is not the correct command to do that. You can use route like in route add default gw 192.168.0.254 for example. And if route is not present, but ip is, you can use it like this: ip route add default via 192.168.0.254 dev eth0 , assuming that 192.168.0.254 is the ip of your gateway | {
"source": [
"https://unix.stackexchange.com/questions/259045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105594/"
]
} |
259,069 | I have Windows 10 HOME installed on my system. After I installed Windows 10 HOME, I installed Ubuntu 14.04 LTS on a separate partition so that I could dual boot. I removed Ubuntu 14.04 LTS by deleting the partition it was installed on. Now I am unable to start my system. At boot, my system stops at the Grub command line. I want to boot to my Windows 10 installation which I haven't removed from my system. This is displayed at startup: GNU GRUB version 2.02 beta2-9ubuntu1.3
minimal BASH-like editing is supported. For the first word, TAB lists
possible command completions. Anywhere else TAB lists the possible device or file completion.
grub> How can I boot my Windows partition from this grub command? | Just enter the command exit . It should take you to another menu that makes you select the Windows bootloader. Worked on Lenovo Y50 | {
"source": [
"https://unix.stackexchange.com/questions/259069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154351/"
]
} |
259,088 | I am studying for a public exam and came across this question (originally in pt-BR). Before answering, I read about chmod and understood that the permissions are split into 3 groups (user, group, other), like this: Level u g o
Permission rwx r-x ---
Binary 111 101 000
Octal 7 5 0 So, why are there more than 9 (3x3) characters in the permission string (-r--rwx-rw-)? | Just enter the command exit . It should take you to another menu that makes you select the Windows bootloader. Worked on Lenovo Y50 | {
"source": [
"https://unix.stackexchange.com/questions/259088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154372/"
]
} |
259,193 | I know many examples of block devices (HDDs, SSDs, files, ...), but I haven't heard a simple definition of it. Especially since files are apparently included in the definition I feel a bit confused... | Probably you will never be able to find a simple definition of this. But in the most general and simplistic way, if you compare a character device to a block device, you can say the character device gives you direct access to the hardware, as in you put in one byte, that byte gets to the hardware (of course it is not as simple as that in this day and age). Whereas, the block device reads from and writes to the device in blocks of different sizes. You can specify the block size but since the communication is a block at a time, there is a buffering time involved. Think of a block device as a hard disk where you read and write one block of data at a time and, the character device is a serial port. You send one byte of data and other side receives that byte and then the next, and so forth and so on. Again, it is not a very simple concept to explain. The examples I gave are gross generalizations and can easily be refuted for some particular implementation of each example. | {
"source": [
"https://unix.stackexchange.com/questions/259193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33928/"
]
} |
259,640 | I have one server with net connectivity, where I can use "yum install $PACKAGE". I want some yum command, like yum cache-rpms $PACKAGE $DIRECTORY such that all required RPM files will be downloaded to $DIRECTORY, which will also have a file ( Install.sh ) stating the order in which to install these RPMs, on many other servers without net connectivity. Install.sh may even be a shell script, which has the same behaviour as yum install $PACKAGE , except that it will not use the network, but will only use $DIRECTORY . Possible? I am looking for a general solution where yum and RPM is available, but for specificity: It is on a set of CENTOS 6.7 servers. | Here's a specific example using "httpd" as the package to download and install. This process was tested on both CentOS6 and CentOS7. Install the stuff you need and make a place to put the downloaded RPMs: # yum install yum-plugin-downloadonly yum-utils createrepo
# mkdir /var/tmp/httpd
# mkdir /var/tmp/httpd-installroot Download the RPMs. This uses the installroot trick suggested here to force a full download of all dependencies since nothing is installed in that empty root. Yum will create some metadata in there, but we're going to throw it all away. Note that for CentOS7 releasever would be "7". # yum install --downloadonly --installroot=/var/tmp/httpd-installroot --releasever=6 --downloaddir=/var/tmp/httpd httpd Yes, that was the small version. You should have seen the size of the full-repo downloads! Generate the metadata needed to turn our new pile of RPMs into a YUM repo and clean up the stuff we no longer need: # createrepo --database /var/tmp/httpd
# rm -rf /var/tmp/httpd-installroot Configure the download directory as a repo. Note that for CentOS7 the gpgkey would be named "7" instead of "6": # vi /etc/yum.repos.d/offline-httpd.repo
[offline-httpd]
name=CentOS-$releasever - httpd
baseurl=file:///var/tmp/httpd
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 To check the missing dependencies: # repoclosure --repoid=offline-httpd I haven't figured out why on CentOS7 this reports things like libssl.so.10(libssl.so.10)(64bit) missing from httpd-tools when openssl-libs-1.0.1e-51.el7_2.2.x86_64.rpm (the provider of that library) is clearly present in the directory. Still, if you see something obviously missing, this might be a good chance to go back and add it using the same yum install --downloadonly method above. When offline or after copying the /var/tmp/httpd repo directory to the other server set up the repo there: # vi /etc/yum.repos.d/offline-httpd.repo
[offline-httpd]
name=CentOS-$releasever - httpd
baseurl=file:///var/tmp/httpd
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
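(If the offline server is reachable over ssh at some point, one simple way to move the repo directory across is a plain recursive copy, e.g. scp -r /var/tmp/httpd offline-host:/var/tmp/ , where offline-host is just a placeholder name; a USB drive or any other transfer method works equally well.)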
# yum --disablerepo=\* --enablerepo=offline-httpd install httpd Hopefully no missing dependencies! | {
"source": [
"https://unix.stackexchange.com/questions/259640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54246/"
]
} |
259,659 | I run free -m on a debian VM running on Hyper-V: total used free shared buffers cached
Mem: 10017 9475 541 147 34 909
-/+ buffers/cache: 8531 1485
Swap: 1905 0 1905 So out of my 10GB of memory, 8.5GB is in use and only 1500MB is free (excluding cache). But I struggle to find what is using the memory. The output of ps aux | awk '{sum+=$6} END {print sum / 1024}' , which is supposed to add up the RSS utilisation is: 1005.2 In other words, my processes only use 1GB of memory but the system as a whole (excluding cache) uses 8.5GB. What could be using the other 7.5GB? ps: I have another server with a similar configuration that shows used mem of 1200 (free mem = 8.8GB) and the sum of RSS usage in ps is 900 which is closer to what I would expect... EDIT cat /proc/meminfo on machine 1 (low memory): MemTotal: 10257656 kB
MemFree: 395840 kB
MemAvailable: 1428508 kB
Buffers: 162640 kB
Cached: 1173040 kB
SwapCached: 176 kB
Active: 1810200 kB
Inactive: 476668 kB
Active(anon): 942816 kB
Inactive(anon): 176184 kB
Active(file): 867384 kB
Inactive(file): 300484 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1951740 kB
SwapFree: 1951528 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 951016 kB
Mapped: 224388 kB
Shmem: 167820 kB
Slab: 86464 kB
SReclaimable: 67488 kB
SUnreclaim: 18976 kB
KernelStack: 6736 kB
PageTables: 13728 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 7080568 kB
Committed_AS: 1893156 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 62284 kB
VmallocChunk: 34359672552 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 67520 kB
DirectMap2M: 10418176 kB cat /proc/meminfo on machine 2 (normal memory usage): MemTotal: 12326128 kB
MemFree: 8895188 kB
MemAvailable: 10947592 kB
Buffers: 191548 kB
Cached: 2188088 kB
SwapCached: 0 kB
Active: 2890128 kB
Inactive: 350360 kB
Active(anon): 1018116 kB
Inactive(anon): 33320 kB
Active(file): 1872012 kB
Inactive(file): 317040 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 3442684 kB
SwapFree: 3442684 kB
Dirty: 44 kB
Writeback: 0 kB
AnonPages: 860880 kB
Mapped: 204680 kB
Shmem: 190588 kB
Slab: 86812 kB
SReclaimable: 64556 kB
SUnreclaim: 22256 kB
KernelStack: 10576 kB
PageTables: 11924 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 9605748 kB
Committed_AS: 1753476 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 62708 kB
VmallocChunk: 34359671804 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 63424 kB
DirectMap2M: 12519424 kB | I understand you're using Hyper-V, but the concepts are similar. Maybe this will set you on the right track. Your issue is likely due to virtual memory ballooning, a technique the hypervisor uses to optimize memory. See this link for a description I observed your exact same symptoms with my VMs in vSphere. A 4G machine with nothing running on it would report 30M used by cache, but over 3G "used" in the "-/+ buffers" line. Here's sample output from VMWare's statistics command. This shows how close to 3G is being tacked on to my "used" amount: vmware-toolbox-cmd stat balloon
3264 MB In my case, somewhat obviously, my balloon driver was using ~3G. I'm not sure what the similar command in Hyper-V is to get your balloon stats, but I'm sure you'll get similar results. | {
"source": [
"https://unix.stackexchange.com/questions/259659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48590/"
]
} |
260,162 | I know that with ps I can see the list or tree of the current processes running in the system. But what I want to achieve is to "follow" the new processes that are created when using the computer. As an analogy, just as tail -f follows new content appended to a file or to any input, I want to keep a running list of the processes that are being created. Is this even possible? | If kprobes are enabled in the kernel you can use execsnoop from perf-tools : In the first terminal: % while true; do uptime; sleep 1; done In another terminal: % git clone https://github.com/brendangregg/perf-tools.git
% cd perf-tools
% sudo ./execsnoop
Tracing exec()s. Ctrl-C to end.
Instrumenting sys_execve
PID PPID ARGS
83939 83937 cat -v trace_pipe
83938 83934 gawk -v o=1 -v opt_name=0 -v name= -v opt_duration=0 [...]
83940 76640 uptime
83941 76640 sleep 1
83942 76640 uptime
83943 76640 sleep 1
83944 76640 uptime
83945 76640 sleep 1
^C
Ending tracing... | {
"source": [
"https://unix.stackexchange.com/questions/260162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102926/"
]
} |
260,167 | Just wondering why this is not working #!/bin/bash
ls /bin
ls !$ I expect to run ls /bin twice, but the second one raises errors as !$ was not interpreted. Did I miss something, or does !$ only work on the command line? I couldn't find the relevant part in man bash (on mac). | History and history expansion are disabled by default when the shell runs non-interactively. You need: #!/bin/bash
set -o history
set -o histexpand
ls /bin
ls !$ or: SHELLOPTS=history:histexpand bash script.sh (note that this will affect all bash instances that script.sh may run). | {
"source": [
"https://unix.stackexchange.com/questions/260167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
260,323 | In bash , watch (e.g. watch -n 5 ls -l ) could be used to repeat the command at fixed intervals. This command seem to be missing on zsh. Is there an equivalent? | watch is not an internal command: $ type watch
/usr/bin/watch so make sure it is installed on the system where you are running zsh . | {
"source": [
"https://unix.stackexchange.com/questions/260323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155248/"
]
} |
260,533 | What command can be used to determine the used encryption on a LUKS partition (all the relevant information, initialization vector, generation scheme, mode of operation and block cipher primitive)? | If the decrypted volume is /dev/mapper/crypto then you can get the information with dmsetup table crypto
0 104853504 crypt aes-cbc-essiv:sha256 000[...]000 0 254:2 4096 If the encrypted volume is /dev/storage2/crypto then you get the information with cryptsetup luksDump /dev/storage2/crypto
LUKS header information for /dev/storage2/crypto
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha256
[...] | {
"source": [
"https://unix.stackexchange.com/questions/260533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
260,630 | I have a list of directories and subdirectories that contain large csv files. There are about 500 million lines in these files, each is a record. I would like to know How many lines are in each file. How many lines are in directory. How many lines in total Most importantly, I need this in 'human readable format' eg. 12,345,678 rather than 12345678 It would be nice to learn how to do this in 3 ways. Plain vanilla bash tools, awk etc., and perl (or python). | How many lines are in each file. Use wc , originally for word count, I believe, but it can do lines, words, characters, bytes, and the longest line length. The -l option tells it to count lines. wc -l <filename> This will output the number of lines in <filename> : $ wc -l /dir/file.txt
32724 /dir/file.txt You can also pipe data to wc : $ cat /dir/file.txt | wc -l
32724
$ curl google.com --silent | wc -l
63 How many lines are in directory. Try: find . -name '*.pl' | xargs wc -l another one-liner: ( find ./ -name '*.pl' -print0 | xargs -0 cat ) | wc -l BTW, wc counts newline characters, not lines: if the last line in a file does not end with a newline, that line will not be counted. You may use grep -c ^ instead, full example: #this example prints line count for all found files
total=0
while read -r FILE; do
# grep is used instead of wc so that a last line without a trailing newline is still counted
count=$(grep -c ^ < "$FILE")
echo "$FILE has $count lines"
let total=total+count # bash arithmetic; adapt this line for another shell
done < <(find /path -type f -name "*.php") # process substitution, not a pipe, so $total survives the loop
echo TOTAL LINES COUNTED: $total How many lines in total Not sure that I understood you request correctly. e.g. this will output results in the following format, showing the number of lines for each file: # wc -l `find /path/to/directory/ -type f`
103 /dir/a.php
378 /dir/b/c.xml
132 /dir/d/e.xml
613 total Alternatively, to output just the total number of newline characters without the file-by-file counts, the following command can prove useful: # find /path/to/directory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
613 Most importantly, I need this in 'human readable format' eg.
12,345,678 rather than 12345678 Bash has a printf builtin, and its ' (apostrophe) flag adds the locale's thousands grouping: printf "%'d\n" 12345678 prints 12,345,678 under a locale such as en_US.UTF-8. As always, there are many different methods that could be used to achieve the same results mentioned here. | {
"source": [
"https://unix.stackexchange.com/questions/260630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136941/"
]
} |
260,813 | At my company, when I log into some servers, my last login and a huge banner are displayed: me@my-laptop$ ssh the-server
Last login: Mon Feb 8 18:54:36 2016 from my-laptop.company.com
************************************************************************
* *
* C O M P A N Y I N F O R M A T I O N S Y S T E M S *
* *
* !WARNING! Your connection has been logged !WARNING! *
* *
* This system is for the use of authorized personnel only. *
* Individuals using this *computer system without authorization, *
* or in excess of their authority as determined by the Company *
* Code of Ethics and Acceptable Use Policy, are subject to having all *
* of their activities on this system monitored, recorded and/or *
* terminated by system personnel. *
* If such monitoring reveals possible evidence of criminal activity, *
* Company may provide said evidence to law enforcement officials, *
* in compliance with its confidentiality obligations and all *
* applicable national laws/regulations with regards to data privacy. *
* *
* This device is maintained by Company Department *
* [email protected] *
************************************************************************
me@the-server$ Of course, I don't want this huge banner displayed every time I log in, but I would like to keep the last login time and host displayed . If I use touch ~/.hushlogin , the banner is not displayed but I also lose the last login information . In fact, nothing at all is displayed: ssh the-server
me@the-server$ How do I remove the banner but keep the last login time and host, like this: ssh the-server
Last login: Mon Feb 8 18:54:36 2016 from my-laptop.company.com
me@the-server$ | One way would be to add the following to ~/.ssh/rc , which contains commands to be run when you ssh into the machine: lastlog -u $USER | perl -lane 'END{print "Last login: @F[3..6] $F[8] from $F[2]"}' The command will get the time of your last login from lastlog and then format it so that it looks like the original version. You can now touch ~/.hushlogin and you will still see that message. | {
"source": [
"https://unix.stackexchange.com/questions/260813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37426/"
]
} |
260,981 | I am reading about pulseaudio, how it works and how I can configure it. I am encountering two keywords a lot: SINK , SOURCE. At first I thought SINK meant OUTPUT and SOURCE meant INPUT , but it seems that this is not the case. Could someone explain what SINK and SOURCE mean in simple English? | As per the project description : PulseAudio clients can send audio to "sinks" and receive audio from "sources". So sinks are outputs (audio goes there), sources are inputs (audio comes from there). | {
"source": [
"https://unix.stackexchange.com/questions/260981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59565/"
]
} |
261,371 | I'm running XFCE 4.12 on top of Gentoo with a 4.2.0 kernel. My PlayPause button on my keyboard used to work as a global hotkey for VLC. Now VLC won't even recognize the key. It does see "Alt + Media Play Pause" but not the key alone. Is there a way to see if and what program might be capturing that key? When I run xdotool key "XF86LogGrabInfo" the tail /var/log/Xorg.0.log file reads [ 10138.690] (II) Printing all currently active device grabs:
[ 10138.690] (II) End list of active device grabs | To find out which app/program grabbed your key use the debug keysym XF86LogGrabInfo . Use xdotool to press keys + XF86LogGrabInfo at the same time e.g. in a terminal run KEY=XF86AudioPlay
xdotool keydown ${KEY}; xdotool key XF86LogGrabInfo; xdotool keyup ${KEY} Then check for output with tail /var/log/Xorg.0.log Note that with gnome 3/gdm and systemd this is no longer logged to Xorg.0.log (it's instead logged to the journal ). In that case you could
run journalctl -f and then in another terminal run the xdotool commands. Switch to the first terminal and you'll see something like /usr/lib/gdm/gdm-x-session[629]: Active grab 0x40c0a58e (xi2) on device 'Virtual core keyboard' (3):
/usr/lib/gdm/gdm-x-session[629]: client pid 708 /usr/bin/gnome-shell
/usr/lib/gdm/gdm-x-session[629]: at 32595124 (from passive grab) (device frozen, state 6)
/usr/lib/gdm/gdm-x-session[629]: xi2 event mask for device 3: 0xc000
/usr/lib/gdm/gdm-x-session[629]: passive grab type 2, detail 0xac, activating key 172 In the above example the program (the client) that grabbed the key is gnome-shell . How do I figure out what the keys are called? Check out the manpage of xdotool using man xdotool or an online version , as it lists a number of the special keys. For instance, "alt+r", "Control_L+J", "ctrl+alt+n", "BackSpace". The LinuxQuestions wiki also has a list of X Keysyms one could use. To make things a bit easier, xdotool also has aliases for some of these, such that pressing Shift-Alt-Tab would for instance just be shift+alt+Tab . To verify that this does indeed click that key combination, you could send the input to xev , which is a program that will print whatever key or mouse events it gets to the console. Just do sleep 2; xdotool keydown ${KEY} and switch to the xev window before two seconds has passed to see the keys being clicked on that window. It should then output a series of events, such as these: PropertyNotify event, serial 168, synthetic NO, window 0x1e00001,
atom 0x13e (_GTK_EDGE_CONSTRAINTS), time 4390512, state PropertyNewValue
MappingNotify event, serial 168, synthetic NO, window 0x0,
request MappingKeyboard, first_keycode 8, count 248
KeyPress event, serial 168, synthetic NO, window 0x1e00001,
root 0x163, subw 0x0, time 4390719, (882,657), root:(1000,771),
state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
KeyPress event, serial 169, synthetic NO, window 0x1e00001,
root 0x163, subw 0x0, time 4390738, (882,657), root:(1000,771),
state 0x8, keycode 23 (keysym 0xff09, Tab), same_screen YES,
XLookupString gives 1 bytes: (09) " "
XmbLookupString gives 1 bytes: (09) " "
XFilterEvent returns: False | {
"source": [
"https://unix.stackexchange.com/questions/261371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2049/"
]
} |
261,531 | I am just fooling around on my terminal (Gnome terminal). I was wondering is there a way to send output of one terminal to another without having to make a new file or pipe. for example: on first terminal I run ls and want its output to be displayed on second terminal (with or without using any command on second) | If both terminals belong to the same user, you can send your output to the virtual device that is used as the particular terminal's tty. So you can use the output from w , which includes the TTY information, and write directly to that device. ls > /dev/pts/7 (If the device mentioned by w was pts/7) Another option is to use the number of a process that is connected to that device. Send your output to /proc/<process number>/fd/1 . ls > /proc/5555/fd/1 Assuming the process number that you found that runs in that terminal is 5555. Note that this direct write is only allowed if the user that attempts to write is the same user that owns the other terminal . | {
"source": [
"https://unix.stackexchange.com/questions/261531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
261,687 | As most people here know, when using bash at the command prompt if you partially type a file name a command or an option to a command etc, bash will complete the word if there is exactly one match. When there is more than one match, you need to hit <Tab> twice and bash will generate a list of possible matches. I would like to configure bash to simply provide those options on the first <Tab> . Is this possible without writing a script? i.e. a shell option? man bash has a section "programmable completion" but I couldn't make out if there is an option to enable "single tab completion". | Put this in your ~/.inputrc : set show-all-if-ambiguous on For additional credit, add: set completion-ignore-case on All of the options are in the GNU manual ... | {
"source": [
"https://unix.stackexchange.com/questions/261687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
261,693 | How can I bulk replace the suffix for many files? I have a lot of files like NameSomthing-min.png NameSomthing1-min.png NameSomthing2-min.png I would like to change all them to NameSomthing.png NameSomthing1.png NameSomthing2.png i.e., remove the characters -min from the name.
How would I do this? | A bash loop with parameter expansion does it in one line: for f in *-min.png; do mv -- "$f" "${f/-min/}"; done Here ${f/-min/} expands to $f with the first occurrence of -min removed, so NameSomthing-min.png becomes NameSomthing.png . If the perl rename utility is available (it is the default rename on Debian/Ubuntu), you can do the same with rename 's/-min\.png$/.png/' *-min.png ; run it with -n first to preview the changes without touching anything. | {
"source": [
"https://unix.stackexchange.com/questions/261693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156169/"
]
} |
262,185 | I have a file with ANSI colors. test.txt: \e[0;31mExample\e[0m I would like to display the content of this file in a terminal, like cat does, but I would like to display the colors as well. | I was looking for a solution to this exact bash question. I nearly missed @Thomas Dickey's comment which provided me with the most elegant solution. echo -e $(cat test.txt) Some things which did not work for me are(apparently you cant pipe things to echo) cat test.txt | echo -e or less -R test.txt Another issue I had was that echo -e didn't print newlines and contiguous whitespaces within the file nicely. To print those, I used the following. echo -ne $(cat test.txt | sed 's/$/\\n/' | sed 's/ /\\a /g') This works for a test.txt file containing \e[0;31mExa mple\e[0m
\e[0;31mExample line2\e[0m | {
"source": [
"https://unix.stackexchange.com/questions/262185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119603/"
]
} |
263,274 | I work on two computers with one USB headset. I want to listen to both by piping the non-Linux computers' output into the Linux computer's line in (blue audio jack) and mixing the signal into the Linux computer's headset output using PulseAudio. pavucontrol shows a "Built-in Audio Analog Stereo" Input Device which allows me to pick ports like "Line In" (selected), "Front Microphone", "Rear Microphone". I can see the device's volume meter reacting to audio playback on the non-Linux machine. How do I make PulseAudio play that audio signal into my choice of Output Device? | 1. Load the loopback module pacmd load-module module-loopback latency_msec=5 creates a playback and a recording device. 2. Configure the devices in pavucontrol In pavucontrol, in the Recording tab, set the "Loopback" device's from input device to the device which receives the line in signal. In the Playback tab, set the "Loopback" device's on output device to the device through which you want to hear the line in signal. 3. Troubleshooting If the audio signal has issues, remove the module with pacmd unload-module module-loopback and retry a higher latency_msec= value Additional Notes Your modern Mid-Range computer might easily be able to manage lower latency with the latency_msec=1 option: pacmd load-module module-loopback latency_msec=1 This answer was made possible by this forum post . Thanks! | {
"source": [
"https://unix.stackexchange.com/questions/263274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79702/"
]
} |
263,615 | I am running Windows 10 and am starting to learn how to boot from USB devices. I have a 16GB USB (USB 3.0) drive and I want to do the following: Make the 16GB USB drive run Debian Linux. Keep Windows 10 on my C: drive. Not partition my hard drive or set up a dual boot. Run the OS from my USB drive. Let all of my files and programs be saved to the USB (so I don't think that a live OS would be suitable). It should work as though it was a dual boot as in the way files are saved. Make it work on any computer it is plugged in to (assuming the BIOS is compatible). I already know how to boot from a USB in my BIOS but I am unsure as to where to get an ISO file and how to install it to the USB. | To create a bootable USB, you can follow the steps below: STEP 1 Go to the website of the OS you wish to install, and find an iso image to download. In your case, since you want to run a Debian OS, here is a link to its iso options: https://www.debian.org/distrib/netinst Choose an iso image from the options, and click on it. This should automatically start the image download. While file is downloading, go to second step. STEP 2 Get a utility program to format and create bootable USB flash drives. Some have already been suggested, so I will just link you to my favourite: https://rufus.akeo.ie/ Download the utility and go to third step. STEP 3 By this stage, if your iso image has not yet finished downloading, then wait until it does. Now that you have both the utility and the iso image downloaded: Plug in your USB drive Open Rufus (to write your USB) Select the iso image you just downloaded to write on the USB, and fill out the other options accordingly (eg. selecting your USB drive etc) Click on the option for starting the write process (with Rufus, it is the "Start" button) Once Rufus finishes, simply reboot, booting from your USB, which should start up your Debian OS. | {
"source": [
"https://unix.stackexchange.com/questions/263615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156732/"
]
} |
263,668 | I have a string of the format [0-9]+\.[0-9]+\.[0-9] . I need to extract the first, second, and third numbers separately. As I understand it, capture groups should be capable of this. I should be able to use sed "s/\([0-9]*\)/\1/g to get the first number, sed "s/\([0-9]*\)/\2/g to get the second number, and sed "s/\([0-9]*\)/\3/g to get the third number. In each case, though, I am getting the whole string. Why is this happening? | We can't give you a full answer without an example of your input but I can tell you that your understanding of capture groups is wrong. You don't use them sequentially, they only refer to the regex on the left hand side of the same substitution operator. If you capture, for example, /(foo)(bar)(baz)/ , then foo will be \1 , bar will be \2 and baz will be \3 . You can't do s/(foo)/\1/; s/(bar)/\2/ , because in the second s/// call, there is only one captured group, so \2 will not be defined. So, to capture your three groups of digits, you would need to do: sed 's/\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\)/\1 : \2 : \3/' Or, the more readable: sed -E 's/([0-9]*)\.([0-9]*)\.([0-9]*)/\1 : \2 : \3/' | {
"source": [
"https://unix.stackexchange.com/questions/263668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89807/"
]
} |
263,801 | I tried to update my OS Debian jessie using the terminal and I get the error “E: The method driver /usr/lib/apt/methods/https could not be found.” My sources.list : deb http://httpredir.debian.org/debian/ jessie main
deb-src http://httpredir.debian.org/debian/ jessie main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
# jessie-updates, previously known as 'volatile'
deb http://httpredir.debian.org/debian/ jessie-updates main
deb-src http://httpredir.debian.org/debian/ jessie-updates main
deb http://ftp.de.debian.org/debian jessie main How to fix apt-get update and aptitude update ? | Sounds like you may have added some https sources. Since there are no https sources in your sources.list , it would be something in /etc/apt/sources.list.d/ . You may also be dealing with a proxy that always redirects to https. You can add support for https apt sources by installing a couple of packages: apt-get install apt-transport-https ca-certificates If your apt-get is too broken to do this, you can download the package directly and install it with dpkg -i . Any additional dependencies of that package can be tracked down and fetched similarly ( dpkg will let you know if anything is missing). If it still doesn't work, you might try editing the source entry to use http instead of https, or just remove it and start over following the source maintainer's instructions. | {
"source": [
"https://unix.stackexchange.com/questions/263801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
263,869 | I have trouble understanding a weird behavior: vi seems to add a newline (ASCII: LF, as it is a Unix ( AIX ) system) at the end of the file, when I did NOT specifically type it. I edit the file as such in vi (taking care to not input a newline at the end): # vi foo ## Which I will finish on the char "9" and not input a last newline, then `:wq`
123456789
123456789
123456789
123456789
~
~
## When I save, the cursor is just above the last "9", and no newline was added. I expect vi to save it "as is", so to have 39 bytes: 10 ASCII characters on each of the first three lines (numbers 1 to 9, followed by a newline (LF on my system)) and only 9 on the last line (characters 1 to 9, no terminating newline/LF). But it appears when I save it it is 40 bytes (instead of 39), and od shows a terminating LF : # wc foo
4 4 40 foo ## I expected 39 here! as I didn't add the last newline
# od -a foo
0000000 1 2 3 4 5 6 7 8 9 lf 1 2 3 4 5 6
0000020 7 8 9 lf 1 2 3 4 5 6 7 8 9 lf 1 2
0000040 3 4 5 6 7 8 9 lf
0000050
## An "lf" terminates the file?? Did vi add it silently? If I create the file with a printf doing exactly what I did inside vi, it works as expected: # ## I create a file with NO newline at the end:
# printf "123456789\n123456789\n123456789\n123456789" > foo2
# wc foo2 ## This one is as expected: 39 bytes, exactly as I was trying to do above with vi.
3 4 39 foo2 ## As expected, as I didn't add the last newline
## Note that for wc, there are only three lines!
## (So wc -l doesn't count lines; it counts the [newline] chars... Which is rather odd.)
# root@SPU0WMY1:~ ## od -a foo2
0000000 1 2 3 4 5 6 7 8 9 lf 1 2 3 4 5 6
0000020 7 8 9 lf 1 2 3 4 5 6 7 8 9 lf 1 2
0000040 3 4 5 6 7 8 9
0000047 ## As expected, no added LF. Both files (foo (40 characters) and foo2 (39 characters) appear exactly the same if I re-open them with vi... And if I open foo2 (39 characters, no terminating newline) in vi and just do :wq without editing it whatsoever , it says it writes 40 chars, and the linefeed appears! I can't have access to a more recent vi (I do this on AIX, vi (not Vim ) version 3.10 I think? (no "-version" or other means of knowing it)). # strings /usr/bin/vi | grep -i 'version.*[0-9]'
@(#) Version 3.10 Is it normal for vi (and perhaps not in more recent version? Or Vim?) to silently add a newline at the end of a file? (I thought the ~ indicated that the previous line did NOT end with a newline.) -- Edit: some additional updates and a bit of a summary, with a big thanks to the answers below : vi silently add a trailing newline at the moment it writes a file that lacked it (unless file is empty). it only does so at the writing time! (ie, until you :w, you can use :e to verify that the file is still as you openened it... (ie: it still shows "filename" [Last line is not complete] N line, M character). When you save, a newline is silently added, without a specific warning (it does say how many bytes it saves, but this is in most cases not enough to know a newline was added) (thanks to @jiliagre for talking to me about the opening vi message, it helped me to find a way to know when the change really occurs) This (silent correction) is POSIX behavior! (see @barefoot-io answer for references) | POSIX requires this behavior, so it's not in any way unusual. From the POSIX vi manual : INPUT FILES See the INPUT FILES section of the ex command for a description of the input files supported by the vi command. Following the trail to the POSIX ex manual : INPUT FILES Input files shall be text files or files that would be text files except for an incomplete last line that is not longer than {LINE_MAX}-1 bytes in length and contains no NUL characters. By default, any incomplete last line shall be treated as if it had a trailing <newline>. The editing of other forms of files may optionally be allowed by ex implementations. The OUTPUT FILES section of the vi manual also redirects to ex: OUTPUT FILES The output from ex shall be text files. A pair of POSIX definitions: 3.397 Text File A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character. Although POSIX.1-2008 does not distinguish between text files and binary files (see the ISO C standard), many utilities only produce predictable or meaningful output when operating on text files. The standard utilities that have such restrictions always specify "text files" in their STDIN or INPUT FILES sections. 3.206 Line A sequence of zero or more non- <newline> characters plus a terminating <newline> character. These definitions in the context of these manual page excerpts mean that while a conformant ex/vi implementation must accept a malformed text file if that file's only deformity is an absent final newline, when writing that file's buffer the result must be a valid text file. While this post has referenced the 2013 edition of the POSIX standard, the relevant stipulations also appear in the much older 1997 edition . Lastly, if you find ex's newline appension unwelcome, you will feel profoundly violated by Seventh Edition UNIX's (1979) intolerant ed. From the manual : When reading a file, ed discards ASCII NUL characters and all characters after the last newline. It refuses to read files containing non-ASCII characters. | {
"source": [
"https://unix.stackexchange.com/questions/263869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27616/"
]
} |
263,883 | I'm trying to search for files using find , and put those files into a Bash array so that I can do other operations on them (e.g. ls or grep them). But I can't figure out why readarray isn't reading the find output as it's piped into it. Say I have two files in the current directory, file1.txt and file2.txt . So the find output is as follows: $ find . -name "file*"
./file1.txt
./file2.txt So I want to pipe that into an array whose two elements are the strings "./file1.txt" and "./file2.txt" (without quotes, obviously). I've tried this, among a few other things: $ declare -a FILES
$ find . -name "file*" | readarray FILES
$ echo "${FILES[@]}"; echo "${#FILES[@]}"
0 As you can see from the echo output, my array is empty. So what exactly am I doing wrong here? Why is readarray not reading find 's output as its standard input and putting those strings into the array? | When using a pipeline, bash runs the commands in subshells¹. Therefore, the array is populated, but in a subshell, so the parent shell has no access to it. You also likely want the -t option so as not to store that line delimiters in the array members as they are not part of the file names. Use process substitution: readarray -t FILES < <(find .) Note that it doesn't work for files with newlines in their paths. Unless you can guarantee if won't be the case, you'd want to use NUL delimited records instead of newline delimited ones: readarray -td '' < <(find . -print0) (the -d option was added in bash 4.4) ¹ except for the last pipe component when using the lastpipe option, but that's only for non-interactive invocations of bash . | {
"source": [
"https://unix.stackexchange.com/questions/263883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153578/"
]
} |
264,117 | I wanted to write a little bash function such that I can tell bash, import os or from sys import stdout and it will spawn a new Python interpreter with the module imported. The latter from function looks like this: from () {
echo "from $@" | xxd
python3 -i -c "from $@"
} If I call this: $ from sys import stdout
00000000: 6672 6f6d 2073 7973 2069 6d70 6f72 7420 from sys import
00000010: 7374 646f 7574 0a stdout.
File "<string>", line 1
from sys
^
SyntaxError: invalid syntax
>>> The bytes in from sys are 66 72 6f 6d 20 73 79 73 20
f r o m s y s There's no EOF in there, yet the Python interpreter is behaving as if it read EOF. There is a newline at the end of the stream, which is to be expected. from 's sister, that imports a whole Python module, looks like this, and which solves the problem by sanitising and processing the string, and by failing on non-existent modules. import () {
ARGS=$@
ARGS=$(python3 -c "import re;print(', '.join(re.findall(r'([\w]+)[\s|,]*', '$ARGS')))")
echo -ne '\0x04' | python3 -i
python3 -c "import $ARGS" &> /dev/null
if [ $? != 0 ]; then
echo "sorry, junk module in list"
else
echo "imported $ARGS"
python3 -i -c "import $ARGS"
fi
} That solves the problem of an unexplained EOF in the stream, but I would like to understand why Python thinks there is an EOF. | The table in this Stack Overflow answer (which got it from the Bash Hackers Wiki ) explains how the different Bash variables are expanded: You're doing python -i -c "from $@" , which turns into python -i -c "from sys" "import" "stdout" , and -c only takes a single argument, so it's running the command from sys . You want to use $* , which will expand into python -i -c "from sys import stdout" (assuming $IFS is unset or starts with a space). | {
"source": [
"https://unix.stackexchange.com/questions/264117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136107/"
]
} |
264,393 | I have a remote machine running Debian 8 (Jessie) with lightdm installed. I want it to start in no-GUI mode, but I don't want to remove all X-related stuff to still be able to run it though SSH with the -X parameter. So how to disable X server autostart without removing it? I tried systemctl stop lightdm , it stops the lightdm, but it runs again after reboot. I also tried systemctl disable lightdm , but it basically does nothing. It renames lightdm's scripts in /etc/rc*.d directories, but it still starts after reboot, so what am I doing wrong? And I can't just update-rc.d lightdm stop , because it's deprecated and doesn't work. | The disable didn't work because the Debian /etc/X11/default-display-manager logic is winding up overriding it. In order to make text boot the default under systemd (regardless of which distro, really): systemctl set-default multi-user.target To change back to booting to the GUI, systemctl set-default graphical.target I confirmed those work on my Jessie VM and Slashback confirmed it on Stretch, too. PS: You don't actually need the X server on your machine to run X clients over ssh. The X server is only needed where the display (monitor) is. | {
"source": [
"https://unix.stackexchange.com/questions/264393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120537/"
]
} |
264,522 | When a script is launched from command prompt the shell will spawn a subprocess for that script. I want to show that relationship between terminal level process and its children using ps in a tree style output. How can I do this? What I have tried so far file: script.sh #!/bin/bash
ps -f -p$1 Then I invoke the script from the command line passing in the process id of the terminal shell: $ ./script.sh $$ What I want is something like this top level (terminal) shell process ./script.sh process for ps command itself USER PID [..]
ubuntu 123 -bash
ubuntu 1234 \_ bash ./script.sh
ubuntu 12345 \_ ps auxf what I'm getting is: PID TTY STAT TIME COMMAND
14492 pts/24 Ss 0:00 -bash | Try # ps -aef --forest
root 114032 1170 0 Apr05 ? 00:00:00 \_ sshd: root@pts/4
root 114039 114032 0 Apr05 pts/4 00:00:00 | \_ -bash
root 56225 114039 0 13:47 pts/4 00:00:16 | \_ top
root 114034 1170 0 Apr05 ? 00:00:00 \_ sshd: root@notty
root 114036 114034 0 Apr05 ? 00:00:00 | \_ /usr/libexec/openssh/sftp-server
root 103102 1170 0 Apr06 ? 00:00:03 \_ sshd: root@pts/0
root 103155 103102 0 Apr06 pts/0 00:00:00 | \_ -bash
root 106798 103155 0 Apr06 pts/0 00:00:00 | \_ su - postgres
postgres 106799 106798 0 Apr06 pts/0 00:00:00 | \_ -bash
postgres 60959 106799 0 14:39 pts/0 00:00:00 | \_ ps -aef --forest
postgres 60960 106799 0 14:39 pts/0 00:00:00 | \_ more | {
"source": [
"https://unix.stackexchange.com/questions/264522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |