source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
400,650 | Is it possible to use wc to count the chars of each line instead of the total amount of chars? e.g. echo -e foo\\nbar\\nbazz | grep -i ba returns: bar bazz So why doesn't echo -e foo\\nbar\\nbazz | grep ba | wc -m return a list of the lengths of those words? (3 and 4) Suggestions? P.S.: why are linefeeds counted with wc -m ? wc -l counts the newlines, so why should wc -m count them too? | wc counts over the whole file; you can use awk to process line by line (not counting the line delimiter): echo -e "foo\nbar\nbazz\n" | grep ba | awk '{print length}' or, as awk is mostly a superset of grep : echo -e "foo\nbar\nbazz\n" | awk '/ba/ {print length}' (note that some awk implementations report the number of bytes (like wc -c ) as opposed to the number of characters (like wc -m ), and others will count bytes that don't form part of valid characters in addition to the characters (while wc -m would ignore them in most implementations)) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197819/"
]
} |
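A minimal sketch of the byte-versus-character caveat in that answer; it assumes GNU awk and a UTF-8 locale, and the test string is an illustration, not from the question:

```bash
# "café" is 4 characters but 5 bytes in UTF-8 (é is a 2-byte sequence)
printf 'caf\xc3\xa9\n' | awk '{print length}'            # gawk in a UTF-8 locale: 4
printf 'caf\xc3\xa9\n' | LC_ALL=C awk '{print length}'   # byte semantics: 5
```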
400,772 | I am tasked with automating a gpg decryption using cron (or any Ubuntu Server compatible job scheduling tool). Since it has to be automated I used --passphrase but it ends up in the shell history so it is visible in the process list. How can I go about automating decryption while maintaining good (preferably great) security standards? An example will be highly appreciated. | Store the passphrase in a file which is only readable by the cron job’s user, and use the --passphrase-file option to tell gpg to read the passphrase there. This will ensure that the passphrase isn’t visible in process information in memory. The level of security will be determined by the level of access to the file storing the passphrase (as well as the level of access to the file containing the key), including anywhere its contents end up copied to (so take care with backups), and off-line accessibility (pulling the disk out of the server). Whether this level of security is sufficient will depend on your access controls to the server holding the file, physically and in software, and on the scenarios you’re trying to mitigate. If you want great security standards, you need to use a hardware security module instead of storing your key (and passphrase) locally. This won’t prevent the key from being used in situ , but it will prevent it from being copied and used elsewhere. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/400772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257605/"
]
} |
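A hypothetical sketch of the setup described above; the paths, schedule, and filenames are placeholders, not taken from the question:

```bash
# One-time setup: store the passphrase in a file readable only by the cron user
umask 077
printf '%s' 'the-passphrase' > ~/.gpg-passphrase

# Crontab entry (gpg 2.1+ needs loopback pinentry for --passphrase-file):
# 0 3 * * * gpg --batch --pinentry-mode loopback \
#   --passphrase-file "$HOME/.gpg-passphrase" \
#   --output /data/backup.sql --decrypt /data/backup.sql.gpg
```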
400,849 | I want to print the value of /dev/stdin, /dev/stdout and /dev/stderr. Here is my simple script : #!/bin/bashecho your stdin is : $(</dev/stdin)echo your stdout is : $(</dev/stdout)echo your stderr is : $(</dev/stderr) i use the following pipes : [root@localhost home]# ls | ./myscript.sh[root@localhost home]# testerr | ./myscript.sh only $(</dev/stdin) seems to work , I've also found on some others questions people using : "${1-/dev/stdin}" tried it without success. | stdin , stdout , and stderr are streams attached to file descriptors 0, 1, and 2 respectively of a process. At the prompt of an interactive shell in a terminal or terminal emulator, all those 3 file descriptors would refer to the same open file description which would have been obtained by opening a terminal or pseudo-terminal device file (something like /dev/pts/0 ) in read+write mode. If from that interactive shell, you start your script without using any redirection, your script will inherit those file descriptors. On Linux, /dev/stdin , /dev/stdout , /dev/stderr are symbolic links to /proc/self/fd/0 , /proc/self/fd/1 , /proc/self/fd/2 respectively, themselves special symlinks to the actual file that is open on those file descriptors. They are not stdin, stdout, stderr, they are special files that identify what files stdin, stdout, stderr go to (note that it's different in other systems than Linux that have those special files). reading something from stdin means reading from file descriptor 0 (which will point somewhere within the file referenced by /dev/stdin ). But in $(</dev/stdin) , the shell is not reading from stdin, it opens a new file descriptor for reading on the same file as the one open on stdin (so reading from the start of the file, not where stdin currently points to). Except in the special case of terminal devices open in read+write mode, stdout and stderr are usually not open for reading. They are meant to be streams that you write to . So reading from the file descriptor 1 will generally not work. On Linux, opening /dev/stdout or /dev/stderr for reading (as in $(</dev/stdout) ) would work and would let you read from the file where stdout goes to (and if stdout was a pipe, that would read from the other end of the pipe, and if it was a socket, it would fail as you can't open a socket). In our case of the script run without redirection at the prompt of an interactive shell in a terminal, all of /dev/stdin, /dev/stdout and /dev/stderr will be that /dev/pts/x terminal device file. Reading from those special files returns what is sent by the terminal (what you type on the keyboard). Writing to them will send the text to the terminal (for display). echo $(</dev/stdin)echo $(</dev/stderr) will be the same. To expand $(</dev/stdin) , the shell will open that /dev/pts/0 and read what you type until you press ^D on an empty line. They will then pass the expansion (what you typed stripped of the trailing newlines and subject to split+glob) to echo which will then output it on stdout (for display). However in: echo $(</dev/stdout) in bash ( and bash only ), it's important to realise that inside $(...) , stdout has been redirected. It is now a pipe. In the case of bash , a child shell process is reading the content of the file (here /dev/stdout ) and writing it to the pipe, while the parent reads from the other end to make up the expansion. In this case when that child bash process opens /dev/stdout , it is actually opening the reading end of the pipe. 
Nothing will ever come from that, it's a deadlock situation. If you wanted to read from the file pointed-to by the scripts stdout, you'd work around it with: { echo content of file on stdout: "$(</dev/fd/3)"; } 3<&1 That would duplicate the fd 1 onto the fd 3, so /dev/fd/3 would point to the same file as /dev/stdout. With a script like: #! /bin/bash -printf 'content of file on stdin: %s\n' "$(</dev/stdin)"{ printf 'content of file on stdout: %s\n' "$(</dev/fd/3)"; } 3<&1printf 'content of file on stderr: %s\n' "$(</dev/stderr)" When run as: echo bar > errecho foo | myscript > out 2>> err You'd see in out afterwards: content of file on stdin: foocontent of file on stdout: content of file on stdin: foocontent of file on stderr: bar If as opposed to reading from /dev/stdin , /dev/stdout , /dev/stderr , you wanted to read from stdin, stdout and stderr (which would make even less sense), you'd do: #! /bin/sh -printf 'what I read from stdin: %s\n' "$(cat)"{ printf 'what I read from stdout: %s\n' "$(cat <&3)"; } 3<&1printf 'what I read from stderr: %s\n' "$(cat <&2)" If you started that second script again as: echo bar > errecho foo | myscript > out 2>> err You'd see in out : what I read from stdin: foowhat I read from stdout:what I read from stderr: and in err : barcat: -: Bad file descriptorcat: -: Bad file descriptor For stdout and stderr, cat fails because the file descriptors were open for writing only, not reading, the the expansion of $(cat <&3) and $(cat <&2) is empty. If you called it as: echo out > outecho err > errecho foo | myscript 1<> out 2<> err (where <> opens in read+write mode without truncation), you'd see in out : what I read from stdin: foowhat I read from stdout:what I read from stderr: err and in err : err You'll notice that nothing was read from stdout, because the previous printf had overwritten the content of out with what I read from stdin: foo\n and left the stdout position within that file just after. If you had primed out with some larger text, like: echo 'This is longer than "what I read from stdin": foo' > out Then you'd get in out : what I read from stdin: fooread from stdin": foowhat I read from stdout: read from stdin": foowhat I read from stderr: err See how the $(cat <&3) has read what was left after the first printf and doing so also moved the stdout position past it so that the next printf outputs what was read after. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/400849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201418/"
]
} |
400,851 | Yesterday, one of our computers dropped to grub shell or honestly, I am unsure what shell it was when we turned on the machine. It showed that it can't mount the root filesystem or something in this sense, because of inconsistencies. I ran, I believe: fsck -fy /dev/sda2 Rebooted and the problem was gone. Here comes the question part: I already have in her root's crontab: @reboot /home/ruzena/Development/bash/fs-check.sh while the script contains: #!/bin/bashtouch /forcefsck Thinking about it, I don't know, why I created script file for such a short command, but anyways... Further, in the file: /etc/default/rcS I have defined: FSCKFIX=yes So I don't get it. How could the situation even arise? What should I do to force the root filesystem check (and optionally a fix) at boot? Or are these two things the maximum, that I can do? OS: Linux Mint 18.x Cinnamon 64-bit. fstab : cat /etc/fstab | grep ext4 shows: UUID=a121371e-eb12-43a0-a5ae-11af58ad09f4 / ext4 errors=remount-ro 0 1 grub : fsck.mode=force was already added to the grub configuration. | ext4 filesystem check during boot Tested on OS: Linux Mint 18.x in a Virtual Machine Basic information /etc/fstab has the fsck order as the last (6th) column, for instance: <file system> <mount point> <type> <options> <dump> <fsck>UUID=2fbcf5e7-1234-abcd-88e8-a72d15580c99 / ext4 errors=remount-ro 0 1 FSCKFIX=yes variable in /etc/default/rcS This will change the fsck to auto fix, but not force a fsck check. From man rcS : FSCKFIX When the root and all other file systems are checked, fsck is invoked with the -a option which means "autorepair". If there are major inconsistencies then the fsck process will bail out. The system will print a message asking the administrator to repair the file system manually and will present a root shell prompt (actually a sulogin prompt) on the console. Setting this option to yes causes the fsck commands to be run with the -y option instead of the -a option. This will tell fsck always to repair the file systems without asking for permission. From man tune2fs If you are using journaling on your filesystem, your filesystemwill never be marked dirty, so it will not normally be checked. Start with Setting the following FSCKFIX=yes in the file /etc/default/rcS Check and note last time fs was checked: sudo tune2fs -l /dev/sda1 | grep "Last checked" These two options did NOT work Passing -F (force fsck on reboot) argument to shutdown : shutdown -rF now Nope; see: man shutdown . Adding the /forcefsck empty file with: touch /forcefsck These scripts seem to use this: /etc/init.d/checkfs.sh/etc/init.d/checkroot.sh did NOT work on reboot, but the file was deleted. Verified by: sudo tune2fs -l /dev/sda1 | grep "Last checked"sudo less /var/log/fsck/checkfssudo less /var/log/fsck/checkroot These seem to be the logs for the init scripts. I repeat, these two options did NOT work! Both of these methods DID work systemd-fsck kernel boot switches Editing the main grub configuration file: sudoedit /etc/default/grub GRUB_CMDLINE_LINUX="fsck.mode=force" sudo update-grubsudo reboot This did do a file system check as verified with: sudo tune2fs -l /dev/sda1 | grep "Last checked" Note: This DID a check, but to force a fix too, you need to specify fsck.repair="preen" , or fsck.repair="yes" . Using tune2fs to set the number of file system mounts before doing a fsck , man tune2fs tune2fs' info is kept in the file system superblock -c switch sets the number of times to mount the fs before checking the fs. 
sudo tune2fs -c 1 /dev/sda1 Verify with: sudo tune2fs -l /dev/sda1 This DID work as verified with: sudo tune2fs -l /dev/sda1 | grep "Last checked" Summary To force a fsck on every boot on Linux Mint 18.x, use either tune2fs , or fsck.mode=force , with optional fsck.repair=preen / fsck.repair=yes , the kernel command line switches. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/400851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
400,868 | I found this way to do if commands: $ if echo test | grep st ; then echo yes ; fi ; echo $? test yes 0 Looks good. Let's try a non-match: $ if echo test | grep 123 ; then echo yes ; fi ; echo $? 0 That's not right. The if statement functions properly, but it returns a zero exit code. Can someone explain this for me? | $? after an if block contains the exit status of the if statement. The standard specifies that The exit status of the if command shall be the exit status of the then or else compound-list that was executed, or zero, if none was executed. In the first case you're seeing the exit status of the echo yes command. In the second case, no command is executed in a then or else block, so the exit status is 0. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256301/"
]
} |
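A small sketch, assuming what you actually want is grep's own exit status rather than the if statement's:

```bash
echo test | grep 123
status=$?                      # capture immediately, before any other command runs
if [ "$status" -eq 0 ]; then
    echo yes
else
    echo "no match (grep exited $status)"
fi
```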
400,888 | I have some old code that was written in java for interfacing with a serial port that I'm attempting to convert to python. The following lines are found in a bash file that is run to setup the serial port dev device before the java code is executed in a daemon. Can anyone explain what these options mean as the man page for stty is impenetrable. stty -F /dev/ttyUSB0 1:0:9ad:0:3:1c:7f:15:4:5:1:0:11:13:1a:0:12:f:17:16:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 #9600 7E1 | $? after an if block contains the exit status of the if statement. The standard specifies that The exit status of the if command shall be the exit status of the then or else compound-list that was executed, or zero, if none was executed. In the first case you’re seeing the exit status of the echo yes command. In the second case, no command is executed in a then or else block, so the exit status is 0. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257742/"
]
} |
400,889 | My question is on return values produced by this code: if [ -n ]; then echo "true"; else echo "false"; fi This prints true . Its complementary test using [ -z ] also prints true : if [ -z ]; then echo "true"; else echo "false"; fi In the above code, why does the [ -n ] test assume the string value that is not passed at all, as not null? The code below prints false . This is expected since the passed string value is null and of zero length. if [ -n "" ]; then echo "true"; else echo "false"; fi | [ x ] is equivalent to [ -n x ] even if x starts with - provided there is no operand. $ [ -o ] ; echo $?0$ [ -eq ] ; echo $?0$ [ -n -o ] ; echo $?0$ [ -n -eq ] ; echo $?0 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/400889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194688/"
]
} |
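A sketch showing how the same two tests behave once an actual (quoted) operand is supplied, which is the usual way to avoid this surprise:

```bash
var=""
[ -n "$var" ] && echo true || echo false   # false: the operand is present but empty
[ -n $var ]   && echo true || echo false   # true: unquoted empty $var disappears,
                                           # leaving [ -n ], i.e. a one-operand test
```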
400,893 | I know this question is similar to " Udev : renaming my network interface ", but I do not consider it a duplicate because my interface is not named via a udev rule, and none of the other answers in that question worked for me. So I have one WiFi adapter on this laptop machine, and I would like to rename the interface from wlp5s0 to wlan0: root@aj-laptop:/etc/udev/rules.d# iwconfigwlp5s0 IEEE 802.11 ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry short limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:oneth0 no wireless extensions.lo no wireless extensions.root@aj-laptop:/etc/udev/rules.d# ifconfig wlp5s0wlp5s0: flags=4098<BROADCAST,MULTICAST> mtu 1500 ether 00:80:34:1f:d8:3f txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 However, there are no rules for this interface in 70-persistent-net.rules or any of the other files in the /etc/udev/rules.d/ directory. Is there any way that I can rename this interface? | Choose a solution: ip link set wlp5s0 name wlan0 - not permanent create yourself an udev rule file in /etc/udev/rules.d - permanent add net.ifnames=0 kernel parameter into grub.cfg - permanent, ifyour distro won't overwrite it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/400893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197044/"
]
} |
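A sketch of the udev-rule option from the answer, matching on the MAC address shown in the question's ifconfig output; the rule file name is just a common convention:

```bash
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:80:34:1f:d8:3f", NAME="wlan0"
```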
400,908 | I have this script I'm basing my current script off of. I just don't understand why he has typeset result part dir=${1-$PWD} in there. I get the same result if I just write dir=$PWD . With typeset is ${1-$PWD} changing how dir is set vs $PWD ? | ${1-$PWD} is a shell parameter expansion pattern. It is used to expand to a default value based on another -- whatever on the right of - . Here, in your case: If $1 is unset, then the expansion of $PWD would be substituted Otherwise i.e. if $1 is set to any value (including null), its value would be used as the result of expansion Example: % echo "Foo${1-$PWD}" Foo/home/bar% set -- Spam% echo "Foo${1-$PWD}"FooSpam | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174776/"
]
} |
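A sketch of the same expansion inside a script; the script name and argument are hypothetical:

```bash
#!/bin/bash
dir=${1-$PWD}   # first positional argument if set, otherwise the current directory
echo "working on: $dir"
```

Running it with no argument prints the current directory; running it as `./demo.sh /tmp` prints /tmp.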
400,918 | I'm trying to use qemu , but whenever I try to start it, I get this error message: [x86-64-debug] me:~/fuchsia$ frun -g+ echo CMDLINE: TERM=xterm-256color kernel.entropy-mixin=540cde8490a5d80fdb709ff7dcbf6094d92a40dc7d9cfed901758756a666e9dc kernel.halt-on-panic=trueCMDLINE: TERM=xterm-256color kernel.entropy-mixin=540cde8490a5d80fdb709ff7dcbf6094d92a40dc7d9cfed901758756a666e9dc kernel.halt-on-panic=true+ exec /home/colton/androidstuff/fuchsia/buildtools/linux-x64/qemu/bin/qemu-system-x86_64 -m 2048 -serial stdio -vga std -net none -smp 4,threads=2 -machine q35 -kernel /home/colton/androidstuff/fuchsia/out/build-zircon/build-zircon-pc-x86-64/zircon.bin -cpu Haswell,+smap,-check -initrd /home/colton/androidstuff/fuchsia/out/debug-x86-64/user.bootfs -append 'TERM=xterm-256color kernel.entropy-mixin=540cde8490a5d80fdb709ff7dcbf6094d92a40dc7d9cfed901758756a666e9dc kernel.halt-on-panic=true 'Could not initialize SDL(x11 not available) - exiting` (I'm trying to build Google's Fuchsia repository) I've already looked around, but nothing has worked (most people get a display unavailable error message or they run into this problem in R ). I'm running Ubuntu Budgie Any ideas as to how to fix this? | ${1-$PWD} is a shell parameter expansion pattern. It is used to expand to a default value based on another -- whatever on the right of - . Here, in your case: If $1 is unset, then the expansion of $PWD would be substituted Otherwise i.e. if $1 is set to any value (including null), its value would be used as the result of expansion Example: % echo "Foo${1-$PWD}" Foo/home/bar% set -- Spam% echo "Foo${1-$PWD}"FooSpam | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254099/"
]
} |
400,938 | I am running Kubernetes on CentOS 7, and I am unable to deploy pods. After running # kubectl run nginx --image=nginx I run # kubectl describe pod nginx which gives the following output: Name: nginx-701339712-8sx7mNamespace: defaultNode: node2/192.168.1.126Start Time: Fri, 27 Oct 2017 14:06:35 -0400Labels: pod-template-hash=701339712 run=nginxStatus: PendingIP:Controllers: ReplicaSet/nginx-701339712Containers: nginx: Container ID: Image: nginx Image ID: Port: State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Volume Mounts: <none> Environment Variables: <none>Conditions: Type Status Initialized True Ready False PodScheduled TrueNo volumes.QoS Class: BestEffortTolerations: <none>Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 21s 21s 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-701339712-8sx7m to node2 21s 7s 2 {kubelet node2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)" If you scroll the last line,you’ll see that it’s going to redhat.com and failing. I don’t know why it’s going to the RedHat repo for image pull;it should pull from docker hub. | ${1-$PWD} is a shell parameter expansion pattern. It is used to expand to a default value based on another -- whatever on the right of - . Here, in your case: If $1 is unset, then the expansion of $PWD would be substituted Otherwise i.e. if $1 is set to any value (including null), its value would be used as the result of expansion Example: % echo "Foo${1-$PWD}" Foo/home/bar% set -- Spam% echo "Foo${1-$PWD}"FooSpam | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206348/"
]
} |
400,946 | $ ls -l drwsrwsrwt 2 caine caine 4096 2017-01-10 13:21 10050 -rw-r--r-- 10 caine caine 4096 2017-01-19 11:29 10051 drwxr-xr-x 20 caine 18 8096 2017-01-19 11:29 10052 drwxr-xr-x 21 11 caine 4096 2017-01-19 11:29 10053 drwxr-xr-x 22 caine 22 4096 2017-01-19 11:29 10054 -rw-r--r-- 14 caine caine 50 2017-01-19 11:29 10055 Based on the above, how do I work out which file size is the biggest? I was told the answer is 10051 but I can't figure out why. Is there a specific method I can use to work out which file is biggest given only the information above? | ls -l outputs 7 columns: type & permissions, number of links, owner, group, size, modification time, and name. (This is documented for GNU ls in the info page; see info ls , then pick "What information is listed" from the menu, and scroll down to -l . Alternatively info ls 'long ls format' should take you there directly). So you look at the first column (type & permissions), the first letter tells you the type: you have d for directories and - for ordinary files. So there are only two files: 10051 and 10055. Then you look at the 5th (size) column and one is 4096 bytes, the other is 50 bytes — it's clear which is larger. Of course, all those directories may contain quite a few files, and the total sizes of those files may be larger. ls shows the size of the directory itself (if you think about it, it makes sense that "what is in a directory" is data, and thus must be stored somewhere, though on Unix the names of files are stored as part of the directory too), not of its contents—one of the directories itself (10052) is bigger. If you want to know the sizes of directories, including the files and subdirectories (recursively) they contain, du is the command to use. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257770/"
]
} |
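If you'd rather have the machine do the comparison, a sketch using GNU find and sort (assumed available, as on most Linux systems):

```bash
find . -maxdepth 1 -type f -printf '%s\t%p\n' | sort -n | tail -n 1   # largest regular file
du -sh -- */ | sort -h                                                # directory sizes incl. contents
```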
400,958 | What I'm really trying to do is run X number of jobs, with X amount in parallel for testing an API race condition. I've come up with this echo {1..10} | xargs -n1 | parallel -m 'echo "{}"'; which prints 7 8 9104 5 61 2 3 but what I really want to see is ( note order doesn't actually matter). 12345678910 and those would be processed in parallel 4 at a time (or whatever number of cpus/cores, I have, e.g. --jobs 4 ). For a total of 10 separate executions. I tried this echo {1..10} | xargs -n1 | parallel --semaphore --block 3 -m 'echo -n "{} "; but it only ever seems to print once. bonus points if your solution doesn't need xargs which seems like a hack around the idea that the default record separator is a newline, but I haven't been able to get a space to work like I want either. 10 is a reasonably small number, but lets say it's much larger, 1000 echo {1..1000} | xargs -n1 | parallel -j1000 prints parallel: Warning: Only enough file handles to run 60 jobs in parallel.parallel: Warning: Running 'parallel -j0 -N 60 --pipe parallel -j0' orparallel: Warning: raising 'ulimit -n' or 'nofile' in /etc/security/limits.confparallel: Warning: or /proc/sys/fs/file-max may help. I don't actually want 1000 processes, I want 4 processes at a time, each process should process 1 record, thus by the time I'm done it will have executed 1000 times. | I want 4 processes at a time, each process should process 1 record parallel -j4 -k --no-notice 'echo "{}"' ::: {1..10} -j4 - number of jobslots. Run up to 4 jobs in parallel -k - keep sequence of output same as the order of input. Normally the output of a job will be printed as soon as the job completes ::: - arguments The output: 12345678910 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
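For comparison, a sketch of the same 4-at-a-time behaviour without GNU parallel, using xargs' own job control (output order is then not guaranteed):

```bash
printf '%s\n' {1..10} | xargs -n1 -P4 echo   # at most 4 concurrent echo processes
```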
401,064 | I'm running iwlist wlo1 scan | grep ESSID inside a script. It displays French characters in the following format: \xC3\x89 for É and \xC3\xA9 for é. I'm not sure what this format is called. I tried using an answer for converting unicode, echo -ne '\xC3\xA9' | iconv -f utf-16be , but it converted to 쎩 . What is the official name for this format and how can I convert it in bash? | Hexadecimal numeric constants are usually represented with a 0x prefix. Character and string constants may express character codes in hexadecimal with the prefix \x followed by two hex digits. echo -ne '\xC3\x89' should give you É . -e - enable interpretation of backslash escapes (including \xHH - byte with hexadecimal value HH (1 to 2 digits)) For better portability, use the printf function: printf "%b" '\xC3\x89' which prints É | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
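A sketch applying that printf trick to the scan from the question; the interface name is the question's, and it assumes the \xHH escapes appear literally in grep's output:

```bash
iwlist wlo1 scan | grep ESSID | while IFS= read -r line; do
    printf '%b\n' "$line"   # expands each \xHH escape into the corresponding byte
done
```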
401,166 | Looking at the output of env , I noticed that the following function is also listed. BASH_FUNC_mc%%=() { . /usr/share/mc/mc-wrapper.sh } The content of the /usr/share/mc/mc-wrapper.sh file is the following. MC_USER=`id | sed 's/[^(]*(//;s/).*//'`MC_PWD_FILE="${TMPDIR-/tmp}/mc-$MC_USER/mc.pwd.$$"/usr/bin/mc -P "$MC_PWD_FILE" "$@"if test -r "$MC_PWD_FILE"; then MC_PWD="`cat "$MC_PWD_FILE"`" if test -n "$MC_PWD" && test -d "$MC_PWD"; then cd "$MC_PWD" fi unset MC_PWDfirm -f "$MC_PWD_FILE"unset MC_PWD_FILE What do the %% characters mean in the function name? Do they make it the function invoked in specific cases, or do they allow me to call it differently from other functions? If it makes any difference, I am using OpenSUSE 42.3, with Bash version 4.3.42(1)-release (x86_64-suse-linux-gnu). | The function name was crafted by bash updated as a response to the shellshock vulnerability. There was a function named mc that was exported and your bash version is renaming it by prepending BASH_FUNC_ and replacing () by %% . $ d() { date ; }$ export -f d$ env | grep %%BASH_FUNC_d%% { date Here is the bash patch by Florian Weimer that introduced this fix, dated Sept 25 2014: http://seclists.org/oss-sec/2014/q3/att-693/variables-affix.patch Note that a function name can contain almost any characters in bash just like a command name in general (i.e. a file name) so %% is definitely valid here. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1114/"
]
} |
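A sketch extending the answer's demo to show the round trip: a child bash rebuilds the function from that environment variable:

```bash
d() { date; }
export -f d
env | grep '^BASH_FUNC_d'   # BASH_FUNC_d%%=() {  date
bash -c 'd'                 # the child shell imports and runs the function
```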
401,203 | [root@localhost ~]# vim /usr/lib64/sas12/smtpd.conf pwcheck_method: saslauthd mech_list: PLAIN LOGIN log_level: 3 :wq An error occurs: "/usr/lib64/sas12/smtpd.conf" E212: Can't open file for writing. Why can't root open the file for writing? | Check that the /usr/lib64/sas12 directory already exists: root@host:~# ls /usr/lib64/sas12 If it does not, you must create the directory before attempting to create the file: root@host:~# mkdir -p /usr/lib64/sas12 root@host:~# vim /usr/lib64/sas12/smtpd.conf Your vim command should now work as expected. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
401,207 | Let's say we want only user tutu to be able to read the file /home/grafh/file.txt . What configuration is needed to enable that? The file owner must stay root (and only user tutu may read the file). | You have two possibilities: using the classical DAC (Discretionary Access Control, the usual rwx rights) or using file ACLs (Access Control Lists). Using DAC permissions If tutu does not have its own group (check the groups tutu output), you must create a new group and make tutu the only member of this group. root@host:~# addgroup tutu root@host:~# usermod -G tutu tutu Then change the file permissions to allow read access to the members of the tutu group: root@host:~# chgrp tutu /home/grafh/file.txt root@host:~# chmod 640 /home/grafh/file.txt This file will remain owned by root , but be readable (though not writeable) by tutu and not by other users. Using ACL permissions ACLs are additional rights which come in addition to the DAC permissions seen above. They are meant to solve situations which cannot be easily solved using the historical Unix DAC permission system. To allow tutu to read the file: root@host:~# setfacl -m u:tutu:r /home/grafh/file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
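A short sketch for verifying and undoing the ACL variant:

```bash
getfacl /home/grafh/file.txt             # should now list a "user:tutu:r--" entry
setfacl -x u:tutu /home/grafh/file.txt   # remove the entry again
```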
401,251 | I have a user account on my server to which i access through ssh key authentication.I want to give a temporary access to that account to a third person.I was planning to create a password as an alternative authentication method (hence the server will be accessible either by password or by ssh key), give it to that third person for her to perform a job, and then delete the password once the job is done. How can i create (and then delete) such a password? | The answer to the password question is: Edit the /etc/ssh/sshd_config file to ensure that passwords are enabled. PasswordAuthentication yes PermitEmptyPasswords no Then restart the ssh service (HT - @tonioc). This will work for sysvinit systems: /etc/init.d/ssh restart And this should work for systemd systems: systemctl restart ssh And then either: Login with your key and change the passwd of the account if the password is locked. Or (better): Add a new user account for the new user and add that user to whatever minimum groups are required to accomplish the new user's task. Or (even better): Add a new user and have them give you a public key Add their key to their ~/.ssh/authorized_keys file if they don't know how to copy it themselves. However, for the least number of changes but rather poor security, you can simply add another key to: ~/.ssh/authorized_keys on the server. You can have as many keys as you want in the authorized_keys file. It's one key per line with options prepended. There are many options that can be added to the authorized_keys file. See here And/or: man authorized_keys Of course, as others have pointed out, it's not a good idea to have more than one user per account unless it's run by a team. Temporary privileged access or accounts are probably not a good idea. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401251",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257983/"
]
} |
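A sketch of the create-then-revoke cycle for the temporary password; the account name is a placeholder:

```bash
sudo passwd someuser      # set the temporary password interactively
# ... the third person does their job over ssh ...
sudo passwd -l someuser   # lock the password again; key-based login keeps working
```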
401,268 | I can clearly hear the fans (there are 2 of them) inside my laptop spin more on Linux Mint 18.2 Cinnamon 64-bit with kernel 4.10.0-37-generic, but any kernel I've tried for that matter, than on Windows 10 Pro. What is more important, my laptop's cooling does not keep up for the temperatures, as shows this dmesg snippet: [10498.701800] CPU1: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701802] CPU4: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701804] CPU7: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701805] CPU0: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701806] CPU3: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701807] CPU5: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701809] CPU2: Package temperature above threshold, cpu clock throttled (total events = 2582)[10498.701816] CPU6: Package temperature above threshold, cpu clock throttled (total events = 2582) with 2582 of these events in uptime -p of 3 hours, I don't know what does this Linux otherwise than the Windows. I must stress that I am using this Linux for a long time, and it always had this issue. And I start worrying about my CPU in the long-run. I tried installing intel-microcode in Driver Manager. Did not change a thing. For example, I am playing a browser game based on Flash Player in Chrome. The CPU in question is Intel Core i7 4700HQ. EDIT1: ps -aux output (expired on pastebin). | I may have found a solution: sudo apt-get install thermald This package should do the following: Thermal daemon looks for thermal sensors and thermal cooling drivers in the Linux thermal sysfs (/sys/class/thermal) and builds a list of sensors and cooling drivers. Each of the thermal sensors can optionally be binded to a cooling drivers by the in kernel drivers. In this case the Linux kernel thermal core can directly take actions based on the temperature trip points, for each sensor and associated cooling device. For example a trip temperature X in a sensor can be associates a cooling driver Y. So when the sensor temperature = X, the cooling driver "Y" is activated. Since I installed it and rebooted, I have only 4 occurrences of overheating with uptime of 2 hours. I wonder, why this useful package was not pre-installed, but never mind. After I have run 8 simultaneous sha256sum of a 100GiB file, CPU was used at 100% for several minutes: Without the thermald package, the chassis of the laptop above the CPU would literally burn my fingers when touching it, but now it's only moderately warm! Not to mention there is nothing in dmesg about CPU throttling. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401268",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
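A small sketch for keeping the daemon active and watching the effect; it assumes systemd and the lm-sensors package:

```bash
sudo systemctl enable --now thermald   # start now and on every boot
watch -n2 sensors                      # observe package temperature under load
```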
401,492 | Say I have a card with these properties: index: 1name: <alsa_card.pci-0000_00_1f.3>driver: <module-alsa-card.c>owner module: 7properties: alsa.card = "0" alsa.card_name = "HDA Intel PCH" alsa.long_card_name = "HDA Intel PCH at 0xf7240000 irq 129" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1f.3" sysfs.path = "/devices/pci0000:00/0000:00:1f.3/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.id = "a170" device.form_factor = "internal" device.string = "0" device.description = "Built-in Audio" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci"profiles: input:analog-stereo: Analog Stereo Input (priority 60, available: unknown) output:analog-stereo: Analog Stereo Output (priority 6000, available: unknown) output:analog-stereo+input:analog-stereo: Analog Stereo Duplex (priority 6060, available: unknown) output:hdmi-stereo: Digital Stereo (HDMI) Output (priority 5400, available: unknown) output:hdmi-stereo+input:analog-stereo: Digital Stereo (HDMI) Output + Analog Stereo Input (priority 5460, available: unknown) output:hdmi-surround: Digital Surround 5.1 (HDMI) Output (priority 300, available: unknown) output:hdmi-surround+input:analog-stereo: Digital Surround 5.1 (HDMI) Output + Analog Stereo Input (priority 360, available: unknown) output:hdmi-surround71: Digital Surround 7.1 (HDMI) Output (priority 300, available: unknown) output:hdmi-surround71+input:analog-stereo: Digital Surround 7.1 (HDMI) Output + Analog Stereo Input (priority 360, available: unknown) output:hdmi-stereo-extra1: Digital Stereo (HDMI 2) Output (priority 5200, available: unknown) output:hdmi-stereo-extra1+input:analog-stereo: Digital Stereo (HDMI 2) Output + Analog Stereo Input (priority 5260, available: unknown) output:hdmi-stereo-extra2: Digital Stereo (HDMI 3) Output (priority 5200, available: unknown) output:hdmi-stereo-extra2+input:analog-stereo: Digital Stereo (HDMI 3) Output + Analog Stereo Input (priority 5260, available: unknown) output:hdmi-surround-extra2: Digital Surround 5.1 (HDMI 3) Output (priority 100, available: unknown) output:hdmi-surround-extra2+input:analog-stereo: Digital Surround 5.1 (HDMI 3) Output + Analog Stereo Input (priority 160, available: unknown) output:hdmi-surround71-extra2: Digital Surround 7.1 (HDMI 3) Output (priority 100, available: unknown) output:hdmi-surround71-extra2+input:analog-stereo: Digital Surround 7.1 (HDMI 3) Output + Analog Stereo Input (priority 160, available: unknown) off: Off (priority 0, available: unknown)active profile: <output:hdmi-stereo-extra1+input:analog-stereo>sinks: alsa_output.pci-0000_00_1f.3.hdmi-stereo-extra1/#1: Built-in Audio Digital Stereo (HDMI 2)sources: alsa_output.pci-0000_00_1f.3.hdmi-stereo-extra1.monitor/#1: Monitor of Built-in Audio Digital Stereo (HDMI 2) alsa_input.pci-0000_00_1f.3.analog-stereo/#2: Built-in Audio Analog Stereoports: analog-input-headphone-mic: Microphone (priority 8700, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-input-microphone" analog-input-headset-mic: Headset Microphone (priority 8700, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-input-microphone" analog-output-lineout: Line Out (priority 9900, latency offset 0 usec, available: no) properties: analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-speakers" analog-output-headphones: Headphones (priority 9000, 
latency offset 0 usec, available: yes) properties: device.icon_name = "audio-headphones" hdmi-output-0: HDMI / DisplayPort (priority 5900, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" hdmi-output-1: HDMI / DisplayPort 2 (priority 5800, latency offset 0 usec, available: yes) properties: device.icon_name = "video-display" device.product.name = "Inspiron 7459" hdmi-output-2: HDMI / DisplayPort 3 (priority 5700, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" I'd like to output unique audio streams to the analog-output-lineout and potentially all of (hdmi-output-0, hdmi-output-1, hdmi-output2): ports. Is there a means to do that simultaneously in PulseAudio? I know in ALSA I can do something like: gst-launch-1.0 audiotestsrc ! alsasink device="hw:0,0"gst-launch-1.0 audiotestsrc ! alsasink device="hw:0,3" But what I'm seeing in Pulse indicates that I have to set a single "profile" for the "card", and all the profiles seem tied to a single output port. Is there a means to do this or is Pulse just fundamentally limited in this regard? | You must write a custom profile that exposes all the HDMI outputs you need as separate sinks. Have a look at profiles in the Pulseaudio docs, at the files in /usr/share/pulseaudio/alsa-mixer/paths/ , esp. the comments in analog-output.conf.common . All of this is woefully underdocumented. An attempt to make it work: Modify /usr/share/pulseaudio/alsa-mixer/profile-sets/default.conf and append something like the following: [Profile output:analog-stereo+output:hdmi-stereo+output:hdmi-stereo+output:hdmi-stereo]description = Foobaroutput-mappings = analog-stereo hdmi-stereo hdmi-stereo-extra1 hdmi-stereo-extra2input-mappings = Then restart pulse as the regular desktop user: pulseaudio --kill; sleep 1; pulseaudio --start Set the card to use the new profile: pacmd set-card-profile 0 output:analog-stereo+output:hdmi-stereo+output:hdmi-stereo+output:hdmi-stereo Now pacmd list-sinks shows a distinct sink for each port identified in the new profile. The last thing that needs to be done is to figure out how to not muck with the system file. It would be nice to do this through a file in ~/.config/pulse if possible. Edit Here is a description how to setup a new profile for an M-Audio USB device.. I also dug up what I did, that's a slight variation of that (I don't like modifying existing files, they tend to get overwritten by package upgrades): I add a new file /etc/udev/rules.d/91-pulseaudio.rules with the following contents: # Custom Profile for onboard Intel 8086:12c0SUBSYSTEM!="sound", GOTO="xpulseaudio_end"ACTION!="change", GOTO="xpulseaudio_end"KERNEL!="card*", GOTO="xpulseaudio_end"SUBSYSTEMS=="pci", ATTRS{vendor}=="0x8086", ATTRS{device}=="0x1c20", ENV{PULSE_PROFILE_SET}="my-personal.conf"LABEL="xpulseaudio_end" That's mostly a copy from /lib/udev/rules.d/90-pulseaudio.rules . As that is an onboard sound card, these rules must be executed at boot, so they should be in the initrd that your kernel uses. I compile my own kernel, and make-kpkg copies these rules, so that wasn't a problem for me. Then you add make a new file /usr/share/pulseaudio/alsa-mixer/profile-sets/my-personal.conf where you can list the configuration you want (copy and modify from the other configuration files). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125142/"
]
} |
401,501 | I'm using a pipe procedure for data analyzing every day: alias analyze='fetch_data | prog1 | prog2 | prog3 > result.txt' This script works well mostly, but it has about 1% probability to fail. As running it again and again is quite time consuming, I hope I can backup the result for each procedure, for example: /tmp/2017-10-31-10am/fetch_data.txt/tmp/2017-10-31-10am/prog1.txt/tmp/2017-10-31-10am/prog2.txt/tmp/2017-10-31-10am/prog3.txt | You must write a custom profile that exposes all the HDMI outputs you need as separate sinks. Have a look at profiles in the Pulseaudio docs, at the files in /usr/share/pulseaudio/alsa-mixer/paths/ , esp. the comments in analog-output.conf.common . All of this is woefully underdocumented. An attempt to make it work: Modify /usr/share/pulseaudio/alsa-mixer/profile-sets/default.conf and append something like the following: [Profile output:analog-stereo+output:hdmi-stereo+output:hdmi-stereo+output:hdmi-stereo]description = Foobaroutput-mappings = analog-stereo hdmi-stereo hdmi-stereo-extra1 hdmi-stereo-extra2input-mappings = Then restart pulse as the regular desktop user: pulseaudio --kill; sleep 1; pulseaudio --start Set the card to use the new profile: pacmd set-card-profile 0 output:analog-stereo+output:hdmi-stereo+output:hdmi-stereo+output:hdmi-stereo Now pacmd list-sinks shows a distinct sink for each port identified in the new profile. The last thing that needs to be done is to figure out how to not muck with the system file. It would be nice to do this through a file in ~/.config/pulse if possible. Edit Here is a description how to setup a new profile for an M-Audio USB device.. I also dug up what I did, that's a slight variation of that (I don't like modifying existing files, they tend to get overwritten by package upgrades): I add a new file /etc/udev/rules.d/91-pulseaudio.rules with the following contents: # Custom Profile for onboard Intel 8086:12c0SUBSYSTEM!="sound", GOTO="xpulseaudio_end"ACTION!="change", GOTO="xpulseaudio_end"KERNEL!="card*", GOTO="xpulseaudio_end"SUBSYSTEMS=="pci", ATTRS{vendor}=="0x8086", ATTRS{device}=="0x1c20", ENV{PULSE_PROFILE_SET}="my-personal.conf"LABEL="xpulseaudio_end" That's mostly a copy from /lib/udev/rules.d/90-pulseaudio.rules . As that is an onboard sound card, these rules must be executed at boot, so they should be in the initrd that your kernel uses. I compile my own kernel, and make-kpkg copies these rules, so that wasn't a problem for me. Then you add make a new file /usr/share/pulseaudio/alsa-mixer/profile-sets/my-personal.conf where you can list the configuration you want (copy and modify from the other configuration files). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258214/"
]
} |
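A sketch of the per-stage snapshots the question above asks for, built with tee; the timestamped directory follows the question's example and assumes GNU date:

```bash
dir=/tmp/$(date +%F-%I%P)   # e.g. /tmp/2017-10-31-10am
mkdir -p "$dir"
fetch_data | tee "$dir/fetch_data.txt" \
  | prog1  | tee "$dir/prog1.txt" \
  | prog2  | tee "$dir/prog2.txt" \
  | prog3  | tee "$dir/prog3.txt" > result.txt
```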
401,517 | Here's a text file I have: 1|this|1000 2|that|2000 3|hello|3000 4|hello world|4000 5|lucky you|5000 6|awk is awesome|6000 ... How do I only print the lines that have two and only two words (line 4 and 5) in the $2? This is what I have tried but it counts the number of letters instead of words: awk -F"|" '{if(length($2==2) print $0}' | You can use the return value of the awk split function: $ awk -F'|' 'split($2,a,"[ \t]+") == 2' file 4|hello world|4000 5|lucky you|5000 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258220/"
]
} |
401,547 | While trying to receive keys in my Debian Stretch server, I get this error: sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EFExecuting: /tmp/apt-key-gpghome.4B7hWtn7Rm/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EFgpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directorygpg: connecting dirmngr at '/tmp/apt-key-gpghome.4B7hWtn7Rm/S.dirmngr' failed: No such file or directorygpg: keyserver receive failed: No dirmngr | Installing the package dirmngr fixed the error. user@debian-server:~$ sudo apt-get install dirmngr Retrying : user@debian-server:~$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EFExecuting: /tmp/apt-key-gpghome.haKuPppywi/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EFgpg: key A6A19B38D3D831EF: public key "Xamarin Public Jenkins (auto-signing) <[email protected]>" importedgpg: Total number processed: 1gpg: imported: 1 | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/401547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237568/"
]
} |
401,561 | The OpenShift Origin Client Tools allow to forward ports (example command: oc port-forward postgresql-1-a7hrv 5432 ). However, my database backups are fetched from a FreeBSD box. Apparently the oc tools are not available on *BSD and I'd rather use standard commands anyway. How can I do an oc port-forward -equivalent on FreeBSD and access the according database? | Installing the package dirmngr fixed the error. user@debian-server:~$ sudo apt-get install dirmngr Retrying : user@debian-server:~$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EFExecuting: /tmp/apt-key-gpghome.haKuPppywi/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EFgpg: key A6A19B38D3D831EF: public key "Xamarin Public Jenkins (auto-signing) <[email protected]>" importedgpg: Total number processed: 1gpg: imported: 1 | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/401561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12169/"
]
} |
401,576 | The script in question terminates the latest process on my localhost port 8080. #!/bin/bashx=$(lsof -i:8080 | tail -1 | awk '{print $2}')kill -9 $x It didn't work, if the script was named 'killl' (get it? Kill Latest?). It gave me a prompt for cmdsubst> Renaming the script to 'asdf', everything works. Is there an explanation for this behaviour? I'm using MacOS El Capitán. | cmdsubst> is the secondary prompt printed by the zsh shell when it's waiting for the end of a command substitution being entered. If you get that prompt after just entering killl<Return> , then the only reasonable explanation is that you have an alias (which is some form of string macro expansion) for killl that expands to something that contains an unterminated $(...) command substitution, like: $ alias 'killl=echo $(lsof -ti'$ killl :22cmdsubst> Where zsh is asking you to close that $(...) command substitution. A few more notes: the output of lsof is sorted by pid. pid numbers are wrapped, a larger pid is not a guarantee that the process was started later. -i:8080 will report TCP or UDP sockets that have the 8080 port as the source or destination port, whether it's a listening, accepting or connecting socket. If you want to get the pid only, you can use the -t option of lsof : lsof -ti:8080 | tail -n2 kill -9 is kill -s KILL , which sends a signal that the application cannot act upon to exit gracefully. It should only be used as a last resort. To kill the most recently started process that has a socket bound (either end) on port 8080, you could do: #! /bin/sh -unset IFSpids=$(lsof -ti:8080) && LC_ALL=C ps -o pid=,lstart= -p $pids | LC_ALL=C sort -k6,6n -k4,4M -k3,3n -k5,5 -k1,1n | awk 'END{system("kill " $1)}' (assumes GNU sort (as found on macOS) and a ps implementation that supports the lstart column (like macOS' and procps-ng's, though the code would have to be updated for procps-ng where the month and day fields are swapped)). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191974/"
]
} |
401,604 | I previously added some external sources to /etc/apt/sources.list.d but I now want to remove one of them. I also want to: remove all packages solely from that source revert all packages to versions in my original source(s) alternatively, make a list of all packages from this source so I can perform this procedure manually How can I do this? | Depending on the configuration of the repository you wish to remove, apt list --installed might provide enough information to identify packages you need to uninstall or downgrade. Another option, if the repository defines a unique “Origin”, is to use aptitude search '~i ~Oorigin' (replacing origin as appropriate). (This is a generic answer; if you edit your question to specify exactly which source you want to remove, I can add a specific answer.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269/"
]
} |
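A sketch for identifying which repository an installed package came from before downgrading; the package name is a placeholder:

```bash
apt-cache policy some-package              # version table shows each candidate's repository
apt list --installed 2>/dev/null | less    # browse everything currently installed
```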
401,621 | Cron doesn't use the path of the user whose crontab it is and, instead, has its own. It can easily be changed by adding PATH=/foo/bar at the beginning of the crontab, and the classic workaround is to always use absolute paths to commands run by cron, but where is cron's default PATH defined? I created a crontab with the following contents on my Arch system (cronie 1.5.1-1) and also tested on an Ubuntu 16.04.3 LTS box with the same results: $ crontab -l* * * * * echo "$PATH" > /home/terdon/fff That printed: $ cat fff/usr/bin:/bin But why? The default system-wide path is set in /etc/profile , but that includes other directories: $ grep PATH= /etc/profilePATH="/usr/local/sbin:/usr/local/bin:/usr/bin" There is nothing else relevant in /etc/environment or /etc/profile.d , the other files I thought might possibly be read by cron: $ grep PATH= /etc/profile.d/* /etc/environment/etc/profile.d/jre.sh:export PATH=${PATH}:/usr/lib/jvm/default/bin/etc/profile.d/mozilla-common.sh:export MOZ_PLUGIN_PATH="/usr/lib/mozilla/plugins"/etc/profile.d/perlbin.sh:[ -d /usr/bin/site_perl ] && PATH=$PATH:/usr/bin/site_perl/etc/profile.d/perlbin.sh:[ -d /usr/lib/perl5/site_perl/bin ] && PATH=$PATH:/usr/lib/perl5/site_perl/bin/etc/profile.d/perlbin.sh:[ -d /usr/bin/vendor_perl ] && PATH=$PATH:/usr/bin/vendor_perl/etc/profile.d/perlbin.sh:[ -d /usr/lib/perl5/vendor_perl/bin ] && PATH=$PATH:/usr/lib/perl5/vendor_perl/bin/etc/profile.d/perlbin.sh:[ -d /usr/bin/core_perl ] && PATH=$PATH:/usr/bin/core_perl There is also nothing relevant in any of the files in /etc/skel , unsurprisingly, nor is it being set in any /etc/cron* file: $ grep PATH /etc/cron* /etc/cron*/*grep: /etc/cron.d: Is a directorygrep: /etc/cron.daily: Is a directorygrep: /etc/cron.hourly: Is a directorygrep: /etc/cron.monthly: Is a directorygrep: /etc/cron.weekly: Is a directory/etc/cron.d/0hourly:PATH=/sbin:/bin:/usr/sbin:/usr/bin So, where is cron's default PATH for user crontabs being set? Is it hardcoded in cron itself? Doesn't it read some sort of configuration file for this? | It’s hard-coded in the source code (that link points to the current Debian cron — given the variety of cron implementations, it’s hard to choose one, but other implementations are likely similar): #ifndef _PATH_DEFPATH# define _PATH_DEFPATH "/usr/bin:/bin"#endif cron doesn’t read default paths from a configuration file; I imagine the reasoning there is that it supports specifying paths already using PATH= in any cronjob, so there’s no need to be able to specify a default elsewhere. (The hard-coded default is used if nothing else specified a path in a job entry .) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/401621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
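A sketch of overriding that hard-coded default from inside a crontab, as the question notes most cron implementations allow:

```bash
# crontab -e
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * echo "$PATH" > /home/terdon/fff   # now reports the line above
```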
401,623 | I noticed this difference some time ago, but until now, I didn't bother to ask, why is that? On Linux Mint 18 (Ubuntu-based) I can run dmesg without using sudo . On GNU/Linux Debian 9 I must use sudo in order to use for example dmesg . I wonder, where is this behavior coded? And can it be changed? | This is controlled by the dmesg_restrict sysctl entry, documented in the kernel documentation . Its default value is determined by the CONFIG_SECURITY_DMESG_RESTRICT kernel configuration value, which is typically enabled in modern distributions. You can see the current value by running /sbin/sysctl kernel.dmesg_restrict and change its value using (as root ) sysctl -w kernel.dmesg_restrict=1 (to enable the restriction) or sysctl -w kernel.dmesg_restrict=0 (to disable it and restore the old behaviour). To make this change permanent (automatically applied at boot), write it to /etc/sysctl.conf or a configuration file under /etc/sysctl.d : echo kernel.dmesg_restrict=0 | sudo tee -a /etc/sysctl.d/99-dmesg.conf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
401,670 | I have data: 7456 7456 0 0 0 2 7463 7463 0 0 1 2 I want to add column headers so the output is: FID IID PAT MAT SEX PHENOTYPE 7456 7456 0 0 0 2 7463 7463 0 0 1 2 I have tried echo -e "FID\tIID\tPAT\tMAT\tSEX\tPHENOTYPE" | cat file1 > file2 But this is copying the original file and not the headers. sed '1i\FID, IID, PAT, MAT, SEX PHENOTYPE' file1 > file2 has the error sed: 1: "1i\FID, IID, PAT, MAT, ...": extra characters after \ at the end of i command Any advice please? | This GNU sed adds the text as the first line in the file: sed -i '1i FID IID PAT MAT SEX PHENOTYPE' test.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256212/"
]
} |
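Two more sketches: a portable pipeline that also repairs the echo | cat attempt from the question (cat needs - to read the piped header before the file), and the BSD/macOS sed form that avoids the "extra characters" error:

```bash
printf 'FID\tIID\tPAT\tMAT\tSEX\tPHENOTYPE\n' | cat - file1 > file2

# BSD/macOS sed requires the inserted text on its own line:
sed -i '' '1i\
FID IID PAT MAT SEX PHENOTYPE' file1
```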
401,672 | I have a computer that is connected to the router with DHCP and is in fully working order, what I am having issues with adding raspberry pi zero's in link-only mode. First I can get the pi's to connect by just editing the connection from DHCP to link-only but then I obviously lose internet as I can only have one wired device connected at a time and they aren't DHCP like my main ethernet connection. I then though it would be a simple job of adding a new connection with link-only set that I could use to connect the pi's while leaving my internet intact, this is where the issue started I was able to make the connection but for some reason unable to get it to assign itself to the pi, I get two different messages one is activation of network connection failed and the other is Unable to find a connection with UUID ('null') . I searched the internet and didn't find much as sugested in one of the posts I ran nmcli c show which returns a list of connections and here new connection all seem to have a UUID. I am stuck with this issue and would appreciate any help. | How to setup Networking via Network Manager's nmcli for a Raspberry Pi connected to a host PC via the USB port running in OTG mode. I successfully setup a bridged network to a Raspberry Pi Zero using Network Manager via the nmcli commandline interface. Raspberry Pi Setup I followed these instructions listed in the comments above to setup the Pi for OTG Ethernet connection. Modify the Pi's MiniSD card's filesystem Add the following to the SD card's /boot/config.txt # Enable USB OTG like ethernetdtoverlay=dwc2 Create an empty file called ssh in the SD card's /boot directory touch ssh And append the OTG ethernet module onto the boot loader via adding the following to the SD card's /boot/cmdline.txt after rootwait ` modules-load=dwc2,g_ether ` Setup Host PC Network Bridge Then I setup a bridge network interface on my PC's wired ethernet port. nmcli con add type bridge ifname br0nmcli con modify bridge-br0 bridge.stp nonmcli con add type bridge-slave ifname eth1 master bridge-br0 Plugged in the Raspberry Pi Zero to the PC using the OTG port on the Pi. Checked the interface name using ifconfig. And then added the OTG interface to the network bridge. nmcli con add type bridge-slave ifname enp0s29f7u1u4u3 master bridge-br0 With everything set, I enabled the connections: nmcli con up bridge-br0nmcli con up bridge-slave-eth1nmcli con up bridge-slave-enp0s29f7u1u4u3 Verified all interfaces green with nmcli connection root@local:/etc/ssh# nmcli conNAME UUID TYPE DEVICE bridge-br0 ab1fab48-2c31-4ccc-90bf-db444751c080 bridge br0 bridge-slave-enp0s29f7u1u4u3 5efed614-89c7-48d4-996e-0a2e6e616846 802-3-ethernet enp0s29f7u1u4u3 bridge-slave-eth1 53c4d66a-3f9e-49f4-b954-92b13ecf96f8 802-3-ethernet eth1 Watch DHCP server for Raspberry Pi Address And then watched my DHCP server for the Raspberry Pi network address assignment. On your configuration, that information would be available via your router. 
SSH to Pi success: SSH into Pi root@local:/etc/ssh# ssh [email protected]@192.168.xxx.xxx's password: The programs included with the Debian GNU/Linux system are free software;the exact distribution terms for each program are described in theindividual files in /usr/share/doc/*/copyright.Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extentpermitted by applicable law.Last login: Wed Nov 1 19:33:48 2017 from 192.168.xxx.xxxpi@raspberrypi:~ $ uname -aLinux raspberrypi 4.4.38+ #938 Thu Dec 15 15:17:54 GMT 2016 armv6l GNU/Linuxpi@raspberrypi:~ $ ifconfiglo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:724 errors:0 dropped:0 overruns:0 frame:0 TX packets:724 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:58980 (57.5 KiB) TX bytes:58980 (57.5 KiB)usb0 Link encap:Ethernet HWaddr 8e:31:5b:06:db:bb inet addr:192.168.xxx.xxx Bcast:192.168.xxx.xxx Mask:255.255.255.0 inet6 addr: fe80::xxx:/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:756 errors:0 dropped:0 overruns:0 frame:0 TX packets:430 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:90941 (88.8 KiB) TX bytes:76278 (74.4 KiB)pi@raspberrypi:~ $ routeKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifacedefault 192.168.xxx.xxx 0.0.0.0 UG 202 0 0 usb0192.168.xxx.0 * 255.255.255.0 U 202 0 0 usb0pi@raspberrypi:~ $ sudo -sroot@raspberrypi:/home/pi# ping unix.stackexchange.comPING unix.stackexchange.com (151.101.65.69) 56(84) bytes of data.64 bytes from 151.101.65.69: icmp_seq=1 ttl=55 time=27.2 ms64 bytes from 151.101.65.69: icmp_seq=2 ttl=55 time=7.91 ms64 bytes from 151.101.65.69: icmp_seq=3 ttl=55 time=6.40 ms64 bytes from 151.101.65.69: icmp_seq=4 ttl=55 time=6.78 ms64 bytes from 151.101.65.69: icmp_seq=5 ttl=55 time=7.87 ms^C--- unix.stackexchange.com ping statistics ---5 packets transmitted, 5 received, 0% packet loss, time 4006msrtt min/avg/max/mdev = 6.404/11.240/27.219/8.012 msroot@raspberrypi:/home/pi# Addendum After reading through Part 2 of the Pi OTG Networking instructions, I will add the component that they provide to permanently set the Pi's MAC address. With a permanent MAC address, your DHCP server and/or router will stop giving sequential IP addresses with every reboot of the RPi. webonomic's says: You do this by appendign(sic) this to cmdline.txt on the boot partition on your pi: g_ether.host_addr=8a:3e:d4:ce:89:53 Just take the address form the ifconfig command of your laptop, after you have established a connection. They go on to mention changing the Pi's configuration from DHCP to Static IP. However, I do not recommend doing it that way. I always recommend setting up static leases on your network's DHCP server. This way, only the DHCP server needs to be modified in order to redesign your network. There are also useful methods for dividing up subnets in order to limit access to certain hosts from others. Most modern home routers allow setting up static leases. It's a good practice to utilize the full capabilities of your DHCP server. Additional notes I discovered that setting g_ether.host_addr is not enough to provide a constant MAC address for the Pi. There is a second requirement to set g_ether.dev_addr The Gadget documentation recommends also setting a manufacturer as well a product number. 
Linux-USB Gadget API Framework says: To better support DHCP, ZCIP, and related network autoconfiguration, you'll want to manage Ethernet addresses so that each peripheral reuses the same unique address every time it boots. You should assign those addresses using a registered IEEE 802 company id; this will also make the device appear to Linux hosts as an "ethN" interface, not as "usbN". It's easy to do this without a separate ID PROM (or an initrd) if you're using boot firmware like U-Boot: *#* manufacturing assigns Ethernet addresses; company id is xx:xx:xxsetenv eth_a_host xx:xx:xx:01:23:45setenv eth_a_gadget xx:xx:xx:67:89:acsetenv eth_i_vendor "Great Stuff, LLC"setenv eth_i_product "Our Cool Thing"setenv eth_args g_ether.host_addr=\$(eth_a_host)setenv eth_args $(eth_args) g_ether.dev_addr=\$(eth_a_gadget)setenv eth_args $(eth_args) g_ether.iManufacturer=\$(eth_i_vendor)setenv eth_args $(eth_args) g_ether.iProduct=\$(eth_i_product)*#* you can assign USB vendor/product/version codes too...setenv add_eth_args setenv bootargs $(eth_args) \$(bootargs)...setenv bootcmd run add_eth_args\;bootm These parameters can also be appended to the cmdline.txt for better usability with DHCP servers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68407/"
]
} |
401,728 | I can't seem to get an empty JSON {} to echo if an envvar is missing. I either have a trailing } in the output if set, or the escape displays.
bash-3.2$ unset X
bash-3.2$ echo "${X:-{}}"
{}
bash-3.2$ X=y
bash-3.2$ echo "${X:-{}}"
y}
bash-3.2$ echo "${X:-{\}}"
y
bash-3.2$ unset X
bash-3.2$ echo "${X:-{\}}"
{\}
bash-3.2$ echo "${X:-'{}'}"
'{}'
bash-3.2$ X=z
bash-3.2$ echo "${X:-'{}'}"
z
How do I escape it correctly? | Quote your braces:
bash-3.2$ echo "${X:-"{}"}"
{}
bash-3.2$ X=y
bash-3.2$ echo "${X:-"{}"}"
y
bash-3.2$ unset X
bash-3.2$ echo "${X:-"{}"}"
{}
Inner double quotes are required here, which looks funny but is syntactically fine. Single quotes won't work here because the whole expansion is itself inside double quotes: within a double-quoted ${...} , a single quote is just an ordinary character, while embedded double quotes still act as quotes. This is real nested quoting, not end-and-resume, which you can verify by putting spaces in. Double will work fine though.
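Consistent with that explanation, dropping the outer double quotes lets the single-quoted form work too: echo ${X:-'{}'} prints {} when X is unset. You then lose the word-splitting protection of quoting the expansion, though, so the nested-double-quote form above remains the one to prefer. | {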
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42385/"
]
} |
401,746 | I'm running Debian Testing (Last updated 31/10/2017) and when I play a video in full screen through a browser from either Twitch or iView it hangs the GPU, so the GUI is all frozen. The computer I have is an ' Up Squared ' with an Intel 505HD. The kernel is still running though, as I can still access it via ssh. I'm running kernel 4.12 Linux BB-8 4.12.0-0.bpo.2-amd64 #1 SMP Debian 4.12.13-1~bpo9+1 (2017-09-28) x86_64 GNU/Linux I'm also using a work around for video tearing in my /etc/X11/xorg.conf Section "Device" Identifier "Intel Graphics" Driver "intel" Option "TearFree" "true"End Error message (dmesg output); [52661.796383] [drm] GPU HANG: ecode 9:1:0xeeffefa1, in Xorg [688], reason: Hang on bcs, action: reset[52661.796642] drm/i915: Resetting chip after gpu hang[52661.799118] BUG: unable to handle kernel NULL pointer dereference at 0000000000000070[52661.807992] IP: reset_common_ring+0x8d/0x110 [i915][52661.813475] PGD 0 [52661.813476] P4D 0 [52661.819653] Oops: 0000 [#1] SMP[52661.823178] Modules linked in: ftdi_sio usbserial bnep 8021q garp mrp stp llc cpufreq_conservative cpufreq_userspace cpufreq_powersave iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat snd_hda_codec_hdmi nf_conntrack libcrc32c usb_f_acm i2c_designware_platform iptable_mangle i2c_designware_core usb_f_fs iptable_filter usb_f_serial u_serial libcomposite udc_core snd_soc_skl snd_soc_skl_ipc snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core configfs snd_soc_sst_match snd_soc_core snd_compress bluetooth ecdh_generic rfkill intel_rapl x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_rapl_perf binfmt_misc pcspkr nls_ascii efi_pstore nls_cp437 vfat fat efivars lpc_ich evdev snd_hda_intel snd_hda_codec snd_hda_core i915 joydev idma64[52661.902704] hid_generic snd_hwdep snd_pcm snd_timer drm_kms_helper snd mei_me intel_lpss_pci drm soundcore sg mei intel_lpss shpchp i2c_algo_bit mfd_core video button i2c_dev parport_pc ppdev lp parport efivarfs ip_tables x_tables autofs4 hid_logitech_hidpp hid_logitech_dj usbhid hid ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache sd_mod mmc_block crc32c_intel aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ahci i2c_i801 sdhci_pci xhci_pci libahci sdhci xhci_hcd mmc_core usbcore usb_common libata r8169 scsi_mod mii[52661.955100] CPU: 0 PID: 10403 Comm: kworker/0:1 Not tainted 4.12.0-0.bpo.2-amd64 #1 Debian 4.12.13-1~bpo9+1[52661.966039] Hardware name: AAEON UP-APL01/UP-APL01, BIOS UPA1AM18 06/23/2017[52661.973996] Workqueue: events_long i915_hangcheck_elapsed [i915][52661.980744] task: ffff94e405403180 task.stack: ffffa3c603344000[52661.987428] RIP: 0010:reset_common_ring+0x8d/0x110 [i915][52661.993492] RSP: 0000:ffffa3c603347b98 EFLAGS: 00010206[52661.999356] RAX: 0000000000003e60 RBX: ffff94e417fec900 RCX: ffff94e5322fb6f8[52662.007364] RDX: 0000000000003ea0 RSI: ffff94e5350e8000 RDI: ffff94e5322fb6c0[52662.015372] RBP: ffffa3c603347bb8 R08: 000000000009fbe0 R09: ffffa3c6300137a0[52662.023380] R10: 00000000ffffffff R11: 0000000000000070 R12: ffff94e5320f6000[52662.031406] R13: 0000000000000000 R14: ffff94e533b8a900 R15: ffff94e533b88000[52662.039418] FS: 0000000000000000(0000) GS:ffff94e53fc00000(0000) knlGS:0000000000000000[52662.048499] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033[52662.054945] CR2: 0000000000000070 CR3: 0000000254761000 CR4: 00000000003406f0[52662.062955] Call Trace:[52662.065706] ? bit_wait_io_timeout+0x90/0x90[52662.070534] ? 
i915_gem_reset+0xbe/0x370 [i915][52662.075661] ? intel_uncore_forcewake_put+0x36/0x50 [i915][52662.081845] ? bit_wait_io_timeout+0x90/0x90[52662.086673] ? i915_reset+0xd9/0x160 [i915][52662.091424] ? i915_reset_and_wakeup+0x17d/0x190 [i915][52662.097309] ? i915_handle_error+0x1df/0x220 [i915][52662.102789] ? scnprintf+0x49/0x80[52662.106644] ? hangcheck_declare_hang+0xce/0xf0 [i915][52662.112456] ? fwtable_read32+0x83/0x1b0 [i915][52662.117569] ? i915_hangcheck_elapsed+0x2b1/0x2e0 [i915][52662.123533] ? process_one_work+0x181/0x370[52662.128227] ? worker_thread+0x4d/0x3a0[52662.132531] ? kthread+0xfc/0x130[52662.136246] ? process_one_work+0x370/0x370[52662.140935] ? kthread_create_on_node+0x70/0x70[52662.146018] ? do_group_exit+0x3a/0xa0[52662.150215] ? ret_from_fork+0x25/0x30[52662.154420] Code: c8 01 00 00 89 50 14 48 8b 83 80 00 00 00 8b 93 c8 01 00 00 89 50 28 48 8b bb 80 00 00 00 e8 2b 29 00 00 4d 8b ac 24 60 02 00 00 <49> 8b 45 70 48 39 43 70 74 51 4d 85 ed 74 14 4c 89 ef e8 cc c1 [52662.175681] RIP: reset_common_ring+0x8d/0x110 [i915] RSP: ffffa3c603347b98[52662.183401] CR2: 0000000000000070[52662.201377] ---[ end trace c9ac8dcf9dad3202 ]---[52665.887380] asynchronous wait on fence i915:Xorg[688]/0:9fbe1 timed out[52665.947423] pipe A vblank wait timed out[52665.951876] ------------[ cut here ]------------[52665.957155] WARNING: CPU: 0 PID: 8318 at /build/linux-RdeW6Z/linux-4.12.13/drivers/gpu/drm/i915/intel_display.c:12636 intel_atomic_commit_tail+0xf21/0xf50 [i915][52665.973347] Modules linked in: ftdi_sio usbserial bnep 8021q garp mrp stp llc cpufreq_conservative cpufreq_userspace cpufreq_powersave iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat snd_hda_codec_hdmi nf_conntrack libcrc32c usb_f_acm i2c_designware_platform iptable_mangle i2c_designware_core usb_f_fs iptable_filter usb_f_serial u_serial libcomposite udc_core snd_soc_skl snd_soc_skl_ipc snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core configfs snd_soc_sst_match snd_soc_core snd_compress bluetooth ecdh_generic rfkill intel_rapl x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_rapl_perf binfmt_misc pcspkr nls_ascii efi_pstore nls_cp437 vfat fat efivars lpc_ich evdev snd_hda_intel snd_hda_codec snd_hda_core i915 joydev idma64[52666.052546] hid_generic snd_hwdep snd_pcm snd_timer drm_kms_helper snd mei_me intel_lpss_pci drm soundcore sg mei intel_lpss shpchp i2c_algo_bit mfd_core video button i2c_dev parport_pc ppdev lp parport efivarfs ip_tables x_tables autofs4 hid_logitech_hidpp hid_logitech_dj usbhid hid ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache sd_mod mmc_block crc32c_intel aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ahci i2c_i801 sdhci_pci xhci_pci libahci sdhci xhci_hcd mmc_core usbcore usb_common libata r8169 scsi_mod mii[52666.104723] CPU: 0 PID: 8318 Comm: kworker/u8:2 Tainted: G D 4.12.0-0.bpo.2-amd64 #1 Debian 4.12.13-1~bpo9+1[52666.116991] Hardware name: AAEON UP-APL01/UP-APL01, BIOS UPA1AM18 06/23/2017[52666.124918] Workqueue: events_unbound intel_atomic_commit_work [i915][52666.132146] task: ffff94e3d1a27140 task.stack: ffffa3c604d1c000[52666.138825] RIP: 0010:intel_atomic_commit_tail+0xf21/0xf50 [i915][52666.145673] RSP: 0018:ffffa3c604d1fda8 EFLAGS: 00010286[52666.151517] RAX: 000000000000001c RBX: ffff94e533b88000 RCX: 0000000000000000[52666.159498] RDX: 0000000000000000 RSI: ffff94e53fc0dee8 RDI: ffff94e53fc0dee8[52666.167486] RBP: 0000000000000000 R08: ffff94e5339a7a18 R09: 000000000000036e[52666.175473] 
R10: ffffa3c604d1fda8 R11: ffffffffbb6cddcd R12: 0000000000000000[52666.183489] R13: 0000000000000000 R14: ffff94e5321b9000 R15: 0000000000000001[52666.191476] FS: 0000000000000000(0000) GS:ffff94e53fc00000(0000) knlGS:0000000000000000[52666.200533] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033[52666.206964] CR2: 00007f0ca6011670 CR3: 0000000200009000 CR4: 00000000003406f0[52666.214954] Call Trace:[52666.217729] ? remove_wait_queue+0x60/0x60[52666.222316] ? process_one_work+0x181/0x370[52666.227027] ? worker_thread+0x4d/0x3a0[52666.231311] ? kthread+0xfc/0x130[52666.235018] ? process_one_work+0x370/0x370[52666.239711] ? kthread_create_on_node+0x70/0x70[52666.244792] ? do_group_exit+0x3a/0xa0[52666.248988] ? ret_from_fork+0x25/0x30[52666.253181] Code: 4c 89 44 24 08 48 83 c7 08 e8 5c 4b fe f9 4c 8b 44 24 08 4d 85 c0 0f 85 36 fe ff ff 8d 75 41 48 c7 c7 b0 9e 94 c0 e8 05 2b 0b fa <0f> ff e9 20 fe ff ff 8d 70 41 48 c7 c7 80 9e 94 c0 e8 ef 2a 0b [52666.274391] ---[ end trace c9ac8dcf9dad3203 ]--- Full dmesg: https://gist.github.com/anonymous/9cf0a1768cbcc950bba593e50ca024f1 This is pretty easy to replicate and can be done by loading twitch in fullscreen and after about 5minutes it will hang. Sometimes at the start and sometimes a little way through. How can I fix this? Update I updated the kernel to 4.13 which didn't help the problem. I then removed the TearFree option. This did help I believe, but then I have tearing. ** Error **Here is a slightly different error but still the same result being it's frozen GUI screen and have to do the Magic Keys to reboot. [125311.098771] systemd-gpt-auto-generator[16633]: Failed to dissect: Input/output error[125311.375868] systemd-gpt-auto-generator[16649]: Failed to dissect: Input/output error[244118.272043] [drm:intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR* CPU pipe A FIFO underrun[324053.730179] usb 1-3.4: reset low-speed USB device number 7 using xhci_hcd[425672.192351] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=21284282 end=21284283) time 232 us, min 1074, max 1079, scanline start 1069, end 1082[428291.432332] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=21415244 end=21415245) time 136 us, min 1074, max 1079, scanline start 1073, end 1080[597930.852731] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=29897215 end=29897216) time 158 us, min 1074, max 1079, scanline start 1071, end 1080[664909.893109] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=33246167 end=33246168) time 346 us, min 1074, max 1079, scanline start 1066, end 1088[678368.073058] [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=33919076 end=33919077) time 165 us, min 1074, max 1079, scanline start 1072, end 1081[682058.832485] [drm] GPU HANG: ecode 9:1:0xeeffefa1, in Xorg [786], reason: Hang on bcs0, action: reset[682058.832609] drm/i915: Resetting chip after gpu hang[682058.835055] BUG: unable to handle kernel NULL pointer dereference at 0000000000000070[682058.844025] IP: reset_common_ring+0x99/0xf0 [i915] | Please do as root: EDIT /etc/default/grub Find the line that starts with GRUB_CMDLINE_LINUX and append i915.enable_rc6=0 , giving you for example: GRUB_CMDLINE_LINUX="splash quiet i915.enable_rc6=0" EXECUTE: update-grub REBOOT (optional step) EXECUTE systool -m i915 -av | grep enable_rc6 to check whether you have set this option correctly | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114279/"
]
} |
401,747 | I have (foolishly?) written a couple of moderately general-purpose xslt scripts. I'd quite like to turn these into executables that read an xml document from standard in or similar. The way you do this with other languages is to use a shbang . Is there an easy / standard way to do this with xsltproc and friends? Sure I could hack up a wrapper around xsltproc that pulls off the first comment line... but if there is something approximating a standard this would be nicer to use. | You could use the generic binfmt-misc kernel module that handles which interpreter is used when an executable file is run. It is typically used to allow you to run foreign architecture files without needing to prefix them with qemu or wine , but can be used to recognise any magic character sequence in a file header, or even a given filename extension, like *.xslt . See the kernel documentation . As an example, if you have a file demo.xslt that starts with the characters <xsl:stylesheet version=... you can ask the module to recognise the string <xsl:stylesheet at offset 0 in the file and run /usr/bin/xsltproc by doing as root
colon=$(printf '\\x%02x' \':) # \x3a
echo ":myxsltscript:M::<xsl${colon}stylesheet::/usr/bin/xsltproc:" >/etc/binfmt.d/myxslt.conf
cat /etc/binfmt.d/myxslt.conf >/proc/sys/fs/binfmt_misc/register
(The printf dance is needed because : is the field separator in the registration string, so the literal colon inside <xsl:stylesheet has to be written as its hex escape \x3a .) You don't need to go via the /etc file unless you want the setting to be preserved over a reboot. If you don't have the /proc file, you will need to mount it first: mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc Now, if you chmod +x demo.xslt you can run demo.xslt with any args and it will run xsltproc with the filename demo.xslt provided as an extra first argument. To undo the setup, use echo -1 >/proc/sys/fs/binfmt_misc/myxsltscript
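To confirm the registration took effect, read the entry back; every registered format appears as a file under /proc/sys/fs/binfmt_misc , so cat /proc/sys/fs/binfmt_misc/myxsltscript should report the format as enabled along with the interpreter path and the magic bytes. | {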
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36185/"
]
} |
401,752 | Is it possible to add a name as a group member when the name has a space? For example "foo bars" is the name and I want it to add to the group called "reindeers". This group is created in AD and it is quite common for names to have spaces. I won't be able to change the name. Apologies if this has already been asked here. I just could not find any references. I did find solutions/discussions to adding a username with a space in the sudoers config file by replacing the space with a "_" instead, or escaping the space with a backslash. Not sure if this works with regards to adding it to a group. Thanks, Mrky | Group and user names aren’t allowed to contain the space character on POSIX-style systems; see Command line login failed with two strings ID in Debian Stretch for references (the restrictions apply to groups as well as users). In your case you might be able to work around the limitation by managing your groups in AD rather than in /etc/group . But I’d recommend trying to convince the powers that be to drop spaces entirely...
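For illustration, the stock shadow-utils tools refuse such names up front, so the failure is immediate rather than subtle: on a typical Debian or RHEL box, groupadd 'foo bars' exits with an error along the lines of groupadd: 'foo bars' is not a valid group name (the exact wording varies by version). | {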
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258422/"
]
} |
401,759 | I just installed Debian 9.2.1 on an old laptop as a cheap server. The computer is not physically accessed by anyone other than myself, so I would like to automatically login upon startup so that if I have to use the laptop itself rather than SSH, I don't have to bother logging in. I have no graphical environments installed, so none of those methods would work, and I've tried multiple solutions such as https://superuser.com/questions/969923/automatic-root-login-in-debian-8-0-console-only However all it did was result in no login prompt being given at all... So I reinstalled Debian. What can I do to automatically log in without a graphical environment? Thanks! | Edit your /etc/systemd/logind.conf , change #NAutoVTs=6 to NAutoVTs=1 Create a /etc/systemd/system/[email protected]/override.conf by running: systemctl edit getty@tty1 Paste the following lines:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear %I 38400 linux
The first, empty ExecStart= is deliberate: ExecStart is a list setting, so the blank assignment clears the unit's original command before the drop-in supplies the new one. Enable the [email protected] , then reboot: systemctl enable [email protected] Arch linux docs: getty | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/401759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258425/"
]
} |
401,785 | I'd like to sort this output by lstart (start of process): ps -eo lstart,pid,cmd Is there a way to output lstart in ISO format like YYYY-MM-DD HH:MM:SS? But sorting alone does not solve it. I really would like to have ISO date format. | Is there a way to output lstart in ISO format like YYYY-MM-DD HH:MM:SS ? With awk + date cooperation: ps -eo lstart,pid,cmd --sort=start_time | awk '{ cmd="date -d\""$1 FS $2 FS $3 FS $4 FS $5"\" +\047%Y-%m-%d %H:%M:%S\047"; cmd | getline d; close(cmd); $1=$2=$3=$4=$5=""; printf "%s\n",d$0 }' Alternative approach using ps etimes keyword(elapsed time since the process was started, in seconds): ps -eo etimes,pid,cmd --sort=etimes | awk '{ cmd="date -d -"$1"seconds +\047%Y-%m-%d %H:%M:%S\047"; cmd | getline d; close(cmd); $1=""; printf "%s\n",d$0 }' date -d -"$1"seconds - difference between the current timestamp and elapsed time, will give the timestamp value of the process | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22068/"
]
} |
401,840 | I was searching for a way to convert hexadecimal via command line and found there is a very easy method echo $((0x63)) . It's working great but I'm a little confused as to what is happening here. I know $(...) is normally a sub-shell, where the contents are evaluated before the outer command. Is it still a sub-shell in this situation? I'm thinking not as that would mean the sub-shell is just evaluating (0x63) which isn't a command. Can someone break down the command for me? | $(...) is a command substitution (not just a subshell), but $((...)) is an arithmetic expansion. When you use $((...)) , the ... will be interpreted as an arithmetic expression. This means, amongst other things, that a hexadecimal string will be interpreted as a number and converted to decimal. The whole expression will then be replaced by the numeric value that the expression evaluates to. Like parameter expansion and command substitution, $((...)) should be quoted so as not to be affected by the shell's word splitting and filename globbing. echo "$(( 0x63 ))" As a side note, variables occurring in an arithmetic expression do not need their $ :
$ x=030; y=30; z=0x30
$ echo "$(( x + y +x ))"
78
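Arithmetic expansion also understands explicit bases beyond hex, written base#number , so $(( 2#1100011 )) and $(( 8#143 )) both evaluate to 99 as well; printf converts back the other way: printf '%x\n' 99 prints 63 . | {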
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/401840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
401,905 | Unix iMac shell terminal sed -i 's/original/new/g' maths.tx Message returned: sed: -i may not be used with stdin | Macs use the BSD version of utilities such as sed and date , which have their own idiosyncrasies. In this specific case, the BSD build of sed mandates the extension for the backup file with -i , rather than it being optional , as in GNU sed . As such: sed -i .bak 's/needle/pin/g' haystack The shown command will replace all instances of needle with pin in the file haystack , and the original file will be preserved in haystack.bak . From the manual for the implementation of sed on a Mac: -i extension Edit files in-place, saving backups with the specified extension. If a zero-length extension is given, no backup will be saved. It is not recommended to give a zero-length extension when in-place editing files, as you risk corruption or partial content in situations where disk space is exhausted, etc. As opposed to on a Linux host: -i[SUFFIX], --in-place[=SUFFIX] edit files in place (makes backup if SUFFIX supplied) Note that "a zero-length extension" is distinct from "no extension". You can eschew the backup entirely, then, with: sed -i '' 's/needle/pin/g' haystack
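If you need one command line that works under both BSD and GNU sed, attach the suffix directly to the flag, as in sed -i.bak 's/needle/pin/g' haystack ; both builds accept that spelling (GNU sed will not take the suffix as a separate argument), and you can delete haystack.bak afterwards if you don't want the backup. | {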
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258478/"
]
} |
401,934 | I want to perform some action only if my shell is "connected" to a terminal, i.e. only if my standard input comes from a terminal's input and my standard output (and standard error? maybe that doesn't matter) gets printed/echoed to a terminal. How can I do that, without relying on GNU/Linux specifics (like /proc/self ) directly? | isatty is a function for checking this , and the -t flag of the test command makes that accessible from a shell script: -t file_descriptor True if file descriptor number file_descriptor is open and is associated with a terminal. False if file_descriptor is not a valid file descriptor number, or if file descriptor number file_descriptor is not open, or if it is open but is not associated with a terminal. You can check if FD 0 (standard input) is a TTY with: test -t 0 You can do the same for FDs 1 and 2 to check the output and error streams, or all of them: test -t 0 -a -t 1 -a -t 2 The command returns 0 (succeeds) if the descriptors are hooked up to a terminal, and is false otherwise. test is also available as the [ command for a "bracket test": if [ -t 0 ] ; then ... is an idiomatic way to write this conditional.
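One portability note on the combined form: POSIX marks test's -a operator as obsolescent, so the more robust spelling chains separate invocations, if [ -t 0 ] && [ -t 1 ] && [ -t 2 ]; then ... , which behaves identically here. | {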
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/401934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
401,948 | I would like to know if there is a shortcut in deepin os for docking a window to a side or corner of the screen. Just like it does when the window is dragged to one edge of the screen. It doesn't seem to be listed in the shortcut section of the control panel. Or if there is none, is it somewhere explained how to create on? | Following your last answer, and looking at the source code ; I found this using DBUS. You can create a custom shortcut using the following line as command qdbus com.deepin.wm /com/deepin/wm com.deepin.wm.TileActiveWindow 1 1 is for left, and 2 for right. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258564/"
]
} |
401,956 | I'm troubleshooting an interactive command and would like to: see output printed to my screen with its original coloration, unbuffered or line-buffered (instead of block -buffered,) as that command produces it use something like tee to redirect this command's output to a file at the same time, preserving — that is, not garbling (such as by passing ANSI escape sequences through raw instead of processing them properly) — its coloration in this resulting log file. (I've realized that part of this preference probably isn't possible and thus replaced it with those that follow.) use tee or something like it to redirect this command's output to a file at the same time display output as colored by the command in question at run-time not pollute the resulting logs with ANSI escape sequences — that is, strip them from the output after displaying that on the terminal but before saving it to the log file. Normally, tee -a does this for me with the output from the same kind of command I'm trying to log here like a charm, but some odd corner case I've hit in the pipeline ending in tee is disrupting this normal, sensible behavior, it seems. I've done a bit of digging around to see if anybody's had a similar problem before and come up with a solution for it, but all the relevant material I've been able to dredge up is this: How to make output of any shell command unbuffered? — Stack Overflow Preserve colors while piping to tee — SuperUser Turn off buffering in pipe — Unix & Linux Stack Exchange Script Command without Junk Character — Unix & Linux Stack Exchange Removing control chars (including console codes / colours) from script output — Unix & Linux Stack Exchange format output from Unix “script” command: remove backspaces, linefeeds and deleted chars? — Stack Overflow Realtime print statements with tee in interactive script — Unix & Linux Stack Exchange None of these resources, however, quite offer enough hints that I think I might be able to use them to cobble something along the lines of what I'm after together on my own nearly as quickly as I'd like — so I can finally pass the logs I need help generating on to somebody who can glean something from them, of course. I've tried a few different variations on the unbuffer - and script -based suggestions on offer in what I've perused already and was considering giving stdbuf some gos before I started writing this question up, but haven't gotten around to doing that last one just yet. I'm using /usr/bin/bash on OS X v10.11.6 'El Capitan.' The command I'm trying to troubleshoot is HOMEBREW_BUILD_FROM_SOURCE=1 brew upgrade -vd --build-from-source mailutils , where brew is the Homebrew package manager and the version of Mailutils I'm trying to build from source and to which I'm trying to upgrade is v3.3 (I currently have v3.2,) but I deeply suspect that switching it out with an entirely different command that also produces block-buffered output when used as part of a pipeline (which, if I understand correctly , would be any shell command, as I've seen it mentioned that setting up a pipeline may incur block buffering in and of itself) would not solve my problem here. What tool or tools might I be able to use to achieve the outcome I seek with respect to the logs I'm trying to capture here? | Process substitution 1 lets you transform a script typescript as it's written. You can use script -q >(./dewtell >> outfile ) brew args... instead of script -aq outfile brew args... 
to write a log with escape sequences removed, where ./dewtell is whatever command runs the script that removes them. On other systems than relatively recent versions of macOS or FreeBSD, the command will be slightly different, because different script implementations support different options. See below for full details. The Story So Far (including alternative solutions) Your solution Your brew command runs numerous other output-generating programs as subprocesses, such as ./configure scripts and make , and you have found that piping brew 's output to tee with brew args... 2>&1 | tee -a outfile causes the output (of at least some of those subprocesses) to be buffered and not to appear on your terminal in real time. You found that using script , by running script -aq , solved this problem by keeping standard output a terminal 2 , and that you can also pass -k if you want your own input logged even in situations when it is not echoed to the terminal as you type it . You found further that dewtell's extended version of Gilles's Perl script to remove escape sequences from files cleans up the the generated typescript effectively, transforming it into what you need. The difference between Gilles's original script and dewtell's extended version is that, while they both remove escape sequences, including but not limited to those that specify color changes, dewtell's script also removes carriage return characters ( $'\r' , represented as ^M in vim and in the output of cat -v ) as well as backspace characters ( $'\b' , represented as ^H in vim and in the output of cat -v ) and whatever characters, if any, that they appear to have erased. Problems with some Perl implementations You reported that the script needs a "relatively recent" Perl interpreter. But it doesn't call for any newer features with use or otherwise appear to rely on them, and a friend of mine who runs macOS 10.11.6 El Capitan has verified that it works with the system-provided perl 5.8.12, so I don't know why (or if) you needed a newer perl. I expect most people can just use the perl they have. But the script did fail on Mac OS X 10.4.11 Tiger (PPC) with the system-provided perl 5.8.6, which incorrectly believes (at least on my system) that m is not in the character class [@-~] , even with LC_COLLATE=C or LC_ALL=C and even though the system-provided grep , sed , python , and ruby do not have this problem. This caused pieces of color-specifying escape sequences to remain in the output, as the sequences failed to match (?:\e\[|\x9b) [ -?]* [@-~] and matched the later alternative \e. instead. With perlbrew , I installed perl 5.27.5, 5.8.9, and even 5.6.2 on that system; none had the problem. If one does need or want to run the script with a Perl interpreter installed elsewhere than /usr/bin/perl , then one can change the hashbang line at the top to the correct path; or one can change it to /usr/bin/env perl if the desired perl executable appears first in the path, i.e., if it would run if one typed perl and pressed Enter ; or one can invoke the interpreter explicitly with the script's filename as its first argument, e.g., /usr/local/bin/perl dewtell instead of ./dewtell . Ways to keep or replace the original typescript Some users who need to remove escape sequences from a typescript will want to keep the old unprocessed typescript too. 
If such a user wishes to process a typescript called dirty.log and write the output to clean.log , they would run ./dewtell dirty.log > clean.log , if necessary replacing ./dewtell with whatever other command runs dewtell's script (or some other script they wish to use). To modify the typescript "in place" instead, one can pass -i to perl , running perl -i.orig dewtell typescript where typescript is the typescript generated by script , .orig is any suffix to be used for the backup file, and dewtell is the Perl script. Or one may instead run perl -i dewtell typescript , which discards the original because no backup suffix is supplied. These methods don't work with all Perl scripts but they work with this one because it uses <> to read input, and <> in Perl respects -i . You used sponge to write the changes back to the original typescript. This is also a good and reliable method, though it requires that moreutils be installed. Preventing Escape Sequences From Ever Being Logged The remaining question is how to write a log that has escape sequences removed in the first place. As you say : [T]here may further be a way to string this all together into a single pipeline, but I haven't been able to figure that out as of this writing; additional comments or answers showing how... Use process substitution instead of a pipeline. The problem is that script on most (all?) systems does not have an option to support those transformations. Since script writes to a file whose name you either specify or defaults to typescript --not to standard output--piping from script would not affect what is written to the typescript. Placing the script command on the right side of the pipe operator ( | ) to pipe to it is not a good idea either. In your case this is specifically because output from brew or its subprocesses was buffered when its standard output was a pipe, so it didn't appear when you needed to see it. Even if that problem were solved, I don't know of any reasonable way to use a pipeline 1 together with script to accomplish this task. But it can be done with process substitution. 3 In process substitution 1 (also explained here ), you write <( command... ) or >( command... ) . The shell creates a named pipe and uses it as standard output or input, respectively, for a subshell in which command... is run. The text <( command... ) or >( command... ) is replaced with the filename of the named pipe--that's the substitution --so you can pass it as an argument to a program or use it as the target of a redirection. Use <( command... ) to run command... like its output is the contents of a file you'll read from . 4 Use >( command... ) to run command... like its input is the contents of a file you'll write to . 4 Not all systems support named pipes, but most do. Not all shells support process substitution, but Bash does, so long as it's running on a system that is capable of supporting it, your build of Bash hasn't omitted support for it, and POSIX mode is turned off in the shell. In Bash you usually have access to process substitution, especially if you're using any remotely recent operating system. Even on my Mac OS X 10.4.11 Tiger (PPC) system where "$BASH_VERSION" is 2.05b.0(1)-release , process substitution works just fine. Here's how to do that while using script 's syntax on a recent macOS system. 
This should work on your macOS 10.11 El Capitan system--and, going by that manpage , any macOS system at least as far back as macOS 10.9 Mavericks and possibly earlier: script -q >(./dewtell >> clean.log ) brew args... That logs everything written to the terminal, including your own input if it is echoed back to you , i.e., if it appears in the terminal, which it usually does. If you want your own input logged even if it doesn't appear , bearing in mind that the situation where this occurs is often that you are entering a password, then as you mentioned in your answer , add the -k option: script -kq >(./dewtell >> clean.log ) brew args... In either case, replace ./dewtell with whatever command runs dewtell's script or any other program or script you want to use to filter the output, clean.log with name of the file you want to write the typescript to with escape sequences omitted, and brew args... 5 with the command you are running and its arguments. Overwriting or Appending to the Log If you want to overwrite clean.log instead of appending to it then use > clean.log instead of >> clean.log . The actual file is being written by the command that is run via process substitution, so the > or >> redirection operator appears inside >( ) . Don't attempt to use >>( instead of >( , which is a syntax error as well as meaningless because the > in >( for process substitution does not mean redirection. Don't pass -a to script with the intention that it would prevent your log file from being overwritten in this situation, because this would simply open the named pipe in append mode--which has the same effect as opening it for a normal write--and then either overwrite or append clean.log , still depending on whether > clean.log or >> clean.log is used in the subshell. Similarly, don't use >& or &> or add 2>&1 inside >( ) (or anywhere), because if ./dewtell generates any errors or warnings, you would want to see those rather than having them written to clean.log . The script command automatically includes text from standard error in its typescript; you don't need to do anything special to achieve this. On Other Operating Systems As your answer says: [S]ome versions of the script command have a different syntax; the one given is for OS X/macOS, so adjust as necessary. GNU/Linux Most GNU/Linux systems use the script implementation provided by util-linux . If you want to cause it to run a specific command rather than starting a shell, you must use the -c option and pass the entire command as a single command-line argument to script , which you can achieve by enclosing it in quotes. This is different from the version of script on recent macOS systems like yours, which allows you to pass the command naturally as multiple arguments placed after the output filename (with no option like -c ). So on Debian, Ubuntu, Fedora, CentOS, and most other GNU/Linux systems , you could use this command (if it had a brew command 6 , or replacing it with whatever command you want to run and log transformed output): script >(./dewtell >> clean.log ) -qc 'brew args... ' As with script on your system, on GNU/Linux remove -q if you want script to include more messages about how logging has begun and ended. Even with the -q option, this version of script does still include one line at the top saying when it started running, though it does not show you that line and it does not write or show anything about when it stopped running. There is no -k option. Only text that appears in the terminal is recorded. 
7 FreeBSD The script command in macOS originated in FreeBSD. All versions support -a to append instead of overwriting (though, as noted above, this does not help you append when you are writing through a named pipe using process substitution). -a was the only option up to and including FreeBSD 2.2.5 . The -q option was added in FreeBSD 2.2.6 . The -k option was added in FreeBSD 2.2.7 . Up through FreeBSD 2.2.5 , the script command did not allow a specific command to be given, but instead always ran the user's shell, given by the SHELL environment variable, with /bin/sh as a fallback if the variable is unset. Starting in FreeBSD 2.2.6 , a specific command could be given on the command line to script which it would run instead of a shell. Thus later versions of FreeBSD, including those commonly encountered today, are similar to newer macOS systems such as yours in the way the script command may be invoked. Likewise, older versions of FreeBSD are similar to older versions of macOS (see below). Note that perl is not part of FreeBSD's base system in any recent release, and bash never has been. Both may be readily installed using packages (such as with pkg install perl5 bash bash-completion ) or ports. The system-provided /bin/sh in FreeBSD does not support process substitution. Older Versions of macOS, and any other system with a less versatile script I tested on Mac OS X 10.4 Tiger where script accepts only the -a option. It does not accept -q or -k . It includes only keystrokes shown in the terminal in its typescript 7 , as with the util-linux version on GNU/Linux systems. At least until I can find a reliable source of documentation for script in every version of macOS (to my knowledge, only the 10.9 Mavericks manpages are readily available online), I recommend macOS users run man script to check what syntax their script command accepts, how it behaves by default, and what options it supports. You would want to use these commands on an old version of macOS like mine: script >(./dewtell >> clean.log )brew args... exit This also applies to script on any other OS where it doesn't support many options , or on OSes where other options are supported but you prefer not to use them. This method of using script to start a shell, running whatever command or commands in the shell that you need logged, and then exiting the shell, is the traditional way. The ugly hack of pretending your command is your shell If you really must use script to run a single command rather than a new instance of your shell, there is an ugly hack that you can sometimes use: you can fool it into thinking the command you want to run is actually your shell with SHELL= your-command script outfile . You should think twice before doing this, though, because if your-command itself actually consults the SHELL environment variable to check what actual shell you use, hilarity unfortunate behavior would ensue. Furthermore, that will not readily work for a command consisting of multiple words--that is, a command to which you are passing one or more arguments. If you wrote SHELL='brew args... ' before script on the same line, that would succeed at passing brew args... into script 's environment as the value of SHELL , but that entire string would be used as the name of the command, rather than just the first word, and no arguments would be passed to the command, rather than all the other words being passed. You could work around this by writing a shell script, called run-brew or whatever you want to call it, that runs brew with args... 
, and then passing that as the value of the SHELL environment variable. After you've made the run-brew shell script, running it via the script command could look like this: SHELL=run-brew script >(./dewtell >> clean.log ) For the reasons given above, I recommend against using the method of assigning your command name to SHELL , unless the action you are performing is unimportant or you are sure it will not involve the use of SHELL . Since Homebrew performs numerous, quite complicated actions, I suggest against actually running a run-brew script like this. (There's nothing wrong with putting your long, complicated brew command in a run-brew script, only with using SHELL=run-brew to make script run it.) I did find this method a bit useful when testing the techniques shown above with a simple program in place of brew args... , however. Testing and Demonstrating the Technique You may find it useful to try out some of these methods on a command less complicated than your long brew command. I know I did. The demo program / test input generator, and the testing method used I made this simple interactive Perl script that writes to standard error, prompts the user on standard output for their name, reads it from standard input, then writes a greeting to standard output with the user's name in color: #!/usr/bin/perluse strict;use warnings;use Term::ANSIColor;print STDERR $0, ": warning: this program is boring\n";print "What's your name? ";chomp(my $name = <STDIN>);printf "Hello, %s!\n", colored($name, 'cyan'); I called it colorhi and put it in the same directory as dewtell's script , which I called dewtell . In my own testing I replaced #!/usr/bin/perl with #!/usr/bin/env perl in both scripts. 8 I tested in Ubuntu 16.04 LTS with the system-provided perl 5.22.1 and versions 5.6.2 and 5.8.9 provided by perlbrew ; FreeBSD 11.1-RELEASE-p3 with the pkg -provided perl 5.24.3 and versions 5.6.2, 5.8.9, and 5.27.5 provided by perlbrew ; and Mac OS X 10.4.11 Tiger with the system-provided perl 5.8.6 and versions 5.6.2, 5.8.9, and 5.27.5 provided by perlbrew . I repeated the tests described below with each of those perl versions, first testing the system-provided 9 version, then using perlbrew use to temporarily cause each perlbrew -provided perl binary to appear first in $PATH (e.g., to test perl 5.6.2, I ran perlbrew use 5.6.2 , then the commands shown below for the system on which I was testing). A friend tested it in macOS 10.11.6 El Capitan, with the original hashbang lines, causing the system-provided perl 5.18.2 to be used, and not testing any other interpreters. That test employed the same commands I ran while testing on FreeBSD. All those tests succeeded except with the system-provided perl in Mac OS X 10.4.11 Tiger , which failed due to what appears to be a strange bug involving character classes in regular expressions, as I described earlier in detail, and as shown below in an example. On Ubuntu While in the directory that contained the scripts, I ran these commands on the Ubuntu system to produce a typescript with escape sequences and any backspace characters I might type: printf 'Whatever header you want...\n\n' >dirty.logscript dirty.log -aqc ./colorhi I typed Eliah , then behaved as though I had thought better of it, erasing it with backspaces and typing Bob from accounting instead. Then I pressed Enter and was greeted in color. 
Then I ran these commands to separately produce a typescript without escape sequences and without any signs of my real name, interacting with it in exactly the same way (including typing and erasing Eliah ): printf 'Whatever header you want...\n\n' >clean.logscript >(./dewtell >>clean.log) -qc ./colorhi vim displays control characters symbolically like cat -v and offers the advantage of brightened or colored text. This is what the buffer shown by view dirty.log looked like, but with the representations of control characters italicized so they stand out here: Whatever header you want...Script started on Thu 09 Nov 2017 07:17:19 AM EST./colorhi: warning: this program is boring ^M What's your name? Eliah ^H ^H^H ^H^H ^H^H ^H^H ^H Bob from accounting ^M Hello, ^[ [36mBob from accounting ^[ [0m! ^M And this is what the buffer looked like for view clean.log : Whatever header you want...Script started on Thu 09 Nov 2017 07:18:31 AM EST./colorhi: warning: this program is boringWhat's your name? Bob from accountingHello, Bob from accounting! Results were the same with each interpreter tested, except of course for the timestamp. On FreeBSD (and macOS 10.11.6 El Capitan) I carried out the test the same way on FreeBSD as on Ubuntu, except that I used these commands to produce dirty.log : printf 'Whatever header you want...\n\n' >dirty.logscript -aq dirty.log ./colorhi And I used these commands to produce clean.log : printf 'Whatever header you want...\n\n' >clean.logscript -q >(./dewtell >>clean.log) ./colorhi Those are the same commands my friend ran to test this on macOS 10.11 , and although the input issued was slightly different from my Eliah / Bob from accounting input, a name was still typed, erased with backspaces, and replaced by another name. The output was thus similar except for the names and number of backspaces. With all four of the Perl implementations tested on FreeBSD and the one (system-provided) implementation on macOS 10.11, both dirty.log and clean.log showed the expected output. Comparing the FreeBSD results with the Ubuntu results, the difference was the absence of any timestamps, due to -q . All escape sequences and carriage returns were successfully removed in clean.log , as were all backspaces and characters whose erasure the backspaces indicated. On Mac OS X 10.4.11 Tiger I carried out the test the same way on my old Tiger system as on Ubuntu and FreeBSD, except that I used these commands to produce dirty.log : printf 'Whatever header you want...\n\n' >dirty.logSHELL=colorhi script -a dirty.log And I used these commands to produce clean.log : printf 'Whatever header you want...\n\n' >clean.logSHELL=colorhi script >(./dewtell >>clean.log) Since this system's script command doesn't support -q , results included both (a) a Script started line appended after the header and (b) a newline followed by a Script done line appended at the very end of each typescript. Both those lines contained timestamps. Besides that, the results were the same as on Ubuntu and FreeBSD, except that the escape sequences to switch to and from cyan text were not fully removed with the system-provided perl . The relevant line from dirty.log always appeared this way in vim , as expected: Hello, ^[ [36mBob from accounting ^[ [0m! ^M With the system-provided perl 5.8.6, this was the corresponding line in clean.log , showing 6m and 0m , which should have been removed, left over: Hello, 6mBob from accounting0m! 
With each of the perlbrew -installed perls, all escape sequences were fully and correctly removed, and that line in clean.log looked like this, just as it did with all Perl interpreters I ran on Ubuntu and FreeBSD: Hello, Bob from accounting! Notes 1 That manual is for Bash 2. Many Bash users are on major version 4 and will prefer to read about process substitution , pipelines , and other topics in the current Bash manual . Current versions of macOS ship with Bash 3. 2 Standard error is almost always unbuffered, regardless of what type of file or device it is. There is no rule that programs cannot buffer writes to file descriptor 2, but there is a strong tradition not to do so, based in the need to actually see error and warning messages when they occur--and also the need to see them at all , even if the program terminates abnormally without ever properly closing or otherwise flushing its open file descriptors. It would usually be a bug for a program to buffer writes to standard error by default. 3 Process substitution uses a named pipe , also called a FIFO , which achieves the same general goal as the pipe operator | in shells, but is more versatile. However, even though this is a pipe , I consider that it is not a pipeline , which I take to refer to the specific syntactic construct and corresponding behavior of a shell. 4 If you consider a named pipe to be a file, which you should , then this is literally what is happening. 5 Although "$COMMAND" appears in your answer and passes an entire command as a single argument to script (because double quotes suppress word splitting ), you were able to pass the command to script as multiple arguments . 6 Such as with Linuxbrew , which I should acknowledge you introduced me to . 7 However, I recommend that anyone who relies on this behavior to keep sensitive data secret test the behavior of their script command, and maybe even inspect generated typescripts to ensure no data that must be protected are present. To be extra safe, use an editor that shows characters that would ordinarily be hidden on screen, or use cat -v . 8 The versions with #!/usr/bin/perl , including the colorhi implementation shown, should work and do the right thing on most systems. I used #!/usr/bin/env perl in my own testing. But my friend who has the same OS that you (the original poster) are running used #!/usr/bin/perl . This achieved the goal of checking, with minimal complication or potential for doubt, that the system-provided perl would work. 9 On FreeBSD there is no system-provided perl in the strictest sense. I tested the version installed via pkg first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/401956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86927/"
]
} |
401,984 | I am using iptables with ipset on an Ubuntu server firewall. I am wondering if there is a command for importing a file containing a list of ip's to ipset. To populate an ipset, right now, I am adding each ip with this command: ipset add manual-blacklist x.x.x.x It would be very helpful if I can add multiple ip's with a single command, like importing a file or so. At command for ip in `cat /home/paul/ips.txt`; do ipset add manual-blacklist $ip;done I get this response resolving to IPv4 address failed to parse 46.225.38.155 for each ip in ips.txt I do not know how to apply it. | You can use ipset save/restore commands. ipset save manual-blacklist You can run the above command and see how you need to create your save file. Example output: create manual-blacklist hash:net family inet hashsize 1024 maxelem 65536 add manual-blacklist 10.0.0.1 add manual-blacklist 10.0.0.2 And restore it with the command below. ipset restore -! < ips.txt Here we use -! to ignore errors mostly because of duplication.
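Two further notes on the symptoms in the question. A "failed to parse" error for a syntactically valid address is very often a stray carriage return from a file edited on Windows; stripping it first usually cures the original loop as well: tr -d '\r' < ips.txt > ips.clean . And if the set already exists, you don't need a hand-written save file at all, since restore input can be built straight from a bare address list: sed 's/^/add manual-blacklist /' ips.txt | ipset restore -! . | {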
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/401984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258597/"
]
} |
402,014 | Consider we have many photos with names like DSC_20170506_170809.JPEG . To rename the photos so that they follow the pattern Paris_20170506_170809.JPEG , I've written the following script, which works perfectly: for file in *.JPEG; do mv ${file} ${file/DSC/Paris}; done My question is: how can we write this script using a while loop instead of a for loop? | There's nothing wrong with using a while loop here. You just have to do it right:
set -- *.JPEG
while (($#)); do
  mv -- "${1}" "${1/DSC/Paris}"
  shift
done
The while loop above is just as reliable as the for loop (it will work with any file names) and while the latter is - in many instances - the most appropriate tool to use, the former is a valid alternative 1 that has its uses (e.g. the above could process three files at a time or process only a certain number of arguments etc). All these commands ( set , while..do..done and shift ) are documented in the shell manual and their names are self-explanatory...
set -- *.JPEG
# set the positional arguments, i.e. whatever that *.JPEG glob expands to
while (($#)); do
# execute the 'do...' as long as the 'while condition' returns a zero exit status
# the condition here being (($#)) which is arithmetic evaluation - the return
# status is 0 if the arithmetic value of the expression is non-zero; since $#
# holds the number of positional parameters then 'while (($#)); do' means run the
# commands as long as there are positional parameters (i.e. file names)
mv -- "${1}" "${1/DSC/Paris}"
# this renames the current file in the list
shift
# this actually takes a parameter - if it's missing it defaults to '1' so it's
# the same as writing 'shift 1' - it effectively removes $1 (the first positional
# argument) from the list so $2 becomes $1, $3 becomes $2 and so on...
done
1: It's not an alternative to text-processing tools so NEVER use a while loop to process text. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64110/"
]
} |
402,029 | I'm trying to append the output of a command (stdout and stderr) to an existing file. What I'm trying to do is something like this: command >>file 2>>&1 The problem is that 2>>&1 throws an error, but >>file 2>>file does not. So, I think I'm misunderstanding how redirection works, or what is a file descriptor and what information is saved inside it. Summarizing, what is the difference between the two following commands, and why the first one does not work, but the second one works?
command >>file 2>>&1 #not working
command >>file 2>>file #working
Thanks | What you want to do is set up file descriptor 1 (stdout) to append to a file, then redirect fd 2 (stderr) to simply do what fd 1 is doing. command >>file 2>&1 You get an error with 2>>&1 simply because >>& is not a redirection operator. As for >>file 2>>file : that opens the file twice, giving two independent file descriptions; it happens to behave here because both opens use append mode, but 2>&1 (which just duplicates descriptor 1) is the idiomatic form. Read Redirections in the bash manual, particularly sections 3.6.5 and 3.6.8
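If your shell is bash 4 or later, there is also a dedicated shorthand for exactly this: command &>> file appends both stdout and stderr in one operator. It is a bash extension rather than POSIX sh, so keep the 2>&1 form in portable scripts. | {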
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145502/"
]
} |
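A quick demonstration of the point in the answer above: 2>&1 copies whatever file descriptor 1 points at the moment it is evaluated, which is why the order of redirections matters:

ls /nonexistent >>file 2>&1   # fd 1 already points at the file, so stderr follows it there
ls /nonexistent 2>&1 >>file   # too early: stderr was duplicated while fd 1 was still the terminal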
402,169 | Executing (for example) the following command to get the list of memory mapped pages: pmap -x `pidof bash` I got output in which some read-only pages are marked as "dirty". Why are read-only pages marked as "dirty", i.e. written to and requiring a write-back? If they are read-only, the process should not be able to write to them... (In the provided example dirty pages are always 4 kB, but I found other cases with different values.) I also checked /proc/pid/smaps and those pages are described as "Private Dirty". | A dirty page does not necessarily require a write-back. A dirty page is one that was written to since the kernel last marked it as clean. The data doesn't always need to be saved back into the original file. The pages are private, not shared, so they wouldn't be saved back into the original file. It would be impossible to have a dirty page backed by a read-only file. If the page needs to be removed from RAM, it will be saved in swap. Pages that are read-only, private and dirty, but within the range of a memory-mapped file, are typically data pages that contain constants that need to be initialized at run time, but don't change after they have been initialized. For example, they may contain static data that embeds pointers; the pointer values depend on the address at which the program or library is mapped, so they have to be computed after the program has started, with the page being read-write at this stage. After the pointers have been computed, the contents of the page won't ever change in this instance of the program, so the page can be changed to read-only. See "Hunting Down Dirty Memory Pages" by stosb for an example with code fragments. You may, more rarely, see read-only, executable, private, dirty pages; these happen with some linkers that mix code and data more freely, or with just-in-time compilation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104917/"
]
} |
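A shell sketch for observing what the answer above describes on a live system: it lists the read-only private mappings of a process that still have dirty pages, straight from /proc (field positions assume the usual Linux smaps layout):

pid=$$   # the current shell; substitute any PID of interest
awk '/^[0-9a-f]+-/       { perm = $2; map = $6 }
     /^Private_Dirty:/ && $2 > 0 && perm ~ /^r--/ { print perm, $2 " kB", map }' \
    "/proc/$pid/smaps"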
402,187 | I am using vim and I think I know how to change all of the / to ) but I do not know how to apply it for only lines that end in ) I want to change this: apples/oranges/grapes) apples/oranges/grapes apples/oranges/grapes) apples/oranges/grapes to this: apples)oranges)grapes) apples/oranges/grapes apples)oranges)grapes) apples/oranges/grapes | Use vim's :global command to restrict the substitution to the lines you want: :g/)$/s#/#)#g The :g/)$/ part selects every line whose last character is ) , and s#/#)#g then replaces all / on just those lines ( # is used as the substitution delimiter so the slashes don't need escaping; :g/)$/s/\//)/g is equivalent). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258754/"
]
} |
402,191 | I want to check if a file exist, isn't empty and equal to another file. If so, do nothing. If they aren't equal then overwrite both files with cat "some text" . If they don't exist or are empty then also create file with cat some text I tried a few solutions, but whenever I get one condition right it makes another fail, or fail when no files exist. What would be the cleanest way to solve this issue? All of this using bash? | if [ -f file1 ] && [ -s file1 ] && [ -f file2 ] && [ -s file2 ] && cmp file1 file2 &>/dev/null; then : do nothing in this case onlyelse echo "some text" >file1 echo "some text" >file2 # or cp file1 file2fi and a shorter version, based on the comments if [ -s file1 ] && cmp file1 file2 &>/dev/null; then : do nothing in this case onlyelse echo "some text" >file1 echo "some text" >file2 # or cp file1 file2fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119404/"
]
} |
402,233 | If NATing is done in the OUTPUT chain of the nat table, then what is the function of SNAT in POSTROUTING? | Have a look at this diagram . The green boxes are for iptables , the blue are for ebtables (ignore those). So you see that the OUTPUT chain is only traversed for packets produced by local applications, while the POSTROUTING chain is traversed by all packets, including those routed from somewhere else. There are two subcases for network address translation (NAT): SNAT translates the source address of the packet, while DNAT translates the destination address of the packet. You are restricted in which chains you can do either: nat/PREROUTING and nat/OUTPUT can do DNAT, while nat/POSTROUTING and possibly nat/INPUT (not sure if this still works) can do SNAT. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258778/"
]
} |
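To make the chain split in the answer above concrete, a pair of hypothetical rules (the addresses and interface are placeholders): DNAT belongs in nat/OUTPUT for locally generated packets (or nat/PREROUTING for forwarded ones), while SNAT sits in nat/POSTROUTING, which every outgoing packet traverses:

# rewrite the destination of locally generated web traffic (DNAT in nat/OUTPUT)
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 10.0.0.5:8080
# rewrite the source address of everything leaving eth0, routed or local (SNAT in nat/POSTROUTING)
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.7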
402,235 | I have an awk command which runs well in the terminal; it creates a different output file per column header. The awk command: for((i=2;i<5;i++)); do awk -v i=$i 'BEGIN{OFS=FS="\t"}NR==1{n=$i}{print $1,$i > n".txt"}' ${Batch}.result.txt; done The same command, when incorporated in a shell script, shows the error: Syntax error: Bad for loop variable It worked the following way (I tried with a she-bang as suggested, but that alone repeated the error): for i in 2 3 4; do awk -v i=$i 'BEGIN{OFS=FS="\t"}NR==1{n=$i}{print $1,$i > n".txt"}' | I don't think the error has anything to do with your Awk command. I think you are running it in the POSIX Bourne shell sh , in which the for-loop construct with (( is not supported. Run the script with the shebang set to the path where bash is installed. Usually it is safe to do #!/usr/bin/env bash because #!/usr/bin/env searches PATH for bash , and bash is not always in /bin , particularly on non-Linux systems; on an OpenBSD system, for example, it's in /usr/local/bin . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
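If the script must stay in plain POSIX sh rather than bash, the (( )) loop from the entry above can be rewritten like this (a sketch reusing the question's command; parenthesising the redirection target keeps older awks happy):

i=2
while [ "$i" -lt 5 ]; do
  awk -v i="$i" 'BEGIN{OFS=FS="\t"} NR==1{n=$i} {print $1,$i > (n".txt")}' "${Batch}.result.txt"
  i=$((i + 1))
done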
402,264 | I know that different distributions patch the packages that are available in the respective repositories but I've never understood why is there a need to do so. I would appreciate it if somebody could explain or point me to the relevant documentation online. Thanks. | It took a few tries, but I think I comprehend what you're asking now. There are several possible reasons for a distribution to patch given software before packaging. I'll try and give a non-exclusive list; I'm sure there are other possible reasons. For purposes of this discussion, "upstream" refers to the original source code from the official developers of the software Patches that upstream has not (or not yet) incorporated into their main branch for whatever reason or reasons. Usually because the distribution's package maintainer for that package believes that said patches are worthwhile, or because they're needed to keep continuity in the distribution (Suppose you've got a webserver and after a routine update to php several functions you've been relying on don't work anymore, or it's unable to read a config file from the old style) Distributions tend to like standardized patterns for their filesystem hierarchy in /etc/ ; every software developer may or may not have their own ideas for what constitutes proper standards. Therefore, one of the first thing a distribution package maintainer tends to do is patch the build scripts to configure and expect said configuration files in a hierarchy pattern that corresponds to the rest of the distribution. Continuing on the topic of configuration, one of the first "patches" tends to be a set of default configuration files that will work with the rest of the distribution "out of the box" so to speak, allowing the end user to get started immediately after installing rather than having to manually sort out a working configuration. That's just off the top of my head. There may very well be others, but I hope this gives you some idea. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206398/"
]
} |
402,315 | This: $ echo {{a..c},{1..3}} produces this: a b c 1 2 3 Which is nice, but hard to explain given that $ echo {a..c},{1..3} gives a,1 a,2 a,3 b,1 b,2 b,3 c,1 c,2 c,3 Is this documented somewhere? The Bash Reference doesn't mention it (even though it has an example using it). | Well, it is unravelled one layer at a time: X{{a..c},{1..3}}Y is documented as being expanded to X{a..c}Y X{1..3}Y (that's X{A,B}Y expanded to XA XB with A being {a..c} and B being {1..3} ), themselves documented as being expanded to XaY XbY XcY X1Y X2Y X3Y . What may be worth documenting is that they can be nested (that the first } does not close the first { in there for instance). I suppose shells could have chosen to resolve the inner braces first, like by acting upon each closing } in turn: X{{a..c},{1..3}}Y would first become X{a,{1..3}}Y X{b,{1..3}}Y X{c,{1..3}}Y (that is A{a..c}B expanded to AaB AbB AcB , where A is X{ and B is ,{1..3}Y ), then X{a,1}Y X{a,2}Y X{a,3}Y X{b,1}Y X{b,2}Y X{b,3}Y X{c,1}Y X{c,2}Y X{c,3}Y , and finally XaY X1Y XaY X2Y XaY X3Y XbY X1Y ... But I don't find that particularly more intuitive nor useful (see Kevin's example in comments for instance); there would still be some ambiguity as to the order in which the expansions would be done, and that's not how csh (the shell that introduced brace expansion in the late 70s, while the {1..3} form came later (1995) from zsh and {a..c} yet later (2004) from bash ) did it. Note that csh (from the start, see the 2BSD (1979) man page ) did document the fact that brace expansions could be nested, though did not explicitly say how nested brace expansions would be expanded. But you can look at the csh code from 1979 to see how it was done then. See how it does explicitly handle nesting indeed, and how it's resolved starting from the outer braces. In any case, I don't really see how the expansion of {a..c},{1..3} could have any bearing. In there, the , is not an operator of a brace expansion (as it's not inside braces), so is treated like any ordinary character. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171560/"
]
} |
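The two behaviours discussed above are easy to verify interactively; the second line shows that an inner brace expansion simply expands wherever it sits once the outer alternation has been resolved:

$ echo X{{a..c},{1..3}}Y
XaY XbY XcY X1Y X2Y X3Y
$ echo X{a,{1..3}}Y
XaY X1Y X2Y X3Y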
402,384 | I have a script that works with /etc/NetworkManager : drwxr-xr-x 6 root root 4096 Apr 3 2017 NetworkManager/ I want to give the user programX write permission for this folder without changing the ownership. Is that possible or would I have to change the ownership? | This is what access control lists are for. setfacl -m 'u:programX:rwx' /etc/NetworkManager The user account programX now has read, write, and traverse access to the directory, but does not have ownership access. Bonus way of doing this on FreeBSD with its NFS ACLs: setfacl -m 'u:programX:rwxD::allow' /etc/NetworkManager Further reading Operation not supported with setfacl? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
402,390 | I'd like to create a script which creates another script using 2 parameters. The second script should use its own parameter. The problem I keep running into is that the last echo puts the argument of the first script into the second. So how do I make my second script take a parameter? Here's my first script: #! /bin/bash MY_FILE=$1 MY_PATH=$2 touch $MY_PATH/$MY_FILE echo "#! /bin/bash" > $MY_FILE echo "ps -u $1 -o comm=CMD -o pid,ppid,user,uid,gid" >> $MY_FILE | Inside double quotes, $1 is expanded by the shell running the first script, so the generated file ends up containing that shell's first argument instead of a reference to its own. Keep the outer shell from expanding it by escaping the dollar sign or by using single quotes: echo 'ps -u "$1" -o comm=CMD -o pid,ppid,user,uid,gid' >> "$MY_PATH/$MY_FILE" (or echo "ps -u \$1 ..." ). Two further notes: you touch $MY_PATH/$MY_FILE but then redirect the echo output to $MY_FILE in the current directory (use "$MY_PATH/$MY_FILE" in both places), and you'll want chmod +x "$MY_PATH/$MY_FILE" at the end so the generated script is executable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258895/"
]
} |
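A common alternative for the entry above is a quoted here-document, which disables every kind of expansion in the generated text at once (paths as in the question; a sketch):

#!/bin/bash
MY_FILE=$1
MY_PATH=$2
cat > "$MY_PATH/$MY_FILE" <<'EOF'
#!/bin/bash
ps -u "$1" -o comm=CMD -o pid,ppid,user,uid,gid
EOF
chmod +x "$MY_PATH/$MY_FILE"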
402,424 | I'm looking for a way to drop the mouse, forever and always. I got to a very comfortable point, where I don't use it for regular text editing, however, something's bothering me. When I work with the shell and I want to copy some prev output, I need to highlight + copy using the mouse. That sucks. I know about screen and 'Ctrl-A [', is cool, but I want to browse the scrollback in Vim, not in Screen's built-in interface. Is there a way for me to open the current shell output buffer into Vim and copy from it? | From within a Screen window, run screen -X hardcopy -h svim srm s hardcopy -h dumps the scrollback into a temporary file which you can then open in Vim if you like. If you script this you should use a proper temporary file name: #!/bin/shscrollback_file=$(mktemp)screen -X hardcopy -h "$scrollback_file"vim -c 'call delete(@%)' "$scrollback_file" (Or you could just run your shell in Emacs or in Neovim.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11181/"
]
} |
402,425 | I want to know if the Swap is used at all. free shows the usage of the memory: # free total used free shared buff/cache available Mem: 1362084 169864 38288 724 1153932 1163816 Swap: 1048572 0 1048572 My understanding is that this is just a snapshot of the memory usage. The numbers change if I repeat the free command. Is there a possibility to see if the Swap was even used? | free only shows the current state. To see whether any swapping has happened since boot, look at the kernel's cumulative counters in /proc/vmstat : grep -E '^pswp(in|out)' /proc/vmstat The pswpin / pswpout values count pages swapped in and out since boot; if both are 0, the swap space has never been touched. To watch swap activity as it happens, run vmstat 1 and check the si / so columns, and if the sysstat package is installed, sar -W shows historical swapping statistics. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128826/"
]
} |
402,466 | I'm about to buy a new laptop that is being used with Linux only. Unfortunately finding a Linux laptop is not simple at all, and it seems the only option I found includes a nvidia Quadro M1200 and an Intel HD 630. I know that it is very complex/impossible to properly run wayland (Ubuntu for instance) on nvidia. Actually I don't care in any way about the nvidia GPU, the Intel GPU should be more than sufficient. But is it possible to completely disable the nvidia GPU to let wayland run properly on the Intel GPU? I read about nvidia prime: can I use it like this? Can I completely disable nvidia and just forget about it, like it was not even there? | The answer was simple: just install nvidia drivers, open the nvidia settings page and set to use the Intel HD GPU only. Login again and you are done. Works perfectly. Battery lasts much much longer and wayland works properly. As soon as the nvidia GPU is enabled, it seems that the fan turns on immediately, and keeps running even when idle. That is probably a large part of battery consumption. I'm wondering if that is reasonable or not: is that fan really always needed? NOTE: I recently discovered that what I described is a Ubuntu specific patch applied to the nvidia config app. Other distros may not include it entirely. Manjaro, for instance, is not including it in any way. It is probably possible to setup manually, but I didn't succeed. NOTE2: blacklisting nvidia and nouveau is sufficient to run on Intel only. Not sure how to run on nVidia only. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12635/"
]
} |
402,468 | I am connecting to remote-server from localhost, and I want to execute command on remote-server . works as expected: ssh remote-server "hostname"remote-server I am confused. Why does this return local hostname, instead of the hostname of remote server? ssh remote-server "print $HOST"localhost | Your second code example: ssh remote-server "print $HOST"localhost will be flagged by the great Shellcheck tool with the diagnostic SC2029 : Bash expands all arguments that are not escaped/singlequoted. This means that the problematic code is identical to ssh host "echo clienthostname" and will print out the client's hostname, not the server's hostname. By escaping the $ in $HOSTNAME, it will be transmitted literally and evaluated on the server instead. By using ssh host "print \$HOST" or ssh host 'print $HOST' you can prevent your local shell from expanding host and instead have the shell on remote-server expand it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
402,488 | I'm trying to pair two bluetooth devices, a mouse and a keyboard on fedora 26. I also have Windows 10 installed. What I did was: pair them on linux then on Windows, but when I tried to insert the key that I get from windows, I didn't find the entry [LinkKey] that was mentioned on the guide I followed This is what I have on the info file for one of the devices: [General]Name=Designer MouseAppearance=0x03c2AddressType=staticSupportedTechnologies=LE;Trusted=trueBlocked=falseServices=00001800-0000-1000-8000-00805f9b34fb;00001801-0000-1000-8000-00805f9b34fb;0000180a-0000-1000-8000-00805f9b34fb;0000180f-0000-1000-8000-00805f9b34fb;00001812-0000-1000-8000-00805f9b34fb;[IdentityResolvingKey]Key=D8F3A0A146FEB991BF2ECD9756C8BDFA[LocalSignatureKey]Key=23AB7AF05C5AC930F9322CF44114856BCounter=0Authenticated=false[LongTermKey]Key=D2681BEA8B2C177B1AB8786F22C89DBBAuthenticated=0EncSize=16EDiv=48309Rand=10283782112900107958[DeviceID]Source=2Vendor=1118Product=2053Version=272[ConnectionParameters]MinInterval=6MaxInterval=6Latency=60Timeout=300 According to the guide, it should be [LinkKey] entry, but there is none. I already have the key from windows and also tried the method mentioned on this question | The problem is that your device is a Bluetooth LE (Low Energy) device and they are handled differently. I've found the following two solutions that helped me set up my Microsoft 3600 mouse for dual boot. Check here for a tutorial on how to do it manually with Bluetooth LE devices: http://console.systems/2014/09/how-to-pair-low-energy-le-bluetooth.html The key steps are: First pair in Linux Reboot Pair in Windows Get the key values from HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\BTHPORT\Parameters\Keys\{computer-bluetooth-mac}\{device-bluetooth-id} It may be ControlSet001 or ControlSet002 which one cane be found in SYSTEM\Select but it's usually ControlSet001 This can be done e.g. using chntpw (from linux) cd {PATH_TO_WINDOWS_PARTITION}/Windows/System32/config/ chntpw -e SYSTEM Go to /var/lib/bluetooth/{computer-bluetooth-mac} Check for a directory that closely resembles the device bluetooth id (they are usually a bit off because they may change whenever you pair again) Rename that directory to match the device id Edit the info file in the renamed directory Copy the value of: IRK into Key in IdentityResolvingKey CSRK into Key in LocalSignatureKey LTK into Key in LongTermKey ERand into Rand : Take the hex value ab cd ef , byte reverse it ( ef cd ab ) and convert it into decimal (e.g. using the Programming mode of the calculator application) EDIV into EDiv : Just take the hex value and convert it normally or use the decimal value directly if it is displayed (chntpw displays it) Reboot Alternatively Use this python script by Mygod that does these steps for you: https://gist.github.com/Mygod/f390aabf53cf1406fc71166a47236ebf I've used the script and just copied the Key entries for the groups LongTermKey , LocalSignatureKey and IdentityResolvingKey , and the EDiv and Rand entries in the LongTermKey group. Notes for the linked manual route It didn't really work for me which is why I didn't use it but these are common fixes if it didn't work that worked for other people: The tutorial doesn't mention it but if you have an IRK entry, copy the value to the IdentityResolvingKey Key. Don't copy the KeyLength to EncSize. Just leave it at what it is (in my case 16) Don't forget to move the directory if the device names aren't exactly equal. In my case the 5th group was counting up with every pairing. 
Some additional help for the script: It's run in linux. The Windows partition has to be mounted. The command should look like this: ./export-ble-infos.py -s {PATH_TO_WINDOWS_PARTITION}/Windows/System32/config/SYSTEM You can also copy the SYSTEM file somewhere else and pass the path with -s {PATH} It crashes if there are other bluetooth devices that windows knows which aren't LE or at least not in this format. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81768/"
]
} |
402,497 | I try to rescue data from my 1TB external drive using ddrescue because I accidentally deleted everything from it. Practically I face exactly the same situation as posted here: How to split a ddrescue disk image and how to use it again? but in that question the first part, which is my whole question, has not been answered, and reading the forums I can find guides only for situations different from mine, i.e. resuming to the same location, resuming to another drive with exactly the same storage capacity as the first one, rescuing multiple partitions to an external HDD with greater capacity than all of them together, etc. So, my question is: I have to rescue data from my 1TB external HDD. For that purpose, I have an external HDD with around 935 GB (not GiB) free space on it; furthermore, on the Windows partition on my laptop, which is currently empty, I have around 222 GB free space. I did the following: I started ddrescue with /# ddrescue -n -s900GB /dev/sdc /media/misi/Maxtor/recovery/recovery_part1.img /home/misi/recovery_log.txt to save just the first 900 GB to the external drive. Everything went fine, I received the following output: GNU ddrescue 1.17 Press Ctrl-C to interrupt rescued: 900000 MB, errsize: 0 B, current rate: 64104 kB/s ipos: 899999 MB, errors: 0, average rate: 38981 kB/s opos: 899999 MB, time since last successful read: 0 s Finished Then, I continued to save the rest to the Windows partition: /# ddrescue -n /dev/sdc /windows/recovery_part2.img /home/misi/recovery_log.txt But it did not work: GNU ddrescue 1.17 Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 900000 MB, errsize: 0 B, errors: 0 Current status rescued: 900000 MB, errsize: 0 B, current rate: 0 B/s ipos: 900000 MB, errors: 0, average rate: 0 B/s opos: 900000 MB, time since last successful read: 3 s Copying non-tried blocks... ddrescue: write error: Invalid argument According to the question posted above and the other guides and man pages, however, it should. I was able to continue rescuing the file to the other external HDD and I was also able to restart the whole process to the Windows partition. I interrupted both processes since I just wanted to test if I have some problems with e.g. my Windows partition. I also tried using another log file and manually specifying the starting position, retrieved from the first log file (it changed a bit since I continued rescuing there for a few seconds as mentioned before): /# ddrescue -n -i900862115840 -o900862115840 /dev/sdc /windows/recovery_part2.img /home/misi/recovery_log_2.txt and received the same Invalid argument error message as mentioned before. What am I doing wrong, and what is the correct way to save the biggest part of my data to the other external drive, and the rest to my Windows partition? Thanks for the replies in advance! | The failing commands keep the output position equal to the input position, which is ddrescue's default: before writing the first byte of the second part, ddrescue seeks roughly 900 GB into the brand-new recovery_part2.img , and the target filesystem evidently rejects writing at that offset, hence write error: Invalid argument . For the second part, shift the output back to the start of the new image file and use a separate log file for it: ddrescue -n -i900GB -o0 /dev/sdc /windows/recovery_part2.img /home/misi/recovery_log_part2.txt Here -i sets where reading starts on the source and -o0 tells ddrescue to write that data beginning at offset 0 of the output file, so no huge hole is needed in the image. Keep both log files. When you later reassemble, either concatenate the two images ( cat recovery_part1.img recovery_part2.img > whole.img , which works because part 1 is exactly 900 GB), or copy part 2 back with the inverse offsets, e.g. ddrescue -i0 -o900GB recovery_part2.img /dev/target restore_log.txt . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258951/"
]
} |
402,510 | For a couple of years now I've been "scraping," using lynx -dump , content from a web page containing non-latin characters. I save the page content to a file, which I then modify via the agency of sed, and send that in the body of an e-mail, all this happening in a script I created. But I'm finding, after switching distros (Ubuntu to Void), that my script is not working as expected. I've identified the point of failure, as follows. When I run the very first part of my script (the part containing lynx -dump URL and the file name to which the content is to be saved) from the command line, all works as expected. The file shows up and contains the non-latin characters I'm expecting. However, when I try to automate the process by stipulating that same command as a cron job, the results are different. The expected file does show up, but instead of containing the expected non-latin characters, what I get is the same text transliterated using latin characters, not what I want. What follows in my script is failing since it depends on the presence of the non-latin characters. So, why these strange results depending on whether I issue the lynx command from the command line as opposed to in a cron job? Perhaps the site is doing some sort of detection and providing a transliterated page in one case but not in the other? Or is lynx itself doing the transliterating of non-latin characters into latin ones? Input will be appreciated. | It is lynx doing the transliterating, and the trigger is the environment, not the site. Jobs started by cron run with a minimal environment: LANG and the LC_* variables are not set, so lynx runs in the default C/POSIX locale, assumes the display character set is plain ASCII, and transliterates anything it cannot represent. That is exactly the difference between your interactive shell (where your UTF-8 locale is exported) and the cron job. Either export a UTF-8 locale in the crontab or at the top of your script, e.g. LANG=en_US.UTF-8 (use a locale that is actually installed on the system), or tell lynx explicitly which character set to emit: lynx -dump -display_charset=utf-8 URL | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/183867/"
]
} |
402,550 | Given a script that allows anyone to provide input, is there any input that can break out of a variable? For example, given the following script: echo $1 Would there be any way to make $1 something like: text && rm -rf / I'm trying to do something like the above and it doesn't work. Can anyone confirm that the above would be impossible? | In 2014, there was an exploit in the wild for a Bash vulnerability nicknamed Shellshock. Like most vulnerabilities in common software, a Common Vulnerabilities and Exposures (CVE) bulletin was released, CVE-2014-6278. Shellshock is a remote exploit for Bash which allowed arbitrary code execution on the remote host via several attack vectors in common server software stacks, including Apache's CGI modules as well as OpenSSH. The vulnerability affects all versions of Bash from 1989 until 2014, when it was patched once easily created exploits were widely demonstrated. For further reading: OWASP Shellshock Presentation, PDF NIST CVE-2014-6278 ServerFault Shellshock Question, 2014 I believe most versions available in distro repos have been patched. Correction: Shellshock is a family of vulnerabilities... CVE-2014-6271, CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187 And it's good to remember that these can easily affect a LAN if there exists port forwarding for things like Apache web servers or SSH... as well as any unpatched (and probably unpatchable) Internet of Things devices. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
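The original CVE-2014-6271 member of the Shellshock family mentioned above can be checked with the widely published one-liner; a patched bash prints only the test message, while a vulnerable one prints "vulnerable" first:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"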
402,623 | I can't invoke my bluetoothctl anymore. It waits for connections without success, showing this in the terminal: me@mashin:~$ bluetoothctl Waiting to connect to bluetoothd... Any suggestion how to start the joyful debugging? I am using Debian 9.2. Edit Output of sudo systemctl status bluetooth.service ● bluetooth.service - Bluetooth service Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset Active: inactive (dead) Docs: man:bluetoothd(8) | I had the same problem. I found a solution on archlinux.org's forums . I had to load the kernel module btusb . To test if this will solve the problem for you, run as root: modprobe btusb systemctl start bluetooth then test if bluetoothctl works. If it does, you can make this fix permanent by loading the module on boot. To do that on Debian, add (as root) the line: btusb at the end of the file /etc/modules . You might also want to ask systemd to enable the bluetooth service on boot; in this case execute (as root): systemctl enable bluetooth | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
402,650 | I'm exploring methods of tracking the changes that happen to the kernel defconfig for a particular board. The changes I'm referring to are either selecting a new option through the menuconfig and persisting that, or moving to a new kernel that introduces new options. My idea is to remove comments and sort the defconfig before committing the changes:
make ARCH=arm board_defconfig
make ARCH=arm menuconfig    # Changes introduced here and saved to .config
make ARCH=arm savedefconfig # This creates the defconfig file
grep -v '^#' defconfig > tmp
sort tmp > tmp_sorted
uniq tmp_sorted > defconfig
cp defconfig arch/arm/configs/board_defconfig
menuconfig however has a very consistent habit of adding comment lines. For example: # CONFIG_IOMMU_SUPPORT is not set # CONFIG_RTC_INTF_PROC is not set # CONFIG_IOMMU_SUPPORT is not set which makes me have second thoughts about whether I'm actually allowed to remove them. Is there a purpose to these comment lines making them unsafe to remove? | The objective answer is: the comment lines can safely be removed. Here is a reference for that claim. You can double-check the configuration using menuconfig (or nconfig in more recent kernels) to validate whether the commented sections in fact hold the default values as per your preference. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
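Related to the entry above: the kernel tree ships its own helper for tracking exactly this kind of change, which diffs two config or defconfig files option by option and is often easier to review than a sorted text diff:

# run from the top of the kernel tree
scripts/diffconfig arch/arm/configs/board_defconfig defconfig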
402,698 | I want to set up yum to auto-update the system using yum-cron. But my internet connection is very limited and I don't want the update process to hog the tiny bit of internet that is available and make the computer usage for everybody on the network miserable. How can I set up yum to check and download updates automatically, but only between 2am and 6am? | Well, if it were me, I would set up a cron job (for root ) that starts at 2am every day. Like so: 0 2 * * * /bin/yum -y update It's about as KISS as it can get! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259098/"
]
} |
402,700 | I have the following Bash code: function suman { if test "$#" -eq "0"; then echo " [suman] using suman-shell instead of suman executable."; suman-shell "$@" else echo "we do something else here" fi}function suman-shell { if [ -z "$LOCAL_SUMAN" ]; then local -a node_exec_args=( ) handle_global_suman node_exec_args "$@" else NODE_PATH="${NEW_NODE_PATH}" PATH="${NEW_PATH}" node "$LOCAL_SUMAN" --suman-shell "$@"; fi} when the suman command is executed by the user with no arguments, then this is hit: echo " [suman] using suman-shell instead of suman executable."; suman-shell "$@" my question is - how can I append an argument to the "$@" value?I need to simply do something like: handle_global_suman node_exec_args "--suman-shell $@" obviously that's wrong but I cannot figure out how to do it. What I am not looking for - handle_global_suman node_exec_args "$@" --suman-shell the problem is that handle_global_suman works with $1 and $2 and if I make --suman-shell into $3 , then I have to change other code, and would rather avoid that. Preliminary answer: local args=("$@") args+=("--suman-shell") if [ -z "$LOCAL_SUMAN" ]; then echo " => No local Suman executable could be found, given the present working directory => $PWD" echo " => Warning...attempting to run a globally installed version of Suman..." local -a node_exec_args=( ) handle_global_suman node_exec_args "${args[@]}" else NODE_PATH="${NEW_NODE_PATH}" PATH="${NEW_PATH}" node "$LOCAL_SUMAN" "${args[@]}"; fi | Put the arguments into an array and then append to the array. args=("$@")args+=(foo)args+=(bar)baz "${args[@]}" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
402,728 | I have a directory with over 400 images. Most of them are corrupt. I identified the good ones. They are listed in a text file (there're 100+ of them). How can I move them all at once to another directory on BASH? | There are several ways to do this that come to mind immediately: Using a while-loop Using xargs Using rsync Suppose the file names are listed (one per line) in files.txt and we want to move them from the subdirectory source/ to the subdirectory target . The while-loop could look something like this: while read filename; do mv source/${filename} target/; done < files.txt The xargs command could look something like this: cat files.txt | xargs -n 1 -d'\n' -I {} mv source/{} target/ And the rsync command could look something like this: rsync -av --remove-source-files --files-from=files.txt source/ target/ It might be worthwhile to create a sandbox to experiment with and test out each approach, e.g.: # Create a sandbox directorymkdir -p /tmp/sandbox# Create file containing the list of filenames to be movedfor filename in file{001..100}.dat; do basename ${filename}; done >> /tmp/sandbox/files.txt# Create a source directory (to move files from)mkdir -p /tmp/sandbox/source# Populate the source directory (with 100 empty files)touch /tmp/sandbox/source/file{001..100}.dat# Create a target directory (to move files to)mkdir -p /tmp/sandbox/target# Move the files from the source directory to the target directoryrsync -av --remove-source-files --files-from=/tmp/sandbox/files.txt /tmp/sandbox/source/ /tmp/sandbox/target/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174627/"
]
} |
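One caveat about the while-loop form in the answer above: the unquoted expansion breaks on file names that contain spaces, and read without -r mangles backslashes. A more defensive sketch of the same loop:

while IFS= read -r filename; do
  mv -- "source/$filename" target/
done < files.txt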
402,746 | I am trying to log in to my DSL router, because I'm having trouble with command-line mail. I'm hoping to be able to reconfigure the router. When I give the ssh command, this is what happens: $ ssh [email protected] to negotiate with 10.255.252.1 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 so then I looked at this stackexchange post , and modified my command to this, but I get a different problem, this time with the ciphers. $ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 [email protected] to negotiate with 10.255.252.1 port 22: no matching cipher found. Their offer: 3des-cbc so is there a command to offer 3des-cbc encryption? I'm not sure about 3des, like whether I want to add it permanently to my system. Is there a command to allow the 3des-cbc cipher? What is the problem here? It's not asking for password. | This particular error happens while the encrypted channel is being set up. If your system and the remote system don't share at least one cipher, there is no cipher to agree on and no encrypted channel is possible. Usually SSH servers will offer a small handful of different ciphers in order to cater to different clients; I'm not sure why your server would be configured to only allow 3DES-CBC. Now, 3DES-CBC isn't terrible. It's slow, and it provides less security than some other algorithms, but it's not immediately breakable as long as the keys are selected properly. CBC itself has some issues when ciphertext can be modified in transit, but I strongly suspect that the resultant corruption would be rejected by SSH's HMAC, reducing impact. Bottom line, there are worse choices than 3DES-CBC, and there are better ones. However, always tread carefully when overriding security-related defaults, including cipher and key exchange algorithm choices. Those defaults are the defaults for a reason; some pretty smart people spent some brain power considering the options and determined that what was chosen as the defaults provide the best overall security versus performance trade-off. As you found out, you can use -c ... (or -oCiphers=... ) to specify which cipher to offer from the client side. In this case adding -c 3des-cbc allows only 3DES-CBC from the client. Since this matches a cipher that the server offers, an encrypted channel can be established and the connection proceeds to the authentication phase. You can also add this to your personal ~/.ssh/config . To avoid making a global change to solve a local problem, you can put it in a Host stanza. For example, if your SSH config currently says (dummy example): Port 9922 specifying a global default port of 9922 instead of the default 22, you can add a host stanza for the host that needs special configuration, and a global host stanza for the default case. That would become something like... Host 10.255.252.1 Ciphers 3des-cbc KexAlgorithms +diffie-hellman-group1-sha1Host * Port 9922 The indentation is optional, but I find it greatly enhances readability. Blank lines and lines starting with # are ignored. If you always (or mostly) log in as the same user on that system, you can also specify that username: Host 10.255.252.1 Ciphers 3des-cbc KexAlgorithms +diffie-hellman-group1-sha1 User enduserHost * Port 9922 You don't need to add a Host * stanza if there was nothing in your ~/.ssh/config to begin with, as in that case only compiled-in or system-wide defaults (typically from /etc/ssh/ssh_config) would be used. 
At this point, the ssh command line to connect to this host reduces to simply $ ssh 10.255.252.1 and all other users on your system, and connections to all other hosts from your system, are unaffected by the changes. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/402746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
402,750 | I have a script that process a folder, and count the files in the mean time. i=1find tmp -type f | while read xdo i=$(($i + 1)) echo $idoneecho $i However, $i is always 1 , how do I resolve this? | In your example the while-loop is executed in a subshell, so changes to the variable inside the while-loop won't affect the external variable. This is because you're using the loop with a pipe, which automatically causes it to run in a subshell. Here is an alternative solution using a while loop: i=1while read x; do i=$(($i + 1)) echo $idone <<<$(find tmp -type f)echo $i And here is the same approach using a for-loop: i=1for x in $(find tmp -type f);do i=$(($i + 1)) echo $idoneecho $i For more information see the following posts: A variable modified inside a while loop is not remembered Bash Script: While-Loop Subshell Dilemma Also look at the following chapter from the Advanced Bash Scripting Guide: Chapter 23. Process Substitution | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/402750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
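Two caveats about the answer above: initialising i=1 makes the final count one too high, and the for-loop variant word-splits file names on whitespace. A more defensive bash sketch that also survives names containing newlines:

i=0
while IFS= read -r -d '' x; do
  i=$((i + 1))
done < <(find tmp -type f -print0)
echo "$i"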
402,771 | I'm stumped. I had a perfectly functioning RAID1 setup on 16.10. After upgrading to 17.10, it auto-magically detected the array and re-created md0. All my files are fine, but when I mount md0, it says that the array is read-only: cat /proc/mdstat Personalities : [raid1] md0 : active (read-only) raid1 dm-0[0] dm-1[1] 5860390464 blocks super 1.2 [2/2] [UU] bitmap: 0/44 pages [0KB], 65536KB chunkunused devices: <none>sudo mdadm --detail /dev/md0/dev/md0: Version : 1.2 Creation Time : Sat Jul 9 23:54:40 2016 Raid Level : raid1 Array Size : 5860390464 (5588.90 GiB 6001.04 GB) Used Dev Size : 5860390464 (5588.90 GiB 6001.04 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Sat Nov 4 23:16:18 2017 State : clean Active Devices : 2Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : x6:0 (local to host x6) UUID : baaccfeb:860781dd:eda253ba:6a08916f Events : 11596 Number Major Minor RaidDevice State 0 253 0 0 active sync /dev/dm-0 1 253 1 1 active sync /dev/dm-1 There are no errors in /var/log/kern.log nor dmesg. I can stop it and re-assemble it, to no effect: sudo mdadm --stop /dev/md0sudo mdadm --assemble --scan I do not understand why it worked perfectly before, but now the array is read-only for no reason I can detect. And this is the same array that auto-magically re-assembled when I upgraded from 16.04 to 16.10. Researching the problem, I found a post about the problem being /sys mounted read-only, which mine indeed is: ls -ld /sysdr-xr-xr-x 13 root root 0 Nov 5 22:28 /sys But neither works to fix it, as /sys stays read-only: sudo mount -o remount,rw /syssudo mount -o remount,rw -t sysfs sysfs /sysls -ld /sysdr-xr-xr-x 13 root root 0 Nov 5 22:29 /sys Can anyone provide some insight that I am missing? Edited to include /etc/mdadm/mdadm.conf: # mdadm.conf## !NB! Run update-initramfs -u after updating this file.# !NB! This will ensure that initramfs has an uptodate copy.## Please refer to mdadm.conf(5) for information about this file.## by default (built-in), scan all partitions (/proc/partitions) and all# containers for MD superblocks. alternatively, specify devices to scan, using# wildcards if desired.#DEVICE partitions containers# automatically tag new arrays as belonging to the local systemHOMEHOST <system># instruct the monitoring daemon where to send mail alertsMAILADDR root# definitions of existing MD arraysARRAY /dev/md/0 metadata=1.2 UUID=baaccfeb:860781dd:eda253ba:6a08916f name=x6:0# This configuration was auto-generated on Sun, 05 Nov 2017 15:37:16 -0800 by mkconf The device mapper files, which appear to be writable: ls -l /dev/dm-*brw-rw---- 1 root disk 253, 0 Nov 5 16:28 /dev/dm-0brw-rw---- 1 root disk 253, 1 Nov 5 16:28 /dev/dm-1 And something else that Ubuntu or Debian has changed; I have no idea what these osprober files are doing here. 
I thought they were only used at installation time: ls -l /dev/mapper/total 0crw------- 1 root root 10, 236 Nov 5 15:34 controllrwxrwxrwx 1 root root 7 Nov 5 16:28 osprober-linux-sdb1 -> ../dm-0lrwxrwxrwx 1 root root 7 Nov 5 16:28 osprober-linux-sdc1 -> ../dm-1 parted info: sudo parted -lModel: ATA SanDisk Ultra II (scsi)Disk /dev/sda: 960GBSector size (logical/physical): 512B/512BPartition Table: gptDisk Flags: Number Start End Size File system Name Flags 1 1049kB 81.9GB 81.9GB ext4 2 81.9GB 131GB 49.2GB linux-swap(v1) 3 131GB 131GB 99.6MB fat32 boot, esp 4 131GB 960GB 829GB ext4Model: ATA WDC WD60EZRZ-00R (scsi)Disk /dev/sdb: 6001GBSector size (logical/physical): 512B/4096BPartition Table: gptDisk Flags: Number Start End Size File system Name Flags 1 1049kB 6001GB 6001GB raidModel: ATA WDC WD60EZRZ-00R (scsi)Disk /dev/sdc: 6001GBSector size (logical/physical): 512B/4096BPartition Table: gptDisk Flags: Number Start End Size File system Name Flags 1 1049kB 6001GB 6001GB raidError: /dev/mapper/osprober-linux-sdc1: unrecognised disk labelModel: Linux device-mapper (linear) (dm) Disk /dev/mapper/osprober-linux-sdc1: 6001GBSector size (logical/physical): 512B/4096BPartition Table: unknownDisk Flags: Error: /dev/mapper/osprober-linux-sdb1: unrecognised disk labelModel: Linux device-mapper (linear) (dm) Disk /dev/mapper/osprober-linux-sdb1: 6001GBSector size (logical/physical): 512B/4096BPartition Table: unknownDisk Flags: Model: Linux Software RAID Array (md)Disk /dev/md0: 6001GBSector size (logical/physical): 512B/4096BPartition Table: loopDisk Flags: Number Start End Size File system Flags 1 0.00B 6001GB 6001GB ext4 Device mapper info: $ sudo dmsetup tableosprober-linux-sdc1: 0 11721043087 linear 8:33 0osprober-linux-sdb1: 0 11721043087 linear 8:17 0$ sudo dmsetup infoName: osprober-linux-sdc1State: ACTIVE (READ-ONLY)Read Ahead: 256Tables present: LIVEOpen count: 1Event number: 0Major, minor: 253, 1Number of targets: 1Name: osprober-linux-sdb1State: ACTIVE (READ-ONLY)Read Ahead: 256Tables present: LIVEOpen count: 1Event number: 0Major, minor: 253, 0Number of targets: 1 strace output for attempt to set array to rw (with some context): openat(AT_FDCWD, "/dev/md0", O_RDONLY) = 3fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0ioctl(3, RAID_VERSION, 0x7fffb3813574) = 0fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0ioctl(3, RAID_VERSION, 0x7fffb38134c4) = 0ioctl(3, RAID_VERSION, 0x7fffb38114bc) = 0fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0readlink("/sys/dev/block/9:0", "../../devices/virtual/block/md0", 199) = 31openat(AT_FDCWD, "/sys/block/md0/md/metadata_version", O_RDONLY) = 4read(4, "1.2\n", 4096) = 4close(4) = 0openat(AT_FDCWD, "/sys/block/md0/md/level", O_RDONLY) = 4read(4, "raid1\n", 4096) = 6close(4) = 0ioctl(3, GET_ARRAY_INFO, 0x7fffb3813580) = 0ioctl(3, RESTART_ARRAY_RW, 0) = -1 EROFS (Read-only file system)write(2, "mdadm: failed to set writable fo"..., 66mdadm: failed to set writable for /dev/md0: Read-only file system) = 66 | This won’t explain why your array ended up in read-only mode, but mdadm --readwrite /dev/md0 should return it to normal. In your case it doesn’t, and the reason isn’t entirely obvious: if constituent devices are themselves read-only, the RAID array is read-only (which is what matches the behaviour you’re seeing, and the code-paths used when you try to re-enable read-write). 
The dmsetup table information gives a strong hint as to what’s going on: osprober (I imagine, given the names of the devices) is finding the real RAID components, and for some reason it’s creating device mapper devices on top of them, and these are being picked up and used for the RAID device. Since the only device mapper devices are those two osprober devices, the simplest solution is to stop the RAID device, stop the DM devices, and re-scan the RAID array so that the underlying components devices are used. To stop the DM devices, run dmsetup remove_all as root . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259159/"
]
} |
402,834 | Provided a user is authorized to access something, how can he execute a system call directly from bash, like geteuid() (get effective user ID; it's just an example)? How could I do it? | User-space/kernel-space communication via system calls is done in terms of memory locations and machine registers. That's way below the abstraction level of shells, which operate mainly with text strings. That said, in bash, you can use the https://github.com/taviso/ctypes.sh plugin to get through the text-string abstraction down to C-level granularity:
$ . ctypes.sh
$ dlcall -r long geteuid
long:1001
For this particular operation though, it would be much simpler, more idiomatic, and more efficient to simply use bash's magic $EUID variable:
$ echo "$EUID" # effectively a cached geteuid call
1001 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63603/"
]
} |
402,854 | Command not found should produce return code 127 : $ foo; echo $?bash: foo: command not found...127 I tried to assign $? to variable rc and then print it, but RC is always 0 . $ foo | awk -v rc="$?" 'BEGIN{print rc}'0bash: foo: command not found... Found out that it will print correct RC only in this case: $ qazqazbash: qazqaz: command not found...$ foo | awk -v rc="$?" 'BEGIN{print rc}'127bash: foo: command not found... Is it possible to work with RC in awk while pipes are used? Or is the problem somewhere else? I would like to stick to some old portable awk implementation. | In a pipeline, commands run concurrently. That's the whole point, the output of one is fed to the other in real time. You only know the exit status of a command when it returns. If you wanted awk to process the output of foo and also get access to its exit status, you'd need to run awk after foo after having stored foo 's output somewhere like: foo > fileawk -v "rc=$?" '{print rc, $0}' < file Alternatively, you could have awk run foo by itself (well, still via a shell to interpret a command line), read its output (through a pipe via its cmd | getline interface to popen() ) and get its exit status with: awk -v cmd=foo ' BEGIN { while ((cmd | getline) > 0) { print } rc = close(cmd) print rc }' However note that the way awk encodes the exit status varies from one awk implementation to the next. In some it's the status straight as returned by waitpid() or pclose() , in others it's that one divided by 256 (even when foo is killed by a signal)... though you should be able to rely on rc being 0 if and only if the command was successful. In the case of gawk , it did change recently . Or you could have the exit status fed at the end through the pipe: (foo; echo "$?") | awk ' {saved = $0} NR > 1 { # process the previous line $0 = prev print "output:", $0 } {prev = saved} END{rc = prev; print rc}' (assuming foo 's output ends in a newline character when it's not empty (is valid text)). Or fed through a separate pipe. For instance on Linux and with a shell other than ksh93: { : extra pipe | { (foo 3<&-; echo "$?" > /dev/fd/3) | awk ' {print} END {getline rc < "/dev/fd/3"; print rc}'} 3<&0 <&4 4<&-; } 4<&0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32129/"
]
} |
402,859 | I have a crontab script which monitor a DR process between 2 machines and this script generate a log file. What i have been asked to do is basically append the new log generated on the top of the previous (as log i use the same file name) and not on the bottom. I've seen already few options, but all my tentative failed. I tried with cat $LOGFILE >> $TEMPLOGrm $LOGFILEmv -i $TEMPLOG $LOGFILE and also with cat - $LOGFILE > $TEMPLOG && mv $TEMPLOG $LOGFILE the variable $LOGFILE is where the script append every single statement of the process. Thanks :) Basically want I'd like to to do is generate the right log with on top the last run before it is sent by mail. DATE=`date "+%d%m%y_%H%M"`PRIMARY_HOSTNAME=`hostname`LOGFILE=/dba/logs/monitor_sync_FM2.logTEMPLOG=/dba/logs/monitor_sync_LOG.logSERVER=`hostname`SITE=mycompanyEMAILTO="[email protected]"DBOPS="oracle@${SERVER}.${SITE}"export PRIMARY_HOSTNAME LOGFILE TEMPLOG SERVER SITE EMAILTO DBOPS DATEecho "\n\n### monitor DR sync started @ `date` ###" >> $LOGFILEecho "Running SQL command to verify latest SCN.." >> $LOGFILEecho "The current SCN of the Primary DB server is: $PRIMARY_CURRENT_SCN" >> $LOGFILEecho "Connecting now to the secondary standby database server..." >> $LOGFILESECONDARY_CURRENT_SCN=`ssh [email protected] /home/oracle/script_sync2.sh` >> $LOGFILEexport SECONDARY_CURRENT_SCNecho "Secondary SCN output returned as: $SECONDARY_CURRENT_SCN" >> $LOGFILEgrep ORA- /dba/scripts/output.txtif [ $? = 0 ]; thenecho "Remote ssh command to Secondary server failed..Exiting" >> $LOGFILEecho "### monitor DR sync failed @ `date` ###" >> $LOGFILEecho "PROBLEM" >> $LOGFILEmailx -r ${DBOPS} -s "PROBLEM" ${EMAILTO} < $LOGFILEexitelseecho "The current SCN of the Secondary DB server is: $SECONDARY_CURRENT_SCN" >> $LOGFILEDIFF=`expr $PRIMARY_CURRENT_SCN - $SECONDARY_CURRENT_SCN` ; export DIFFif [ $PRIMARY_CURRENT_SCN -ne $SECONDARY_CURRENT_SCN ]; then echo "The difference is $DIFF" >> $LOGFILE if [ `echo $DIFF` -gt 3 ]; then echo "Log Gap: $DIFF" >> $LOGFILE echo "PROBLEM" >> $LOGFILE echo "### script finished @ `date` ###\n\n" >> $LOGFILE mailx -r ${DBOPS} -s "PROBLEM" ${EMAILTO} < ${LOGFILE} else echo "SUCCESS" >> $LOGFILE mailx -r ${DBOPS} -s "SUCCESS" ${EMAILTO} < ${LOGFILE} echo "Log Gap: $DIFF" >> $LOGFILE fielseecho "Log Gap: $DIFF" >> $LOGFILEecho "SUCCESS" >> $LOGFILEmailx -r ${DBOPS} -s "SUCCESS" ${EMAILTO} < ${LOGFILE}fifi | In a pipeline, commands run concurrently. That's the whole point, the output of one is fed to the other in real time. You only know the exit status of a command when it returns. If you wanted awk to process the output of foo and also get access to its exit status, you'd need to run awk after foo after having stored foo 's output somewhere like: foo > fileawk -v "rc=$?" '{print rc, $0}' < file Alternatively, you could have awk run foo by itself (well, still via a shell to interpret a command line), read its output (through a pipe via its cmd | getline interface to popen() ) and get its exit status with: awk -v cmd=foo ' BEGIN { while ((cmd | getline) > 0) { print } rc = close(cmd) print rc }' However note that the way awk encodes the exit status varies from one awk implementation to the next. In some it's the status straight as returned by waitpid() or pclose() , in others it's that one divided by 256 (even when foo is killed by a signal)... though you should be able to rely on rc being 0 if and only if the command was successful. In the case of gawk , it did change recently . 
Or you could have the exit status fed at the end through the pipe:
(foo; echo "$?") | awk '
  {saved = $0}
  NR > 1 {
    # process the previous line
    $0 = prev
    print "output:", $0
  }
  {prev = saved}
  END {rc = prev; print rc}'
(assuming foo 's output ends in a newline character when it's not empty (is valid text)). Or fed through a separate pipe. For instance on Linux and with a shell other than ksh93:
{ : extra pipe | {
    (foo 3<&-; echo "$?" > /dev/fd/3) | awk '
      {print}
      END {getline rc < "/dev/fd/3"; print rc}'
  } 3<&0 <&4 4<&-
} 4<&0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/402859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102028/"
]
} |
402,862 | I have an Asustor NAS that runs on Linux; I don't know what distro they use. I'm able to log in to it using SSH and use all shell commands. The internal volume uses ext2, and external USB HDs use NTFS. When I try to use the cp command to copy any file around, that file's date metadata is changed to the current datetime. For example, if I use Windows to copy the file from SMB and the file was modified in 2007, the new file is marked as created now in 2017 but modified in 2007. But with the Linux cp command its modified date is changed to 2017 too. This modified date is very relevant to me because it allows me to sort files in Windows Explorer by their modified date. If it's overridden, I'm unable to sort and they all seem to have been created now. I also use the modified date to know when I acquired some rare old files. Is there any parameter I can use in the cp command to preserve the original file metadata? Update: I tried cp --preserve=timestamps but it didn't work; it printed:
cp: unrecognized option '--preserve=timestamps'
BusyBox v1.19.3 (2017-03-22 17:23:49 CST) multi-call binary.
Usage: cp [OPTIONS] SOURCE DEST
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY
 -a	Same as -dpR
 -R,-r	Recurse
 -d,-P	Preserve symlinks (default if -R)
 -L	Follow all symlinks
 -H	Follow symlinks on command line
 -p	Preserve file attributes if possible
 -f	Overwrite
 -i	Prompt before overwrite
 -l,-s	Create (sym)links
If I try just -p it says cp: can't preserve permissions of '...': Operation not permitted , but as far as I've tested, timestamps are being preserved. | If you use man cp to read the manual page for the copy command, you'll find the -p and --preserve flags. -p : same as --preserve=mode,ownership,timestamps . --preserve[=ATTR_LIST] : preserve the specified attributes (default: mode,ownership,timestamps ), if possible additional attributes: context , links , xattr , all . What this boils down to is that you should use cp -p instead of just cp .
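Your BusyBox cp lists -p in its usage text too, so the short form should work on the NAS as well; for example (the paths here are made up for illustration):
cp -p /volume1/music/old.mp3 /volume1/backup/
With GNU cp you could also narrow it down to timestamps only:
cp --preserve=timestamps /volume1/music/old.mp3 /volume1/backup/
| {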
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/402862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259231/"
]
} |
402,999 | I cloned a disk (SSD) and put the cloned disk into another machine. Now both systems have the same value in /etc/machine-id . Is there any problem with simply editing /etc/machine-id to change the value? Can I do this while the system is running (or do I need to boot from a Live USB)? Is systemd-machine-id-setup a better alternative? The naive use of systemd-machine-id-setup doesn't work. I tried these steps:
nano /etc/machine-id (to remove the existing value)
systemd-machine-id-setup
> Initializing machine ID from D-Bus machine ID.
cat /etc/machine-id
The new value is the same as the old value. | Although systemd-machine-id-setup and systemd-firstboot are great for systems using systemd, /etc/machine-id is not a systemd file, despite the tag. It is also used on systems that do not use systemd. So as an alternative, you can use the dbus-uuidgen tool: rm -f /etc/machine-id and then dbus-uuidgen --ensure=/etc/machine-id As mentioned by Stephen Kitt, Debian systems may have both a /etc/machine-id and a /var/lib/dbus/machine-id file. If both exist as regular files, their contents should match, so there, also remove /var/lib/dbus/machine-id : rm /var/lib/dbus/machine-id and re-create it: dbus-uuidgen --ensure This last command implicitly uses /var/lib/dbus/machine-id as the file name and will copy the machine ID from the already-newly-generated /etc/machine-id . The dbus-uuidgen invocation may or may not already be part of the regular boot sequence. If it is part of the boot sequence, then removing the file and rebooting should be enough. If you need to run dbus-uuidgen yourself, pay attention to the warning in the man page: If you try to change an existing machine-id on a running system, it will probably result in bad things happening. Don't try to change this file. Also, don't make it the same on two different systems; it needs to be different anytime there are two different kernels running. So after doing this, definitely don't continue using the system without rebooting. As an extra precaution, you may instead reboot first into rescue mode (or as you suggested, boot from a live USB stick), but from my experience, that is not necessary. Bad things may happen, but the bad things that do happen are fixed by the reboot anyway.
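Putting it all together, a sketch for a Debian-style system (as root, followed by an immediate reboot):
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure    # recreates /var/lib/dbus/machine-id from the new value
reboot
| {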
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/402999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
403,010 | I have an example /etc/passwd file like this:
tom:x:1000:1000:Work:/home/tom:/bin/bash
george:x:1000:1000:Work:/home/george:/bin/bash
bla:x:1000:1000:Work:/home/bla:/bin/bash
boo:x:1000:1000:Work:/home/boo:/bin/bash
bee:x:1000:1000:Work:/root/list:/bin/bash
I'm trying to list all users with a home folder in /home/ . I wrote cat ~/Desktop/e.txt |awk -F ":" '{if ($6 ~/^/home/) print $1;}' where e.txt is the text I copied here. I understand there is a problem with the slashes in the pattern colliding with the /.../ regex delimiters, but how do I fix it so I can list the users in a one-line command? | You may escape the forward slashes as shown below: awk -F':' '$6~/^\/home\//{ print $1 }' ~/Desktop/e.txt Another trick would be using a compound field separator: awk -F'[:/]' '$7=="home"{ print $1 }' ~/Desktop/e.txt -F'[:/]' - treat both : and / as field separators
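With the sample file, either command prints the four /home users (bee is skipped because its home directory is under /root):
$ awk -F':' '$6~/^\/home\//{ print $1 }' ~/Desktop/e.txt
tom
george
bla
boo
| {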
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259356/"
]
} |
403,026 | I like this pattern ps aux | grep something . This way I can find the needed information easily without remembering command line options for the ps command. Unfortunately the ps command cuts the Linux user name (first column) after 7 characters and adds a + if the username is longer. In my case it is important since the usernames are like "foobar_123" and "foobar_234". I know that I could use the following command, but it would be very nice if I could still use the ps aux | grep something pattern. ps ax o user:16,pid,pcpu,pmem,vsz,rss,stat,start_time,time,cmd How do I get the above format via configuration, so that ps aux | grep something does not cut the username? Hint: answers like "use ps ... special...args..." don't match the above question. Version: procps-ng version 3.3.5 | If you know that a long command with multiple options does what you want, but you don't want to type it each time, then (assuming you're using Bash) you can create an alias for that command to make it easier. For example: alias ps_mod='ps ax o user:16,pid,pcpu,pmem,vsz,rss,stat,start_time,time,cmd' Then you can just type that simple command. You can add this line to your ~/.bash_profile (or ~/.bashrc, depending on your system) file, so that it is defined automatically on login. If you're not using Bash, then you can probably do something similar by defining a shell function instead.
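A function variant works in essentially any POSIX-style shell and keeps the same pipe-friendly pattern, a sketch:
ps_mod() {
  ps ax o user:16,pid,pcpu,pmem,vsz,rss,stat,start_time,time,cmd
}
ps_mod | grep something
| {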
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22068/"
]
} |
403,028 | I accidentally sourced the wrong environment from a script. Is there any way to 'unsource' it or, in other words, to revert it and restore the previous environment? The obvious answer is to start from a clean shell session, of course, but I'm curious if there's another solution. Update: I was referring only to a script that sets some variables. | No, there is no general method for undoing the effects of sourcing a script (or even of "merely" executing one). This is a corollary of the fact that there exist irreversible commands (e.g. file deletion). If your script contains an irreversible command then the effects of sourcing that script will also be irreversible.
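For the variables-only case from your update there is at least a preventive trick (not a true undo): source the script inside a subshell, so its assignments die with it. A sketch:
(
  . ./some-environment-script
  # work here with the sourced variables...
)
# back in the parent shell, the environment is untouched
| {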
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55407/"
]
} |
403,047 | I have a csv file like this:
5/05/2017;03;07;30;35;43;01;03
9/05/2017;08;12;16;22;26;06;07
12/05/2017;02;20;28;29;44;03;09
16/05/2017;08;11;15;20;30;03;08
19/05/2017;09;11;12;19;30;04;09
23/05/2017;08;15;25;27;42;01;04
26/05/2017;05;07;26;36;39;02;10
... that is, a date, plus a series of numbers separated by ; . I need to replace the date in the first position with a sequential number starting at 1004, like this:
1004;03;07;30;35;43;01;03
1005;08;12;16;22;26;06;07
1006;02;20;28;29;44;03;09
1007;08;11;15;20;30;03;08
1008;09;11;12;19;30;04;09
1009;08;15;25;27;42;01;04
1010;05;07;26;36;39;02;10
... I can remove the date using this: cut -f 2-8 -d';' 2.txt | xargs -I{} but how do I add a number in sequence replacing the date? | awk solution: awk -F';' 'BEGIN{ i=1004 }{ $1=i++ }1' OFS=';' file -F';' - input field separator i=1004 - starting increment The output:
1004;03;07;30;35;43;01;03
1005;08;12;16;22;26;06;07
1006;02;20;28;29;44;03;09
1007;08;11;15;20;30;03;08
1008;09;11;12;19;30;04;09
1009;08;15;25;27;42;01;04
1010;05;07;26;36;39;02;10
Or you may pass the variable i from "outside": awk -F';' '{ $1=i++ }1' i=1004 OFS=';' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45335/"
]
} |
403,048 | Here is the short version of my question: how do you convert comma-separated lists of dates from one format to another? More specifically, I'd like a single one-line command which converts strings of this form: YYYY/MM/DD,YYYY/MM/DD to strings of the following form: DD/MM/YYYY,DD/MM/YYYY Now I'll describe the context for my question. I have a CSV file whose rows contain pairs of adjacent dates in the following format: YYYY/MM/DD I run the following grep command to extract the pair of dates: grep -Po '[1-2][0-1][0-9][0-9]/[0-1][0-9]/[0-1][0-9]','[1-2][0-1][0-9][0-9]/[0-1][0-9]/[0-1][0-9]' file.csv This results, for example, in strings such as the following: 2016/05/16,2017/06/15 I am able to convert a single date-string using the date command as follows: date -d '2016/05/16' '+%d/%m/%Y' This produces the desired result: 16/05/2016 I tried applying this command to multiple input strings, e.g.: date -d"2016/05/16","2017/06/15" "+%d-%m-%Y" But that didn't work. I received the following error message: Error :- Invalid date - 2016/05/16,2017/06/15' What I want is a single command which will convert 2016/05/16,2017/06/15 to 16/05/2016,15/06/2017 . | date parses only one date at a time, which is why it rejects the combined string. But since this is a pure rearrangement of fixed-format fields, you don't need date at all: a single regex substitution converts every date in the string, however many there are.
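For example with sed -E (using | as the s/// delimiter to avoid escaping the slashes; this is one possible approach, not the only one):
sed -E 's|([0-9]{4})/([0-9]{2})/([0-9]{2})|\3/\2/\1|g'
$ echo '2016/05/16,2017/06/15' | sed -E 's|([0-9]{4})/([0-9]{2})/([0-9]{2})|\3/\2/\1|g'
16/05/2016,15/06/2017
If you do want date (e.g. so invalid dates are caught), split the pair on the comma, convert each half separately, and rejoin.
| {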
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259378/"
]
} |
403,049 | The application for which I'm writing a service file expects its STDIN to be connected to a tty, in other words to be run on a terminal. Systemd has the following settings which do the trick for my application; however, here I use a statically configured tty path, which could already be in use by another application or simply not exist on the system.
StandardInput=tty-force
TTYPath=/dev/tty30
It would be much nicer if systemd (or me, for that matter) could figure out an unused pseudo-terminal pair and use that instead. | systemd itself has no option to pick a free pseudo-terminal for you: the tty-based StandardInput= modes always operate on the concrete device named by TTYPath= . One workaround is to let a wrapper allocate a brand-new pty pair at start-up instead of reserving a numbered virtual console: for example the util-linux script utility, which runs its command on a freshly created pseudo-terminal.
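A sketch of the relevant unit line ( -q quiet, -e propagate the child's exit code, -f flush output, -c command; /dev/null discards the typescript; the program path is a placeholder):
[Service]
ExecStart=/usr/bin/script -qefc '/usr/local/bin/your-app' /dev/null
| {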
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111103/"
]
} |
403,066 | Whenever I use the KDE menu editor to change the layout of the KDE application menu, the configuration file ~/.config/menus/applications-kmenuedit.menu changes. This file seems to collect a lot of garbage every time an item is created, moved, changed, or deleted. It has now grown to over 100kB and is very confusing to read, and I am having trouble with menus that refuse to be renamed. There are still entries for applications I removed years ago. Is there any way I can clean this file of the unnecessary junk that's accumulated? I don't want to delete it and start from scratch because it would be a lot of work to recreate my highly customized menus | I think the menu file is generated from the *.desktop files on your system. You may look at the following locations:
/usr/share/applications
~/.local/share/applications/
and remove whatever you don't need (or add Hidden=true to the desktop file).
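To hide a stale entry without touching the system-wide file, a sketch ( oldapp.desktop is a placeholder name):
cp /usr/share/applications/oldapp.desktop ~/.local/share/applications/
echo 'Hidden=true' >> ~/.local/share/applications/oldapp.desktop
The per-user copy overrides the system one, and Hidden=true tells the menu system to treat the entry as deleted.
| {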
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2233/"
]
} |
403,138 | FF 57 will break legacy extensions. I have several of these which I want to keep working. For this, I want to prevent FF from auto-updating. However, although previously you had the option of doing that in the "Advanced" tab of the settings panel (as can be seen below)... ...now in FF 56, that option has been silently removed. The advanced tab doesn't even exist any longer: This is very annoying and has put us users on a collision course with the day FF 57 is imposed on us, when we suddenly find ourselves with missing functionality on which we have come to rely. But there may be some other way I'm not aware of to prevent auto-updating. I know that updating brings security fixes, new features, etc. I'm not interested in any of those. I just want my normal work not to be disrupted by missing extensions. Can forced auto-updating be prevented in any other way? | Simply do: sudo apt-mark hold firefox This will add the firefox package to the list of packages that should not receive updates. To reverse it: sudo apt-mark unhold firefox You can list the packages on hold via: sudo apt-mark showhold For further information see the apt-mark(8) man page | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146180/"
]
} |
403,178 | Invoking the task switcher (I use <Alt-Tab> ) prompts a (very distracting) message box: If I release <Alt-Tab> , but then activate it shortly after, it will continue to grow, as such: I'm running Debian 9.2, i.e. Stretch, on a HP Probook 6450b. None of the base switcher effects work, and neither do the downloadable ones I've tried. The switching itself works; there's just no effect or preview of any windows, it just switches instantaneously. Questions: What could be causing this? How can I solve it? Edit: I found a bug report . Reported Nov. 2015, maintainer* answered Aug. 2016, and then, silence. Doesn't bode well. * I assume | The message "The window switcher installation is broken, resources are missing" means you downloaded some theme or custom task switcher that is not compatible with your current version. I had the same problem and fixed it by selecting a working theme , then restarting KWin (or relogging) with the following command: kwin_x11 --replace Hope it helps. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259466/"
]
} |
403,181 | I've got a temporary situation where I'm working with pre-release F27, and need to keep one package at a specific (working) version, not to be upgraded until the subsequent versions stop breaking something. I know in general "you don't really want to do this", but as I said, it's a temporary measure for dealing with a pre-release system. You can "pin" packages with the apt system, but I can't locate anything equivalent for dnf. | The excludepkgs configuration option in dnf.conf lists packages that dnf should never try to install or upgrade; in a repo section it affects only that repo, in [main] all repos are affected. See the dnf.conf(5) man page for details.
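A sketch, with a placeholder package name:
# /etc/dnf/dnf.conf
[main]
excludepkgs=somepackage*
When you later do want to move the package forward, remove the line again or override it for a single run:
dnf --disableexcludes=main upgrade somepackage
| {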
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259471/"
]
} |
403,201 | I created a C++ image processor for Linux that opens a file browser when double-clicked. When I run it from the command line with a filename argument, it opens the file. I configured my system so that files with image extensions open with my program. Unfortunately, instead of opening the file as when launched with the filename on the command line, my program acts as if there is no filename parameter being passed to the program and opens the file browser. My question is: how do I get the filename argument from the O/S into my program when an image file that uses my program for opening is double-clicked? I can't find this information anywhere. I'm assuming the filename is in argv[1], but obviously it is not. The program does not require a terminal to run. I'm running Gnome Desktop on CENTOS/Linux 7, all current. Set up a Desktop icon that works fine to launch the application by clicking the icon. Problem is when double-clicking an image file, its name is not getting to the application, so the browser comes up. This is mysterious, because all command-line arguments work fine when running from a terminal. The big question is 'Where is the system putting the filename of the file that was double-clicked on?' Thanks. | The filename does arrive in argv[1], but only if the launcher asks for it. When you double-click a file, the desktop environment starts the associated application via its .desktop entry, and the clicked file's path is substituted only where the Exec= line contains a field code such as %f (a single file), %F (multiple files) or %u (a URL). A desktop entry created just to launch the program typically has no field code, so the application is started with no arguments and you get your file-browser fallback.
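A sketch of such an entry (names and paths are examples), saved as ~/.local/share/applications/myimageproc.desktop:
[Desktop Entry]
Type=Application
Name=My Image Processor
Exec=/usr/local/bin/myimageproc %f
MimeType=image/jpeg;image/png;
After saving it, refresh the cache if the tool is available ( update-desktop-database ~/.local/share/applications ) and re-associate your image types with this entry.
| {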
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96603/"
]
} |
403,223 | None of my aliases work. I tried the most simple thing, defining an alias to access the Desktop/ folder from $HOME :
# just to show that I do have a Desktop/ folder
~ bf$ cd Desktop/
Desktop bf$ cd ~
~ bf$ alias dd='Desktop/'
~ bf$ cd dd
-bash: cd: dd: No such file or directory
~ bf$ echo dd
dd
I also tried to save this alias in .bash_profile (in my $HOME directory) and source it, but it won't work. What's happening? Also, why does echoing the alias just return its name? | Two things are going on. First, Bash expands aliases only when they appear as the first word of a command. In cd dd and echo dd , dd is an argument, not the command, so the alias is never expanded; that's also why echo dd just prints dd . Second, even used in command position, alias dd='Desktop/' would make the shell try to execute Desktop/ as a command rather than change into it: an alias is text substituted into the command line, not a shortcut path.
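What you probably want instead is one of these (a sketch; note also that dd shadows the standard dd utility, so a different name is safer):
# an alias that contains the whole command:
alias desk='cd ~/Desktop'
# or a variable holding the path:
desk=~/Desktop
cd "$desk"
# or let cd search from $HOME automatically:
CDPATH=:$HOME
cd Desktop    # now works from anywhere
| {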
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259512/"
]
} |
403,244 | I observe behavior such as this on my Mac: Open a PDF with PDF Expert, make some changes to the file, move the file in Finder, save it in PDF Expert and it'll be correctly saved to the new place. Open a shell in a directory like ~/foo , trash the directory with another app and the shell's pwd correctly outputs ~/.Trash/foo . What's happening under the hood? These cases seem to indicate apps don't just hold an absolute path of the file like emacs (am I right with this?), or is it a totally different mechanism? | macOS has a special /.vol/ system mapped to the actual directories and files. The files and directories are accessible via /.vol/<device_id>/<inode_number> , regardless of where the files are on the file system. It is a nice little system. So, programs can for example get the inode number of /Users/jdoe/someFile.txt and then open it via /.vol/12345/6789 (in this case, the device id is 12345 and the inode number 6789). You then move /Users/jdoe/someFile.txt anywhere you want (on the same volume) and everything just works. You can even write a shell script that supports this magic. Use ls -di <file> to get the inode number:
$ ls -di /Users/jdoe/someFile.txt
6789 /Users/jdoe/someFile.txt
EDIT: You use stat to get the id of the volume and the inode number, according to the linked answer, as highlighted by IMSoP. GetFileInfo /.vol/12345/6789 would return the current location of the file previously located in /Users/jdoe/someFile.txt . See https://stackoverflow.com/questions/11951328/is-there-any-function-to-retrieve-the-path-associated-with-an-inode for more information. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259524/"
]
} |
403,262 | I want to use CIFS against a Windows server in order to copy files: [root@host ~]# mount -t cifs -o username=<share user>,password=<share password>,domain=example.com //WIN_PC_IP/<share name> /mnt Where do I need to configure the password used here? mount -t cifs -v //$win_machine/hostlist /mnt -o user=file,pass=$pass | The password is simply the Windows account's password for <share user> : it is managed on the Windows side in that user's account and the share's access settings, not in any separate CIFS configuration. On the Linux side, rather than passing it on the command line (where it ends up in the shell history and the process list), you can point mount.cifs at a credentials file.
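A sketch; the file location is arbitrary, but keep it readable by root only:
# /root/.smbcredentials
username=<share user>
password=<share password>
domain=example.com
chmod 600 /root/.smbcredentials
mount -t cifs //WIN_PC_IP/<share name> /mnt -o credentials=/root/.smbcredentials
| {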
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
403,297 | Can I execute a command in the foreground by typing the command on the terminal? e.g. command & | If you drop the & , the command will run in the foreground: command If you want to bring a background command to the foreground, run fg You can specify a “jobspec”; for example, fg %2 will bring job number 2 to the foreground ( jobs will list the job table). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255278/"
]
} |
403,349 | Is there any difference between these two commands: exec "$(dirname "$0")/suman-shell"; exit $?; and exec "$(dirname "$0")/suman-shell"; exit; Is the $? redundant in the first case? | There are several flaws in your script: exec "$(dirname "$0")/suman-shell";exit $?; The first is that the ending semicolons are redundant. They are harmless but serve no purpose. An end of line is already a separator; semicolons are used as a separator between multiple commands on a single line. Their presence here slightly worsens the code's readability. The second is that exit by default returns the exit status of the previous command, so stating $? is redundant, albeit signalling the intent. The third is that exec never returns under normal circumstances 1 , so the exit call is not reached in the first place and is also redundant. 1 The only cases where exec returns are an empty argument, a broken redirection, or the execfail bash option being set (which is not the default setting). The first and second ones do not apply here, and there is no evidence the third one does.
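A quick throwaway sketch illustrating the third point:
#!/bin/sh
exec echo "the shell is replaced here"
echo "never printed"   # unreachable: the successful exec replaced the process
Running it prints only the first line, because control never comes back to the script after a successful exec .
| {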
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
403,424 | I want to ssh into my Linux Mint 18 server (running X11) and log into a Docker container and have IPython matplotlib plots forwarded to the local client (also Mint). All in the local network. The closest question I found was: https://stackoverflow.com/questions/25281992/alternatives-to-ssh-x11-forwarding-for-docker-containers Following this, I could get a plot GUI out from the Docker container to the local machine's display (i.e., the Mint server) by passing the -e DISPLAY=$DISPLAY option to the docker run command. I can also ssh with the -X option to the server to get an xeyes window on the client. But if I ssh into the server with the -X option and log in to the container run with -e DISPLAY=localhost or the client IP , I still cannot get a plot to the client machine. I know I could use VNC to get around it. But how can I do this with X11 forwarding properly? | You need to resolve these things for it to work. That the X application can find the X server:
- For SSH there needs to be a tunnel ("ssh -X" and "X11Forwarding yes" in /etc/ssh/sshd_config)
- The address must be in $DISPLAY (using -e). You must replace "localhost" with the actual IP address of the Docker host as seen from the Docker container.
That the X application is authorised to talk to the X server:
- Propagate the xauth magic cookie into the Docker container
- Open up any firewall ports from the Docker host to the Docker container for the X11 port
- Make sure the SSH server is configured to accept X11 TCP connections on a remote IP.
See my question (and answer) here on StackOverflow for details of how it can be done: https://stackoverflow.com/questions/48235040/run-x11-application-in-a-docker-container-reliably-on-a-server-connected-via-ssh
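One common shortcut that sidesteps the address and firewall points is sharing the host's network namespace, a sketch (the image name is a placeholder, and the bind-mounted .Xauthority is the crude variant of cookie propagation):
# on the client
ssh -X user@mint-server
# on the server
docker run -it --rm --net=host \
  -e DISPLAY="$DISPLAY" \
  -v "$HOME/.Xauthority:/root/.Xauthority:ro" \
  my-matplotlib-image
With bridged networking you would instead point the container's DISPLAY at the Docker host's IP and copy the cookie over with xauth, as the linked answer describes.
| {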
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95577/"
]
} |
403,446 | I have log files which are named in the following manner:
localhost_log_file.2017-09-01.txt
localhost_log_file.2017-09-02.txt
...
localhost_log_file.2017-10-30.txt
In other words, each log file has the following form: localhost_log_file.YYYY-MM-DD.txt I want to cat all the log files taken between the dates of 2017-09-03 and 2017-10-08, i.e. every log file starting from localhost_log_file.2017-09-03.txt through localhost_log_file.2017-10-08.txt . Currently what I do is produce three intermediate files by separately executing each of the following commands:
for((i=3;i<=9;i++)) do cat localhost_log_file.2017-09-0$i.txt >> log1.csv ; done;
for((i=10;i<=30;i++)) do cat localhost_log_file.2017-09-$i.txt >> log2.csv ; done;
for((i=1;i<=8;i++)) do cat localhost_log_file.2017-10-0$i.txt >> log3.csv ; done;
Then I combine the intermediate files as follows: cat log1.csv log2.csv log3.csv > totallog.csv Is there a better way to do this? | How about using the date utility to iterate through the range of dates you're interested in? Here is what that might look like for your example:
# Set the date counter to the start date
d=2017-09-03
# Iterate until we reach the end date (i.e. the date after the last date we want)
while [ "$d" != 2017-10-09 ]; do
  # cat each file
  cat "localhost_log_file.${d}.txt";
  # Increment the date counter
  d="$(date -I -d "$d + 1 day")";
done
See this for more information: Bash: Looping through dates Alternatively, you can pass the results of the loop to the cat command instead of invoking cat in the body of the loop. Here is what that could look like using command-substitution:
d=2017-09-03
cat $(while [ "$d" != 2017-10-09 ]; do
  echo "localhost_log_file.${d}.txt";
  d="$(date -I -d "$d + 1 day")";
done)
And here is the same thing using a pipe and xargs :
d=2017-09-03
while [ "$d" != 2017-10-09 ]; do
  echo "localhost_log_file.${d}.txt";
  d="$(date -I -d "$d + 1 day")";
done | xargs cat | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250197/"
]
} |
403,486 | In a directory, I have hundreds of sub-directories. Each sub-directory has hundreds of jpg pictures. If the folder is called "ABC_DEF", the files inside the folder will be called "ABC_DEF-001.jpg", "ABC_DEF-002.jpg", so on and so forth. For example:
---Main Directory
------Sub-Directory ABC_DEF
----------ABC_DEF-001.jpg
----------ABC_DEF-002.jpg
------Sub-Directory ABC_GHI
----------ABC_GHI-001.jpg
----------ABC_GHI-002.jpg
From each of the sub-directories I want to copy only the first file, i.e. the file with the suffix -001.jpg , to a common destination folder called DESTDIR. I have changed the code given here to suit my use case. However, it always prints the first directory along with the filenames, and I am not able to copy the files to the desired destination. The following is the code:
DIR=/var/www/html/beeinfo.org/resources/videos/
find "$DIR" -type d |
while read d; do
  files=$(ls -t "$d" | sed -n '1h; $ { s/\n/,/g; p }')
  printf '%s,%s\n' "$files"
done
How can I fix this code? | Why find when all files are in directories of the same depth? cd -- "$DIR" && cp -- */*-001.jpg /destination/path | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247049/"
]
} |
403,511 | How to delete a route like the one below from a UNIX server? 122.252.228.38/255.255.255.255 122.252.228.38 UH 0 lan4 4136 | You haven't included which system you're on or which tool-set you're using, but the two most common commands for managing the routing tables are the route and ip commands. Here is how you might remove the route by using the route command (from the net-tools package): route del -net 122.252.228.38 netmask 255.255.255.255 And here is how you might delete the same route using the ip command (from the iproute2 package): ip route del 122.252.228.38/32 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403511",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259718/"
]
} |
403,543 | Example:
# show starting permissions
% stat -c '%04a' ~/testdir
0700
# change permissions to 2700
% chmod 2700 ~/testdir
# check
% stat -c '%04a' ~/testdir
2700
# so far so good...
# now, change permissions back to 0700
% chmod 0700 ~/testdir
# check
% stat -c '%04a' ~/testdir
2700
# huh???
# try a different tack
% chmod g-s ~/testdir
% stat -c '%04a' ~/testdir
0700
Bug or feature? Why does chmod 0700 ~/testdir fail to change the permissions from 2700 to 0700 ? I've observed the same behavior in several different filesystems. E.g., in the latest one, the relevant line of mount 's output is /dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) Also, FWIW:
% stat -c '%04a' ~/
0755
| Assuming you're using GNU chmod , this is documented in the manpage : chmod preserves a directory's set-user-ID and set-group-ID bits unless you explicitly specify otherwise. You can set or clear the bits with symbolic modes like u+s and g-s , and you can set (but not clear) the bits with a numeric mode. This is allowed in POSIX : For each bit set in the octal number, the corresponding file permission bit shown in the following table shall be set; all other file permission bits shall be cleared. For regular files, for each bit set in the octal number corresponding to the set-user-ID-on-execution or the set-group-ID-on-execution, bits shown in the following table shall be set; if these bits are not set in the octal number, they are cleared. For other file types, it is implementation-defined whether or not requests to set or clear the set-user-ID-on-execution or set-group-ID-on-execution bits are honored. The reasoning for the behaviour in GNU chmod is given in the release notes for coreutils 6.0 : chmod , install , and mkdir now preserve a directory's set-user-ID and set-group-ID bits unless you explicitly request otherwise. E.g., chmod 755 DIR and chmod u=rwx,go=rx DIR now preserve DIR 's set-user-ID and set-group-ID bits instead of clearing them, and similarly for mkdir -m 755 DIR and mkdir -m u=rwx,go=rx DIR . To clear the bits, mention them explicitly in a symbolic mode, e.g., mkdir -m u=rwx,go=rx,-s DIR . To set them, mention them explicitly in either a symbolic or a numeric mode, e.g., mkdir -m 2755 DIR , mkdir -m u=rwx,go=rx,g+s DIR . This change is for convenience on systems where these bits inherit from parents. Unfortunately other operating systems are not consistent here, and portable scripts cannot assume the bits are set, cleared, or preserved, even when the bits are explicitly mentioned. For example, OpenBSD 3.9 mkdir -m 777 D preserves D 's setgid bit but chmod 777 D clears it. Conversely, Solaris 10 mkdir -m 777 D , mkdir -m g-s D , and chmod 0777 D all preserve D 's setgid bit, and you must use something like chmod g-s D to clear it. There's more on the topic in #8391 , including the further rationale that the leading 0 is ambiguous (it could indicate either cleared bits, or an octal value, in the user's mind). The coreutils manual also has a dedicated section, Directories and the Set-User-ID and Set-Group-ID Bits ; this reveals that there are GNU extensions to allow clearing the bits in question:
chmod =700 ~/testdir
chmod 00700 ~/testdir
both clear the bits (but are non-portable). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
403,634 | After reading here and then here , I came to the conclusion that what Bash calls "special parameters" are quite like environment variables, but the main difference is that we shouldn't reassign special parameters, a thing we could otherwise do without restriction (but with much caution) for environment variables. Hence, this is my question: should we call Bash special parameters "environment constants" (at least metaphorically)? | No. "Environment" has a specific meaning, referring to a set of variables that are passed down to child processes, at which point the variables are stored in their process space. Calling other variables "environment" would be misleading and inaccurate. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
403,676 | I am aware that aliases can be bypassed by quoting the command itself. However, it seems that if builtin commands are "shadowed" by functions with the same names, there is no way to execute the underlying builtin command except...by using a builtin command. If you can get to it. To quote the bash man page (at LESS='+/^COMMAND EXECUTION' man bash ): COMMAND EXECUTION After a command has been split into words, if it results in a simple command and an optional list of arguments, the following actions are taken. If the command name contains no slashes, the shell attempts to locate it. If there exists a shell function by that name, that function is invoked as described above in FUNCTIONS. If the name does not match a function, the shell searches for it in the list of shell builtins. If a match is found, that builtin is invoked. So, is it possible to recover from the following, without starting a new shell?
unset() { printf 'Haha, nice try!\n%s\n' "$*";}
builtin() { printf 'Haha, nice try!\n%s\n' "$*";}
command() { printf 'Haha, nice try!\n%s\n' "$*";}
I didn't even add readonly -f unset builtin command . If it is possible to recover from the above, consider this a bonus question: can you still recover if all three functions are marked readonly? I came up with this question in Bash, but I'm interested in its applicability to other shells as well. | When bash is in posix mode, some builtins are considered special, which is compliant with the POSIX standard. One special thing about those special builtins: they are found before functions in the command lookup process. Taking advantage of this, you can try:
$ unset builtin
Haha, nice try!
builtin
$ set -o posix
$ unset builtin
$ builtin command -v echo
echo
though it does not work if set is overridden by a function named set :
$ set() { printf 'Haha, nice try!\n%s\n' "$*";}
$ set -o posix
Haha, nice try!
In this case, you just have to set POSIXLY_CORRECT to make bash enter posix mode, then you have all special builtins:
$ POSIXLY_CORRECT=1
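From there, one possible cleanup (a sketch: unset is now found as the special builtin, ahead of any function):
$ unset -f set unset builtin command
| {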
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
403,677 | What is the difference between expanding a variable and printing it (as with echo or printf )? If I understand it correctly, printing a variable (its value) is just an example of expanding it. Maybe substituting its value is also an example. Update Please give a short definition of the term "variable expansion" in your own words , just before explaining the difference. | Expanding and printing are two different actions. Expansion covers a number of phases in the shell's processing of a command : in Bash, brace expansion ( {1..5} becomes 1 2 3 4 5 ), tilde expansion ( ~user becomes /home/user as appropriate), shell parameter expansion ( ${variable} is replaced with the value of the variable), command substitution, arithmetic expansion, process substitution, word splitting, and filename expansion. (See also POSIX word expansion .) One possible explanation for the use of the term expansion for all these is that they can all result in the command expanding, i.e. becoming longer (which is a particular concern when developing a shell in C). In your case, the expansion is parameter expansion: echo "${variable}" becomes echo "value" after what you refer to as variable substitution, then echo value after quote removal (simplifying a little), and echo does the printing. It just so happens that echo and printf are shell built-ins, so only the shell is involved, but the steps are separate nevertheless, and the situation would be identical with external commands. So printing isn't a special case of expansion; however, substitution (as in your linked question) is; see the Bash manual for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
403,697 | I have encountered an empty file just called 'sudo' in my home directory. The file size is 0 bytes. Is it safe to just delete this file? | Yes, that's not a system file. It was probably created by mistake. Check its owner and creation date; that'll tell you more about it. You can safely delete it. At worst, you can recreate it via the command touch ~/sudo . Config files and other system files usually start with a dot ( . ). These are the so-called dotfiles, and you should not touch them unless you know what you're doing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236149/"
]
} |
403,725 | A Linux service is creating a huge log file. At the moment I am keeping it under control using cron (every X minutes I reduce it to the last X lines): */5 * * * * root echo "$(tail -n 1000 /var/log/XXX/logger_file.log)" > /var/log/XXX/logger_file.log Is there a cleaner way to do the job? The first line after the cut can even end up damaged if the cut is size-based (i.e. keeping the last 1024 bytes). Of course I cannot modify the service itself to make it quieter. | Have you tried logrotate? man logrotate Here is a guide that could help: How to Use logrotate to Manage Log Files
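A sketch for this particular file (the thresholds are examples); copytruncate is the key directive when the service keeps the log open and can't be told to reopen it:
# /etc/logrotate.d/xxx-logger
/var/log/XXX/logger_file.log {
    size 1M
    rotate 4
    copytruncate
    compress
    missingok
    notifempty
}
| {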
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187245/"
]
} |