source_id | question | response | metadata
---|---|---|---|
213,101 | I want to know what the implications are of creating an account using the -r option: # useradd -r ... The help says: -r, --system Create a system account. System users will be created with no aging information in /etc/shadow, and their numeric identifiers are chosen in the SYS_UID_MIN-SYS_UID_MAX range, defined in /etc/login.defs, instead of UID_MIN-UID_MAX (and their GID counterparts for the creation of groups). Note that useradd will not create a home directory for such an user, regardless of the default setting in /etc/login.defs (CREATE_HOME). You have to specify the -m options if you want a home directory for a system account to be created. But, beyond assigning lower values to uid , gid and groups : Question 1 What files are affected? Question 2 What additional performance implications does this type of system account present? Question 3 What behavior ignores or stops submit? Question 4 Can I change an account created with the "-r" option to an account as if it were created without that option? | Looking at the current source for useradd , one can see exactly what else changes when -r is specified: subordinate uid/gid feature is disabled mail directory is not created So, no major difference from a regular user account. Certainly no automatic performance gain or loss. I am not sure what you mean by Q1 and Q3; as for Q4 -- technically, yes; but since that involves changing the user ID, any files owned by the former UID must be chowned to the new one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120753/"
]
} |
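A rough sketch of what Q4 involves in practice; the user name svcuser, the old UID/GID 999 and the new UID/GID 1001 here are invented for illustration, and the new IDs should come from the regular UID_MIN-UID_MAX range in /etc/login.defs. usermod re-chowns the home directory itself, so the find commands only need to re-own files elsewhere:
usermod -u 1001 svcuser
groupmod -g 1001 svcuser
find / -xdev -user 999 -exec chown -h svcuser {} +
find / -xdev -group 999 -exec chgrp -h svcuser {} +
This still leaves the empty aging fields in /etc/shadow alone; chage can fill those in afterwards if needed.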
213,110 | I have a shell script with nested loops and just found out that "exit" doesn't really exit the script, but only the current loop. Is there another way to completely exit the script on a certain error condition? I don't want to use "set -e", because there are acceptable errors and it would require too much rewriting. Right now, I am using kill to manually kill the process, but it seems there should be a better way to do this. | Your problem is not nested loops, per se. It's that one or more of your inner loops is running in a subshell . This works:
#!/bin/bash
for i in $(seq 1 100); do
    echo i $i
    for j in $(seq 1 10) ; do
        echo j $j
        sleep 1
        [[ $j = 3 ]] && { echo "I've had enough!" 1>&2; exit 1; }
    done
    echo "After the j loop."
done
echo "After all the loops."
output:
i 1
j 1
j 2
j 3
I've had enough!
This presents the problem you have described:
#!/bin/bash
for i in $(seq 1 100); do
    echo i $i
    cat /etc/passwd | while read line; do
        echo LINE $line
        sleep 1
        [[ "$line" = "daemon:x:2:2:daemon:/sbin:/sbin/nologin" ]] && { echo "I've had enough!" 1>&2; exit 1; }
    done
    echo "After the j loop."
done
echo "After all the loops."
output:
i 1
LINE root:x:0:0:root:/root:/bin/bash
LINE bin:x:1:1:bin:/bin:/sbin/nologin
LINE daemon:x:2:2:daemon:/sbin:/sbin/nologin
I've had enough!
After the j loop.
i 2
LINE root:x:0:0:root:/root:/bin/bash
LINE bin:x:1:1:bin:/bin:/sbin/nologin
LINE daemon:x:2:2:daemon:/sbin:/sbin/nologin
I've had enough!
After the j loop.
i 3
LINE root:x:0:0:root:/root:/bin/bash
(...etc...)
Here is the solution; you have to test the return value of inner loops that run in subshells:
#!/bin/bash
for i in $(seq 1 100); do
    echo i $i
    cat /etc/passwd | while read line; do
        echo LINE $line
        sleep 1
        [[ "$line" = "daemon:x:2:2:daemon:/sbin:/sbin/nologin" ]] && { echo "I've had enough!" 1>&2; exit 1; }
    done
    err=$?; [[ $err != 0 ]] && exit $err
    echo "After the j loop."
done
echo "After all the loops."
Note the test: err=$?; [[ $err != 0 ]] && exit $err
output:
i 1
LINE root:x:0:0:root:/root:/bin/bash
LINE bin:x:1:1:bin:/bin:/sbin/nologin
LINE daemon:x:2:2:daemon:/sbin:/sbin/nologin
I've had enough!
Edit: To verify what subshell you're in, modify the "answer" script to tell you what the process ID of your current shell is. NOTE: This only works in bash 4:
#!/bin/bash
for i in $(seq 1 100); do
    echo pid $BASHPID i $i
    cat /etc/passwd | while read line; do
        echo pid $BASHPID LINE $line
        sleep 1
        [[ "$line" = "daemon:x:2:2:daemon:/sbin:/sbin/nologin" ]] && { echo "I've had enough!" 1>&2; exit 1; }
    done
    err=$?; [[ $err != 0 ]] && echo pid $BASHPID && exit $err
    echo "After the j loop."
done
echo "After all the loops."
output:
pid 31793 i 1
pid 31796 LINE root:x:0:0:root:/root:/bin/bash
pid 31796 LINE bin:x:1:1:bin:/bin:/sbin/nologin
pid 31796 LINE daemon:x:2:2:daemon:/sbin:/sbin/nologin
I've had enough!
pid 31793
The variables "i" and "j" brought to you courtesy of Fortran. Have a nice day. :-) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213110",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112568/"
]
} |
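For completeness, the subshell can also be avoided entirely by feeding the loop with a redirection instead of a pipe, so that while runs in the current shell and exit really exits the script; a sketch along the lines of the scripts above:
#!/bin/bash
for i in $(seq 1 100); do
    echo i $i
    while read line; do
        echo LINE $line
        [[ "$line" = "daemon:x:2:2:daemon:/sbin:/sbin/nologin" ]] && { echo "I've had enough!" 1>&2; exit 1; }
    done < /etc/passwd
    echo "After the j loop."
done
echo "After all the loops."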
213,128 | sudo ps o gpid,comm reports something like 3029 bash but the command has parameters --arbitrary -other -searchword . Is there a way to display these arguments? | Rather than formatting the output of ps and then using grep , you can simply use pgrep with the -a option: pgrep -a bash This will show the command name ( bash ) along with its arguments (if any). From man pgrep : -a, --list-full List the full command line as well as the process ID. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121562/"
]
} |
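If you would rather stay with ps , its -C selector plus an explicit output format gives much the same result:
ps -o pid,args -C bash
-C selects processes by command name, and the args output column shows the full command line including its parameters.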
213,137 | I have a USB stick and an NTFS hard drive partition that I'd want to use in NixOS. On some other distribution, I'd mount it using ntfs-3g in /mnt . But on NixOS, the directory doesn't exist; I suppose NixOS has some other canonical way and/or place of doing that. In NixOS, how should one set up automounting of external partitions, preferably using configuration.nix ? | Well, I customarily use bashmount or udisksctl to mount USB sticks. They will be mounted in /run/media/$(user name)/$(drive label or UUID) . But if you are talking about an internal hard disk or a partition on a local hard drive, the simplest way is:
1. Create a directory of your preference, such as /mnt/windows-partition
2. Mount the desired partition, say /dev/sdn5, in that directory: $ mount /dev/sdn5 /mnt/windows-partition
3. Run nixos-generate-config . It will update /etc/nixos/hardware-configuration.nix to match the new partition configuration (and configuration.nix stays untouched, unless you use the --force option).
4. And, finally, a nixos-rebuild switch ! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213137",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121575/"
]
} |
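For the declarative route the question actually asks about, the same mount can also be written directly in configuration.nix ; the UUID and mount point below are placeholders, and NTFS support likely needs to be enabled as shown:
boot.supportedFilesystems = [ "ntfs" ];
fileSystems."/mnt/windows-partition" = {
  device = "/dev/disk/by-uuid/XXXX-XXXX";
  fsType = "ntfs-3g";
};
followed, as above, by nixos-rebuild switch .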
213,185 | What's the right approach to handle restarting a service in case one of its dependencies fails on startup (but succeeds after retry). Here's a contrived repro to make the problem clearer. a.service (simulates failure on first try and success on second try) [Unit]Description=A[Service]ExecStartPre=/bin/sh -x -c "[ -f /tmp/success ] || (touch /tmp/success && sleep 10)"ExecStart=/bin/trueTimeoutStartSec=5Restart=on-failureRestartSec=5RemainAfterExit=yes b.service (trivially succeeds after A starts) [Unit]Description=BAfter=a.serviceRequires=a.service[Service]ExecStart=/bin/trueRemainAfterExit=yesRestart=on-failureRestartSec=5 Let's start b: # systemctl start bA dependency job for b.service failed. See 'journalctl -xe' for details. Logs: Jun 30 21:34:54 debug systemd[1]: Starting A...Jun 30 21:34:54 debug sh[1308]: + '[' -f /tmp/success ']'Jun 30 21:34:54 debug sh[1308]: + touch /tmp/successJun 30 21:34:54 debug sh[1308]: + sleep 10Jun 30 21:34:59 debug systemd[1]: a.service start-pre operation timed out. Terminating.Jun 30 21:34:59 debug systemd[1]: Failed to start A.Jun 30 21:34:59 debug systemd[1]: Dependency failed for B.Jun 30 21:34:59 debug systemd[1]: Job b.service/start failed with result 'dependency'.Jun 30 21:34:59 debug systemd[1]: Unit a.service entered failed state.Jun 30 21:34:59 debug systemd[1]: a.service failed.Jun 30 21:35:04 debug systemd[1]: a.service holdoff time over, scheduling restart.Jun 30 21:35:04 debug systemd[1]: Starting A...Jun 30 21:35:04 debug systemd[1]: Started A.Jun 30 21:35:04 debug sh[1314]: + '[' -f /tmp/success ']' A has been successfully started but B is left in a failed state and won't retry. EDIT I added the following to both services and now B successfully starts when A starts, but I can't explain why. [Install]WantedBy=multi-user.target Why would this affect the relationship between A and B? EDIT2 Above "fix" doesn't work in systemd 220. systemd 219 debug logs systemd219 systemd[1]: Trying to enqueue job b.service/start/replacesystemd219 systemd[1]: Installed new job b.service/start as 3454systemd219 systemd[1]: Installed new job a.service/start as 3455systemd219 systemd[1]: Enqueued job b.service/start as 3454systemd219 systemd[1]: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch oldcoreossystemd219 systemd[1]: Forked /bin/sh as 1502systemd219 systemd[1]: a.service changed dead -> start-presystemd219 systemd[1]: Starting A...systemd219 systemd[1502]: Executing: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmpoldcoreossystemd219 sh[1502]: + '[' -f /tmp/success ']'systemd219 sh[1502]: + touch /tmp/successsystemd219 sh[1502]: + sleep 10systemd219 systemd[1]: a.service start-pre operation timed out. 
Terminating.systemd219 systemd[1]: a.service changed start-pre -> final-sigtermsystemd219 systemd[1]: Child 1502 belongs to a.servicesystemd219 systemd[1]: a.service: control process exited, code=killed status=15systemd219 systemd[1]: a.service got final SIGCHLD for state final-sigtermsystemd219 systemd[1]: a.service changed final-sigterm -> failedsystemd219 systemd[1]: Job a.service/start finished, result=failedsystemd219 systemd[1]: Failed to start A.systemd219 systemd[1]: Job b.service/start finished, result=dependencysystemd219 systemd[1]: Dependency failed for B.systemd219 systemd[1]: Job b.service/start failed with result 'dependency'.systemd219 systemd[1]: Unit a.service entered failed state.systemd219 systemd[1]: a.service failed.systemd219 systemd[1]: a.service changed failed -> auto-restartsystemd219 systemd[1]: a.service: cgroup is emptysystemd219 systemd[1]: a.service: cgroup is emptysystemd219 systemd[1]: a.service holdoff time over, scheduling restart.systemd219 systemd[1]: Trying to enqueue job a.service/restart/failsystemd219 systemd[1]: Installed new job a.service/restart as 3718systemd219 systemd[1]: Installed new job b.service/restart as 3803systemd219 systemd[1]: Enqueued job a.service/restart as 3718systemd219 systemd[1]: a.service scheduled restart job.systemd219 systemd[1]: Job b.service/restart finished, result=donesystemd219 systemd[1]: Converting job b.service/restart -> b.service/startsystemd219 systemd[1]: a.service changed auto-restart -> deadsystemd219 systemd[1]: Job a.service/restart finished, result=donesystemd219 systemd[1]: Converting job a.service/restart -> a.service/startsystemd219 systemd[1]: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch oldcoreossystemd219 systemd[1]: Forked /bin/sh as 1558systemd219 systemd[1]: a.service changed dead -> start-presystemd219 systemd[1]: Starting A...systemd219 systemd[1]: Child 1558 belongs to a.servicesystemd219 systemd[1]: a.service: control process exited, code=exited status=0systemd219 systemd[1]: a.service got final SIGCHLD for state start-presystemd219 systemd[1]: About to execute: /bin/truesystemd219 systemd[1]: Forked /bin/true as 1561systemd219 systemd[1]: a.service changed start-pre -> runningsystemd219 systemd[1]: Job a.service/start finished, result=donesystemd219 systemd[1]: Started A.systemd219 systemd[1]: Child 1561 belongs to a.servicesystemd219 systemd[1]: a.service: main process exited, code=exited, status=0/SUCCESSsystemd219 systemd[1]: a.service changed running -> exitedsystemd219 systemd[1]: a.service: cgroup is emptysystemd219 systemd[1]: About to execute: /bin/truesystemd219 systemd[1]: Forked /bin/true as 1563systemd219 systemd[1]: b.service changed dead -> runningsystemd219 systemd[1]: Job b.service/start finished, result=donesystemd219 systemd[1]: Started B.systemd219 systemd[1]: Starting B...systemd219 systemd[1]: Child 1563 belongs to b.servicesystemd219 systemd[1]: b.service: main process exited, code=exited, status=0/SUCCESSsystemd219 systemd[1]: b.service changed running -> exitedsystemd219 systemd[1]: b.service: cgroup is emptysystemd219 sh[1558]: + '[' -f /tmp/success ']' systemd 220 debug logs systemd220 systemd[1]: b.service: Trying to enqueue job b.service/start/replacesystemd220 systemd[1]: a.service: Installed new job a.service/start as 4846systemd220 systemd[1]: b.service: Installed new job b.service/start as 4761systemd220 systemd[1]: b.service: Enqueued job b.service/start as 4761systemd220 systemd[1]: a.service: About to execute: /bin/sh -x -c '[ -f 
/tmp/success ] || (touch /tmp/success && sleep 10)'systemd220 systemd[1]: a.service: Forked /bin/sh as 2032systemd220 systemd[1]: a.service: Changed dead -> start-presystemd220 systemd[1]: Starting A...systemd220 systemd[2032]: a.service: Executing: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmp/success && sleep 10)'systemd220 sh[2032]: + '[' -f /tmp/success ']'systemd220 sh[2032]: + touch /tmp/successsystemd220 sh[2032]: + sleep 10systemd220 systemd[1]: a.service: Start-pre operation timed out. Terminating.systemd220 systemd[1]: a.service: Changed start-pre -> final-sigtermsystemd220 systemd[1]: a.service: Child 2032 belongs to a.servicesystemd220 systemd[1]: a.service: Control process exited, code=killed status=15systemd220 systemd[1]: a.service: Got final SIGCHLD for state final-sigterm.systemd220 systemd[1]: a.service: Changed final-sigterm -> failedsystemd220 systemd[1]: a.service: Job a.service/start finished, result=failedsystemd220 systemd[1]: Failed to start A.systemd220 systemd[1]: b.service: Job b.service/start finished, result=dependencysystemd220 systemd[1]: Dependency failed for B.systemd220 systemd[1]: b.service: Job b.service/start failed with result 'dependency'.systemd220 systemd[1]: a.service: Unit entered failed state.systemd220 systemd[1]: a.service: Failed with result 'timeout'.systemd220 systemd[1]: a.service: Changed failed -> auto-restartsystemd220 systemd[1]: a.service: cgroup is emptysystemd220 systemd[1]: a.service: Failed to send unit change signal for a.service: Transport endpoint is not connectedsystemd220 systemd[1]: a.service: Service hold-off time over, scheduling restart.systemd220 systemd[1]: a.service: Trying to enqueue job a.service/restart/failsystemd220 systemd[1]: a.service: Installed new job a.service/restart as 5190systemd220 systemd[1]: a.service: Enqueued job a.service/restart as 5190systemd220 systemd[1]: a.service: Scheduled restart job.systemd220 systemd[1]: a.service: Changed auto-restart -> deadsystemd220 systemd[1]: a.service: Job a.service/restart finished, result=donesystemd220 systemd[1]: a.service: Converting job a.service/restart -> a.service/startsystemd220 systemd[1]: a.service: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmp/success && sleep 10)'systemd220 systemd[1]: a.service: Forked /bin/sh as 2132systemd220 systemd[1]: a.service: Changed dead -> start-presystemd220 systemd[1]: Starting A...systemd220 systemd[1]: a.service: Child 2132 belongs to a.servicesystemd220 systemd[1]: a.service: Control process exited, code=exited status=0systemd220 systemd[1]: a.service: Got final SIGCHLD for state start-pre.systemd220 systemd[1]: a.service: About to execute: /bin/truesystemd220 systemd[1]: a.service: Forked /bin/true as 2136systemd220 systemd[1]: a.service: Changed start-pre -> runningsystemd220 systemd[1]: a.service: Job a.service/start finished, result=donesystemd220 systemd[1]: Started A.systemd220 systemd[1]: a.service: Child 2136 belongs to a.servicesystemd220 systemd[1]: a.service: Main process exited, code=exited, status=0/SUCCESSsystemd220 systemd[1]: a.service: Changed running -> exitedsystemd220 systemd[1]: a.service: cgroup is emptysystemd220 systemd[1]: a.service: cgroup is emptysystemd220 systemd[1]: a.service: cgroup is emptysystemd220 systemd[1]: a.service: cgroup is emptysystemd220 sh[2132]: + '[' -f /tmp/success ']' | I'll try to summarize my findings for this issue in case someone comes across this as information on this topic is scant. 
Restart=on-failure only applies to process failures (it does not apply to failures caused by dependency failures). The fact that dependent failed units got restarted under certain conditions when a dependency successfully restarted was a bug in systemd < 220: http://lists.freedesktop.org/archives/systemd-devel/2015-July/033513.html If there's even a small chance that a dependency might fail on start and you care about resiliency, don't use Before / After and instead perform a check on some artifact that the dependency produces, e.g.
ExecStartPre=/usr/bin/test -f /some/thing
Restart=on-failure
RestartSec=5s
You could even use systemctl is-active <dependency> . Very hacky, but I haven't found any better options. In my opinion, not having a way to handle dependency failures is a flaw in systemd. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/213185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121599/"
]
} |
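Putting the workaround from the end of the answer together, a hedged sketch of what b.service could look like, using systemctl is-active as the check since A in this contrived example produces no artifact of its own (the systemctl path may differ per distribution):
[Unit]
Description=B
[Service]
ExecStartPre=/usr/bin/systemctl is-active a.service
ExecStart=/bin/true
RemainAfterExit=yes
Restart=on-failure
RestartSec=5
With this, B never enters the failed-by-dependency state; it just keeps retrying every 5 seconds until A reports active.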
213,213 | I am trying to understand the difference between "local port forwarding" and "dynamic port forwarding". In the ssh command for "local port forwarding", is it always required to specify the destination host? Does "dynamic" in "dynamic port forwarding" mean that, in the ssh command for "dynamic port forwarding", there is no need to specify the destination host? if yes, when is the destination specified? | Yes, you have to specify a destination IP and port when using local forwarding. From man ssh : -L [bind_address:]port:host:hostport Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. Clearly, only the bind address is optional. No, you can't specify a destination host or port when using dynamic forwarding. In dynamic forwarding, SSH acts as a SOCKS proxy. Again from the manpage (emphasis mine): -D [bind_address:]port Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. With -L , SSH makes no attempt to understand the traffic. It just sends everything it receives on the local port to the target port - you determine the target port at the time the connection is made. With -D , SSH acts as a proxy server, and therefore can handle connections from multiple ports (for example, a browser configured to use it as a SOCKS proxy can then access HTTP, HTTPS, FTP, etc. over the same connection). And like with other proxy servers, it will use the traffic to determine the destination. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
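Concretely, with illustrative hosts and ports, the two modes look like this:
ssh -L 8080:intranet.example.com:80 user@gateway   # everything sent to localhost:8080 goes to one fixed destination
ssh -D 1080 user@gateway                           # localhost:1080 becomes a SOCKS proxy; each client connection names its own destination
After the second command you would point, for example, a browser's SOCKS proxy setting at localhost:1080.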
213,245 | Hi I need to increase root partition space by reducing /home, on CentOS 6.6; my situation is this:
/dev/mapper/VolGroup-lv_root 50G 46G 1,6G 97% /
tmpfs 1,9G 0 1,9G 0% /dev/shm
/dev/sda1 477M 61M 391M 14% /boot
/dev/mapper/VolGroup-lv_home 140G 3,9G 129G 3% /home
Is it possible? | It is not something I would do online but I think it is possible. I guess you are using ext4.
umount /home
$ umount /home
shrink the /home filesystem
$ fsck -f /dev/mapper/VolGroup-lv_home
$ resize2fs /dev/mapper/VolGroup-lv_home 80G
shrink the /home logical volume
$ lvreduce -L -40G /dev/mapper/VolGroup-lv_home
resize the /home filesystem to the size of the LV
$ resize2fs /dev/mapper/VolGroup-lv_home
extend the root logical volume
$ lvextend -L +40G /dev/mapper/VolGroup-lv_root
extend the root filesystem
$ fsck -f /dev/mapper/VolGroup-lv_root
$ resize2fs /dev/mapper/VolGroup-lv_root
mount /home
$ mount /home | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64116/"
]
} |
213,255 | I have some process that creates a stream of millions of highly similar lines. I'm piping this to gzip . Does the compression ratio improve over time in such a setup? I.e. is the compression ratio better for 1 million similar lines than for, say, 10,000? | It does up to a certain point, and then it evens out. The compression algorithms have a restriction on the size of the blocks they look at ( bzip2 ) and/or on the tables they keep with information on previous patterns ( gzip ). In the case of gzip, once a table is full, old entries get pushed out and compression improves no further. Depending on your compression quality factor ( -1 to -9 ) and the repetitiveness of your input, this filling up can of course take a while and you might not notice. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72672/"
]
} |
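The plateau is easy to observe with a quick experiment along these lines (the repeated text is arbitrary); the compressed size per input line stops shrinking as the input grows:
for n in 1000 10000 100000 1000000; do
    printf '%s\t' "$n"
    yes 'some highly similar line of text' | head -n "$n" | gzip -9 | wc -c
done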
213,285 | I wonder how to disable event sounds in KDE 5 Plasma (the one that is heard when scrolling the volume in the systray, for example). I use Plasma in openSUSE 13.2 but this is KDE specific, methinks. UPDATE These are the available settings (after answer): | In recent Plasma 5, search "Audio Volume" in the application launcher and, under the 'Applications' tab, disable 'Notification Sounds'. In some Plasma versions the above doesn't work, as I now see in Kubuntu 17.10. The Audio Volume tool looks different and notification sounds are already muted. To stop the volume scrolling sound etc. in this case, I have unmuted, moved the slider to zero, and muted again. Something is buggy here though, as the slider is in fact always stuck at zero, and only the mute/unmute button does the work. In this case I have noticed that a different tool can be used: pavucontrol-qt : After using this, the bug mentioned (slider stuck at zero in "Audio Volume") disappeared. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
213,288 | I want to kill the background process belonging to a shell script that I am going to run again. That means before executing the shell script I want to delete the background process running for the same script. | Check for the existence of a PID from the same script. Add this at the beginning of the script:
#!/bin/bash
script_name=${BASH_SOURCE[0]}
for pid in $(pidof -x $script_name); do
    if [ $pid != $$ ]; then
        kill -9 $pid
    fi
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121664/"
]
} |
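A variant of the same idea that avoids scanning the process list is to keep a PID file; the path here is arbitrary and this sketch ignores the (unlikely) race where the PID has been recycled:
#!/bin/bash
pidfile=/tmp/myscript.pid
if [ -f "$pidfile" ]; then
    kill "$(cat "$pidfile")" 2>/dev/null   # kill the previous instance, if it is still running
fi
echo $$ > "$pidfile"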
213,303 | In bash shell ls can use a logical OR functionality through (of course I could also do ls name1 name2 but my true examples are more complicated): ls @(name1|name2) Is there a way to do this using find ? My naive implementation: find . -maxdepth 1 -name @("name1"|"name2") doesn't work (it just outputs nothing) | You can use -o for logical OR . Beware however that all find predicates have logical values, so you'll usually need to group OR ed things together with parens. And since parens also have a meaning to the shell, you'll also need to escape them: find /some/dir -maxdepth 1 \( -name '*.c' -o -name '*.h' \) -print | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/213303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90937/"
]
} |
213,311 | Where does the name of nm come from? The IEEE standard defines nm as: nm - write the name list of an object file Is nm an abbreviated form of word name / names ? Or does it have a completely different origin? | Yes, nm is short for "name list" (i.e. names ), exactly as the IEEE wording suggests. In early Unix the symbol table of an object file was called its name list: the first-edition Unix manual already documented nm(1) as "print name list", and the companion library routine for reading a binary's symbol table is still called nlist(3) for the same reason. So there is no hidden origin; the utility prints the list of symbol names, and its name abbreviates that. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/213311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109924/"
]
} |
213,324 | Created a folder "Sample_dir" and analysed its permissions.
$ mkdir Sample_dir
$ ll Sample_dir/
total 36
drwxrwxr-x 2 user user 4096 Jul 1 19:26 ./
drwx------ 71 user user 28672 Jul 1 19:26 ../
Looking at the first entry, I thought the argument that had to be given to chmod to achieve these permissions should be 1775.
$ chmod 1775 Sample_dir/
$ ll Sample_dir/
total 36
drwxrwxr-t 2 user user 4096 Jul 1 19:26 ./
drwx------ 71 user user 28672 Jul 1 19:26 ../
But, the ls output has changed. ll has been aliased to ls -alF and the name of the folder now appears in white text with a blue background. Please explain. | The permissions you got were the permissions you asked for. The 't' comes from the '1' in the '1775' permissions string you specified, and sets what is called the "sticky bit". This tells the system that files in that directory can only be renamed or removed by the file's owner, the directory's owner, or the root user. To get the permissions you wanted initially, you would have needed to use "775" or "0775" as the permissions argument to chmod . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39667/"
]
} |
213,330 | tail -f x.log I use this command to see a growing log file in the command prompt. I am interested only in seeing the log lines that are written to the file after running tail -f and not interested in the logs that were written to the file before doing tail -f . But the tail -f command, on start, takes the last 10 lines and displays them. This confuses me: at times I cannot tell whether these logs are freshly generated or old. So, how can I customize tail -f to output only the new entries? | You can try: tail -n0 -f x.log From the man page : -n, --lines= K output the last K lines, instead of the last 10; or use -n +K to output lines starting with the Kth | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/213330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
213,334 | On my machine ( Debian testing ), when I do ps aux | grep pam I obtain
orto 609 0.0 0.0 58532 2148 ? S 08:06 0:00 (sd-pam)
orto 5533 0.0 0.0 12724 1948 pts/1 S+ 16:51 0:00 grep pam
(sd-pam) seems a strange name for a process. Reading this forum , I see that this name is set on purpose by systemd. In the source code we see
/* The child's job is to reset the PAM session on
 * termination */
/* This string must fit in 10 chars (i.e. the length
 * of "/sbin/init"), to look pretty in /bin/ps */
rename_process("(sd-pam)");
What does it mean to look pretty in /bin/ps and why choose (sd-pam) and not just sd-pam as a name? Putting parentheses around the name seems to indicate that this process has something special, like for a kernel thread, e.g. [kintegrityd] . | Putting parentheses around the name seems to indicate that this process has something special There are two cases: (...) When PID 1 starts a service binary it will first fork off a process, then adjust the process' parameters according to the service config and finally invoke execve() to execute the actual service process. In the time between the fork and the exec, we use PR_SET_NAME to change the process' name to what is going to be started, to make it easy to map this to the eventual service started. Note however, that there's a strict size limit on the "comm" name (i.e. the process name that may be set with PR_SET_NAME, i.e. the one "top" shows), which means we have to truncate. We chop off the beginning of the string, since usually the suffix is more interesting (otherwise, all of systemd's various services would appear as "(systemd-)" – which isn't particularly useful). We enclose the name in (), in order to clarify that this is the process that is going to become the specified process eventually, but isn't it yet. See https://lists.freedesktop.org/archives/systemd-devel/2016-April/036322.html (sd-pam) is the special case If we spawn a unit with a non-empty 'PAMName=', we fork off a child-process inside the unit, known as '(sd-pam)', which watches the session. It waits for the main-process to exit and then finishes it via pam_close_session(3). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74926/"
]
} |
213,341 | I have a file that looks like so:
chr22 43089055 43089055 - NM_017436 C 300 903delC
chr22 43089715-43089717 43089715-43089717 - NM_017436 CTT 79
I want to remove all the characters before the - in column 3 to give me an output as depicted below:
chr22 43089055 43089055 - NM_017436 C 300 903delC
chr22 43089715-43089717 43089717 - NM_017436 CTT 79
I've used awk '{$2+=0}1' file in the past to remove characters after the - , but I don't believe I can use this same technique for my current problem. Any suggestions? | Perl to the rescue: perl -lane 'BEGIN { $, = "\t" } $F[2] =~ s/.*-//; print @F' < file
-l appends newlines to print
-n reads the input line by line
-a splits each line on whitespace and populates the @F array
$, separates list members when printed, set it to tab
s/.*-// substitutes everything up to a dash with nothing, it's bound to the third column (arrays are indexed from 0) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121711/"
]
} |
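The same edit can be done with awk if that is more familiar, assuming whitespace-delimited input and that tab-separated output is acceptable:
awk 'BEGIN { OFS = "\t" } { sub(/.*-/, "", $3) } 1' file
sub() strips everything up to the last dash in the third field, and the trailing 1 prints every (possibly modified) line.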
213,364 | I want to set up an Apache Spark Cluster but I am not able to communicate from the worker machine to the master machine at port 7077 (where the Spark Master is running). So I tried to telnet to the master from the worker machine and this is what I am seeing:
root@worker:~# telnet spark 7077
Trying 10.xx.xx.xx...
Connected to spark.
Escape character is '^]'.
Connection closed by foreign host.
The command terminated with "Connection closed by foreign host" immediately. It does not time out or anything. I verified that the host is listening on the port, and since the telnet output shows "Connected to spark." — this also means that the connection is successful. What could be the reason for such behavior? I am wondering if this closing of the connection could be the reason why I am not able to communicate from my worker machine to the master. | The process that is listening for connections on port 7077 is accepting the connection and then immediately closing the connection. The problem lies somewhere in that application's code or configuration, not in the system itself. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/213364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121553/"
]
} |
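When debugging a case like this, it can help to confirm which process owns the listener and that the socket really is in LISTEN state, for example (root needed to see the process name):
ss -ltnp '( sport = :7077 )'
If the listener shows up there, the TCP layer is fine and the immediate close is application behaviour, as described above.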
213,367 | When I look in man bash none of these are defined. However, random posts online refer to them; where do they come from? Or are they just a convention? | They are just a convention as much as any other convention. EDITOR and PAGER are mentioned in the standards as belonging to variables you'd be unwise to conflict with since they are widely used. See Chapter 8, Section 1 : It is unwise to conflict with certain variables that are frequently exported by widely used command interpreters and applications: ...EDITOR...PAGER...VISUAL... Various programs respect various combinations of them: man 1 crontab (POSIX): The following environment variables shall affect the execution of crontab: EDITOR Determine the editor to be invoked when the -e option is specified. The default editor shall be vi. man 8 sudoedit : 2. The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers(5) option is used. man 1 man (POSIX): ENVIRONMENT VARIABLES The following environment variables shall affect the execution of man:... PAGER Determine an output filtering command for writing the output to a terminal. Any string acceptable as a command_string operand to the sh -c command shall be valid. When standard output is a terminal device, the reference page output shall be piped through the command. If the PAGER variable is null or not set, the command shall be either more or another paginator utility documented in the system documentation. It is not surprising that the bash manual doesn't mention them, as none of the bash builtins that I can think of make use of any of these. However, they are widely used in other utilities, and these three are just the ones that I commonly use. The BROWSER variable is not in the same league as EDITOR or PAGER - it is not mentioned by the standards. However, some programs may use them, like man : man 1 man (Debian): BROWSER If $BROWSER is set, its value is a colon-delimited list of com- mands, each of which in turn is used to try to start a web browser for man --html. In each command, %s is replaced by a filename containing the HTML output from groff, %% is replaced by a single percent sign (%), and %c is replaced by a colon (:). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
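To adopt the convention yourself, it is enough to export the variables from a startup file such as ~/.profile ; the editor and pager choices here are just examples:
export EDITOR=vi
export VISUAL=vi
export PAGER=less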
213,385 | Consider a text file with the following entries: aaabbbcccdddeeefffggghhhiii Given a pattern (e.g. fff ), I would like to grep the file above to get in the output: all_lines except (pattern_matching_lines U (B lines_before) U (A lines_after)) For example, if B = 2 and A = 1 , the output with pattern = fff should be: aaabbbccchhhiii How can I do this with grep or other command line tools? Note, when I try: grep -v 'fff' -A1 -B2 file.txt I don't get what I want. I instead get: aaabbbcccdddeeefff----fffggghhhiii | don's might be better in most cases, but just in case the file is really big, and you can't get sed to handle a script file that large (which can happen at around 5000+ lines of script) , here it is with plain sed : sed -ne:t -e"/\n.*$match/D" \ -e'$!N;//D;/'"$match/{" \ -e"s/\n/&/$A;t" \ -e'$q;bt' -e\} \ -e's/\n/&/'"$B;tP" \ -e'$!bt' -e:P -e'P;D' This is an example of what is called a sliding window on input. It works by building a look-ahead buffer of $B -count lines before ever attempting to print anything. And actually, probably I should clarify my previous point: the primary performance limiter for both this solution and don's will be directly related to interval. This solution will slow with larger interval sizes , whereas don's will slow with larger interval frequencies . In other words, even if the input file is very large, if the actual interval occurrence is still very infrequent then his solution is probably the way to go. However, if the interval size is relatively manageable, and is likely to occur often, then this is the solution you should choose. So here's the workflow: If $match is found in pattern space preceded by a \n ewline, sed will recursively D elete every \n ewline that precedes it. I was clearing $match 's pattern space out completely before - but to easily handle overlap, leaving a landmark seems to work far better. I also tried s/.*\n.*\($match\)/\1/ to try to get it in one go and dodge the loop, but when $A/$B are large, the D elete loop proves considerably faster. Then we pull in the N ext line of input preceded by a \n ewline delimiter and try once again to D elete a /\n.*$match/ once again by referring to our most recently used regular expression w/ // . If pattern space matches $match then it can only do so with $match at the head of the line - all $B efore lines have been cleared. So we start looping over $A fter. Each run of this loop we'll attempt to s/// ubstitute for & itself the $A th \n ewline character in pattern space, and, if successful, t est will branch us - and our whole $A fter buffer - out of the script entirely to start the script over from the top with the next input line if any. If the t est is not successful we'll b ranch back to the :t op label and recurse for another line of input - possibly starting the loop over if $match occurs while gathering $A fter. If we get past a $match function loop, then we'll try to p rint the $ last line if this is it, and if ! not try to s/// ubstitute for & itself the $B th \n ewline character in pattern space. We'll t est this, too, and if it is successful we'll branch to the :P rint label. If not we'll branch back to :t op and get another input line appended to the buffer. If we make it to :P rint we'll P rint then D elete up to the first \n ewline in pattern space and rerun the script from the top with what remains. And so this time, if we were doing A=2 B=2 match=5; seq 5 | sed... 
The pattern space for the first iteration at :P rint would look like: ^1\n2\n3$ And that's how sed gathers its $B efore buffer. And so sed prints to output $B -count lines behind the input it has gathered. This means that, given our previous example, sed would P rint 1 to output, and then D elete that and send back to the top of the script a pattern space which looks like: ^2\n3$ ...and at the top of the script the N ext input line is retrieved and so the next iteration looks like: ^2\n3\n4$ And so when we find the first occurrence of 5 in input, the pattern space actually looks like: ^3\n4\n5$ Then the D elete loop kicks in and when it's through it looks like: ^5$ And when the N ext input line is pulled sed hits EOF and quits. By that time it has only ever P rinted lines 1 and 2. Here's an example run: A=8 B=7 match='[24689]0'seq 100 |sed -ne:t -e"/\n.*$match/D" \ -e'$!N;//D;/'"$match/{" \ -e"s/\n/&/$A;t" \ -e'$q;bt' -e\} \ -e's/\n/&/'"$B;tP" \ -e'$!bt' -e:P -e'P;D' That prints: 12345678910111229303132495051526970717299100 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
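For comparison, and not part of the answer above: a two-pass awk sketch solves the same problem by reading the file twice; the first pass marks the interval around each match, the second prints every unmarked line. It is simpler than the sed window, at the cost of requiring a re-readable file (so it will not work on a pipe):
awk -v A=1 -v B=2 -v pat=fff '
    NR == FNR { if ($0 ~ pat) for (i = FNR - B; i <= FNR + A; i++) skip[i]; next }
    !(FNR in skip)
' file.txt file.txt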
213,504 | I have defined the color red using tput red=$(tput setaf 1) to colorize warnings in my program. This works fine: printf '%sfail\n' "$red" # prints 'fail' in red But on one occasion I would like to print out the escape sequence as-is, something like: \E[31mfail How would I do this? I know printf has a %q flag but it escapes other stuff I don't want to. | Sounds like you want the opposite of printing them literally: you want those escape characters converted to a printable descriptive form like \E or \033 , ^[ ... If it's just the ESC (0x1b) character you want to convert to \E , then with ksh93 , zsh or bash (typically, the same ones that also support that non-standard %q ), you can do: printf '%s\n' "${red//$'\e'/\\E}" Or pipe to sed $'s/\e/\\\\E/g' For a more generic approach at converting non-graphical characters, you can use:
$ printf %s "$red" | od -A n -vt c # POSIX
 033 [ 3 1 m
$ printf %s "$red" | sed -n l # POSIX
\033[31m$
$ printf '%s\n' "${(qqqq)red}" # zsh
$'\033[31m'
$ printf '%s\n' "$red" | cat -vt # some cat implementations
^[[31m
$ printf %s "$red" | uconv -x ':: [:Cc:]; ::Hex;' # ICU tools
\u001B[31m
$ printf %s "$red" | uconv -x ':: [:Cc:]; ::Name;' # ICU tools
\N{<control-001B>}[31m | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2726/"
]
} |
213,530 | I am currently trying to make a script that creates bytes that will be piped as input to netcat. Here is the idea of the script: (perl -e "print \"$BYTES\x00\";cat file;perl -e "print \"More bytes\"x16 . \"\r\n\"";) | netcat ip port I tried using both using a subshell and command substitution (ex. with $()) to execute the commands. However I fail to understand why the output of the script when using command substitution is wrong. I suspect that command substitution incorrectly pipes its output when executing multiple commands. Can someone explain to me why this is so? EDIT Here is the variant that used command substitution: $(perl -e "print \"$BYTES\x00\";cat file;perl -e "print \"More bytes\"x16 . \"\r\n\"";) | netcat ip port | Okay, let's break this down. A subshell executes its contents in a chain (i.e., it groups them). This actually makes intuitive sense as a subshell is created simply by surrounding the chain of commands with () . But, aside from the contents of the subshell being grouped together in execution, you can still use a subshell as if it were a single command. That is, a subshell still has an stdin , stdout and stderr so you can pipe things to and from a subshell. On the other hand, command substitution is not the same thing as simply chaining commands together. Rather, command substitution is meant to act a bit like a variable access but with a function call. Variables, unlike commands, do not have the standard file descriptors so you cannot pipe anything to or from a variable (generally speaking), and the same is true of command substitutions. To try to make this more clear, what follows are a set of maybe-unclear (but accurate) examples and a set of, what I think may be, more easily-understood examples. Let's say the date -u command gives the following: Thu Jul 2 13:42:27 UTC 2015 But, we want to manipulate the output of this command. So, let's pipe it into something like sed : user@host~> date -u | sed -e 's/ / /g'Thu Jul 2 13:42:27 UTC 2015 Wow, that was fun! The following is completely equivalent to above (barring some environment differences that you can read about in the man pages about your shell): user@host~> (date -u) | sed -e 's/ / /g'Thu Jul 2 13:42:27 UTC 2015 That should be no surprise since all we did was group date -u . However, if we do the following, we are going to get something that may seem a bit odd at first: user@host~> $(date -u) | sed -e 's/ / /g'command not found: Thu This is because $(date -u) is equivalent to typing out exactly what date -u outputs. So the above is equivalent to the following: user@host~> Thu Jul 2 13:42:27 UTC 2015 | sed -e 's/ / /g' Which will, of course, error out because Thu is not a command (at least not one I know of); and it certainly doesn't pipe anything to stdout (so sed will never get any input). But, since we know that command substitutions act like variables, we can easily fix this problem because we know how to pipe the value of a variable into another command: user@host~> echo $(date -u) | sed -e 's/ / /g'Thu Jul 2 13:42:27 UTC 2015 But, as with any variable in bash , you should probably quote command substitutions with "" . Now, for the perhaps-simpler example; consider the following: user@host~> pwd/home/hypotheticaluser@host~> echo pwdpwduser@host~> echo "$(pwd)"/home/hypotheticaluser@host~> echo "$HOME"/home/hypotheticaluser@host~> echo (pwd)error: your shell will tell you something weird that roughly means “Whoa! 
you tried to have me echo something that isn't text!”user@host~> (pwd)/home/hypothetical I am not sure how to describe it any simpler than that. The command substitution works just like a variable access where the subshell still operates like a command. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/213530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84969/"
]
} |
213,574 | In the following command, cat takes the content of the here-doc and redirects it to a file named conf:
cat > conf << EOF
var1="cat"
var2="dog"
var3="hamster"
EOF
How should one understand the order of commands here? Does bash first process everything else (the here-doc part) and only as a final step look at the > conf part? | Here-Document is a kind of shell redirection, so the shell will perform it as a normal redirection, from beginning to end (or from left to right, in order of appearance). This is defined by POSIX: If more than one redirection operator is specified with a command, the order of evaluation is from beginning to end. In your command, cat will perform > conf first, open and truncate the conf file for writing, then read data from the Here-Document . Using strace , you can verify it:
$ strace -f -e trace=open,dup2 sh -c 'cat > conf << EOF
var1="cat"
var2="dog"
var3="hamster"
EOF'
...
open("conf", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
dup2(3, 1) = 1
dup2(3, 0) = 0
... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
213,592 | I'm puzzled by this behaviour of cat when trying to output a heredoc containing JSON in bash 3.2: input:
$ cat <(cat <<EOF
> {"x":[{"a":1,"b":2}]}
> EOF)
output:
{"x":["a":1]}
{"x":["b":2]}
What's going on? | This is just brace expansion by bash. In this context, whatever is between the curly braces will be iterated over and expanded into the expression.
$ echo var{1,2,3,4}
var1 var2 var3 var4
$ echo var{1..10}
var1 var2 var3 var4 var5 var6 var7 var8 var9 var10 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213592",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24176/"
]
} |
213,619 | I got several interesting :) e-mails from IP addresses where I can't identify the country and owner (internet provider); for example:
10.180.221.97
10.220.113.130
10.52.135.39
I tried several IP lookup services with no luck. Please could you help me? Is it possible that an IP doesn't have country identification? | These addresses are coming from your private network or are somehow spoofed. Addresses from 10.0.0.0 to 10.255.255.255 are reserved for private networks (not connected to the internet): http://tldp.org/HOWTO/IP-Masquerade-HOWTO/addressing-the-lan.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121876/"
]
} |
213,638 | This is what I keep getting:
ascendermedia@magic2tower:~$ sudo apt-get install flashplugin-installer
bash: sudo: command not found
ascendermedia@magic2tower:~$ sudo apt-get install flashplugin-installer
bash: sudo: command not found
ascendermedia@magic2tower:~$ apt-get install flashplugin-installer
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
ascendermedia@magic2tower:~$ sudo aptitude install flashplugin-nonfree
bash: sudo: command not found
ascendermedia@magic2tower:~$ aptitude install flashplugin-nonfree
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
ascendermedia@magic2tower:~$
What am I doing wrong? | root is the superuser account on the system — it (basically) has all privileges. Many systems are configured so that you can use the sudo command in front of another command to run that command "as root" — that is, as if you are the root user, with the same privileges. It is usually the case that you need root privileges to install system packages, which is what apt-get does. So, it doesn't work, the first time because you don't have sudo available; the second time because sudo didn't magically appear just because you ran it twice; and the third (and fifth) times because apt-get (and aptitude ) really do require root privs to install packages like this. As you see from your first error message, sudo is either not installed or not in your path, and probably not configured. You may be able to substitute su -c instead, and give the root password (the password for the root account) when prompted. If you don't know that password, you need to a) ask the person who does to perform this task for you, if it is not your system, or b) find some way to recover it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121893/"
]
} |
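If sudo really is not installed, the su -c route mentioned above would look like this; you will be asked for root's password, not your own:
su -c 'apt-get install flashplugin-installer'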
213,674 | We can copy some files by extension like this: cp *.txt ../new/ but how can I copy all files that have no extension? | The answer from @ubaid-ashraf is almost there. The way to specify files with no extension, in ksh , would be: cp -- !(*.*) /new/path/ so that any file with a dot in its name is skipped. For that to work in bash , you need to enable the extglob option ( shopt -s extglob ); in zsh , enable the kshglob option ( set -o kshglob ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114366/"
]
} |
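An alternative that needs no shell options is to let find do the filtering; GNU cp is assumed here for the -t option:
find . -maxdepth 1 -type f ! -name '*.*' -exec cp -t /new/path/ {} +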
213,705 | In GlusterFS, let's say I have 2 Nodes (Servers) on a Volume. Let's say the volume info is something like this:
Volume Name: volume-www
Brick1: gluster-server-01:/volume-www/brick
Brick2: gluster-server-02:/volume-www/brick
From the Client, as we know, we have to mount the volume volume-www by mounting from one Server. Like: mount -t glusterfs gluster-server-01:/volume-www /var/www I still feel there's a choke point since I am connecting to that gluster-server-01 only. What if it FAILS? Of course I can manually mount from another healthy Server again. But is there a smarter way (industrial approach) to solve this? | When you are doing this: mount -t glusterfs gluster-server-01:/volume-www /var/www you are initially connecting to one of the nodes that make up the Gluster volume, but the Gluster Native Client (which is FUSE-based) receives information about the other nodes from gluster-server-01 . Since the client now knows about the other nodes, it can gracefully handle a failover scenario. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34722/"
]
} |
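To remove even the single point of contact at mount time, the native client can be given fallback servers for fetching the volume file; the exact option name may vary between GlusterFS versions:
mount -t glusterfs -o backup-volfile-servers=gluster-server-02 gluster-server-01:/volume-www /var/www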
213,722 | I'm doing some placeholder replacements in various files that I'm feeding to ldapadd , to add new entries to an LDAP directory:
sed \
    -e 's/%%FOO%%/whatever/g' \
    -e 's/%%BAR%%/other thing/g' \
    file1.ldif.template \
    file2.ldif.template \
    | ldapadd -x -D 'cn=admin,dc=example,dc=com' -W
The problem I'm having with this is that if there is not an empty line at the end of file1.ldif.template , the first record in file2 is concatenated to the last record in file1 , and in ldif files different records should be separated by at least 1 newline. Of course I could add an empty line at the end of file1 , but that is very easy to break in the future if some other developer (or their editor) removes the trailing newlines. So, to summarize. Current (simplified) sed output:
dn: cn=record1_file1,dc=example,dc=com
cn: record1_file1

dn: cn=record2_file1,dc=example,dc=com
cn: record2_file1
dn: cn=record1_file2,dc=example,dc=com
cn: record1_file2

dn: cn=record2_file2,dc=example,dc=com
cn: record2_file2
Desired (simplified) output:
dn: cn=record1_file1,dc=example,dc=com
cn: record1_file1

dn: cn=record2_file1,dc=example,dc=com
cn: record2_file1

dn: cn=record1_file2,dc=example,dc=com
cn: record1_file2

dn: cn=record2_file2,dc=example,dc=com
cn: record2_file2
I'm working on linux (fedora 21) using GNU sed. Portability is not a concern (but I'll prefer a portable solution over a GNU solution). | Since sed concatenates all its input files into one stream, it has no idea where one file ends and the next begins, so the separation has to be produced by something that does know. The portable way is to run sed once per file and emit the separating newline yourself:
for f in file1.ldif.template file2.ldif.template; do
    sed -e 's/%%FOO%%/whatever/g' \
        -e 's/%%BAR%%/other thing/g' "$f"
    echo
done | ldapadd -x -D 'cn=admin,dc=example,dc=com' -W
With GNU sed you can keep a single invocation: the -s ( --separate ) option makes addresses like $ refer to the last line of each file instead of the last line of the whole stream, so appending the (empty) hold space there with G inserts a blank line after every file:
sed -s -e 's/%%FOO%%/whatever/g' -e 's/%%BAR%%/other thing/g' -e '$G' file1.ldif.template file2.ldif.template | ldapadd -x -D 'cn=admin,dc=example,dc=com' -W
Either way the record separation no longer depends on a trailing empty line surviving in the templates. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17326/"
]
} |
213,726 | I'm trying to make an ssh connection (via lsh) from one Ubuntu host to another from within screen. If I try to run mc right after that I get the following error:
Unknown terminal: screen-256color-s
Check the TERM environment variable.
Also make sure that the terminal is defined in the terminfo database.
Alternatively, set the TERMCAP environment variable to the desired
termcap entry.
The question is - who's causing this failure? Is it the local host? remote? some package missing (which?), something not done by lsh-server ? or the client? Just to be clear - I don't want workarounds like "TERM=xterm mc", I want to be able to use visual themes which support 256 colors on the (remote) console. | Finally, I've managed to figure out the "obvious" package which supplies screen-256color-s (it has to be installed on the remote machine): sudo apt install ncurses-term fixed the problem for me: nice 256 colors and no need for ugly workarounds with environment variables. Hooray! :) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117282/"
]
} |
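To check whether a given terminal type is known on a machine, before or after installing, infocmp is handy; a non-zero exit status means the terminfo entry is missing:
infocmp screen-256color-s >/dev/null && echo present || echo missing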
213,737 | My Internet access is through a proxy, my OS is Debian 8, and each application must be configured to use the proxy, but there are some that are a headache to make work with a proxy. So my question is: is there any way, or a program, to send all my connections (tcp, udp, etc.) through the proxy? That is to say, how do I set up a system-wide connection over a proxy server? | There are various solutions for this:
1. Configuring http_proxy variables
You can set $http_proxy and other such variables. Most applications will pick this variable up automatically. To set it system-wide, you can set this variable in either your ~/.bashrc file or /etc/profile . Set it as:
http_proxy=http://user:password@proxyhost:3128
https_proxy=https://user:password@proxyhost:3128
export http_proxy
export https_proxy
(the user:password@proxyhost part is a placeholder for your own proxy credentials and address)
2. Using proxychains
Some applications will not use your proxy variable and they might not even have settings to use a proxy server. In such a case, you can direct all your PC traffic through a proxy server by using proxychains . I've never used proxychains , however their homepage seems to tell it all in one single page: http://proxychains.sourceforge.net/howto.html
3. Using a transparent proxy
To force all your PC connections through a proxy, you can also use a transparent proxy as an alternative to proxychains. I don't have much idea how to set this up ( I did this a long time back though and it worked! ) so you'll have to look on your own. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81296/"
]
} |
213,766 | I have a source text file containing text where some words are l e t t e r s p a c e d like the word "letterspaced" in this question (i.e., there is a space character between the letters of the word). How can I undo letterspacing using sed? A pattern like \{[A-Za-z] \}+[A-Za-z] captures a letterspaced word, and s/ //g takes the spaces out, but how do I extract a letterspaced word out of a line of text and undo letterspacing without harming the legitimate space characters in the rest of the text? | You can do it like this:
sed -e's/ \([^ ][^ ]\)/\n\1/g' \
    -e's/\([^ ][^ ]\) /\1\n/g' \
    -e's/ //g;y/\n/ /' <<\IN
I have a source text file containing text where
some words are l e t t e r s p a c e d
like the word "letterspaced" in this question
(i.e., there is a space character between the
letters of the word.
IN
The idea is to first find all spaces which are either preceded by or followed by two or more not-space characters and set them aside as newline characters. Next simply remove all remaining spaces. And last, translate all newlines back to spaces. This is not perfect - without incorporating an entire dictionary of every word you could possibly use the best you will get is some kind of heuristic. This one's pretty good, though. Also, depending on the sed you use, you might have to use a literal newline in place of the n I use in the first two substitution statements as well. Aside from that caveat, though, this will work - and work very fast - with any POSIX sed . It doesn't need to do any costly lookaheads or behinds, because it just saves impossibles, which means it can handle all of pattern space for each substitution in a single address. OUTPUT
I have a source text file containing text where some
words are letterspaced
like the word "letterspaced" in this question
(i.e., there is a space character between the
letters of the word. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65872/"
]
} |
213,799 | Is it possible in an interactive bash shell to enter a command that outputs some text so that it appears at the next command prompt, as if the user had typed in that text at that prompt ? I want to be able to source a script that will generate a command-line and output it so that it appears when the prompt returns after the script ends so that the user can optionally edit it before pressing enter to execute it. This can be achieved with xdotool but that only works when the terminal is in an X window and only if it's installed.
[me@mybox] 100 $ xdotool type "ls -l"
[me@mybox] 101 $ ls -l <--- cursor appears here!
Can this be done using bash only? | With zsh , you can use print -z to place some text into the line editor buffer for the next prompt: print -z echo test would prime the line editor with echo test which you can edit at the next prompt. I don't think bash has a similar feature, however on many systems, you can prime the terminal device input buffer with the TIOCSTI ioctl() : perl -e 'require "sys/ioctl.ph"; ioctl(STDIN, &TIOCSTI, $_) for split "", join " ", @ARGV' echo test Would insert echo test into the terminal device input buffer, as if received from the terminal. A more portable variation on @mike's Terminology approach, and one that doesn't sacrifice security, would be to send the terminal emulator a fairly standard query status report escape sequence: <ESC>[5n which terminals invariably reply to (as input) with <ESC>[0n , and bind that to the string you want to insert: bind '"\e[0n": "echo test"'; printf '\e[5n' If within GNU screen , you can also do: screen -X stuff 'echo test' Now, except for the TIOCSTI ioctl approach, we're asking the terminal emulator to send us some string as if typed. If that string comes before readline ( bash 's line editor) has disabled terminal local echo, then that string will be displayed not at the shell prompt, messing up the display slightly. To work around that, you could either delay the sending of the request to the terminal slightly to make sure the response arrives when the echo has been disabled by readline: bind '"\e[0n": "echo test"'; ((sleep 0.05; printf '\e[5n') &) (here assuming your sleep supports sub-second resolution). Ideally you'd want to do something like:
bind '"\e[0n": "echo test"'
stty -echo
printf '\e[5n'
wait-until-the-response-arrives
stty echo
However bash (contrary to zsh ) doesn't have support for such a wait-until-the-response-arrives that doesn't read the response. However it has a has-the-response-arrived-yet feature with read -t0 :
bind '"\e[0n": "echo test"'
saved_settings=$(stty -g)
stty -echo -icanon min 1 time 0
printf '\e[5n'
until read -t0; do
    sleep 0.02
done
stty "$saved_settings"
Further reading See @starfry's answer , which expands on the two solutions given by @mikeserv and myself with some more detailed information. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/213799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9259/"
]
} |
213,840 | I would like to disable the default behavior that seems to happen with every Linux distribution that I've ever tried that any selected text is immediately sent to the clipboard (Mint, Ubuntu, Fedora, SuSE, etc.) and every window manager (Gnome, KDE, Cinnamon), and instead to behave more like the Windows implementation. I know that this is a beloved behavior by many in Linux, and I'm sure many will think I'm an idiot. The reason I want to do this, is that I'm a keyboard junky when navigating a GUI. (e.g. when I'm in Linux and I copy a URL and then switch to my browser and type Ctrl+L, it selects the address bar and moves my intended paste down one notch and replaces it with what I'm trying to overwrite.) I know there are MANY workarounds, but I don't really care about that, what I'd prefer is to be able to toggle the behavior for the clipboard. | First a misconception: any selected text is immediately sent to the clipboard Actually text is never "sent" anywhere until it is requested by a receiving application. When you select text, the application only claims the selection, which means basically that it raises a flag to say that from now on it owns it. Now on to your question: In X11 there can be multiple selections. 2 of them have well-known names and are standardized. They are called PRIMARY and CLIPBOARD. Their respective conventional behaviors are as follows: PRIMARY Applications claim PRIMARY when text is selected Applications request PRIMARY from the owning application and paste its contents on middle click. CLIPBOARD Applications claim CLIPBOARD when an explicit command is given, typically Ctrl - c . Applications request CLIPBOARD from the owning application and paste its contents when an explicit command is given, typically Ctrl - v . There might be additional rules I'm unsure about, like if no application owns CLIPBOARD but some application owns PRIMARY, paste primary instead upon Ctrl - v . It seems like CLIPBOARD already does what you need. You can ignore PRIMARY if you want (but note that some older applications like xterm may only support PRIMARY). Personally I do the opposite: I ignore CLIPBOARD and use only PRIMARY. I guess that's just the way I learned to use X11, I wasn't even aware that there was CLIPBOARD at first. But in order to mitigate the problem you describe, I often wish there was a pushable & poppable stack of PRIMARY selections, so I could "pop" to the previous selection after clobbering it with a different one. In response to your explicit question about whether the PRIMARY behaviour can be disabled, I think that would be quite difficult. The most straightforward way would be to individually disable it in each application (or toolkits which the applications use) which is surely not feasible. I suppose a kind of "X11 firewall" which blocks requests to claim PRIMARY could be constructed, but I don't think that would really buy you anything more than you can already get by ignoring PRIMARY and using CLIPBOARD only. More information: What's the difference between Primary Selection and Clipboard Buffer? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106121/"
]
} |
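A quick way to see the two selections described in 213,840 in action, assuming the common xclip utility is installed (these are standard xclip flags):
# highlight some text with the mouse, then:
xclip -selection primary -o        # prints the PRIMARY selection
# copy something with Ctrl-C in an application, then:
xclip -selection clipboard -o      # prints the CLIPBOARD selection
# a script can claim CLIPBOARD itself:
printf 'hello' | xclip -selection clipboard -i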
213,841 | I've got the following data. I need to parse out the duplicates from column1 into a separate file. For example, 21288003132541:cr21288003267289:fr21288003758683:ph21288003758683:tag21288003758683:sel I want to take out this line 21288003758683:tag into a separate file; my output needed is separate files for any uniq line with subsequent files with any duplicates. So for example file 1 21288003132541:cr21288003267289:fr21288003758683:ph file 2 21288003758683:tag file 3 21288003758683:sel Hope this makes sense Thanks | You can do this in one pass with awk by keeping a per-key counter on the first column and routing each line to a numbered file: awk -F: '{ print > ("file" ++count[$1]) }' input The first occurrence of any value in column 1 goes to file1, the second occurrence of the same value to file2, and so on. With your sample data that gives exactly the layout you want: file1 holds the three first-occurrence lines, file2 holds 21288003758683:tag and file3 holds 21288003758683:sel. (A commented version of the script follows this entry.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122036/"
]
} |
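The awk one-liner from 213,841 written out with comments; the "file" prefix is just the name used in the question:
awk -F: '
{
    n = ++count[$1]        # occurrences of this column-1 value so far
    print > ("file" n)     # 1st occurrence -> file1, 2nd -> file2, ...
}' input.txt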
213,889 | Apt handles dependencies among packages installed from its repositories or *.deb files. However, what about software that users have compiled and installed from source with ./configure && make && make install without creating a .deb file first? Is it possible that Apt could remove packages needed by such softwares? Would installing software from source in /opt or /usr/local make a difference? | APT doesn't know anything about software that was installed manually. It doesn't know what libraries that software needs or anything. When APT installs a package only to fulfill the dependencies of another package, this package is marked as automatically installed. If you remove all the packages that depend on an automatically-installed package, that package is removed when you run apt-get autoremove ; higher-level frontends to APT will typically offer to do that after other maintenance. To avoid removing packages that are needed by locally-installed software, mark these packages as manually installed: apt-mark manual PACKAGE-NAME , or the m key in aptitude. To find what library packages a binary executable needs, run ldd /path/to/executable . For each line containing /usr/lib/ SOMETHING , run dpkg -S /usr/lib/ SOMETHING to display the name of the package containing that library. For scripts, head -n 1 /path/to/script shows the interpreter used by the script; make sure that this interpreter remains installed. Finding what libraries are used by a script can be difficult, there's no universal way to do that. If you've manually installed a more recent version of a package that's present in your distribution, look at the dependencies of the distribution's package and mark them as manually installed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8785/"
]
} |
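A rough automation of the ldd / dpkg -S / apt-mark steps from 213,889 - a sketch only, with /usr/local/bin/myapp standing in for your locally built binary:
#!/bin/sh
# mark every packaged library a local binary uses as manually installed
for lib in $(ldd /usr/local/bin/myapp | awk '/=> \//{print $3}'); do
    pkg=$(dpkg -S "$lib" 2>/dev/null | cut -d: -f1)
    [ -n "$pkg" ] && sudo apt-mark manual "$pkg"
done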
213,980 | Is there a way to source a shell script into a namespace, preferably a bash shell script but I would look into other shells if they had this feature and bash didn't. What I mean by that is, e.g., something like "prefix all defined symbols with something so that they don't collide with already defined symbols (variable names, function names, aliases)" or any other facility that prevents name collisions. If there's a solution where I can namespace at source time ( NodeJS style), that would be the best. Example code: $ echo 'hi(){ echo Hello, world; }' > english.sh$ echo 'hi(){ echo Ahoj, světe; }' > czech.sh$ . english.sh$ hi #=> Hello, world$ . czech.sh #bash doesn't even warn me that `hi` is being overwritten here$ hi #=> Ahoj, světe#Can't use the English hi now#And sourcing the appropriate file before each invocation wouldn't be very efficient | From man ksh on a system with a ksh93 installed... Name Spaces Commands and functions that are executed as part of the list of a namespace command that modify variables or create new ones, create a new variable whose name is the name of the name space as given by identifier preceded by . . When a variable whose name is name is referenced, it is first searched for using .identifier.name . Similarly, a function defined by a command in the namespace list is created using the name space name preceded by a . . When the list of a namespace command contains a namespace command, the names of variables and functions that are created consist of the variable or function name preceded by the list of identifiers each preceded by . . Outside of a name space, a variable or function created inside a name space can be referenced by preceding it with the name space name. By default, variables staring with .sh are in the sh name space. And, to demonstrate, here is the concept applied to a namespace provided by default for every regular shell variable assigned in a ksh93 shell. In the following example I will define a discipline function that will act as the assigned .get method for the $PS1 shell variable. Every shell variable basically gets its own namespace with, at least, the default get , set , append , and unset methods. After defining the following function, any time the variable $PS1 is referenced in the shell, the output of date will be drawn at the top of the screen... function PS1.get { printf "\0337\33[H\33[K%s\0338" "${ date; }"} (Also note the lack of the () subshell in the above command substitution) Technically, namespaces and disciplines are not exactly the same thing (because disciplines can be defined to apply either globally or locally to a particular namespace ) , but they are both part and parcel to the conceptualization of shell data types which is fundamental to ksh93 . To address your particular examples: echo 'function hi { echo Ahoj, světe\!; }' > czech.kshecho 'function hi { echo Hello, World\!; }' >english.kshnamespace english { . ./english.ksh; }namespace czech { . ./czech.ksh; }.english.hi; .czech.hi Hello, World!Ahoj, světe! ...or... for ns in czech englishdo ".$ns.hi"done Ahoj, světe!Hello, World! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
214,056 | I want to use find to find some files, and return all files as a single line (without newline characters), and a custom delimiter between the files. So for example the result for three files would be /my/file/1::/my/file/2::/my/file/3 instead of /my/file/1/my/file/2/my/file/3 Is there any way of achieving this using standard unix tools in combination with find ? | With the GNU implementation of find and sed , you can use : find . -type f -printf '%p::' | sed '$s/::$/\n/' The -printf predicate of GNU find will print the file names in a single line delimited by :: and then sed will substitute the last :: on the last (here undelimited) line with a newline. Example : $ find . -type f -printf '%p\n'./foo./test filewhose namecontains newline characters and ::./bar$ find . -type f -printf '%p::' | sed '$s/::$/\n/'./foo::./test filewhose namecontains newline characters and ::::./bar The standard equivalent of -printf '%p::' would be -exec printf '%s::' {} + . There is no equivalent for that GNU sed expression as POSIX sed cannot handle non-text. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9266/"
]
} |
214,068 | Right now I have my iTerm session configured to use zsh ( /usr/local/bin/zsh ), but I'm trying to configure tmux to use zsh as well, instead of /bin/bash/ , which it's currently defaulting to. So far nothing I've read up on has gotten me where I need. Any ideas or things I may have missed? Below are some details about my current setup. Thanks! Check state: 1) Open iTerm echo $SHELL /bin/bash ps -p $$ PID TTY TIME CMD 19626 ttys000 0:00.52 /usr/local/bin/zsh 2) Run tmux tmux echo $SHELL /usr/local/bin/zsh Configuration: iTerm Profiles > General > Command: /usr/local/bin/zsh In .tmux.conf: set-option -g default-shell /usr/local/bin/zsh | You need to set default-command : set -g default-command /usr/local/bin/zsh default-shell variable only use to create a login shell, when default-command is empty - which is default value. Or you can simply change your default shell to zsh , in this case, tmux will start a login shell, instead of non-login shell. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121131/"
]
} |
214,079 | Either what I'm asking here is extremely unorthodox/unconventional/risky, or my Google-fu skills just aren't up to snuff... In a bash shell script, is there any easy of way of telling if it is getting sourced by another shell script, or is it being run by itself? In other words, is it possible to differentiate between the following two behaviors? # from another shell scriptsource myScript.sh# from command prompt, or another shell script./myScript.sh What I'm thinking of doing is to create an utilities-like shell script containing bash functions that can be made available when sourced. When this script is being run by itself though, I'll like it to perform certain operations, based on the defined functions too. Is there some kind of an environment variable that this shell script can pick up on, e.g. some_function() { # ...}if [ -z "$IS_SOURCED" ]; then some_function;fi Preferably, I'm looking for a solution that doesn't require the caller script to set any flag variables. edit : I know the difference between sourcing and and running the script, what I'm trying to find out here if it's possible to tell the difference in the script that is being used (in both ways). | Yes - the $0 variable gives the name of the script as it was run: $ cat example.sh#!/bin/bashscript_name=$( basename ${0#-} ) #- needed if sourced no paththis_script=$( basename ${BASH_SOURCE} )if [[ ${script_name} = ${this_script} ]] ; then echo "running me directly"else echo "sourced from ${script_name}"fi $ cat example2.sh#!/bin/bash. ./example.sh Which runs like: $ ./example.shrunning me directly$ ./example2.shexample.sh sourced from example2.sh That doesn't cater for being source from an interactive shell, but you get this idea (I hope). Updated to include BASH_SOURCE - thanks h.j.k | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44356/"
]
} |
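Another idiom for 214,079 worth knowing, relying on the fact that return is only legal while a script is being sourced (bash-specific; the redirect hides the error message in the executed case):
(return 0 2>/dev/null) && sourced=1 || sourced=0
if [ "$sourced" -eq 1 ]; then
    echo "I am being sourced"
else
    echo "I am being run directly"
fi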
214,082 | I've purchased a new Western Digital Green Power 1TB 3.5 inch internal hard drive on eBay but I can't get any of my desktops to recognise it. Model number WD 10EURX. I'm using Linux Mint 17 Mate and Windows 7. The drive was made in 2014 but my motherboards are much older. Even the BIOS upgrades are from earlier days. I've not had any luck searching for a driver or answers on the WD website. I remember seeing something about problems with EURX or SEURX(?) internal green drives but I can't find the article again. Can someone please suggest some first steps for me to take in order to progress with this problem (I'm suffering from a Brain Stall)? | No driver is needed - SATA disks are handled generically by the OS, so if the drive doesn't even show up in the BIOS the problem sits below the operating system. The WD10EURX is a SATA 6 Gb/s drive, and a likely culprit with boards that old is a SATA-I controller that fails to auto-negotiate the link speed with a newer drive. WD's SATA drives support a compatibility jumper for exactly this case: a jumper across pins 5 and 6 on the back of the drive forces the interface down to 1.5 Gb/s (check the drive label or WD's documentation to confirm the pinout for your model before trying it). Beyond that, work through the basics: reseat or swap the SATA data cable, try another port, make sure the power connector is firmly seated, and see whether the BIOS setup screen lists the drive at all. On the Linux side, boot with the drive attached and check dmesg | grep -i ata and lsblk - sometimes the kernel sees a drive that the BIOS screen doesn't report. If a newer machine recognises the drive fine, that points firmly at the old controller rather than at the drive. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67386/"
]
} |
214,089 | I'm working in a directory ~/foo which has subdirectories ~/foo/alpha~/foo/beta~/foo/epsilon~/foo/gamma I would like to issue a command that checks the total size under each "level 1" subdirectory of ~/foo and deletes the directory along with its contents if the size is under a given amount. So, say I'd like to delete the directories whose contents have less than 50K . Issuing $ du -sh */ returns 8.0K alpha/114M beta/20K epsilon/1.2G gamma/ I'd like my command to delete ~/alpha and ~/epsilon along with their contents. Is there such a command? I suspect this can be done with find somehow but I'm not quite sure how. | With GNU find and GNU coreutils , and assuming your directories don't have newlines in their names: find ~/foo -mindepth 1 -maxdepth 1 -type d -exec du -ks {} + | awk '$1 <= 50' | cut -f 2- This will list directories with total contents smaller than 50K. If you're happy with the results and you want to delete them, add | xargs -d \\n rm -rf to the end of the command line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
214,141 | I am not a Linux guy but stuck in some Script which I have to read for my Project.So can anyone can help me what this command is doing? shift $(($optind - 1)) | shift $((OPTIND-1)) (note OPTIND is upper case) is normally found immediately after a getopts while loop. $OPTIND is the number of options found by getopts . As pauljohn32 mentions in the comments, strictly speaking, OPTIND gives the position of the next command line argument. From the GNU Bash Reference Manual : getopts optstring name [args] getopts is used by shell scripts to parse positional parameters. optstring contains the option characters to be recognized; if a character is followed by a colon, the option is expected to have an argument, which should be separated from it by whitespace. The colon (‘:’) and question mark (‘?’) may not be used as option characters. Each time it is invoked, getopts places the next option in the shell variable name, initializing name if it does not exist, and the index of the next argument to be processed into the variable OPTIND . OPTIND is initialized to 1 each time the shell or a shell script is invoked. When an option requires an argument, getopts places that argument into the variable OPTARG . The shell does not reset OPTIND automatically; it must be manually reset between multiple calls to getopts within the same shell invocation if a new set of parameters is to be used. When the end of options is encountered, getopts exits with a return value greater than zero. OPTIND is set to the index of the first non-option argument, and name is set to ‘?’. getopts normally parses the positional parameters, but if more arguments are given in args , getopts parses those instead. shift n removes n strings from the positional parameters list. Thus shift $((OPTIND-1)) removes all the options that have been parsed by getopts from the parameters list, and so after that point, $1 will refer to the first non-option argument passed to the script. Update As mikeserv mentions in the comment, shift $((OPTIND-1)) can be unsafe. To prevent unwanted word-splitting etc, all parameter expansions should be double-quoted. So the safe form for the command is shift "$((OPTIND-1))" | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/214141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122226/"
]
} |
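For context on 214,141, a minimal but complete getopts loop showing where the shift lands; the option letters a and b are arbitrary examples:
#!/bin/bash
while getopts "ab:" opt; do
    case $opt in
        a) a_flag=1 ;;          # -a takes no argument
        b) b_arg=$OPTARG ;;     # -b requires an argument
        *) echo "usage: $0 [-a] [-b arg] file..." >&2; exit 1 ;;
    esac
done
shift "$((OPTIND-1))"           # drop everything getopts consumed
echo "first non-option argument: $1"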
214,228 | I was reading Practical Unix and Internet Security , when I came across the following lines which I couldn't comprehend. If you are using the wu archive server, you can configure it in such a way that uploaded files are uploaded in mode 004 , so they cannot be downloaded by another client . This provides better protection than simply making the directory unreadable , as it prevents people from uploading files and then telling their friends the exact filename to download. A permission of 004 corresponds to -------r-- . Can't a file be downloaded if it has read access? Also why is it considered better than simply making the directory non-readable? What does this imply? Note: This is with regard to unauthorised users leaving illegal and copyrighted material on servers using anonymous FTP. The above solution was suggested to prevent this along with a script which deletes the directory contents after a period of time. | The permissions 004 (------r--) means that the file can only be read by processes that are not running as the same user or as the same group as the FTP server. This is rather unusual: usually the user has more rights than the group, and the group has more rights than others. Normally the user can change the permissions, so it's pointless to give more restrictive permissions to the user. It makes sense here because the FTP server (presumably) doesn't have a command to change permissions, so the files will retain their permissions until something else changes them. Since the user that the FTP server is running as can't read the files, people won't be able to download the file. That makes it impossible to use the FTP server to share file. Presumably some process running as a different user and group reads the file at some point, verifies that it complies to some policy, copies the data if it does, and deletes the uploaded file. It would have made more sense to me to give the file permissions 040 (readable by the group only) and have the consumer process run as the same group as the FTP server, but a different user. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/214228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122271/"
]
} |
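The 214,228 behaviour is easy to reproduce; 'nobody' below is just any account outside your own group:
touch upload.dat
chmod 004 upload.dat
ls -l upload.dat                 # shows -------r--
cat upload.dat                   # fails for the owner: Permission denied
sudo -u nobody cat upload.dat    # succeeds for an unrelated user (the file is
                                 # empty; may fail if your home directory
                                 # itself is not world-searchable)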
214,234 | I have Ubuntu 14.04 running on two vm's. I have permissive SELinux enabled on both. On system1, all of my files + linked directories in /var/www/html are marked as var_t and the symbolic linked directory (to home/../Documents ) is RED and appears not to work. On system2, all of my files + linked directories in /var/www/html are marked as file_t and the symbolic linked directory (to home/../chipweb ) is NOT RED and is ok to use. Why are my file SELinux types different in these two identical directories? I am confused! Thanks! | file_t is the type a file gets when it carries no SELinux label at all - it usually means the files were created while no policy was loaded, or the filesystem was never relabelled. So system2's files are effectively unlabelled, while system1's picked up var_t from the default context of /var. Neither type is what the reference policy actually expects for web content (that would be httpd_sys_content_t). Run restorecon -Rv /var/www/html on both machines to reset the labels to what the policy database prescribes (or touch /.autorelabel and reboot to relabel everything), then compare with ls -Z. Since you are in permissive mode SELinux is not blocking anything anyway, so the RED symlink is almost certainly a separate problem: in ls's default colouring, red marks a dangling symlink, so check that the target under /home exists and that the web server user has search (execute) permission on every directory along the path. Also bear in mind that SELinux on Ubuntu 14.04 is unusual - Ubuntu ships AppArmor by default - so verify with sestatus that the policy is installed and loaded the same way on both VMs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/214234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122279/"
]
} |
214,274 | What is the purpose of Linux permissions such as 111 or 333 (i.e. the user can execute , but cannot read the file), if the ability to execute does not automatically imply the ability to read? | I played with it and apparently, exec permissions do not imply read permissions. Binaries can be executable without being readable: $ echo 'int main(){ puts("hello world"); }' > hw.c$ make hw$ ./hwhello world$ chmod 111 hw$ ./hw hello world$ cat hw/bin/cat: hw: Permission denied I can't execute scripts though, unless they have both read and exec permission bits on: $ cat > hw.sh#!/bin/bashecho hello world from bash^D$ chmod +x ./hw.sh$ ./hw.sh hello world from bash$ chmod 111 ./hw.sh$ ./hw.sh/bin/bash: ./hw.sh: Permission denied | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/214274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122305/"
]
} |
214,327 | My OS is Debian 8. I have a file named clip01.mp4 that I would like to reverse, so it plays backwards. Audio can be discarded or reversed as well, doesn't matter. Apparently ffmpeg is deprecated in favor of avconv , but I can't seem to find a solution that uses either tool! I would like to keep the same video codec to avoid any sort of losses, if possible. Command line tools are preferred, for ease of scripting. | From https://stackoverflow.com/questions/2553448 : Dump all video frames $ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg Dump audio $ ffmpeg -i input.mkv -vn -ac 2 audio.wav Reverse audio $ sox -V audio.wav backwards.wav reverse Cat video frames in reverse order to FFmpeg as input $ cat $(ls -t *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv Use mencoder to deinterlace PAL dv and double the frame rate from 25 to 50, then pipe to FFmpeg. $ mencoder input.dv -of rawvideo -ofps 50 -ovc raw -vf yadif=3,format=i420 -nosound -really-quiet -o - | ffmpeg -vsync 0 -f rawvideo -s 720x576 -r 50 -pix_fmt yuv420p -i - -vcodec libx264 -vpre slow -crf 20 -threads 0 video.mkv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65344/"
]
} |
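Newer ffmpeg builds reduce 214,327 to a single command via the reverse and areverse filters; note that they buffer the whole clip in memory, so this suits short clips, and reversing always implies re-encoding:
ffmpeg -i clip01.mp4 -vf reverse -af areverse reversed.mp4
# video only, discarding the audio track:
ffmpeg -i clip01.mp4 -vf reverse -an reversed.mp4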
214,388 | How to let the Firewall of RHEL7 the SNMP connection passing? When I did this command on the computer: systemctl stop firewalld All the SNMP packet are passing well. When I restarted firewalld all the packet arre blocked.I tried several connfigruation with the firewall running of course, like: iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 161 -j ACCEPT or firewall-cmd --zone=public --add-port=161/tcp --permanent I've not get any error message but the SNMP still in TIMEOUT. | The correct way to do this is to add a profile for SNMP to firewalld. Using UDP 161 not TCP vim /etc/firewalld/services/snmp.xml <?xml version="1.0" encoding="utf-8"?><service> <short>SNMP</short> <description>SNMP protocol</description> <port protocol="udp" port="161"/></service> Then you should reload your firewall firewall-cmd --reload Then you need to add the service to your public zone firewall-cmd --zone=public --add-service snmp --permanent Then finally reload your firewall again firewall-cmd --reload | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100826/"
]
} |
214,401 | I have always used Windows but my computer is always overheating and I think I should try something new. I am going to clean it with compressed air and I am thinking of installing a Linux distro as dual boot. What is the best option in terms of power, quickness, more software available and security? I would like to run programs like MATLAB, Java, Arduino software, NI Multisim, Office, iTunes... Is there a way to, while using Linux, open a program installed on the Windows side? My PC has a hard drive divided into disk C, where Windows is installed, and a disk D that is almost empty. I think they are partitions of "a ata st9500325as". Am I supposed to install on D? | There is no single "best" distro; for a first dual boot, a beginner-friendly mainstream option such as Linux Mint or Ubuntu is a sensible choice - large software repositories, broad hardware support and plenty of documentation. Most of what you list has native Linux versions: MATLAB, Java and the Arduino IDE all run natively, and LibreOffice covers most office work. NI Multisim and iTunes are Windows-only; some Windows programs can be run under the Wine compatibility layer, but results vary and those two are known to be troublesome, so keeping Windows in the dual boot is the practical answer for them. You cannot simply launch a program that is merely installed on the Windows side as if it were native - you either reboot into Windows or install the program again under Wine. As for where to install: yes, use the mostly empty D partition. The installer can reformat it (or shrink it) and create the Linux partitions there, leaving C and Windows untouched; back up your data first and let the installer set up the boot menu so you can choose an OS at startup. The two "disks" are just partitions of the single ST9500325AS drive. Finally, the overheating is a hardware issue - cleaning the fans will help regardless of which OS you run. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122394/"
]
} |
214,405 | I would like to list all files with extension .log except the file backup.log . I have tried using this command: ls *.log -I "backup.log" But all the log files are listed, even backup.log ! How could I list all the log files except backup.log ? | The shell expands the wildcard, so ls gets backup.log as one of the parameters. Use an extended pattern (enabled by shopt -s extglob ): ls !(backup).log | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122398/"
]
} |
214,433 | I have multiple scripts that detach a process from bash using nohup and &>/dev/null & . My question is, how do I kill the process after completely detaching it from bash. using killall or pidof ScriptName doesn't work. | nohup should only affect the hangup signal. So kill should still work normally. Maybe you are using the wrong pid or process name; compare with pstree -p or ps -ef . If you still suspect nohup , maybe you could try disown instead. $ sleep 1000 &$ jobs -p13561$ disown$ jobs -p$ pidof sleep13561$ kill 13561$ pidof sleep$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110093/"
]
} |
214,445 | I used this command to display the first result of files in my directory. ls | head -n 1 My simple question is, how can I modify this command to display say the nth result? Thanks! | You could use sed to select a single line, for example line 12: ls | sed -n 12p Option -n asks sed not to print every line (which is what it normally does), and 12p asks to print the pattern space when the address is 12. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/214445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122428/"
]
} |
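Two equivalent one-liners for 214,445, again picking the 12th entry:
ls | awk 'NR==12'              # awk: print record number 12 only
ls | head -n 12 | tail -n 1    # keep the first 12, then the last of those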
214,471 | Introduction: I have created a bash function that is able to check whether a port is available and increments it by 1 if false until a certain maximum port number. E.g., if port 500 is unavailable then the availability of 501 will be checked until 550. Aim: In order to test this bash function I need to create a range of ports that are in LISTEN state. Attempts: On Windows it is possible to create a LISTEN port using these PowerShell commands : PS C:\Users\u> netstat -nat | grep 1234PS C:\Users\u> $listener = [System.Net.Sockets.TcpListener]1234PS C:\Users\u> $listener.Start();PS C:\Users\u> netstat -nat | grep 1234TCP 0.0.0.0:1234 0.0.0.0:0 LISTENING InHostPS C:\Users\u> $listener.Stop();PS C:\Users\u> netstat -nat | grep 1234PS C:\Users\u> Based on this I was trying to think of a command that could do the same on CentOS, but nothing came to mind, and I started to Google without finding a solution to this issue. Expected answer : I will accept and upvote the answer that contains a command that is able to create a LISTEN port and once the command has been run the port should stay in LISTEN state, i.e.: [user@host ~]$ ss -nat | grep 500LISTEN 0 128 *:500 *:* | You could use nc -l as a method to do what you are looking for. Some implementations of nc have a -L option which allows the connections to persist. If you only need them for a little while you could open this command in a for loop and have a bunch of ports opened that way. If you need these opened longer you can use one of the super servers to create a daemon. (A loop-based example follows this entry.) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/214471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65367/"
]
} |
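The loop hinted at in 214,471, with the usual caveat that nc option syntax varies between builds - traditional netcat wants -l -p PORT while OpenBSD netcat wants just -l PORT:
#!/bin/bash
# hold ports 500-549 in LISTEN state (ports below 1024 need root)
for port in $(seq 500 549); do
    nc -l -p "$port" &     # OpenBSD nc: nc -l "$port" &
done
wait                       # keep the listeners in the foreground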
214,472 | I run Linux Mint 17.1. When I log in I get a Cinnamon notification saying: Problems during Cinnamon startup Cinnamon started successfully, but one or more applets, desklets or extension failed to load. Check your system log and the Cinnamon LookingGlass log for any issues. You can disable the offending extension(s) in Cinnamon Settings to prevent this message from occurring. Please contact the developer. Sure enough, when I checked the Desklets settings, there was an extension marked with "error". Out of curiosity, I tried to look for the log message mentioned in the notification to no avail. There were no relevant messages in /var/log/syslog and I could not find the LookingGlass log. This is what I've tried: dmesg | grep -i cinnamongrep -i cinnamon /var/log/syslogfind /var/log -iname "*cinnamon*"find /var/log -iname "*glass*"find /var/log -iname "*looking*" | It's in ~/.xsession-errors Prior to Cinnamon 3.8.8 it was ~/.cinnamon/glass.log | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/214472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55706/"
]
} |
214,488 | I have a ~1GB image that I'm writing to a 8GB SD card via the dd tool. I'd like to verify that it was written without corruption by reading it back and comparing its hash with original one. Obviously, when I read it back via dd the size of resulting image matches size of my SD card, therefore checking hashes are useless. I believe that I should somehow interpret the output of writing invocation to configure the skip / count parameters to read it back properly. Command that I used to write my image: > sudo dd if=my.img of=/dev/sdc bs=1M 8+50581 records in8+50581 records out3947888640 bytes (3.9 GB) copied, 108.701 s, 36.3 MB/s Command that I used to read my image: > sudo dd if=/dev/sdc of=same_as_my.img15523840+0 records in15523840+0 records out7948206080 bytes (7.9 GB) copied, 285.175 s, 27.9 MB/s | Determine the size of the image, for example with \ls -l my.img (not ls -lh , that would give you an approximate size; \ls protects against an alias like ls='ls -h' ) or with stat -c %s my.img . If you want to check the copy against the original just this once, then just compare the files. Using hashes is useless for a one-time comparison, it would only make things slower and require more commands. The command cmp compares binary files. You need to pass it the image file and the corresponding part of the SD card. Use head to extract the beginning of the SD card. </dev/sdc head -c "$(stat -c %s my.img)" | cmp - my.img If you want to perform many comparisons, then hashes are useful, because you only need to read each instance once, to calculate its hash. Any hash will do since you're worried about data corruption. If you needed to check that a file hasn't been modified for security reasons, then cksum and md5sum would not be suitable, you should use sha256sum or sha512sum instead. md5sum <my.img >my.img.md5sum</dev/sdc head -c "$(stat -c %s my.img)" | md5sum >sd-copy.md5sumcmp my.img.md5sum sd-copy.md5sum Note the input redirection in the first command; this ensures that the checksum file doesn't contain file names, so you can compare the checksum files. If you have a checksum file and a copy to verify, you can do the check directly with </dev/sdc head -c "$(stat -c %s my.img)" | md5sum -c my.img.md5sum Oh, and don't use dd , it's slow (or at best not faster) and doesn't detect copy errors. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4076/"
]
} |
214,572 | I'm using linux mint 17 and I want to prevent a service from starting on boot. I can stop the service with, /etc/init.d/<service_name> stop but it will start again on a reboot On centos7 I would use the following command systemctl disable <service_name> How do I do this on mint 17 ? | For Linux Mint 17 based on Ubuntu you have to use this because it uses upstart as it is the way on Ubuntu 14.04 LTS: echo manual | sudo tee /etc/init/<service_name>.override For Linux Mint Debian Edition , it uses System V init, so you can issue: update-rc.d -f <service_name> remove It will remove completely the init scripts for the service. So if you are disabling it for a while and you want it in the future, better use: update-rc.d <service_name> disable This only change the scripts of the service from signal start to signal kill , preventing it to start. In the future it will change to systemd for both of them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99839/"
]
} |
214,594 | I have a file on server A which I am able to transfer to server B using scp. I need to do this through a cron entry. Server B has a password. How do I perform this? | Don't use password authentication. Use ssh keypairs. Karthik@A $: ssh-keygen #keep the passphrase emptyKarthik@A $: ssh-copy-id B #enter your B password#^ this will copy your public key to Karthik@B:.ssh/authorized_keys From then on, you should be able to ssh from A to B (and by extension, scp from A to B ) without a password. (An example crontab entry follows this entry.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214594",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121412/"
]
} |
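With the key from 214,594 in place, the cron side is an ordinary crontab line; add it with crontab -e (paths and schedule here are only examples):
# transfer the file every day at 02:30
30 2 * * * scp /home/karthik/data.txt Karthik@B:/backup/ >>/home/karthik/scp.log 2>&1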
214,607 | I newly installed xfce on Arch Linux. xfce makes a beep noise every time I press the delete button or backspace, which is really annoying. How can I disable this? I tried un-commenting set bell-style none , but that didn't work. | To disable the bell for all X applications: xset b off | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110279/"
]
} |
214,613 | I have the following line: abababtestab I'm trying to figure out a sed expression to remove all occurrences of ab from the beginning of the line so the transformed line should be: testab I feel like this should be simple, but I really don't know anything about sed . What I have so far is: sed 's/^ab//' But this only removes the first occurrence of ab. | sed 's/^\(ab\)*//' <in >out You should group it. echo ababababtestab |sed 's/^\(ab\)*//' testab Some older sed s may not handle that very well, though. Though sub-expression duplication is a POSIX-specified feature of BRE, some sed s don't properly support it. In some of those, though... echo abababtestab |sed 's/^\(ab\)\1*//' ...might work instead. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122540/"
]
} |
214,630 | I have a cron that I am writing that will repeat every hour, and I need a simple way to pass a counter from the last run to the next run. My plan was to add the number to a file at the end, and then call it back at the beginning, e.g. At end of the first cron run: INC_COUNT=1echo $INC_COUNT > inc_counter.txt Then at the start of the second run: INC_COUNT_FILE="inc_counter.txt"OLD_INC_COUNTER=$(cat "$INC_COUNT_FILE") So far so good, but now I need to increment that number.I tried: NEW_INC_COUNTER="$OLD_INC_COUNTER"+1NEW_INC_COUNTER="$OLD_INC_COUNTER+1" neither of which worked. What is the best way to increment this number? | The following methods will work: NEW_INC_COUNTER=$((OLD_INC_Counter+1)) ((NEW_INC_COUNTER = OLD_INC_Counter+1)) ((OLD_INC_Counter+=1)) ((OLD_INC_Counter++)) let "NEW_INC_COUNTER = OLD_INC_Counter+1" let "OLD_INC_Counter+=1" let "OLD_INC_Counter++" Good luck! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
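Putting the pieces of 214,630 together, a self-contained skeleton for the hourly job, using the file name from the question:
#!/bin/bash
counter_file="inc_counter.txt"
# read the previous value, defaulting to 0 on the first run
old=$(cat "$counter_file" 2>/dev/null || echo 0)
new=$((old + 1))
# ... do the hourly work with $new here ...
echo "$new" > "$counter_file"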
214,632 | I'm looking for a way to display the filename when using this command: cat *.tcp | grep "tcp" | grep "open" | sort | uniq Is there a way to do it? | grep tcp *.tcp | grep open | sort -u Giving multiple filenames to grep will, by default, cause grep to prefix the matching output lines with the filename(s) they matched in. The only other change I made was to combine sort | uniq into sort -u (and to remove the quotes that are unnecessary here). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117923/"
]
} |
214,657 | zstyle seems like it's just a central place to store and retrieve data, like an alternative to export -ing shell parameters. Is that true, or is there more to it? | zstyle handles the obvious style control for the completion system, but it seems to cover more than just that. E.g., the vcs_info module relies on it for display of git status in your prompt. You can start by looking at the few explanatory paragraphs in man zshmodules in the zstyle section. You can simply invoke it to see what settings are in effect. This can be instructive. The Zsh Book has a nice chapter treatment on zstyle , also, explaining in detail its various fields. You could grep around in the .../Completion/ directory on your system to see how some of those files make use of zstyle . A common location is near /usr/share/zsh/functions/Completion/* . I see it used in 100+ files on my system there. Users often have zstyle sprinkled around their ~/.zshrc , too. Here are some nice ones to add some color and descriptions to your completing: # Do menu-driven completion.zstyle ':completion:*' menu select# Color completion for some things.# http://linuxshellaccount.blogspot.com/2008/12/color-completion-using-zsh-modules-on.htmlzstyle ':completion:*' list-colors ${(s.:.)LS_COLORS}# formatting and messages# http://www.masterzen.fr/2009/04/19/in-love-with-zsh-part-one/zstyle ':completion:*' verbose yeszstyle ':completion:*:descriptions' format "$fg[yellow]%B--- %d%b"zstyle ':completion:*:messages' format '%d'zstyle ':completion:*:warnings' format "$fg[red]No matches for:$reset_color %d"zstyle ':completion:*:corrections' format '%B%d (errors: %e)%b'zstyle ':completion:*' group-name ''# Completers for my own scriptszstyle ':completion:*:*:sstrans*:*' file-patterns '*.(lst|clst)'zstyle ':completion:*:*:ssnorm*:*' file-patterns '*.tsv'# ... The completion system makes most of the fields clear if you play around with it. Try typing zstyle :«tab» and you see some options. Tab-complete to the next colon and you’ll see the next set of options, etc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/214657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73256/"
]
} |
214,664 | I need to go from a string to an array where each entry is each word on that string. For example, starting with: VotePedro="Vote for Pedro" I need the array: VoteForPedro Which I should then be able to iterate over as: for i in "${votePedroArray[@]}" do ## Do something done | VotePedro="Vote for Pedro"votePedroArray=(${VotePedro}) Explanation: Arrays are usually declared using parentheses. For example votePedroArray=("Vote" "For" "Pedro") would give you an array of length 3. And ${VotePedro} is the same as $VotePedro in this context. To access individual array elements, you can use brackets similar to what you had for the for loop in your question. e.g. ${votePedroArray[0]} is the first element in the array ("Vote" for this example) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117308/"
]
} |
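One caveat with the array=(${var}) form in 214,664: the unquoted expansion also performs globbing, so a stray * in the string would expand to file names. A glob-safe alternative is read -a:
VotePedro="Vote for Pedro"
read -ra votePedroArray <<< "$VotePedro"
for word in "${votePedroArray[@]}"; do
    printf '%s\n' "$word"
done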
214,687 | I would like to list only the USB storage devices connected to my computer. Since these are SCSI disks, I used the command lsscsi , which lists the USB drives as well as my computer's hard drive and CD drive. Is there a way to ignore the memory storage that's not a USB? I have also tried lsusb , but this includes my keyboard, mouse, and other non-storage devices. | This answer checks the list of all attached block devices and iterates over them with udevadm to check their respective ID_BUS . You can see all attached block devices in /sys/block . Here is the bash script from the linked answer that should let you know if it is a USB storage device: for device in /sys/block/*do if udevadm info --query=property --path=$device | grep -q ^ID_BUS=usb then echo $device fidone (An lsblk-based alternative follows this entry.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122592/"
]
} |
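On systems with a reasonably recent util-linux, lsblk answers 214,687 directly through its transport column:
lsblk -d -o NAME,TRAN,SIZE,MODEL | awk '$2 == "usb"'   # -d: whole disks only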
214,764 | I want to grep for a word in a file in the last n lines without using the pipe. grep <string> filename enables to search the filename for a string. But, I want to search for a string in the last N lines of the file. Any command to search for that without using the pipe? | If your shell supports it ( zsh , bash , some implementations of ksh ), you could utilise process substitution grep <pattern> <(tail -n5 yourfile.txt) Where -n5 means get the five last lines. Similarly, grep <pattern> <(head -n5 yourfile.txt) would search through the 5 first lines of yourfile.txt. Explanation Simply speaking, the substituted process pretends to be a file, which is what grep is expecting. One advantage with process substitution is that you can feed output from multiple commands as input for other commands, like diff in this example. diff -y <(brew leaves) <(brew list) This gets rid of the pipe ( | ) character, but each substitution is in fact creating a pipe 1 . 1 Note that with ksh93 on Linux at least, | does not use a pipe but a socket pair while process substitution does use a pipe (as it's not possible to open a socket): $ ksh93 -c 'readlink <(:)'pipe:[620224]$ ksh93 -c ': | readlink /proc/self/fd/0'socket:[621301] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109682/"
]
} |
214,796 | What is the reason to use such uninformative system call names like time and creat instead of getCurrentTimeSecs and createFile or, maybe more suitable on Unix, get_current_time_secs and create_file ? Which brings me to the next point: why would someone want something like cfsetospeed without camel case or at least underscores to make it readable? Of course the calls would have more characters, but we all know that readability of code is more important, right? | It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in 1970. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names. Related questions: Why is 'umount' not spelled 'unmount'? What did Ken Thompson mean when he said, "I'd spell creat with an 'e'."? What, if any, naming convention was used for the standard Unix commands? https://stackoverflow.com/questions/682719/what-does-the-9th-commandment-mean | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/214796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122656/"
]
} |
214,820 | I have read this quote (below) several times, most recently here , and am continually puzzled at how dd can be used to patch anything let alone a compiler: The Unix system I used at school, 30 years ago, was very limited in RAM and Disk space. Especially, the /usr/tmp file system was very small, which led to problems when someone tried to compile a large program. Of course, students weren't supposed to write "large programs" anyway; large programs were typically source codes copied from "somewhere". Many of us copied /usr/bin/cc to /home/<myname>/cc , and used dd to patch the binary to use /tmp instead of /usr/tmp , which was bigger. Of course, this just made the problem worse - the disk space occupied by these copies did matter those days, and now /tmp filled up regularly, preventing other users from even editing their files. After they found out what happened, the sysadmins did a chmod go-r /bin/* /usr/bin/* which "fixed" the problem, and deleted all our copies of the C compiler. (Emphasis mine) The dd man-page says nothing about patching and a don't think it could be re-purposed to do this anyway. Could binaries really be patched with dd ? Is there any historical significance to this? | Let's try it. Here's a trivial C program: #include <stdio.h>int main(int argc, char **argv) { puts("/usr/tmp");} We'll build that into test : $ cc -o test test.c If we run it, it prints "/usr/tmp". Let's find out where " /usr/tmp " is in the binary: $ strings -t d test | grep /usr/tmp1460 /usr/tmp -t d prints the offset in decimal into the file of each string it finds. Now let's make a temporary file with just " /tmp\0 " in it: $ printf "/tmp\x00" > tmp So now we have the binary, we know where the string we want to change is, and we have a file with the replacement string in it. Now we can use dd : $ dd if=tmp of=test obs=1 seek=1460 conv=notrunc This reads data from tmp (our " /tmp\0 " file), writing it into our binary, using an output block size of 1 byte, skipping to the offset we found earlier before it writes anything, and explicitly not truncating the file when it's done. We can run the patched executable: $ ./test/tmp The string literal the program prints out has been changed, so it now contains " /tmp\0tmp\0 ", but the string functions stop as soon as they see the first null byte. This patching only allows making the string shorter or the same length, and not longer, but it's adequate for these purposes. So not only can we patch things using dd , we've just done it. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/214820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122678/"
]
} |
214,858 | I'm trying to store multiple lines in a bash variable, but it doesn't seem to work. For example, if I list /bin one file per line and store it in $LS , then I pass $LS as stdin to wc , it always returns 1: $ ls -1 /bin | wc -l134$ LS=$(ls -1 /bin); wc -l <<< $LS1 If I try to output to screen, I get various results: echo prints all the lines on a single line, while printf prints only the first line: #!/bin/bashLS=$(ls -1 /bin)echo $LSprintf $LS So, a bash variable can contain multiple lines? | You need to double quote it (and you should double quote variables in most case): echo "$LS" But don't use echo to print variables content, using printf instead : printf '%s\n' "$LS" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72319/"
]
} |
214,872 | As you can read here , ext4 file system has an extent feature that groups blocks into extents. Each of them can have up to 128MiB contiguous space. In e4defrag , there are lines similar to the following: [325842/327069]/file: 100% extents: 100 -> 10 [ OK ] The size of the file is around 150MiB. So according to the wiki page, there should be 2 extents instead of 10. Does anyone know why the extents are 15MiB instead of 128MiB? Is there a tool that can check the exact extent size? How can I change the size so it could be 128MiB? | 128 MiB is only the upper bound on a single extent (32768 blocks at the usual 4 KiB block size), not something the allocator guarantees. An extent can only be that large if the filesystem has 128 MiB of physically contiguous free blocks available, and on a filesystem that has seen normal use the free space itself is fragmented, so a 150 MiB file ends up as however many contiguous runs the allocator could find - ten extents averaging about 15 MiB is a perfectly normal outcome. e4defrag only tries to reduce the extent count; it makes no promise of maximal extents. To inspect the exact layout, run filefrag -v yourfile - it prints every extent with its logical and physical offsets and its length in blocks. There is no tunable to force 128 MiB extents; in practice you get fewer, larger extents by preallocating the file in one go (fallocate -l 150M file before writing it) or by copying it onto a filesystem with plenty of contiguous free space. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52763/"
]
} |
214,879 | In a git repository, I have set up my .gitmodules file to reference a github repository: [submodule "src/repo"] path = src/repo url = repourl when I 'git status' on this repo, it shows: On branch masterYour branch is up-to-date with 'origin/master'.Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory)modified: src/repo (new commits) If I cd into src/repo and git status on repo, it says that there is nothing to commit. Why is my top-level git repo complaining? | It's because Git records which commit (not a branch or a tag, exactly one commit represented in SHA-1 hash) should be checked out for each submodule. If you change something in submodule dir, Git will detect it and urge you to commit those changes in the top-level repoisitory. Run git diff in the top-level repository to show what has actually changed Git thinks. If you've already made some commits in your submodule (thus "clean" in submodule), it reports submodule's hash change. $ git diffdiff --git a/src/repo b/src/repoindex b0c86e2..a893d84 160000--- a/src/repo+++ b/src/repo@@ -1 +1 @@-Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea+Subproject commit a893d84d323cf411eadf19569d90779610b10280 Otherwise it shows -dirty hash change which you cannot stage or commit in the top-level repository. git status also claims submodule has untracked/modified content. $ git diffdiff --git a/src/repo b/src/repo--- a/src/repo+++ b/src/repo@@ -1 +1 @@-Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea+Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea-dirty$ git statusOn branch masterChanges not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) (commit or discard the untracked or modified content in submodules) modified: src/repo (untracked content)no changes added to commit (use "git add" and/or "git commit -a") To update which commit records should be checked out for the submodule, you need to git commit the submodule in addition to committing the changes in the submodule: git add src/repo | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/214879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122721/"
]
} |
214,908 | The man page wait(2) states that the waitpid system call returns the ECHILD error if the specified process is not a child of the calling process. Why is this? Would waiting on a non-child process create some sort of security issue? Is there a technical reason why implementing waiting on a non-child process would be difficult or impossible? | Because of how waitpid works. On a POSIX system, a signal (SIGCHLD) is delivered to a parent process when one of its child processes dies. At a high level, all waitpid is doing is blocking until a SIGCHLD signal is delivered for the process (or one of the processes) specified. You can't wait on arbitrary processes, because the SIGCHLD signal would never be delivered for them. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58378/"
]
} |
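The 214,908 restriction is visible from the shell as well - bash's wait builtin enforces the same rule:
$ sleep 5 &
$ wait $!      # fine: sleep is a child of this shell
$ wait 1       # PID 1 is not
bash: wait: pid 1 is not a child of this shell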
214,911 | I am able to scp files based on their extensions as, scp sk@localhost:/home/sk/*.{txt,text} . But when I try to scp a single extension type, it fails, scp sk@localhost:/home/sk/*.{txt} . I was able to solve this error by removing the flower brackets in case#2, I am just curious why does using flower brackets fail in case, there is only file extension type. | Assuming you're using bash, the documentation says: A correctly-formed brace expansion must contain unquoted opening and closing braces, and at least one unquoted comma or a valid sequence expression. Any incorrectly formed brace expansion is left unchanged. Thus, {foo} is not a correctly-formed brace expansion: $ bash -c 'echo {foo} {foo,bar}'{foo} foo bar | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
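The rule is easy to see interactively. Because the malformed expansion is left unchanged, the shell then treats the word as a glob, and with bash's default settings an unmatched glob stays literal. A sketch, assuming a directory that contains only a.txt:

$ echo *.{txt}       # malformed: no comma, left unchanged, and no file matches the literal pattern
*.{txt}
$ echo *.{txt,}      # the comma makes it well-formed: expands to *.txt and *.
a.txt *.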
214,921 | ...without polling. I want to detect when the currently focused window changes so that I can update a piece of custom GUI in my system. Points of interests: real time notifications. Having 0.2s lag is okay, having 1s lag is meh, having 5s lag is totally unacceptable. resource friendliness: for this reason, I want to avoid polling. Running xdotool getactivewindow getwindowname every, say, half a second, works quite alright... but is spawning 2 processes a second all that friendly to my system? In bspwm , one can use bspc subscribe which prints a line with some (very) basic stats, every time window focus changes. This approach seems nice at first, but listening to this won't detect when window title changes by itself (for example, changing tabs in the web browser will go unnoticed this way.) So, is spawning new process every half a second okay on Linux, and if not, how can I do things better? One thing that comes to my mind is to try to emulate what window managers do. But can I write hooks for events such as "window creation", "title change request" etc. independently from the working window manager, or do I need to become a window manager itself? Do I need root for this? (Another thing that came to my mind is to look at xdotool 's code and emulate only the things that interest me so that I can avoid all the process spawning boilerplate, but it still would be polling.) | I couldn't get your focus-change approach to work reliably under Kwin 4.x, but modern window managers maintain a _NET_ACTIVE_WINDOW property on the root window that you can listen for changes to. Here's a Python implementation of just that: #!/usr/bin/pythonfrom contextlib import contextmanagerimport Xlibimport Xlib.displaydisp = Xlib.display.Display()root = disp.screen().rootNET_ACTIVE_WINDOW = disp.intern_atom('_NET_ACTIVE_WINDOW')NET_WM_NAME = disp.intern_atom('_NET_WM_NAME') # UTF-8WM_NAME = disp.intern_atom('WM_NAME') # Legacy encodinglast_seen = { 'xid': None, 'title': None }@contextmanagerdef window_obj(win_id): """Simplify dealing with BadWindow (make it either valid or None)""" window_obj = None if win_id: try: window_obj = disp.create_resource_object('window', win_id) except Xlib.error.XError: pass yield window_objdef get_active_window(): win_id = root.get_full_property(NET_ACTIVE_WINDOW, Xlib.X.AnyPropertyType).value[0] focus_changed = (win_id != last_seen['xid']) if focus_changed: with window_obj(last_seen['xid']) as old_win: if old_win: old_win.change_attributes(event_mask=Xlib.X.NoEventMask) last_seen['xid'] = win_id with window_obj(win_id) as new_win: if new_win: new_win.change_attributes(event_mask=Xlib.X.PropertyChangeMask) return win_id, focus_changeddef _get_window_name_inner(win_obj): """Simplify dealing with _NET_WM_NAME (UTF-8) vs. 
WM_NAME (legacy)""" for atom in (NET_WM_NAME, WM_NAME): try: window_name = win_obj.get_full_property(atom, 0) except UnicodeDecodeError: # Apparently a Debian distro package bug title = "<could not decode characters>" else: if window_name: win_name = window_name.value if isinstance(win_name, bytes): # Apparently COMPOUND_TEXT is so arcane that this is how # tools like xprop deal with receiving it these days win_name = win_name.decode('latin1', 'replace') return win_name else: title = "<unnamed window>" return "{} (XID: {})".format(title, win_obj.id)def get_window_name(win_id): if not win_id: last_seen['title'] = "<no window id>" return last_seen['title'] title_changed = False with window_obj(win_id) as wobj: if wobj: win_title = _get_window_name_inner(wobj) title_changed = (win_title != last_seen['title']) last_seen['title'] = win_title return last_seen['title'], title_changeddef handle_xevent(event): if event.type != Xlib.X.PropertyNotify: return changed = False if event.atom == NET_ACTIVE_WINDOW: if get_active_window()[1]: changed = changed or get_window_name(last_seen['xid'])[1] elif event.atom in (NET_WM_NAME, WM_NAME): changed = changed or get_window_name(last_seen['xid'])[1] if changed: handle_change(last_seen)def handle_change(new_state): """Replace this with whatever you want to actually do""" print(new_state)if __name__ == '__main__': root.change_attributes(event_mask=Xlib.X.PropertyChangeMask) get_window_name(get_active_window()[0]) handle_change(last_seen) while True: # next_event() sleeps until we get an event handle_xevent(disp.next_event()) The more fully-commented version I wrote as an example for someone is in this gist . UPDATE: Now, it also demonstrates the second half (listening to _NET_WM_NAME ) to do exactly what was requested. UPDATE #2: ...and the third part: Falling back to WM_NAME if something like xterm hasn't set _NET_WM_NAME . (The latter is UTF-8 encoded while the former is supposed to use a legacy character coding called compound text but, since nobody seems to know how to work with it, you get programs throwing whatever stream of bytes they have in there and xprop just assuming it'll be ISO-8859-1.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41991/"
]
} |
214,932 | This question originated with a joke between co-workers about increasing performance by moving swap files to a tmpfs. Clearly even if this is possible, it's not a good idea. All I want to know is, can it be done? I'm currently on Ubuntu 14.04, but I'd imagine the process is similar for most Linux/Unix machines. Here's what I'm doing: > mkdir /mnt/tmp
> mount -t tmpfs -o size=10m tmpfs /mnt/tmp
> dd if=/dev/zero of=/mnt/tmp/swapfile bs=1024 count=10240
> chmod 600 /mnt/tmp/swapfile
> mkswap /mnt/tmp/swapfile
# So far, so good!
> swapon /mnt/tmp/swapfile
swapon: /mnt/tmp/swapfile: swapon failed: Invalid argument So, on either Linux or Unix (I'm interested in any solution) can you somehow set up swap on a file/partition residing in RAM? Is there a way around the Invalid argument error I'm getting above? Again, I just want to emphasize that I'm not expecting this to be a solution to a real-world problem. Just a fun experiment, I guess. | It shouldn't be possible. The swapon system call requires the readpage and (indirectly) bmap calls to be implemented by the filesystem: http://lxr.free-electrons.com/source/mm/swapfile.c?v=4.0#L2412 if (!mapping->a_ops->readpage) { error = -EINVAL; goto bad_swap;} But neither of them is implemented by tmpfs; such entries are missing from the corresponding address_space_operations : http://lxr.free-electrons.com/source/mm/shmem.c?v=4.0#L3104 For the same reason, tmpfs cannot hold loop mounts, and ramfs won't work either (it doesn't have a bmap call) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122769/"
]
} |
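If the underlying goal is genuinely RAM-backed swap, the kernel does offer it directly through the zram compressed block device, which avoids the missing readpage/bmap problem because the swap then sits on a real block device. A sketch, assuming your kernel ships the zram module and a reasonably recent util-linux for swapon --show:

sudo modprobe zram
echo 1G | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0    # high priority, so it is used before any disk swap
swapon --show                    # confirm it is active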
214,934 | Is it possible to hold/stop a bash script's progress without killing the process (by the kill command or some other command)? For example, this script, install_linux_pkgs.bash, will install Linux pkgs step by step: ./install_linux_pkgs.bash What I want is to stop (HOLD / HANG) the script's progress externally, but not to kill it. kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU
25) SIGXFSZ     26) SIGVTALRM   27) SIGPROF     28) SIGWINCH
29) SIGIO       30) SIGPWR      31) SIGSYS      34) SIGRTMIN
35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3  38) SIGRTMIN+4
39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX | You can stop the process with ctrl - z . Then do whatever you want in the terminal. To continue the process use fg . Or from another terminal, use: kill -19 <pid> It sends SIGSTOP (signal number 19) to the process. The process cannot catch or ignore this signal. To continue the process use: kill -18 <pid> This time it's SIGCONT that brings the process back to a running/sleeping state. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
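Signal numbers 18 and 19 match the x86 Linux table shown in the question, but they differ across architectures, so names are safer; pkill also spares you looking up the PID by hand. A sketch, using the script name from the question:

pkill -STOP -f install_linux_pkgs.bash   # freeze the script
# ... later ...
pkill -CONT -f install_linux_pkgs.bash   # let it continue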
214,956 | I want to have my super key start dmenu . I have set it as a keyboard shortcut in my rc.xml as follows: <keybind key="0xffeb"> <action name="Execute"> <command>dmenu_run</command> </action></keybind> I tried specifying it in the key attribute as W , W- , and 0xffeb , but none of these worked. W responds to pressing the letter w , and the others appear to do nothing. I want the shortcut to trigger when the super key is pressed and released on its own. Is this possible? This is cross posted from super user as per the guidelines here . I've read this question: Super key as shortcut - Openbox , but I didn't see any useful information in it. | I ended up using xcape , a utility designed to do exactly this: xcape allows you to use a modifier key as another key when pressed and released on its own. Note that it is slightly slower than pressing the original key, because the pressed event does not occur until the key is released. Quoted from the xcape readme Using xcape, you can assign the press and release of a modifier key to a different key or even a sequence of keys. For example, you can assign Super to a placeholder shortcut like ⎈ Ctrl ⇧ Shift ⎇ Alt Super D with: xcape -e 'Super_L=Control_L|Shift_L|Alt_L|Super_L|D' Now when you press and release Super without pressing any other keys, xcape will send keyboard events simulating presses of ⎈ Ctrl ⇧ Shift ⎇ Alt Super D (holding all the modifier keys down as if you pressed them like a shortcut). If you press Super and another key (or hold Super too long, the default timeout is 500 ms), xcape will pass the keyboard events through as is, without firing extra keys. If you put the placeholder shortcut in rc.xml , it will run when Super and only Super is pressed. <keybind key="C-A-S-W-d"> <action name="Execute"> <command>dmenu_run</command> </action></keybind> Other shortcuts involving Super will not be affected. Note that you'll have to run xcape each time you boot, so you may want to put it somewhere like ~/.config/openbox/autostart where it will be run automatically. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/214956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86821/"
]
} |
214,991 | I have a string that is the result of some operation I have no control over. When I print this variable using echo , I get: echo $myvar
hello However, when I do if [ $myvar = "hello" ]; then
 echo they are equal
else
 echo they are not equal
fi I always get that they are not equal. I suspect this is because of a newline character. The string also behaves strangely. When I do: newVAR="this is my var twice: "$myvar$myvar
echo $newVAR I get: hellois my var twice: hello How can I check if this is in fact due to a newline and, if so, remove it? | The problem is that you have an embedded Carriage-Return (CR, \r ). This causes the terminal text-insertion point to kick back to the start of the line it is printing. That is why you are seeing the 'hello' at the start of the line in your $newVAR example - sed -n l displays a readable view of unprintable characters (and end of line). var=ab$'\r'c ; echo "$var"; printf %s "$var" | sed -n l
# output:
cb
ab\rc$ You can test for it with a simple bash condition check: [[ $var == *$'\r'* ]] && echo yes || echo no
# output:
yes You can combine the test and fix in one step by testing for \r (s) and removing them via: fix="${var//$'\r'/}"; echo "$var"; echo "$fix"
# output:
cb
abc The fix uses Shell Parameter Expansion . The particular form used above is for replacing substrings based on your provided pattern: ${parameter/pattern/string} <-- This replaces only the first found pattern with string in the variable named parameter. To replace all patterns, you just need to change the first / to // . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/214991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117308/"
]
} |
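When the stray CR comes in from a command substitution rather than already sitting in a variable, it is often simplest to strip it at the source with tr. A sketch; some_command is a stand-in for whatever operation produced the DOS-style line ending:

myvar=$(some_command | tr -d '\r')    # delete every carriage return on the way in
[ "$myvar" = "hello" ] && echo they are equal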
215,015 | This question I would think would be far easier to answer by means of Googling it as it is so simple, but alas I am left to ask it here. What I would like to do is to remove launchers I no longer have need for on the KDE 4 panel in Sabayon. Here's a screenshot, down the bottom left you will see icons (which represent launchers) for Google Chrome, Terminator and Konsole, in that order. I would like to remove the Konsole launcher. The only solution I have managed to find on my own is removing the entire panel, creating a new panel and then adding the launchers I want and leaving out the launchers I don't want. As my list of launchers continues to grow this solution will only get more and more tedious with time, hence why I would prefer a simpler solution if anyone has one. The most natural solution to me would be to right-click on the unwanted launcher and find an option to remove the launcher, but this is the menu I get from right-clicking the Konsole launcher: Clicking "Icon Settings" just gives me this: which is just the options for the desktop configuration file used for the Konsole launcher and to my knowledge has nothing to do with removing the launcher from the KDE panel. | At least on my KDE4 desktop I can remove a launcher like this: right-click on the right-most side of the panel and select UnlockWidgets in the popup menu right-click again on the right-most side of the panel and select Panel Settings now displayed in the popup-menu move mouse on the desired launcher icon and click on the X in itspopup to remove the launcher (you can also click and drag it elsewhere if you want to) right-click on the right-most side of the panel and select LockWidgets in the popup menu (to prevent accidental panel changes) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/215015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
215,044 | What does the different assignment types mean in bitbake recipe scripts, such as: BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}" PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}" MACHINE ??= "qemux86" What of above is analogous to Ruby's bb_number_threads ||= 'something' ? | As per this section of Bitbake manual ?= is: You can use the "?=" operator to achieve a "softer" assignment for a variable. This type of assignment allows you to define a variable if it is undefined when the statement is parsed, but to leave the value alone if the variable has a value. Here is an example: A ?= "aval" If A is set at the time this statement is parsed, the variable retains its value. However, if A is not set, the variable is set to "aval". ??= is: It is possible to use a "weaker" assignment than in the previous section by using the "??=" operator. This assignment behaves identical to "?=" except that the assignment is made at the end of the parsing process rather than immediately. Consequently, when multiple "??=" assignments exist, the last one is used. Also, any "=" or "?=" assignment will override the value set with "??=". Here is an example: A ??= "somevalue" A ??= "someothervalue" If A is set before the above statements are parsed, the variable retains its value. If A is not set, the variable is set to "someothervalue". Again, this assignment is a "lazy" or "weak" assignment because it does not occur until the end of the parsing process. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112907/"
]
} |
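On the Ruby ||= analogy from the question: POSIX shell has a close cousin of BitBake's ?= in the ${var:=default} expansion, though note it also fires when the variable is set but empty, not only when it is unset. A sketch in shell, not BitBake syntax:

A=preset
: "${A:=aval}"      # A already has a value, so it keeps "preset"
unset B
: "${B:=aval}"      # B was unset, so it becomes "aval"
echo "$A $B"        # prints: preset aval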
215,076 | I have text files like this: Mr.P.K.Baneerjee has visited the site today at 4.30pm
The permit tag has been set in the ds busbar
Opearation has been performed in presence of control engineer
Opeation was successfully completed All files have only four lines. Only the count of words must be printed as an output, just like this: 8
10
9
4 | You can do it with: awk '{print NF}' filename | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
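If a single grand total is wanted rather than one count per line, accumulate NF and print once at the end. A sketch; for the four sample lines it prints 31 (8+10+9+4):

awk '{ total += NF } END { print total }' filename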
215,084 | File1: John Daniel Hommes Cameron;Emily Terry Mussy Barbara
Mimi Papu;David Swiss Jen
Hans Peter Iril;Kelvin
Lilly Gucci Kate Nik;Forum Bill
June;Jill and Jack Output file: John Daniel Hommes Cameron
Emily Terry Mussy Barbara
Mimi Papu
David Swiss Jen
Hans Peter Iril
Kelvin
Lilly Gucci Kate Nik
Forum Bill
June
Jill and Jack | tr \; \\n <in >out ...is very likely the most efficient means of going from your sample input to your sample output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
215,086 | I've got a string like this: 8080 "ac ac df asd" 9019 "f v adfs" 1 "123 da 123x" Is there a clever way to convert this into arrays like this using Bash? 8080 "ac ac df asd"
9019 "f v adfs"
1 "123 da 123x" | tr \; \\n <in >out ...is very likely the most efficient means of going from your sample input to your sample output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52041/"
]
} |
215,112 | I have a file called listing.txt It contains data like: 12345 I then use the following to loop through the file and do stuff with each line: IFS=$'\n'while IFS= read -r inc; do if [ $inc -eq 1 ] then echo "FIRST: $inc" else echo "MID: $inc" fidone </home/user/listing.txt But I can't find an option to check if it is the Last line. What's the best way of finding if I am on the last line? I've seen numerous pages talking about loops, and pages talking about getting the last line from a file, but nothing that helps my loop to know it is on the last line. | What does it mean to be on the last line? It means that there are no lines after this one so one strategy would be to check if there is another line after the current one within the loop. If there isn't you are on the last line, otherwise you are in the middle. Another technique would be to keep the line in a variable that is scoped outside the loop, then when the loop ends the variable contains the last line. This has the disadvantage of making the middle calculation incorrect but you could probably find a way around that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
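A sketch of the first strategy described above: read one line ahead, so inside the loop you always hold the previous line and already know that another one follows. FIRST is keyed off position here rather than the value test in the question, and a one-line file prints only LAST:

#!/bin/bash
{
    IFS= read -r prev
    first=1
    while IFS= read -r inc; do
        if [ "$first" = 1 ]; then
            echo "FIRST: $prev"
            first=0
        else
            echo "MID: $prev"
        fi
        prev=$inc
    done
    echo "LAST: $prev"
} </home/user/listing.txt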
215,157 | I installed Ubuntu 14.04 LTS. Ubuntu seems to come with Apache2, but in the apache2 folder there is no httpd.conf. Why is the file lacking and what do I have to do to create it? | Per https://help.ubuntu.com/lts/serverguide/httpd.html all configuration options have been moved to subdirectories. httpd.conf: historically the main Apache2 configuration file, named after the httpd daemon. Now the file does not exist. In older versions of Ubuntu the file might be present, but empty, as all configuration options have been moved to the below referenced directories. Check the reference link for the name and description of each subdirectory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2093/"
]
} |
215,229 | All of the tools I've tried until now were only capable to create a dual (GPT & MBR) partition table, where the first 4 of the GPT partitions were mirrored to a compatible MBR partition. This is not what I want. I want a pure GPT partition table, i.e. where there isn't MBR table on the disk, and thus there isn't also any synchronizing between them. Is it somehow possible? | TO ADDRESS YOUR EDIT: I didn't notice the edit to your question until just now. As written now, the question is altogether different than when I first answered it. The mirror you describe is not in the spec, actually, as it is instead a rather dangerous and ugly hack known as a hybrid-MBR partition format. This question makes a lot more sense now - it's not silly at all, in fact. The primary difference between a GPT disk and a hybrid MBR disk is that a GPT's MBR will describe the entire disk as a single MBR partition, while a hybrid MBR will attempt to hedge for (extremely ugly) compatibility's sake and describe only the area covered by the first four partitions. The problem with that situation is the hybrid-MBR 's attempts at compatibility completely defeat the purpose of GPT's Protective MBR in the first place . As noted below, the Protective MBR is supposed to protect a GPT-disk from stupid applications, but if some of the disk appears to be unallocated to those, all bets are off. Don't use a hybrid-MBR if it can be at all helped - which, if on a Mac, means don't use the default Bootcamp configuration . In general, if looking for advice on EFI/GPT-related matters go nowhere else (excepting maybe a slight detour here first) but to rodsbooks.com . ahem... This (used to be) kind of a silly question - I think you're asking how to partition a GPT disk without a Protective MBR . The answer to that question is you cannot - because the GPT is a disk partition table format standard, and that standard specifies a protective MBR positioned at the head of the disk. See? What you can do is erase the MBR or overwrite it - it won't prevent most GPT-aware applications from accessing the partition data anyway, but the reason it is included in the specification is to prevent non -GPT-aware applications from screwing with the partition-table. It prevents this by just reporting that the entire disk is a single MBR-type partition already, and nobody should try writing a filesystem to it because it is already allocated space. Removing the MBR removes that protection. In any case, here's how: This creates a 4G ./img file full of NULs... </dev/zero >./img \dd ibs=4k obs=4kx1k count=1kx1k 1048576+0 records in1024+0 records out4294967296 bytes (4.3 GB) copied, 3.38218 s, 1.3 GB/s This writes a partition table to it - to include the leading Protective MBR . Each of printf 's arguments is followed by a \n ewline and written to gdisk 's stdin. gdisk interprets the commands as though they were typed at it interactively and acts accordingly, to create two GPT partition entries in the GUID Partition Table it writes to the head of our ./img file. All terminal output is dumped to >/dev/null (because it's a lot and we'll be having a look at the results presently anyway) . printf %s\\n o y n 1 '' +750M ef00 \ n 2 '' '' '' '' \ w y | >/dev/null \gdisk ./img This gets pr 's four-columned formatted representation of the offset-accompanied strings in the first 2K of ./img . 
<./img dd count=4 |strings -1 -td | pr -w100 -t4 4+0 records in4+0 records out2048 bytes (2.0 kB) copied, 7.1933e-05 s, 28.5 MB/s 451 * 1033 K 1094 t 1212 n 510 U 1037 > 1096 e 1214 u 512 EFI PART 1039 ;@fY 1098 m 1216 x 524 \ 1044 30 1153 = 1218 529 P 1047 L 1158 rG 1220 f 531 ( 1050 E 1161 y=i 1222 i 552 " 1065 w 1165 G} 1224 l 568 V 1080 E 1170 $U.b 1226 e 573 G 1082 F 1175 N 1228 s 575 G 1084 I 1178 C 1230 y 577 y 1086 1180 b 1232 s 583 G 1088 S 1185 x 1234 t 602 Ml 1090 y 1208 L 1236 e 1024 (s* 1092 s 1210 i 1238 m You can see where the MBR ends there, yeah? Byte 512. This writes 512 spaces over the first 512 bytes in ./img . <>./img >&0 printf %0512s And now for the fruits of our labor. This is an interactive run of gdisk on ./img . gdisk ./img GPT fdisk (gdisk) version 1.0.0Partition table scan: MBR: not present BSD: not present APM: not present GPT: presentFound valid GPT with corrupt MBR; using GPT and will write newprotective MBR on save. Command (? for help): p Disk ./img: 8388608 sectors, 4.0 GiBLogical sector size: 512 bytesDisk identifier (GUID): 0528394A-9A2C-423B-9FDE-592CB74B17B3Partition table holds up to 128 entriesFirst usable sector is 34, last usable sector is 8388574Partitions will be aligned on 2048-sector boundariesTotal free space is 2014 sectors (1007.0 KiB)Number Start (sector) End (sector) Size Code Name 1 2048 1538047 750.0 MiB EF00 EFI System 2 1538048 8388574 3.3 GiB 8300 Linux filesystem | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/215229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52236/"
]
} |
215,234 | find /tmp -printf '%s %p\n' |sort -n -r | head This command is working fine but what are the %s %p options used here? Are there any other options that can be used? | What are the %s %p options used here? From the man page : %s File's size in bytes. %p File's name. Scroll down on that page beyond all the regular letters for printf and read the parts which come prefixed with a %. %n Number of hard links to file. %p File's name. %P File's name with the name of the starting-point under which it was found removed. %s File's size in bytes. %t File's last modification time in the format returned by the C `ctime' function. Are there any other options that can be used? There are. See the link to the manpage. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/215234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30355/"
]
} |
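The specifiers combine freely with each other and with printf-style field widths (GNU find). A sketch that lists the ten largest files under /tmp together with their modification dates:

find /tmp -type f -printf '%TY-%Tm-%Td %10s %p\n' | sort -k2,2nr | head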
215,249 | I have a VPS set up as my SOCKS5 proxy, with the Firefox plugin AutoProxy installed. ssh -p 2034 -D 127.0.0.1:1080 root@vps_ip The SSH port on my VPS is 2034. The command works for some time, maybe 10 or 20 minutes, during which I open many web pages with Firefox; then suddenly the connection is blocked and an error is displayed: channel 8: open failed: administratively prohibited
channel 9: open failed: administratively prohibited
channel 10: open failed: administratively prohibited I have searched for the problem on Stack Overflow, for example: SSH tunneling error: "channel 1: open failed: administratively prohibited: open failed" My problem differs from that! I can create the ssh tunnel properly every time. Once the tunnel is created, I can browse web pages for some time, about 10 or 20 minutes. After many web pages have been opened in Firefox, the tunnel breaks. If I close Firefox and the console for a while, I can create the tunnel again. The cycle keeps repeating. What is the matter with my VPS and ssh service? My system is Debian 8.1; where is the ssh log file? There is no /var/log/secure on my Debian. Maybe the ssh log file can tell more. | It sounds like you're running into the SSH server's limit on the number of simultaneous sessions per connection. Your command-line session to the remote server is one session, and each individual forwarded TCP connection is another session. You can change the server's limit through the MaxSessions parameter in the server's sshd_config file : MaxSessions Specifies the maximum number of open sessions permitted per network connection. The default is 10. You'd update sshd_config like this: Find the file. It's usually /etc/ssh/sshd_config . Edit it as root. In the file look for an existing MaxSessions setting if any. Otherwise, add a new line. Set the number to 15 or so. Save the new file. Restart sshd to make it reread the file. Make a new ssh connection and see if the behavior changes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
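To see the limit the server is actually enforcing, sshd's extended test mode prints the effective configuration. A sketch, run on the server as root; the restart line assumes a systemd-based Debian where the unit is called ssh:

sudo sshd -T | grep -i maxsessions   # effective value (use the full path, e.g. /usr/sbin/sshd, if needed)
sudo systemctl restart ssh           # make sshd reread sshd_config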
215,253 | To open a file to edit in gedit I run gedit sample.py & . But with Sublime Text it is simply subl sample.py . This opens the file to edit and it doesn't run in the background (in my shell). How would I do that with gedit? I tried exec /usr/bin/gedit "$@" (copied from /usr/bin/subl ) but it works like gedit & . Or alias ged="gedit $file &" should do. What can I substitute for $file in the alias? | You could use this function: gedit() { /usr/bin/gedit $@ & disown ;} It:
Makes a function which can be called with gedit
Launches gedit (using the full path /usr/bin/gedit ), passing all the arguments/files given to it using $@
& disown sends it to the background, and disown detaches it from the terminal/shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30355/"
]
} |
215,315 | I have an NVIDIA GeForce 9500 GT. I'm currently using nvidia-331 since the Nouveau open drivers were giving me some trouble. But I have several other drivers to select, see: What are the differences between them and which one should I select? | The versions are as follows: nvidia-331 : the current release nvidia-173 : an old legacy binary driver, supporting (much) older cards nvidia-304 : a more recent legacy binary driver, supporting older cards nvidia-331-updates : the update channel for the current release (but it's the same as the version in nvidia-331 now) nvidia-304-updates : the update channel for nvidia-304 nouveau : the nouveau free driver Update channels contain new versions available for testing; once a version has been tested it is made available in the normal channel. If nouveau isn't working for you, then nvidia-331 is the appropriate version, since it supports your card. You'd only use one of the legacy versions with an older card, or if the current driver is causing problems (as pointed out by Mark ). You could use the update channel if you want to try the very latest (untested) versions, but there isn't anything new there currently. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/215315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32411/"
]
} |
215,327 | I am trying to sort a file like this (which is a list of subroutine names) cfn=(1370) __lib_file_MOD_file_open
fn=(1368) __universe_MOD_general_boot
fn=(916) __universe_MOD_general_main
fn=(6142) __grid_var_MOD_get_overlap
... according to the integer inside parentheses. I first tried the sort command as sort -t '=' -k 2 routine_list.txt but then fn=(916) comes at the end of the output. fn=(1368) __universe_MOD_general_boot
cfn=(1370) __lib_file_MOD_file_open
fn=(6142) __grid_var_MOD_get_overlap
...
fn=(916) __universe_MOD_general_main but I would like the numbers to be sorted in the increasing order (916 -> 1368 -> 1370 -> ...) Is it possible to do this relatively simply by using several commands or options? | How about: sort -nt'(' -k2 file.txt Test : $ sort -nt'(' -k2 file.txt
fn=(916) __universe_MOD_general_main
fn=(1368) __universe_MOD_general_boot
cfn=(1370) __lib_file_MOD_file_open
fn=(6142) __grid_var_MOD_get_overlap
-n indicates we are sorting numerically
t'(' sets the delimiter as (
-k2 sets the key to sort as the second field, i.e. starting from the digits to the end of the line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115885/"
]
} |
215,343 | If I define a variable with the quotation mark: TEMP="~/Dropbox" then ls $TEMP would not work, instead echo $TEMP | ls works. And to get the same result, I can also define the variable without the quotation mark, like: TEMP=~/Dropbox by doing so I can simply type ls $TEMP I am quite confused about the difference between them, and I am wondering when and where should I use these two kinds of definitions respectively? | TEMP="~/Dropbox" The above defines a variable that contains a literal tilde followed by a slash. Because it is in quotation marks, the shell does not expand ~/ to the home directory. Observe: $ echo "quotes=~/" noquotes=~/quotes=~/ noquotes=/home/john1024/ Thus, if you want ~/ to mean the home directory, then ~/ needs to be outside of quotes. The following works: TEMP=~/"Dropbox" Let's consider what happens if the ~/ is in quotes: TEMP="~/Dropbox"ls $TEMP The above command will look for the file ~/Dropbox meaning a file named Dropbox in a directory named ~ . Since you most likely do not have a directory named ~ , ls will return with an error message: "No such file or directory." Also, the command below does not do what you think: echo $TEMP | ls ls ignores standard input. A bare ls command just lists files in the current directory, whatever that directory is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123036/"
]
} |
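A habit that sidesteps the quoting trap entirely is to write $HOME instead of the tilde, since parameter expansion does happen inside double quotes. A sketch:

TEMP="$HOME/Dropbox"   # expands even though it is quoted
ls "$TEMP"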
215,359 | Using the fdisk -l command I got the following answer: Device Boot Start End Blocks Id System/dev/sda1 * 2048 28266495 14132224 27 Hidden NTFS WinRE/dev/sda2 28268544 28473343 102400 7 HPFS/NTFS/exFAT/dev/sda3 28473344 132552703 52039680 7 HPFS/NTFS/exFAT/dev/sda4 * 132556798 625141759 246292481 5 Extended/dev/sda5 193996800 198092799 2048000 82 Linux swap / Solaris/dev/sda6 234960896 625141759 195090432 7 HPFS/NTFS/exFAT/dev/sda7 198094848 234950316 18427734+ 83 Linux/dev/sda8 132556800 183756018 25599609+ 83 Linux I'd like to copy the three first partitions of my disk in an image using the dd command. So I mounted an external hard drive, entered in its folder and typed: # dd count=$((132552703-2048)) if=/dev/sda of=./newImage.image But this command copied all the sda disk to my external hard drive instead of just copying until the end of the sda3 partition. How can I use the dd to create an image that starts at the beginning of sda1 and finishes at the end of sda3? | first of all, here's how: First do almost as you did before, but no subtraction - and add one to the count. dd count=132552704 </dev/sda >img Next print the partition table at a sed process which can screen out the ones which you're removing. sed will write a d elete command to a second fdisk which has opened your img file for every partition from sda4 and on. fdisk -l img | sed -e'/sda4 /,$id' -e'g;$aw' | fdisk img There is no 3. You're done. secondly, here's why: A Partial Success... I'm pretty sure your command almost worked, but I'm willing to bet that it worked better than you think. I expect that when you say it copied all of sda you believe that because an fdisk -l of that image indicated all of the partitions were included within. Based on the dd command in your question, though, provided /dev/sda 's sector size is the fairly standard 512 bytes (and therefore identical to dd 's default blocksize) then you should have copied everything from byte 0 of /dev/sda only through to all but the last 2k sectors of /dev/sda3 . About Sectors... You can see below where the fdisk output reports on Units . That is the size of each sector that fdisk reports on. A disk sector might be 4096-bytes - if it is a very recently manufactured disk and handles the Advanced Format sector-size - otherwise it is very rare to find a disk not partitioned on a standard logical 512-byte sector-size. This is how fdisk 's man page puts it: -u , --units[=unit] When listing partition tables, show sizes in sectors or in cylinders . The default is to show sizes in sectors . For backward compatibility, it is possible to use the option without the unit argument - then the default is used. Note that the optional unit argument cannot be separated from the -u option by a space, the correct form is for example -u=cylinders . There's more on this here . And something about dd , too... dd cannot silently lose data . In fact, if a short read occurs, dd is specified to be very vocal about it: A partial input block is one for which read() returned less than the input block size. A partial output block is one that was written with fewer bytes than specified by the output block size... ...when there is at least one truncated block, the number of truncated blocks shall be written to standard error... "%u truncated %s\n" , <number of truncated blocks> , "record[s]" Block i/o... But anyway, that actually can't happen with block-device i/o. 
It's what makes a block-device a block-device - there's an extra layer (sometimes several) of buffered protection for block-devices as opposed to character devices. It is this distinction which enables POSIX to guarantee lseek() for files existing on block-devices - it's a very basic principle of blocked i/o. To sum up... And so you have copied all of your device up to the point you specified, but the thing is, the first 2k sectors of /dev/sda will contain its entire partition table, and as such you would have copied said partition table to your image, and so an fdisk -l of your image would report for all partitions of /dev/sda , whether or not the data for those partitions actually resides within that image-file. You can, instead, of course, just cat the separate data partitions separately into separate image files if you like - but in that case you lose the partition table entirely. All you really have to do is delete the partitions which you did not copy, and make sure you copy all of those you do. third, here's how I know: This will create an 4G ./img file full of NULs. </dev/zero >./img \dd ibs=8k obs=8kx1b count=1kx1b 524288+0 records in1024+0 records out4294967296 bytes (4.3 GB) copied, 3.53287 s, 1.2 GB/s This will partition ./img to match your disk up to the first three partitions but on a 1/16th scale: (set "$((p=0))" 28266495 27 \ 28268544 28473343 2\\n7 \ 28473344 132552703 3\\n7while [ "$#" -ge "$((p+=1))" ]do printf "n\np\n$p\n%.0d\n%d\nt\n%b\n" \ "$(($1/16))" "$(($2/16))" "$3" shift 3done; echo w)| fdisk ./img >/dev/null And so now we can look at it. fdisk -l ./img Disk ./img: 4 GiB, 4294967296 bytes, 8388608 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x5659b81cDevice Boot Start End Sectors Size Id Type./img1 2048 1766655 1764608 861.6M 27 Hidden NTFS WinRE./img2 1766784 1779583 12800 6.3M 7 HPFS/NTFS/exFAT./img3 1779584 8284543 6504960 3.1G 7 HPFS/NTFS/exFAT I'll also put some actual filesystems and files on the three partitions. sudo sh -c ' trap "$1" 0 cd /tmp; mkdir -p mnt for p in "$(losetup --show -Pf "$0")p"* do mkfs.vfat "$p" mount "$p" mnt echo "my part# is ${p##*p}" \ >./mnt/"part${p##*p}" sync; umount mnt done' "$PWD/img" 'losetup -D' Here are the byte offsets for where it all wound up... grep -Ebao '(my[^0-9]*|PART)[123]' <./img 2826272:PART12830336:my part# is 1904606240:PART2904624640:my part# is 2917656608:PART3917660672:my part# is 3 But did you notice that fdisk was perfectly happy to report on the partitions' sizes before ever we formatted them with filesystems? This is because the partition table lies at the very head of the disk - it's only a layout and nothing more. None of the partitions need actually exist to be reported. They're only logically mapped out within the first 1M of ./img . Watch: Let's try getting only the first two partitions off of ./img ... <./img >./img2 dd count=1779583 1779583+0 records in1779583+0 records out911146496 bytes (911 MB) copied, 1.84985 s, 493 MB/s We'll grep it again... grep -Ebao '(my[^0-9]*|PART)[123]' <./img2 2826272:PART12830336:my part# is 1904606240:PART2904624640:my part# is 2 And get an fdisk report... 
fdisk -l ./img2 Disk ./img2: 869 MiB, 911146496 bytes, 1779583 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xcbcab4d8Device Boot Start End Sectors Size Id Type./img2p1 2048 1766655 1764608 861.6M 27 Hidden NTFS WinRE./img2p2 1766784 1779583 12800 6.3M 7 HPFS/NTFS/exFAT./img2p3 1779584 8284543 6504960 3.1G 7 HPFS/NTFS/exFAT Now that is curious. fdisk still seems to believe there's a third partition extending as far out as 4G for a disk which it also seems to believe is only 869M in size! Probably we should remove that third partition from the partition table. printf %s\\n d 3 w |fdisk ./img2 >/dev/null And now lets see if we can mount the partitions we copied and if our files remain in tact... sudo sh -c ' trap "$1" 0 cd /tmp; mkdir -p mnt for p in "$(losetup --show -Pf "$0")p"* do mount "$p" mnt grep . /dev/null ./mnt/* umount mnt done' "$PWD/img2" 'losetup -D' ./mnt/part1:my part# is 1./mnt/part2:my part# is 2 Apparently it's not impossible. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103357/"
]
} |
215,406 | I'm trying to build btrfs-progs from sources, but when I run ./configure I get the error: checking for BLKID... noconfigure: error: Package requirements (blkid) were not met:No package 'blkid' foundConsider adjusting the PKG_CONFIG_PATH environment variable if youinstalled software in a non-standard prefix.Alternatively, you may set the environment variables BLKID_CFLAGSand BLKID_LIBS to avoid the need to call pkg-config.See the pkg-config man page for more details. blkid is installed in /sbin so, presumably, all its libraries are in the default locations. What do I need to do tell pkg-config where blkid is or am I actually missing a package? FYI: I'm running Debian 8 (sid/unstable) with a 4.1.0 kernel built from github.com/torvalds/linux.git sources about a week ago (commit:g6aaf0da). | If there are missing packages, you can use apt-cache : % apt-cache search blkidlibblkid-dev - block device id library - headers and static librarieslibblkid1 - block device id library or even: % apt-cache search blkid | grep '\-dev'libblkid-dev - block device id library - headers and static libraries We know that we need the development libraries to compile something, therefore do a... apt-get install libblkid-dev ...as root user. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21046/"
]
} |
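Once libblkid-dev is installed you can confirm that pkg-config is satisfied before rerunning ./configure. A sketch:

pkg-config --exists blkid && echo found || echo still missing
pkg-config --modversion blkid      # version from the newly installed blkid.pc
pkg-config --cflags --libs blkid   # the flags configure will pick up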
215,433 | I have OpenBSD 5.7-amd64 installed and patched with all the latest available fixes. I would like to have a minimal Gnome desktop environment and I did the following to my user account (not root account): sudo pkg_add -vi gnome-session nautilus gnome-terminal gnome-menus gnome-system-monitor After installation of the above packages, I sudo nano /etc/rc.conf.local and modified/added the following: xdm_flags=NOgnome_enable=YESgdm_enable=YES I rebooted my box and logged in to my user account. After logging in, I am still presented with OpenBSD's default Fvwm manager, Xterm, etc. Before making this post I had consulted the following tutorials and discovered the instructions they contained to be unworkable. "Building an OpenBSD desktop" http://www.bsdnow.tv/tutorials/the-desktop-obsd "Display Manager on OpenBSD 4.7" http://www.gabsoftware.com/tips/installing-gnome-desktop-and-gnome-display-manager-on-openbsd-4-7/ | Ideally, you should install the gnome meta-package to ensure that you have all of the required packages installed, particularly DBus - I'd highly recommend you do that. Once you've installed the gnome meta package, follow the post-installation instructions in /usr/local/share/doc/pkg-readme for the GNOME version you installed (check the file gnome-{version} where {version} is the GNOME version). At a high level you need to do the following post-installation steps (all detailed in the aforementioned instructions): Add dbus_daemon to pkg_scripts in /etc/rc.conf.local and start dbus_daemon Configure GDM (which it appears you've done) (Optionally) Install avahi_daemon and enable multicast by adding multicast_host=YES to /etc/rc.conf.local . If you enable multicast, either restart networking (using /etc/netstart ) or reboot your machine. When you login again (via GDM), you should be using a GNOME desktop. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123077/"
]
} |
215,438 | I'd like to install php 56 on my Debian Wheeyz System. So I added the dotdeb repo to apt. While fetching the key, an error occurred: # wget http://www.dotdeb.org/dotdeb.gpg -O- |apt-key add – # gpg: can't open `–': No such file or directory What do I have to change to add the key to apt? | Your only problem is that the dash after apt-key add is not the ASCII 0x2D hyphen character, but the Unicode U+2013 en dash . The former instructs apt-key to read the key from the standard input (where the preceding wget provides it through the pipe), while the latter is not treated specially, thus interpreted as a file name to read the key from. Unsurprisingly, such a file does not seem to exist in your current directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/215438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72143/"
]
} |
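This kind of mistake usually comes from copy-pasting a command off a web page. Dumping the bytes makes the impostor obvious, because the en dash is a three-byte UTF-8 sequence while a real hyphen is the single printable -. A sketch:

printf %s 'apt-key add –' | od -c
# the trailing 342 200 223 is U+2013 (en dash); a genuine hyphen would show up as -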
215,450 | DataName AgeMadhavan 29saravana<Tab Press> ! How can i edit this data in vim in a tabular fashion? When i press tab key from column 1 in row 3, the cursor must move to the exclamatory mark position. Note: Exclamatory mark is just a marker position where i desire the cursor to go and it is not the real character in place. This can be done using org-table mode in emacs but it is overkill at times. So i am looking for simpler ways in vim/shell? | Your only problem is that the dash after apt-key add is not the ASCII 0x2D hyphen character, but the Unicode U+2013 en dash . The former instructs apt-key to read the key from the standard input (where the preceding wget provides it through the pipe), while the latter is not treated specially, thus interpreted as a file name to read the key from. Unsurprisingly, such a file does not seem to exist in your current directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/215450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
215,461 | I have tried, unsuccessfully, to transfer about 50 GB of files from a Redhat Linux variant to my Debian 8.1. I would like to find ways other than an external HDD to move the data. There are USB3 connections and HDMI on both machines but nothing else. I am not allowed to install BTsync to transfer the files quickly between them. How can you mass-transfer big files easily between two Linux boxes of different variants? | The fact that one machine is running Red Hat and the other Debian won't cause you any problems. For most intents and purposes, the differences between distributions are insignificant. Realistically, you have two options for your data transfer: Using a removable disk, connected using USB or eSATA or similar. Using the network. Once both machines can connect to one another over the network, you can use any one of a variety of tools to do the file transfer. You mentioned that you cannot use BitTorrent Sync, but something like rsync may well be an option or, failing that, sftp or scp . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
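For a 50 GB transfer, rsync over ssh is the usual pick because an interrupted run resumes where it stopped. A sketch; the host name and paths are placeholders:

rsync -avP /data/to/move/ user@debian-box:/destination/
# -a  archive mode: keep permissions, times, symlinks
# -P  show progress and keep partial files, so a rerun can resume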
215,466 | Our Nagios server NTP has isn't working right. First here is the issues: root@ccsd-lx-noc03 /var/log> tail -n 10000 messages | grep "NTP"Jul 5 16:19:36 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;WARNING;HARD;4;NTP WARNING: Offset 53.03026778 secsJul 5 16:20:49 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.96075022 secsJul 5 16:20:50 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.95908391 secsJul 5 16:22:49 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.96072233 secsJul 5 16:23:40 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.96058169 secsJul 5 16:24:20 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;WARNING;HARD;4;NTP WARNING: Offset 53.01928848 secsJul 5 16:24:44 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.961512 secsJul 5 16:25:14 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.9693791 secsJul 5 16:26:01 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.96211889 secsJul 5 16:26:18 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -71.26003572 secsJul 5 16:27:10 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -71.26059958 secsJul 5 16:27:20 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;WARNING;HARD;4;NTP WARNING: Offset 53.03374252 secsJul 5 16:27:32 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -71.26115555 secsJul 5 16:28:00 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.96324414 secsJul 5 16:28:19 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;WARNING;HARD;4;NTP WARNING: Offset 53.03296909 secsJul 5 16:28:25 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -84.96396494 secsJul 5 16:29:09 ccsd-lx-noc03 nagios: SERVICE ALERT: localhost;NTP2-localhost;CRITICAL;HARD;4;NTP CRITICAL: Offset -71.26274931 secs Next, I am no the admin, he departed a few weeks ago, trying to keep things in order. On the crontab file I see this: root@ccsd-lx-noc03 /data/nagios/var> crontab -l59 * * * * /usr/sbin/ntpd -q > /dev/null 2>&1 How do I go about fixing this? 
ntpdate -d time.ccsd.net 5 Jul 17:58:48 ntpdate[5098]: ntpdate [email protected] Wed Jun 18 21:20:36 UTC 2014 (1)Looking for host time.ccsd.net and service ntphost found : ns1.ccsd.nettransmit(206.194.10.13)receive(206.194.10.13)transmit(206.194.10.13)receive(206.194.10.13)transmit(206.194.10.13)receive(206.194.10.13)transmit(206.194.10.13)receive(206.194.10.13)server 206.194.10.13, port 123stratum 2, precision -23, leap 00, trust 000refid [206.194.10.13], delay 0.02682, dispersion 0.00015transmitted 4, in filter 4reference time: d9444d0c.ef1c4dc0 Sun, Jul 5 2015 17:40:44.934originate timestamp: d94450f9.d4061577 Sun, Jul 5 2015 17:57:29.828transmit timestamp: d944514e.d51c3225 Sun, Jul 5 2015 17:58:54.832filter delay: 0.02911 0.02684 0.02730 0.02682 0.00000 0.00000 0.00000 0.00000filter offset: -85.0055 -85.0048 -85.0046 -85.0048 0.000000 0.000000 0.000000 0.000000delay 0.02682, dispersion 0.00015offset -85.004863 5 Jul 17:58:54 ntpdate[5098]: step time server 206.194.10.13 offset -85.004863 sec | The fact that one machine is running Red Hat and the other Debian won't cause you any problems. For most intents and purposes, the differences between distributions are insignificant. Realistically, you have two options for your data transfer: Using a removable disk, connected using USB or eSATA or similar. Using the network. Once both machines can connect to one another over the network, you can use any one of a variety of tools to do the file transfer. You mentioned that you cannot using BitTorrent Sync but something like rsync may well be an option or, failing, that sftp or scp . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123089/"
]
} |
215,530 | My shell script looks like this: #!/bin/bash
for file in *.fasta
do
signalp $file > $file.txt
done In the working folder I have 18,000 .fasta files. I want to run each through the signalp program. I guess it's too many files in the folder, but I don't know how to adjust my code. Any help? | You can use find : find . -maxdepth 1 -type f -exec sh -c 'signalp "$1" >"$1".txt' _ {} \; -maxdepth 1 will make find search for files ( -type f ) only in the current directory; sh -c 'signalp "$1" >"$1".txt' will execute the signalp command on all the files found and save the output to files named by adding .txt to the original filenames. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/215530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123123/"
]
} |
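Since the 18,000 runs are independent, they can also be executed several at a time. A sketch using xargs; the -P flag is a GNU extension, so adjust the job count to your CPU or drop it to run serially:

find . -maxdepth 1 -type f -name '*.fasta' -print0 |
    xargs -0 -n1 -P4 sh -c 'signalp "$1" > "$1".txt' _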
215,558 | While attempting to SSH into a host I received the following message from xauth : /usr/bin/xauth: timeout in locking authority file /home/sam/.Xauthority NOTE: I was trying to remote display an X11 GUI via an SSH connection so I needed xauth to be able to create a $HOME/.Xauthority file successfully, but as that message was indicating, it was clearly not. Attempts to run any X11 based apps, such as xeyes were greeted with this message: $ xeyesX11 connection rejected because of wrong authentication.Error: Can't open display: localhost:10.0 How can I resolve this issue? | Running an strace on the remote system where xauth is failing will show you what's tripping up xauth . For example $ strace xauth liststat("/home/sam/.Xauthority-c", {st_mode=S_IFREG|0600, st_size=0, ...}) = 0open("/home/sam/.Xauthority-c", O_WRONLY|O_CREAT|O_EXCL, 0600) = -1 EEXIST (File exists)rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0nanosleep({2, 0}, 0x7fff6c4430e0) = 0open("/home/sam/.Xauthority-c", O_WRONLY|O_CREAT|O_EXCL, 0600) = -1 EEXIST (File exists)rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0nanosleep({2, 0}, 0x7fff6c4430e0) = 0open("/home/sam/.Xauthority-c", O_WRONLY|O_CREAT|O_EXCL, 0600) = -1 EEXIST (File exists)rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0 So xauth is attempting to open a file and it already exists. The culprit file is /home/sam/.Xauthority-c . We can confirm the presence of this file on the remote system: $ ls -l .Xauthority*-rw------- 1 sam sam 55 Jul 12 22:04 .Xauthority-rw------- 1 sam sam 0 Jul 12 22:36 .Xauthority-c-rw------- 1 sam sam 0 Jul 12 22:36 .Xauthority-l The fix As it turns out. Those files are lock files for .Xauthority , so simply removing them resolves the issue. $ rm -fr .Xauthority-* With the files deleted, exit from the SSH connection and then reconnect. This will allow xauth to re-run successfully. $ ssh -t skinner ssh sam@blackbirdWelcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-44-generic x86_64) * Documentation: https://help.ubuntu.com/Last login: Sun Jul 12 22:37:54 2015 from skinner.bubba.net$ Now we're able to run xauth list and X11 applications without issue. $ xauth listblackbird/unix:10 MIT-MAGIC-COOKIE-1 cf01f793d2a5ece0ea58196ab5a7977a The GUI $ xeyes Alternative method to resolve the issue I came across this post titled: xauth: error in locking authority file .Xauthority [linux, ssh, X11] which mentions the use of xauth -b to break any lock files that may be hanging around. xauth 's man page seems to back this up: -b This option indicates that xauth should attempt to break any authority file locks before proceeding. Use this option only to clean up stale locks. References Dealing with xauth “error in locking authority file” errors | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/215558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
215,598 | EDIT : The problem was that Apple uses permissions to mark backups and prevents you from modifying them (probably a security feature). By using chmod -RN <dir> I removed ACL data from all the folders with important data and that allowed me to make myself the owner and apply the appropriate permissions. Original question I have an extremely large backup (>700GB) that now has the wrong permissions (my UID changed during clean install, long story) and I need to change them. The time-consuming option is to manually go through each folder and change the permissions but that will take ages. I want to use chown to make myself the owner of all my important data and then use chmod 700 on all those folders to give rwx permissions to only me. The ideal solution is some method of using find to recursively look for folders matching a regex (my current one is .*/[DCV].*|Pictures|M[ou].* ) and then make my UID the owner and set the permissions to 700. The important bit that I can't grasp: However, when I try to run chown Me DirectoryName I get chown: DirectoryName: Operation not permitted . Everything I find is related to changing the permissions of a file and not a directory. Maybe I'm looking at this the wrong way? Something tells me there isn't a way of giving my UID rwx and --- to everyone else. How can I achieve this? I'm running Mac OS X 10.10.3. I know that this is a UNIX/Linux forum (and I'm running Mac) but this question is a lot more about using the shell, chown , chmod , and permissions and any solutions posted here will be applicable to any UNIX-based OS. It would be preferable if the posted solutions would make my older backups reappear in Time Machine. Thanks to all who have promptly replied, but chown just doesn't seem to work on directories for some reason. Is the fact that this is a .sparsebundle disk image on a network drive relevant? I assumed it would be the same as on any external drive. | I may have misunderstood. But you can recursively use chmod and chown, e.g. chown -R username:username /path/directory To recursively apply permission 700 you can use: chmod -R 700 /path/directory (note the capital -R for recursive). Of course, the above is for Linux, so I'm not sure if Mac OS X is the same. EDIT: Yeah, sorry, I forgot to mention you need to be root to chown something; I just assumed you knew this... my bad. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/215598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50840/"
]
} |
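Pulling together the pieces from the question's EDIT, the whole repair is three recursive passes from the backup's top directory. A sketch for OS X's BSD userland (chmod -N, which strips ACLs, is OS X-specific, and the path is a placeholder for your own data directory):

sudo chmod -RN /Volumes/Backup/ImportantData      # drop the ACLs Apple uses to lock backups
sudo chown -R "$(id -un)" /Volumes/Backup/ImportantData
sudo chmod -R 700 /Volumes/Backup/ImportantData   # rwx for you, nothing for anyone else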