source_id | question | response | metadata
---|---|---|---
257,856 | How do I install an older version of Apache httpd on my CentOS 6 machine? When I do:

    sudo yum --showduplicates list httpd | expand

I get:

    file:///media/project/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/project/repodata/repomd.xml
    Trying other mirror.
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: pubmirrors.dal.corespace.com
     * extras: pubmirrors.dal.corespace.com
     * updates: repos.dfw.quadranet.com
    Available Packages
    httpd.x86_64    2.2.15-45.el6.centos      base
    httpd.x86_64    2.2.15-47.el6.centos      updates
    httpd.x86_64    2.2.15-47.el6.centos.1    updates

The current stable release of Apache is 2.4.18, and I need to install an older version, 2.2.26 to be exact. The version I see available to me is 2.2.15. Do I have to add another YUM repo to my machine? When I try:

    sudo yum install httpd-2.2.26

I get a message stating:

    No package httpd-2.2.26 available

The other option, I guess, would be to build it on my machine from source. | Simply:

    yum downgrade httpd-<version-number>

The version must already be available in the repository, which you can verify with:

    yum list --showduplicates httpd

You might then encounter dependency problems: an older version of httpd may depend on an older package that has been obsoleted. In that case, you must remove the depending packages. If the version you are looking for isn't available in the repo, and you can't find the RPM via an online RPM search, build from source. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257856",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3811/"
]
} |
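A sketch tying the answer above to the asker's own output; the version string below comes from the question's yum listing, and the downgrade only works for versions a configured repo still carries:

```sh
# List every version the repos offer, then pin one of them.
yum --showduplicates list httpd | expand
# Downgrade to a version shown in the listing (example from the question's output):
sudo yum downgrade httpd-2.2.15-45.el6.centos
# 2.2.26 is not in the CentOS 6 repos, so it would have to come from a
# third-party repo or a source build.
```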
257,888 | I have a couple of questions about systemd. I'm having trouble consistently getting my script to run once the network interface is up. I have tried Requires and After as seen below, but it is inconsistent in waiting for the network to be up. Am I using the right service and implementing it correctly? To bypass this right now I am running a ping-check loop, which is very inefficient and hackish. Any advice would be great. Thanks!

    [Unit]
    Description=PBU installer
    Requires=network-online.service
    After=network-online.service

    [Service]
    Type=oneshot
    ExecStart=/home/pbu/current/scripts/pbu-unpack.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target

| I solved this problem by looking at the output of:

    systemctl list-units --no-pager

That showed me many units that I didn't expect, like all the network devices!

    sys-devices-virtual-net-lan0.device loaded active plugged /sys/devices/virtual/net/lan

So I added

    BindsTo=sys-devices-virtual-net-lan0.device
    After=sys-devices-virtual-net-lan0.device

to my unit service file, and then my service didn't start until lan0 was available. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139796/"
]
} |
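For reference, a hedged alternative to binding to one specific device: most systemd setups expose network-online.target (a target, not the .service the question uses), and it only waits properly if the distribution's wait-online unit is enabled. The unit names below exist only if your network manager provides them:

```sh
# See which network-related targets/units exist on this machine:
systemctl list-units --type=target --no-pager | grep -i network
# Enable whichever wait-online unit applies, e.g. one of:
sudo systemctl enable systemd-networkd-wait-online.service
sudo systemctl enable NetworkManager-wait-online.service
# then use Wants=network-online.target and After=network-online.target in the unit.
```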
257,943 | I am wondering if I can keep the entries in /etc/ld.so.conf sorted. My ld.so.conf now looks like this:

    /usr/X11R6/lib64/Xaw3d
    /usr/X11R6/lib64
    /usr/lib64/Xaw3d
    /usr/X11R6/lib/Xaw3d
    /usr/X11R6/lib
    /usr/lib/Xaw3d
    /usr/x86_64-suse-linux/lib
    /usr/local/lib
    /opt/kde3/lib
    /usr/local/lib64
    /opt/kde3/lib64
    /lib64
    /lib
    /usr/lib64
    /usr/lib
    /usr/local/cuda-6.5/lib64

When I sort it, it would look like this - can I safely do that, or are there some dependencies which I would "destroy" with the sort?

    /lib
    /lib64
    /opt/kde3/lib
    /opt/kde3/lib64
    /usr/X11R6/lib
    /usr/X11R6/lib/Xaw3d
    /usr/X11R6/lib64
    /usr/X11R6/lib64/Xaw3d
    /usr/lib
    /usr/lib/Xaw3d
    /usr/lib64
    /usr/lib64/Xaw3d
    /usr/local/cuda-6.5/lib64
    /usr/local/lib
    /usr/local/lib64
    /usr/x86_64-suse-linux/lib
    include /etc/ld.so.conf.d/*.conf

| The entries in /etc/ld.so.conf are searched in order. Therefore, order matters. This only matters if the same library name (precisely speaking, the same SONAME) is present in multiple directories. If there are directories that you are absolutely sure will never contain the same library, then you can put them in the order you prefer. In particular, this means that directories in /usr/local should come before directories outside /usr/local, since the point of these directories is to have priority over the default system files. Among distribution-managed directories, it probably doesn't matter. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144303/"
]
} |
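A sketch for checking whether the reordering is actually safe, per the answer above: it lists library file names that appear in more than one configured directory (GNU find assumed; the include line and missing directories are skipped):

```sh
grep -v '^include' /etc/ld.so.conf | while IFS= read -r d; do
    [ -d "$d" ] && find "$d" -maxdepth 1 -name 'lib*.so*' -printf '%f\n'
done | sort | uniq -d   # any output means the same name exists in two directories
```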
257,969 | I ran into some issues when running some installation scripts where they complained of bad interpreter. So I made a trivial example, but I can't figure out what the problem is; see below.

    #!/usr/bin/env bash
    echo "hello"

Executing the script above results in the following error:

    [root@ech-10-24-130-154 dc-user]# ./junk.sh
    bash: ./junk.sh: /usr/bin/env: bad interpreter: No such file or directory

The /usr/bin/env file exists, see below:

    [root@ech-10-24-130-154 dc-user]# ls -l /usr/bin/env
    lrwxrwxrwx 1 root root 13 Jan 27 04:14 /usr/bin/env -> ../../bin/env
    [root@ech-10-24-130-154 dc-user]# ls -l /bin/env
    -rwxr-xr-x 1 root root 23832 Jul 16 2014 /bin/env
    [root@ech-10-24-130-154 dc-user]#

If I alter the script to use the regular shebang #!/bin/bash it works no problem. #!/bin/env bash works as well. What is missing from the environment to allow the portable shebang to work? ls -lL /usr/bin/env returns ls: cannot access /usr/bin/env: No such file or directory, so I guess I need to alter the symbolic link? Can I point it to /bin/env? env --version is 8.4 and the OS is Red Hat Enterprise Linux Server release 6.6. | ls -lL /usr/bin/env shows that the symbolic link is broken. That explains why the shebang line isn't working: the kernel is trying, and obviously failing, to execute a dangling symbolic link. /usr/bin/env -> ../../bin/env is correct if /usr and /usr/bin are both actual directories (not symlinks). Evidently this isn't the case on your machine. Maybe /usr is a symbolic link? (Evidently it isn't a symbolic link to /, otherwise /usr/bin/env would be the same file as /bin/env, not a symbolic link.) You need to fix that symbolic link. You can make it an absolute link:

    sudo ln -snf /bin/env /usr/bin/env

You can make it a relative link, but if you do, make sure it's correct. Switch to /usr/bin and run ls -l relative/path/to/bin/env to confirm that you've got it right before creating the symlink. This isn't a default RHEL setup, so you must have modified something locally. Try to find out what you did and whether that could have caused other similar problems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451/"
]
} |
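Since the answer suspects more breakage of the same kind, a quick sweep for other dangling links (GNU find's -xtype l matches symlinks whose target is missing):

```sh
find /usr/bin /bin /usr/sbin /sbin -xtype l
```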
257,970 | I'm using the HP-UX OS and want to reuse previously typed commands, the way I use Ctrl + r in Linux, for easy access on the HP-UX command line. | If you're not familiar with "vi" or "emacs" prompt commands, the best option is the fc shell built-in; look at the fc section in the man sh-posix manpage. Use the mouse to copy-paste commands. The HP-UX shell /usr/bin/sh is the "POSIX shell", whose command prompt is close to the Korn shell; by default it is set to the vi command mode. Esc will put the prompt in "command mode"; that mode is similar to the vi command mode. Then you can hit k to move backwards or j to move forward in the history. i, a, A, cw or cW will put the prompt back in edit mode (cw means change word). "/pattern" will search for the first command matching "pattern". If you type "n" (n means "next") it will look backwards for the next occurrence of "pattern" in the history; "N" will look in the other direction. If you prefer the emacs mode like in bash, use the set -o emacs command. Arrow keys will not work; use Ctrl commands instead:

    Ctrl-p  previous command
    Ctrl-n  next command
    Ctrl-f  cursor moves forward
    Ctrl-b  cursor moves backwards
    Ctrl-a  beginning of line
    Ctrl-e  end of line
    Ctrl-r  search for a string in the history (another Ctrl-r will go to the next occurrence)

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125769/"
]
} |
257,986 | Please explain the usage of ${#1} below:

    getlable () {
        if (( ${#1} == 0 )); then
            test="-"
        else
            test="${1}"
        fi
    }

| ${#1} is the length (in number of characters) of $1, which is the first argument to the function. So (( ${#1} == 0 )) is a convoluted way to test whether the first argument is empty (or unset; unset parameters appear as empty when expanded) or not. To test for an empty parameter, the canonical way is:

    [ -z "$1" ]

But there, more likely the intent was to check whether an argument was provided to the function, in which case the syntax would be:

    [ "$#" -eq 0 ]

(or (($# == 0)) if you want to make your script ksh/bash/zsh specific). In both cases, however, Bourne-like shells have shortcuts for that:

    test=${1:--} # set test to $1, or "-" if $1 is empty or not provided
    test=${1--}  # set test to $1, or "-" if $1 is not provided

Now, if the intent is to pass that to cat or another text utility so that - (meaning stdin) is passed when no argument is provided, then you may not need any of that at all. Instead of:

    getlable() {
      test=${1--}
      cat -- "$test"
    }

Just do:

    getlable() {
      cat -- "$@"
    }

The list of arguments to the function will be passed as-is to cat. If there's no argument, cat will receive no argument (and then read from stdin as if it had been a single - argument). And if there's one or more arguments, they will all be passed as-is to cat. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77910/"
]
} |
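A small demonstration of the two expansions the answer contrasts; only the :- form treats "set but empty" like "unset":

```sh
set --              # no arguments at all
echo "${1--}"       # -> "-"  ($1 is unset)
set -- ""           # one empty argument
echo "${1--}"       # -> ""   ($1 is set, even though empty)
echo "${1:--}"      # -> "-"  (:- also covers the empty case)
```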
257,993 | I want to achieve the following with sed: each word in the file that contains a run of s characters should be replaced (the whole word) with the target string gggg.

    echo "duwdbnhb ssssssmnfkejfnei" | sed s'/ssssss*/gggg/g'
    duwdbnhb ggggmnfkejfnei

should be:

    duwdbnhb gggg

Remark - the run could be any number of s characters (for example ss or sss or ssssss ...). Example:

    echo "duwdbnhb sssmnfkejfnei" | sed s'/s*/gggg/g'
    duwdbnhb gggg

Example A:

    echo "rf3 f34kf3ein3e ssghdwydgeug swswww ssswjdbuyhb" | sed s'/ss.*/gggg/'
    rf3 f34kf3ein3e gggg

but it should print:

    rf3 f34kf3ein3e gggg swswww gggg

Example B:

    echo "rf3 f34kf3ein3e ssghdwydgeug swswww ssswjdbuyhb" | sed s'/s.*/gggg/'
    rf3 f34kf3ein3e gggg

but it should print:

    rf3 f34kf3ein3e gggg gggg gggg

| | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257993",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153544/"
]
} |
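One hedged reading of example A in the question above: replace any whole word containing ss, bounding the match with alphanumerics instead of .* (which eats the rest of the line):

```sh
echo "rf3 f34kf3ein3e ssghdwydgeug swswww ssswjdbuyhb" |
    sed 's/[[:alnum:]]*ss[[:alnum:]]*/gggg/g'
# -> rf3 f34kf3ein3e gggg swswww gggg
# For example B (any word containing a single s), drop one s:
#    sed 's/[[:alnum:]]*s[[:alnum:]]*/gggg/g'
```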
258,074 | I'm running Debian Jessie 8.2. I have a Bluetooth USB dongle connected to my machine. I run sudo bluetoothctl -a, then do the following:

    [NEW] Controller 5C:F3:70:6B:57:60 debian [default]
    Agent registered
    [bluetooth]# scan on
    Discovery started
    [CHG] Controller 5C:F3:70:6B:57:60 Discovering: yes
    [bluetooth]# devices
    [NEW] Device 08:DF:1F:A7:B1:7B Bose Mini II SoundLink
    [bluetooth]# pair 08:DF:1F:A7:B1:7B
    Attempting to pair with 08:DF:1F:A7:B1:7B
    [CHG] Device 08:DF:1F:A7:B1:7B Connected: yes
    [CHG] Device 08:DF:1F:A7:B1:7B UUIDs:
        0000110b-0000-1000-8000-00805f9b34fb
        0000110c-0000-1000-8000-00805f9b34fb
        0000110e-0000-1000-8000-00805f9b34fb
        0000111e-0000-1000-8000-00805f9b34fb
        00001200-0000-1000-8000-00805f9b34fb
    [CHG] Device 08:DF:1F:A7:B1:7B Paired: yes
    Pairing successful
    [CHG] Device 08:DF:1F:A7:B1:7B Connected: no
    [bluetooth]# trust 08:DF:1F:A7:B1:7B
    [CHG] Device 08:DF:1F:A7:B1:7B Trusted: yes
    Changing 08:DF:1F:A7:B1:7B trust succeeded
    [bluetooth]# connect 08:DF:1F:A7:B1:7B
    Attempting to connect to 08:DF:1F:A7:B1:7B
    Failed to connect: org.bluez.Error.Failed

But I can connect to my iPhone this way. Why can't I connect to my Bose Mini II SoundLink speaker? | This may be due to the pulseaudio-module-bluetooth package not being installed. Install it if it is missing, then restart PulseAudio:

    sudo apt install pulseaudio-module-bluetooth
    pulseaudio -k
    pulseaudio --start

If the issue is not due to the missing package, the problem in this case is that PulseAudio is not catching up. A common solution to this problem is to restart PulseAudio. Note that it is perfectly fine to run bluetoothctl as root while PulseAudio runs as a user. After restarting PulseAudio, retry the connect. It is not necessary to repeat the pairing. Continue with the second part only if the above does not work for you: if restarting PulseAudio does not work, you need to load module-bluetooth-discover:

    sudo pactl load-module module-bluetooth-discover

The same load-module command can be added to /etc/pulse/default.pa. If that still does not work, or you are using PulseAudio's system-wide mode, also load the following PulseAudio modules (again, these can be loaded via your default.pa or system.pa):

    module-bluetooth-policy
    module-bluez5-device
    module-bluez5-discover

| {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/258074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148559/"
]
} |
258,221 | I'm facing a problem when I pull a file from another machine to mine using scp. The file is transferred successfully, but each time it asks for a password. I want to run this scp command as a cron job; how can I handle the password for this automation?

    scp [email protected]:/usr/etc/Output/*.txt /usr/abc/
    [email protected]'s password:

| You can do the following: (if not already done) generate a set of public and private ssh keys on your machine for your user with:

    $ ssh-keygen

Answer the questions in order to generate the set of keys. Then copy your public key to the remote host:

    $ ssh-copy-id remote-user@remote-host

This will enable logging in from your username@host to remote-user@remote-host without being prompted for a password. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153531/"
]
} |
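A cron-friendly sketch of the answer above: a dedicated key with no passphrase, so no prompt ever appears (the key path and remote address are examples, not names from the question):

```sh
ssh-keygen -t rsa -N "" -f ~/.ssh/pullfiles_key          # passphrase-less key
ssh-copy-id -i ~/.ssh/pullfiles_key.pub user@remote-host # install the public half
scp -i ~/.ssh/pullfiles_key 'user@remote-host:/usr/etc/Output/*.txt' /usr/abc/
```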
258,232 | I have a game server and someone is spamming it with bots. The spambot client makes the handshake with my server using UDP connections. It does this through a list of proxies. Basically, the spambot client sends lots of UDP packets to my server and spams it with bots. Now I've got 6 big lists of proxies that I know the person who spambots me uses. I can write a shell script to block every IP from every list. Every IP is on a new line, so it's pretty easy to do with a for loop. The problem is that I'm concerned about the performance of my server. If I block 15k IP addresses, is that going to affect my server's performance? At the moment, I run CentOS 7. Can you tell me if iptables is the right way to go, or what other alternatives I should try? Please write the commands, too. I just want my server to stop responding to these IP addresses and not establish any connections with them. | For such a large number of IPs you should use the ipset module. ipset creates data sets on which iptables can react; it can easily handle tens of thousands of entries. Make sure you have the EPEL repo enabled, and then install ipset via:

    yum install ipset

An example:

    ipset -N blockedip iphash

creates a set called 'blockedip' in format 'iphash' (there are different formats; this one is for IPs only). With ipset -A you can add data (in this case IPs) to the set:

    ipset -A blockedip 192.168.1.1
    ipset -A blockedip 192.168.1.2

and so on... Or, to batch-create it without having to run one ipset invocation per IP address, assuming your big-file.list is a list of IPv4 addresses, one per line:

    ipset -N blockedip iphash
    sed 's/^/add blockedip /' < big-file.list | ipset restore

With the following iptables command you can tell the kernel to drop all packets coming from any of the sources in this set:

    iptables -A INPUT -m set --set blockedip src -j DROP

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142117/"
]
} |
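Two hedged follow-ups to the answer above: spot-checking the set and persisting it across reboots (the dump path is an example):

```sh
ipset test blockedip 192.168.1.1   # confirm an address is covered by the set
ipset save > /etc/ipset.conf       # dump the whole set ...
ipset restore < /etc/ipset.conf    # ... and reload it after a reboot
```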
258,246 | I created a /home/myname/.pam_environment file containing:

    PATH DEFAULT=${PATH}:${HOME}/apps/flyway

But my new path doesn't end with /home/myname/apps/flyway. Why not?

    $ echo $PATH
    /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/apps/flyway

| This is apparently an old issue (as in 15 years old). The "fix" at the time was:

    * Note that HOME may not be useful in pam_environment, closes: #109281

The Linux PAM site also says as much: note that many environment variables that you would like to use may not be set by the time the module is called. For example, HOME is used below several times, but many PAM applications don't make it available by the time you need it. Apparently, someone bothered to patch pam_env for it over on Fedora. Anyway, on Debian-based systems, a crude workaround is to set:

    HOME=/home/@{PAM_USER}

before referencing ${HOME}. This could be done in /etc/security/pam_env.conf, for example. Of course, this will break where the user's home directory is not /home/$USER. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90787/"
]
} |
258,284 | Is it possible to get the current umask of a process? From /proc/<pid>/... for example? | Beginning with Linux kernel 4.7 ( commit ), the umask is available in /proc/<pid>/status:

    $ grep '^Umask:' "/proc/$$/status"
    Umask: 0022

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73160/"
]
} |
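Extending the answer above to every process at once; a small sketch (kernel 4.7 or later; processes can vanish mid-loop, hence the stderr redirect):

```sh
for s in /proc/[0-9]*/status; do
    awk -v f="$s" '/^Umask:/ { print f, $2 }' "$s" 2>/dev/null
done
```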
258,310 | I've tried putting Parted Magic versions on a flash drive using YUMI, but every time I get a missing-file error stating:

    This application has raised an unexpected error and must abort.
    [45] File or directory does not exist.
    os.debian.52

The flash drive is working and formatted as FAT32, as verified through GParted. YUMI also works successfully when I put Kali Linux on it. As an alternative I tried multibootusb, which successfully puts Parted Magic on the USB drive, but then apparently doesn't do it correctly, because after booting, Parted Magic cannot find the SQFS file and is unable to load the GUI. According to this thread it may be a common problem with creating USB utilities. If there's a more appropriate forum for this, just let me know. My OS is Ubuntu 15.04. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153679/"
]
} |
258,316 | I am using systemd to handle some tasks, and I have a service file which works fine once enabled with systemctl. Now I would like to enable it automatically from the first boot. I know that putting a replacement file into /etc/systemd/system/ overrides the behaviour of the file with the same name in /lib/systemd/system/. Is there a way to enable a service file automatically just by putting it in some directory? | IMPORTANT NOTE: The following works for me under Ubuntu. It should work as-is under Debian. RPM-based distros prevent the auto-start by default, but it may still get you closer to your goal. In most cases, you want to install it in the multi-user.target using the install section as follows:

    [Install]
    WantedBy=multi-user.target

This means the your-package.postinst script will automatically start the daemon for you. Note that if you have your own your-package.postinst script, you have to make sure to include the Debian helper, as in:

    #!/bin/sh
    #DEBHELPER#
    ...your own script here...

Without the #DEBHELPER# pattern, the packager will not add the default code, and as a result your daemon won't get enabled and started automatically. The code added there will enable and start the service:

    systemctl enable <service-name>
    systemctl start <service-name>

unless the service is static (does not have an [Install] section as shown above). A static service requires a special command line to be enabled, and that's not available by default:

    systemctl add-wants multi-user.target <service-name>

As we can see, the command includes multi-user.target, which the default #DEBHELPER# (systemd, really) has no clue about unless you have an [Install] section. The package must also be built with the systemd extension. This means having the following in your debian/rules file:

    %:
    	dh $@ --with systemd --parallel

That should get you set. Just in case, if you wanted to not start the newly installed service, you can actually prevent that by adding the following:

    override_dh_installsystemd:
    	dh_installsystemd --no-start --no-disable

Mix and match as required by your system. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118055/"
]
} |
258,320 | I have a file with lines such as:

    ....
    pattern1 100 200 300
    pattern2 300 400 400
    pattern1 300 900 700
    pattern1 200 500 900
    ...

As shown in the above example, there are some lines where pattern2 follows pattern1, but not all. I would like to match pattern1, then check if the next line has pattern2, and if it does, alter the next number field by multiplying it by a constant factor. I tried using getline with awk, but it erases the lines with pattern1 from the resulting output:

    awk '/pattern1/{getline; if($1==pattern2) $(NF-2)*=0.889848406214}1' infile.dat

Any suggestions on how I can accomplish this without altering anything else in the input file? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8141/"
]
} |
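A hedged sketch for the question above: print every line, and after a pattern1 line pull in the next one, adjusting it only when it starts with pattern2. The constant and field position are taken from the asker's attempt; note that awk rebuilds the modified line with single-space separators, and consecutive pattern1 lines are not re-tested for the lookahead:

```sh
awk '1
     /pattern1/ {
         if ((getline) > 0) {
             if ($1 == "pattern2") $(NF-2) *= 0.889848406214
             print
         }
     }' infile.dat
```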
258,332 | I have a server; it's outdated, running Debian Lenny, and yes, I know that is probably half the problem. It also has phpMyAdmin and ProFTPD. Again, I get it, all bad signs. But for the life of me, I cannot figure out how this user is logging in, adding files and executing commands. They are able to start screen sessions and type things like nano file.sh, then create a script and ./file.sh to execute it. Does this mean they have SSH access? I don't understand. I check all of my log files, and nothing anywhere shows successful authentication. I check users, who, last, every little command I can type - nothing shows any signs of someone being logged in. Every now and again, I notice they create new directories, and the owner is 500 or 1XXX, but these accounts don't show up when I look for them. Is there something I can do to figure out wtf is going on? We are going to wipe the server clean, don't get me wrong, but I'd like to know what happened exactly so I can avoid this sort of problem in the future. I don't want any recommendations regarding "don't use phpmyadmin, old unsupported distros, ftp, etc."; on our new server we won't have anything insecure, and will use passworded SSH auth keys, etc. I just want a bit of insight on how I can know when the user is logged in, and where they logged in from. Granted, I'm probably not giving enough information, but maybe something will click for someone? Thanks. | Most scripted and manual break-ins do the following:

- clean up log entries and similar traces of the break-in
- install a rootkit, which allows entry to the system outside of default server programs
- replace default programs (like ps, netstat, ls, etc.) with manipulated versions which hide any activity of the above-mentioned rootkit (i.e. ps won't show the running rootkit process)

Sometimes those attacks are faulty and do leave traces behind. But in any case: you cannot trust any diagnostic tools you have on the system. If you want to play around a bit and learn, you could:

- install and run 'rkhunter' [*], for example, which checks for known rootkits, but you cannot trust the output without having run it at least once before the break-in happened, and hoping that the attacker ignored the rkhunter install on the system (did not manipulate rkhunter itself)
- boot from a rescue CD/USB, mount the system's disks and look around with the binaries of the rescue system, comparing md5sums of binaries with the stock versions
- load the system into a VM and inspect the network traffic

tl;dr: It is nearly impossible to find out the attack vector on such an open system. One way or another: please be responsible, take the system off the internet ASAP and set it up newly from scratch. [*] or other IDS systems, there are many. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153803/"
]
} |
258,341 | I've searched everywhere. Tried echo and print. Tried single and double quotes. I have parsed data assigned to a variable and would like to evaluate whether there is a variable reference within it. I will then replace the variable with a wildcard and search for the file. Example:

    var="file.$DATE.txt"
    ### Where it goes wrong - needs to identify that $DATE is within the $var variable.
    test=$(echo "$var"|grep '\$')
    if [[ $test ]]
    then
        ### I would use whatever fix is discovered here as well
        test=$(echo $test|sed 's/\$[a-zA-Z]*/\*/')
    fi
    ### (Actually pulling from remote machine to local)
    cat $test > /tmp/temporary.file

Here is at least one of my many failures:

    PROMPT> file=blah.$DATE
    PROMPT> test=$(echo "$file"|grep '\$')
    PROMPT> echo $test
    PROMPT>
    PROMPT>

I know it has something to do with expansion, but have no idea how to work it out. Any help would be appreciated. Thanks! | If you need a literal $date inside the variable var:

    var='file.$date.txt'

That will keep the $ inside the variable:

    $ echo "$var" | grep '\$'
    file.$date.txt

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136473/"
]
} |
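A quick demonstration of the quoting difference behind the answer above: double quotes expand $DATE at assignment time, single quotes keep it literal so grep can still find the $:

```sh
DATE=2016-01-30
a="file.$DATE.txt"   # -> file.2016-01-30.txt  (expanded now, no $ left to find)
b='file.$DATE.txt'   # -> file.$DATE.txt       (literal, grep '\$' matches)
printf '%s\n' "$a" "$b"
```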
258,361 | I have a file that includes details about VMs running in a hypervisor. We run a command and redirect the output to a file, and the data is available in the format below.

    Virtual Machine : OL6U5
    ID : 0004fb00000600003da8ce6948c441bb
    Status : Running
    Memory : 65536
    Uptime : 17835 Minutes
    Server : MyOVS1.vmorld.com
    Pool : HA-POOL
    HA Mode: false
    VCPU : 16
    Type : Xen PVM
    OS : Oracle Linux 6

    Virtual Machine : OL6U6
    ID : 0004fb00000600003da8ce6948c441bc
    Status : Running
    Memory : 65536
    Uptime : 17565 Minutes
    Server : MyOVS2.vmorld.com
    Pool : NON-HA-POOL
    HA Mode: false
    VCPU : 16
    Type : Xen PVM
    OS : Oracle Linux 6

    Virtual Machine : OL6U7
    ID : 0004fb00000600003da8ce6948c441bd
    Status : Running
    Memory : 65536
    Uptime : 17835 Minutes
    Server : MyOVS1.vmorld.com
    Pool : HA-POOL
    HA Mode: false
    VCPU : 16
    Type : Xen PVM
    OS : Oracle Linux 6

This output differs from hypervisor to hypervisor, since on some hypervisors we have 50+ VMs running. The file above is just an example from a hypervisor where we have only 3 VMs running, and hence the redirected file is expected to contain information about N VMs. We need to get these details in the below format using awk/sed or a shell script:

    Virtual_Machine ID Status Memory Uptime Server Pool HA VCPU Type OS
    OL6U5 0004fb00000600003da8ce6948c441bb Running 65536 17835 MyOVS1.vmworld.com HA-POOL false 16 Xen PVM Oracle Linux 6
    OL6U6 0004fb00000600003da8ce6948c441bc Running 65536 17565 MyOVS2.vmworld.com NON-HA-POOL false 16 Xen PVM Oracle Linux 6
    OL6U5 0004fb00000600003da8ce6948c441bd Running 65536 17835 MyOVS1.vmworld.com HA-POOL false 16 Xen PVM Oracle Linux 6

| If you have the rs (reshape) utility available, you can do the following:

    rs -Tzc: < input.txt

This gives the output format exactly as specified in the question, even down to the dynamic column widths.

- -T transposes the input data
- -z sizes the columns appropriately from the max in each column
- -c: uses colon as the input field separator

This works for arbitrarily sized tables, e.g.:

    $ echo "Name:Alice:Bob:Carol
    Age:12:34:56
    Eyecolour:Brown:Black:Blue" | rs -Tzc:
    Name   Age  Eyecolour
    Alice  12   Brown
    Bob    34   Black
    Carol  56   Blue
    $

rs is available by default on OS X (and likely other BSD machines). It can be installed on Ubuntu (and the Debian family) with:

    sudo apt-get install rs

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153616/"
]
} |
258,391 | I have a serial port /dev/ttyS2 that is connected to a supervisor. Normally, I use this line to send commands back and forth between the CPU and the supervisor. However, under some settings, I want to redirect the entire console to this port. I can achieve this via a reboot and updating the U-Boot kernel variable to direct console=ttyS2,115200. But is there a way to achieve this without a reboot? | You could launch getty once you've booted to get a serial connection to your system. Note that this will not give you the default output typically seen on your console (kernel panics and other verbosity typically seen on the console but not in normal terminals). But if you are just looking to get a login via serial after boot, this should work:

    /sbin/agetty -L 115200 ttyS2 vt100

That should connect to /dev/ttyS2 at 115200 baud and emulate a vt100 terminal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105228/"
]
} |
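On a systemd-based system, the same login-over-serial result can be had through the template unit for serial consoles; a sketch, with the instance name matching the question's port:

```sh
sudo systemctl start serial-getty@ttyS2.service    # login prompt on /dev/ttyS2, no reboot
sudo systemctl enable serial-getty@ttyS2.service   # keep it across reboots
```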
258,393 | Example: to use Composer locally, I must write:

    php composer.phar

After some local installations of Composer, I want to alias it to just "composer", while keeping the absolute path via the "pwd" command. I tried something like this in my .bashrc file:

    alias composer='php ' . pwd . '/composer.phar'

Tested with these signs: ".", "+", ";", "&&" and nothing, but none works. And nothing found in the Wikipedia article, the official documentation or other Stack questions. | You could add a subshell to your alias:

    alias composer='php $(pwd)/composer.phar'

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146130/"
]
} |
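For reference, the quoting choice in the answer above matters; a sketch of both behaviours:

```sh
alias composer='php "$(pwd)/composer.phar"'  # $(pwd) re-evaluated on every run (quoted against spaces)
alias composer="php $(pwd)/composer.phar"    # $(pwd) frozen once, when the alias is defined
```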
258,420 | I wanted to see how some ASCII art looked in the terminal, so:

    $ cat <<EOF
    > # ____ _
    > # _ _ / __/ ___ _ | |_
    > # | | | |/ / / _` || __|
    > # | |_| |\ \__ (_| || |_
    > # | _,_| \___\ \___,_| \__|
    > # |_/
    > #
    > EOF
    bash: bad substitution: no closing "`" in ` || __|
    # | |_| |\ \__ (_| || |_
    # | _,_| \___\ \___,_| \__|
    # |_/
    #

The # octothorpes were there perchance, but now I'm confused.

    $ cat <<EOF
    > # echo hi
    > EOF
    # echo hi

As expected. However:

    $ cat <<EOF
    > # `echo hello`
    > EOF
    # hello

So bash gets at expanding `` and $( ) before cat does, but it doesn't care about # comments? What's the explanation behind this behaviour? | This is more general than bash. In the POSIX shell, your EOF is referred to as a word, in the discussion of here-documents:

    If no characters in word are quoted, all lines of the here-document shall be
    expanded for parameter expansion, command substitution, and arithmetic
    expansion. In this case, the <backslash> in the input behaves as the
    <backslash> inside double-quotes (see Double-Quotes). However, the
    double-quote character ('"') shall not be treated specially within a
    here-document, except when the double-quote appears within "$()", "``",
    or "${}".

Quoting is done using single quotes, double quotes or the backslash character. POSIX mentions the here-documents in the discussion of quoting:

    The various quoting mechanisms are the escape character, single-quotes, and
    double-quotes. The here-document represents another form of quoting; see
    Here-Document.

The key to understanding the lack of treatment of # characters is the definition of here-documents:

    allow redirection of lines contained in a shell input file

That is, no meaning (other than possible parameter expansion, etc.) is given to the data by the shell, because the data is redirected to another program: cat, which is not a shell interpreter. If you redirected to a shell program, the result would be whatever the shell could do with the data. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258420",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136107/"
]
} |
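The practical fix implied by the answer: quote any part of the here-document delimiter and the body stays completely literal:

```sh
cat <<'EOF'
# `echo hello` and $(date) are passed through untouched
EOF
```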
258,433 | So I've got a persistent program running in the background. Killing it just causes it to restart with a different PID. I'd like to suspend it (put it to sleep without actually killing it). Does kill -9 do this? If not, how should this be done? |

    kill -STOP $PID
    [...]
    kill -CONT $PID

@jordanm adds: Also note that, like SIGKILL (which is what kill -9 sends), SIGSTOP cannot be ignored. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153880/"
]
} |
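A quick check that the suspension took; stopped processes show state T in ps:

```sh
kill -STOP "$PID"
ps -o pid,state,comm -p "$PID"   # the STATE column reads "T" while suspended
kill -CONT "$PID"
```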
258,448 | When I type ps -ef, lots of special kernel-thread processes show up. I am not interested in kernel threads; I am only interested in user processes/threads. Is there a way to hide the kernel threads? | ps output can be filtered in many ways. To see your processes, you could filter by user/UID. The relevant man page entries are below:

    U userlist
        Select by effective user ID (EUID) or name. This selects the processes
        whose effective user name or ID is in userlist. The effective user ID
        describes the user whose file access permissions are used by the
        process (see geteuid(2)). Identical to -u and --user.

    -U userlist
        Select by real user ID (RUID) or name. It selects the processes whose
        real user name or ID is in the userlist list. The real user ID
        identifies the user who created the process, see getuid(2).

    -u userlist
        Select by effective user ID (EUID) or name. This selects the processes
        whose effective user name or ID is in userlist. The effective user ID
        describes the user whose file access permissions are used by the
        process (see geteuid(2)). Identical to U and --user.

To identify kernel vs user threads, it may depend on the kernel version. On my Ubuntu machine (3.5.0-30-generic), I can exclude kernel threads by excluding children of kthreadd (pid = 2). The pid of kthreadd may differ on 2.6 kernels; however, you could just use the relevant pid. As an example, to get a list of all processes that don't have ppid = 2, I do (for options to feed to -o, check the man page):

    ps -o pid,ppid,comm,flags,%cpu,sz,%mem --ppid 2 -N

You could also filter these using grep or awk. The other way (not using ps) to identify kernel threads is to check whether /proc/<pid>/maps or /proc/<pid>/cmdline is empty - both are empty for kernel threads. You'll need root privileges to do this. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106512/"
]
} |
258,503 | I'm wondering about the security of UNIX signals. SIGKILL will kill the process. So, what happens when a non-root user's process sends a signal to a root user's process? Does the process still carry out the signal handler? I followed the accepted answer (gollum's), typed man capabilities, and found a lot of things about the Linux kernel. From man capabilities:

    NAME
        capabilities - overview of Linux capabilities

    DESCRIPTION
        For the purpose of performing permission checks, traditional UNIX
        implementations distinguish two categories of processes: privileged
        processes (whose effective user ID is 0, referred to as superuser or
        root), and unprivileged processes (whose effective UID is nonzero).
        Privileged processes bypass all kernel permission checks, while
        unprivileged processes are subject to full permission checking based
        on the process's credentials (usually: effective UID, effective GID,
        and supplementary group list).

        Starting with kernel 2.2, Linux divides the privileges traditionally
        associated with superuser into distinct units, known as capabilities,
        which can be independently enabled and disabled. Capabilities are a
        per-thread attribute.

| On Linux it depends on the file capabilities. Take the following simple mykill.c source:

    #include <stdio.h>
    #include <sys/types.h>
    #include <signal.h>
    #include <stdlib.h>

    void exit_usage(const char *prog) {
        printf("usage: %s -<signal> <pid>\n", prog);
        exit(1);
    }

    int main(int argc, char **argv) {
        pid_t pid;
        int sig;

        if (argc != 3)
            exit_usage(argv[0]);

        sig = atoi(argv[1]);
        pid = atoi(argv[2]);

        if (sig >= 0 || pid < 2)
            exit_usage(argv[0]);

        if (kill(pid, -sig) == -1) {
            perror("failed");
            return 1;
        }
        printf("successfully sent signal %d to process %d\n", -sig, pid);
        return 0;
    }

Build it:

    gcc -Wall mykill.c -o /tmp/mykill

Now, as user root, start a sleep process in the background:

    root@horny:/root# /bin/sleep 3600 &
    [1] 16098

Now, as a normal user, try to kill it:

    demouser@horny:/home/demouser$ ps aux | grep sleep
    root     16098  0.0  0.0  11652   696 pts/20   S    15:06   0:00 sleep 500
    demouser@horny:/home/demouser$ /tmp/mykill -9 16098
    failed: Operation not permitted

Now, as the root user, change the /tmp/mykill caps:

    root@horny:/root# setcap cap_kill+ep /tmp/mykill

And try again as a normal user:

    demouser@horny:/home/demouser$ /tmp/mykill -9 16098
    successfully sent signal 9 to process 16098

Finally, please delete /tmp/mykill for obvious reasons ;) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106512/"
]
} |
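Two small companions to the answer above for inspecting and undoing the capability, from the same libcap tools the answer already relies on (output wording may vary by libcap version):

```sh
getcap /tmp/mykill            # expect something like: /tmp/mykill = cap_kill+ep
sudo setcap -r /tmp/mykill    # remove the file capability again
```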
258,512 | Basically, I want to "pluck out" the first occurrence of -inf from the parameter list. (The remaining parameters will be passed along to a different command.) The script I have has the following structure:

    #!/bin/sh
    <CODE>
    for POSITIONAL_PARAM in "$@"
    do
        <CODE>
        if [ "$POSITIONAL_PARAM" = '-inf' ]
        then
            <PLUCK $POSITIONAL_PARAM FROM $@>
            break
        fi
        <CODE>
    done
    <CODE>
    some-other-command "$@"
    # end of script

Is there a good way to do this? BTW, even though I am mainly interested in answers applicable to /bin/sh, I am also interested in answers applicable only to /bin/bash. | POSIXly:

    for arg do
      shift
      [ "$arg" = "-inf" ] && continue
      set -- "$@" "$arg"
    done
    printf '%s\n' "$@"

The above code even works in pre-POSIX shells, except the original Almquist shell (read the endnote). Changing the for loop to:

    for arg
    do
      ...
    done

is guaranteed to work in all shells. Another POSIX one:

    for arg do
      shift
      case $arg in
        (-inf) : ;;
        (*) set -- "$@" "$arg" ;;
      esac
    done

With this one, you need to remove the first ( in (pattern) to make it work in pre-POSIX shells. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
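The loops above drop every -inf; since the question asks for only the first occurrence, a hedged variant with a flag:

```sh
found=
for arg do
  shift
  if [ -z "$found" ] && [ "$arg" = "-inf" ]; then
    found=1       # pluck this one; keep any later -inf
    continue
  fi
  set -- "$@" "$arg"
done
some-other-command "$@"
```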
258,544 | I want a guest account, just like in Ubuntu, which has the following features:

- It does not require a password to log in
- A new home folder (in /tmp if possible) is created, with no data, every time
- User data is deleted as soon as he/she logs out
- The user cannot use sudo

I am running GNOME 3.20 on Arch Linux. NOTE: please don't close my question as a duplicate of "Create guest account with restricted access to applications", because that question does not have answers to my 2nd and 3rd points. | It turns out it's quite simple with GDM. I assume you're using GDM, since you're also using GNOME. First, create the guest user account with a blank password:

    sudo useradd -d /tmp/guest -p $(openssl passwd "") guest

The openssl passwd "" will return the hash of the empty string, thereby setting the password to blank. Now, all you need are these two scripts:

/etc/gdm/PostLogin/Default - this is executed after you log in and will create the /tmp/$guestuser (/tmp/guest by default) directory and copy the default files from /etc/skel to it. To change the default username for the guest user, set guestuser to something else at the beginning.

    #!/bin/sh

    guestuser="guest"

    ## Set up guest user session
    if [ "$USER" = "$guestuser" ]; then
        mkdir /tmp/"$guestuser"
        cp /etc/skel/.* /tmp/"$guestuser"
        chown -R "$guestuser":"$guestuser" /tmp/"$guestuser"
    fi

    exit 0

/etc/gdm/PostSession/Default - this is executed after you log out and will remove the /tmp/$guestuser directory and all its contents. Make sure to set guestuser to the same value in both scripts.

    #!/bin/sh

    guestuser="guest"

    ## Clear up the guest user session
    if [ "$USER" = "$guestuser" ]; then
        rm -rf /tmp/"$guestuser"
    fi

    exit 0

Finally, make the two scripts executable:

    sudo chmod 755 /etc/gdm/PostLogin/Default /etc/gdm/PostSession/Default

Now just log out and you will see your new guest user. You can log in by selecting it and hitting Enter when prompted for a password. The guest user won't be able to use sudo, since that is the default for all users anyway. Only users explicitly mentioned in /etc/sudoers, or members of groups explicitly mentioned in sudoers (such as wheel or sudo, depending on your distribution), can use sudo. If you are using a recent version of GDM, it may disable the login button while the password box is empty. To work around this you can tell GDM not to prompt for the password for specific groups. The caveat is that this will also bypass the session selection menu for members of that group. If you want to do this, you should add this line at the beginning of /etc/pam.d/gdm-password:

    auth sufficient pam_succeed_if.so user ingroup guest

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
258,549 | Suppose I'm in /path/to/dir. Within this dir is another dir called subdir. Is there a command I can issue which outputs the full path to subdir, no matter how it is identified? For example:

    $ cmd subdir
    /path/to/dir/subdir
    $ cmd /path/to/dir/subdir
    /path/to/dir/subdir

| coreutils' realpath does the trick:

    realpath subdir

and it works however the directory (or file) is specified:

    realpath /blah/blah2/subdir
    realpath ../blah2/subdir

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86396/"
]
} |
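Where realpath isn't available (older coreutils releases lack it), readlink -f performs the same canonicalisation:

```sh
readlink -f subdir
```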
258,595 | I would like to access my security camera, which communicates through an RTSP feed, with an API that only supports a character-device video kind of entry (I'm new to Linux, and I'm not sure if the '/dev/video1' sort is called "character video"). I followed this post, and I get the output below for the following command:

    gst-launch-1.0 -v rtspsrc location=rtsp://admin:[email protected]:554/CH001.sdp ! v4l2sink device=/dev/video1
    ...
    Progress: (request) Sending PLAY request
    ...
    ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc3: Internal data flow error.
    Additional debug info:
    gstbasesrc.c(2943): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc3:
    streaming task paused, reason not-linked (-1)

How do I solve this error? Do you know any way other than gstreamer for this task? PS: there is more in the message; I've trimmed it to be more readable. | I've got the RTSP stream working on '/dev/video1' with the following command:

    ffmpeg -i rtsp://admin:[email protected]:554/CH001.sdp -f v4l2 -pix_fmt yuv420p /dev/video1

Thank you guys for the great support. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88371/"
]
} |
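One hedged prerequisite worth noting for the answer above: writing to /dev/video1 as an output normally requires a virtual video device, typically provided by the v4l2loopback kernel module (the module, its video_nr parameter, and the camera address below are assumptions for illustration):

```sh
sudo modprobe v4l2loopback video_nr=1   # creates /dev/video1 as a loopback sink
ffmpeg -i rtsp://camera-address/stream -f v4l2 -pix_fmt yuv420p /dev/video1
```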
258,608 | I'm trying to watch the number of files in my /tmp/ directory. For this I thought this command would work:

    watch sh -c 'ls /tmp/|wc -l'

But it appears to work as if ls had no arguments. Namely, I'm in ~, and I get the number of files there instead of in /tmp/. I found a workaround which seems to work:

    watch sh -c 'ls\ /tmp/|wc -l'

But why do I need to escape the space between ls and /tmp/? How is the command transformed by watch so that ls output is fed to wc, but /tmp/ is not passed as an argument to ls? | The difference may be seen via strace:

    $ strace -ff -o bq watch sh -c 'ls\ /tmp/|wc -l'
    ^C
    $ strace -ff -o nobq watch sh -c 'ls /tmp/|wc -l'
    ^C
    $ grep exec bq* | grep sh
    bq.29218:execve("/usr/bin/watch", ["watch", "sh", "-c", "ls\\ /tmp/|wc -l"], [/* 54 vars */]) = 0
    bq.29219:execve("/bin/sh", ["sh", "-c", "sh -c ls\\ /tmp/|wc -l"], [/* 56 vars */]) = 0
    bq.29220:execve("/bin/sh", ["sh", "-c", "ls /tmp/"], [/* 56 vars */]) = 0
    $ grep exec nobq* | grep sh
    nobq.29227:execve("/usr/bin/watch", ["watch", "sh", "-c", "ls /tmp/|wc -l"], [/* 54 vars */]) = 0
    nobq.29228:execve("/bin/sh", ["sh", "-c", "sh -c ls /tmp/|wc -l"], [/* 56 vars */]) = 0
    nobq.29229:execve("/bin/sh", ["sh", "-c", "ls", "/tmp/"], [/* 56 vars */]) = 0

In the backslash case, ls /tmp/ is passed as a single argument to the -c of sh, which runs as expected. Without the backslash, the command is instead word-split when watch runs sh, which in turn runs the supplied sh, so that only ls is passed as the argument to -c, meaning that the sub-sub sh will only run a bare ls command and list the contents of the current working directory. So, why the complication of sh -c ...? Why not simply run watch 'ls /tmp|wc -l'? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258608",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27672/"
]
} |
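As the answer hints, watch already runs its argument through sh -c, so the inner shell can simply be dropped:

```sh
watch 'ls /tmp | wc -l'
```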
258,644 | I am using shell scripting, with the following expression:

    A=`echo "(( (($a / $b) ^ 0.3) -1 ))" | bc -l`

I want to have a real number as an exponent. I noticed that if I use 0.3, it rounds off to an integer and takes the power of zero. Similarly, if I use 5.5 or 5.9 in place of 0.3 in the above expression, I get the same answer. How do I calculate the power of a number with the exponent being a real number and not an integer? | Why can't you use an awk or perl one-liner to handle it?

    echo "$a $b" | awk '{ print ((($1/$2)^0.3) - 1); }'

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96043/"
]
} |
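If you would rather stay with bc -l, its math library gives non-integer powers through the identity x^y = e(y * l(x)), where l() is the natural log and e() the exponential (valid only for $a/$b > 0):

```sh
A=$(echo "e(0.3 * l($a/$b)) - 1" | bc -l)
```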
258,656 | Bash behaviour: I've just migrated from bash to zsh. In bash, I had the following line in ~/.inputrc:

    "\e\C-?": unix-filename-rubout

Hence, Alt + Backspace would delete back to the previous slash, which is useful for editing paths. Separately, bash defaults to making Ctrl + w delete to the previous space, which is useful for deleting whole arguments (presuming they don't have spaces). Hence, there are two slightly different actions performed by the two key combinations. Zsh behaviour: in zsh, both Alt + Backspace and Ctrl + w do the same thing. They both delete the previous word, but they are too liberal with what constitutes a word break, deleting up to the previous - or _. Is there a way to make zsh behave similarly to bash, with two independent actions? If it's important, I have oh-my-zsh installed. | A similar question was asked here: "zsh: stop backward-kill-word on directory delimiter", and a workable solution was given: add these settings to your zshrc:

    autoload -U select-word-style
    select-word-style bash

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
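To also recover the two independent bindings the question asks about, zsh's word-style machinery (from zshcontrib) lets a single widget carry its own style; a hedged sketch on top of the answer, with backward-kill-space-word as an arbitrary widget name:

```sh
autoload -U select-word-style
select-word-style bash                            # Alt+Backspace: stop at / - _ etc.
zle -N backward-kill-space-word backward-kill-word-match
zstyle ':zle:backward-kill-space-word' word-style space
bindkey '^W' backward-kill-space-word             # Ctrl+W: delete to previous space
```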
258,679 | I just noticed that on one of my machines (running Debian Sid), whenever I type ls, any file name with spaces has single quotes surrounding it. I immediately checked my aliases, only to find them intact.

    wyatt@debian630:~/testdir$ ls
    'test 1.txt'   test1.txt
    wyatt@debian630:~/testdir$ alias
    alias ls='ls --color=auto'
    alias wget='wget --content-disposition'
    wyatt@debian630:~/testdir$

(picture)

Another test, with files containing single quotes in their names (also answering a request by jimmij):

    wyatt@debian630:~/testdir$ ls
    'test 1.txt'   test1.txt   'thishasasinglequotehere'\''.txt'
    wyatt@debian630:~/testdir$ touch "'test 1.txt'"
    wyatt@debian630:~/testdir$ ls
    ''\''test 1.txt'\'''   test1.txt
    'test 1.txt'           'thishasasinglequotehere'\''.txt'

(picture)

Update with the new coreutils-8.26 output (which is admittedly much less confusing, but still irritating to have by default). Thanks to Pádraig Brady for this printout:

    $ ls
    "'test 1.txt'"   test1.txt
    'test 1.txt'     "thishasasinglequotehere'.txt"
    $ ls -N
    'test 1.txt'   test1.txt
    test 1.txt     thishasasinglequotehere'.txt

Why is this happening? How do I stop it properly? To be clear, I myself set ls to automatically color output. It just never put quotes around things before. I'm running bash and coreutils 8.25. Any way to fix this without a recompile? EDIT: It appears the coreutils developers chose to break with convention and make this the global default.

UPDATE - October 2017 - Debian Sid has re-enabled the shell-escape quoting by default. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877582 And at the bottom of the reply chain to the previous bug report, "the change was intentional and will remain." https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=813164#226 I thought this had already been settled, but apparently it was just reverted so that the "stable" Debian branch could keep its "feature freeze" while getting the other fixes, etc., from the newer version. So that's a shame (in my opinion).

UPDATE: April 2019: Just found a spurious bug report in PHP that was caused by this change to ls. When you're confusing developers and generating false bug reports, I think it might be time to re-evaluate your changes.

Update: Android toybox ls is now doing something similar to this, but with backslashes instead of quotes. Using the -q option makes spaces render as question-mark characters (I have not checked what they are, since they're obviously not spaces), so the only fix I have found so far, without rooting the device in question, is to add this to a script and source it when launching a shell. This function makes ls use columns if in a terminal and otherwise print one per line, while tricking ls into printing spaces verbatim because it's running through a pipe:

    ls() {
        # only way I can stop ls from escaping with backslashes
        if [ -t 1 ]; then
            /system/bin/ls -C "$@" | cat
        else
            /system/bin/ls "$@" | cat
        fi
    }

| Preface: While it may be quite satisfying to upvote an answer such as this and call it a day, please be assured that the GNU coreutils maintainers do not care about SO answer votes, and that if you actually want to encourage them to change, you need to email them as this answer describes. Update 2019: Sometime this past year the maintainers doubled down and now offer, to any [email protected] reports about this issue, only a boilerplate response pointing to an incredibly long page on their website listing problems people have with this change that they have committed themselves to ignoring. The unceasing pressure from [email protected] reports has clearly had an effect, forcing the generation of this immense & absurd page, and potentially reducing the number of maintainers willing to deal with the problem to only one. When this many people consider a thing a bug, then it's a bug, whether maintainers disagree or not. Continuing to email them remains the simplest way to encourage change.

"Why is this happening?" Several coreutils maintainers decided they knew better than decades of de facto standards.

"How do I stop it properly?" From http://www.gnu.org/software/coreutils/coreutils.html:

    Bug Reports: If you think you have found a bug in Coreutils, then please
    send as complete a bug report as possible to <[email protected]>, and it
    will automatically be entered into the Coreutils bug tracker. Before
    reporting bugs please read the FAQ. A very useful and often referenced
    guide on how to write bug reports and ask good questions is the document
    "How To Ask Questions The Smart Way". You can browse previous postings
    and search the bug-coreutils archive.

Distros that have already reverted this change:

- Debian (coreutils-8.25-2), including consequently, presumably, Ubuntu and all of the hundreds of Debian-based and Ubuntu-based derivatives

Distros unaffected:

- openSUSE (already used -N)

"Any way to fix this without a recompile?" Proponents would have you "get back to the old format by adding -N to their ls alias" ... on all of your installs, everywhere, for the remainder of eternity. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/258679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78861/"
]
} |
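For reference, the two per-user opt-outs mentioned in this thread, collected in one place (QUOTING_STYLE is an environment variable honoured by GNU ls in the affected releases):

```sh
alias ls='ls -N --color=auto'    # -N / --literal: print names unquoted
export QUOTING_STYLE=literal     # the same effect for every ls invocation
```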
258,711 | netstat -s prints out a lot of very detailed protocol statistics, like the number of TCP reset messages received, the number of ICMP "echo request" messages sent, or the number of packets dropped because of a missing route. As netstat is considered deprecated in Linux nowadays, is there an alternative? The statistics provided by ss -s are superficial compared to the ones provided by netstat. | netstat is considered deprecated nowadays, as are the other programs included in net-tools, like arp, ifconfig, iptunnel, nameif and route. The functionality provided by several of these utilities has been reproduced and improved in the new iproute2 suite, primarily by using its new ip command. Examples of deprecated commands and their replacements:

- arp → ip n (ip neighbor)
- ifconfig → ip a (ip addr), ip link, ip -s (ip -stats)
- iptunnel → ip tunnel
- iwconfig → iw
- nameif → ip link, ifrename
- netstat → ss, ip route (for netstat -r), ip -s link (for netstat -i), ip maddr (for netstat -g)

The netstat command reads various /proc files to gather information. However, this approach falls short when there are lots of connections to display, which makes it slower. The ss command gets its information directly from kernel space. The options used with the ss command are very similar to netstat, making it an easy replacement. Statistics provided by ss are superficial, but it is considered the better alternative to netstat. [Citation needed] Examples:

    ss | less    # get all connections
    ss -t        # get TCP connections not in listen mode
    ss -u        # get UDP connections not in listen mode
    ss -x        # get Unix domain socket connections
    ss -at       # get all TCP connections (both listening and non-listening)
    ss -au       # get all UDP connections
    ss -tn       # TCP without service name resolution
    ss -ltn      # listening TCP without service name resolution
    ss -ltp      # listening TCP with PID and name
    ss -s        # print statistics
    ss -tn -o    # TCP connections, show keepalive timer
    ss -lt4      # IPv4 (TCP) connections

See the note in the netstat(8) manpage:

    NOTES
        This program is mostly obsolete. Replacement for netstat is ss.
        Replacement for netstat -r is ip route. Replacement for netstat -i
        is ip -s link. Replacement for netstat -g is ip maddr.

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
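A pointer worth adding for the detailed counters the question is actually about: iproute2 also ships nstat, which reads the same kernel SNMP counters that back netstat -s (hedged: see nstat(8) on your system):
nstat -az                   # absolute values of all counters, including zeros
nstat -az TcpRetransSegs    # a single counter, named as in /proc/net/snmp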
258,727 | What is the difference between the variable assignments below?
var=23
var =23
var= 23
var = 23
Does the whitespace around the assignment operator make any difference? | That very much depends on the shell. If we only look at the 4 main shell families (Bourne, csh, rc, fish):
Bourne family
That is the Bourne shell and all its variants: ksh , bash , ash / dash , zsh , yash .
var=23 : that's the correct scalar variable assignment syntax: a word that consists of unquoted letters, digits or underscores followed by an unquoted = that appears before a command argument (here it's on its own).
var =23 : the var command with =23 as argument (except in zsh where =something is a special operator that expands to the path of the something command. Here, you'd be likely to get an error as 23 is unlikely to be a valid command name).
var= 23 : an assignment var= followed by a command name 23 . That's meant to execute 23 with var= passed to its environment (the var environment variable with an empty value).
var = 23 : the var command with = and 23 as arguments. Try with echo = 23 for instance.
ksh , zsh , bash and yash also support some forms of array/list variables, with variation in syntax for both assignment and expansion. ksh93 , zsh and bash also have support for associative arrays, again with variation in syntax between the 3. ksh93 also has compound variables and types , reminiscent of the objects and classes of object programming languages.
Csh family
csh and tcsh . Variable assignments there are with the set var = value syntax for scalar variables, set var = (a b) for arrays, setenv var value for environment variables, @ var=1+1 for assignment and arithmetic evaluation. So: var=23 is just invoking the var=23 command. var =23 is invoking the var command with =23 as argument. var= 23 is invoking the var= command with 23 as argument. var = 23 is invoking the var command with = and 23 as arguments.
Rc family
That's rc , es and akanga . In those shells, variables are arrays and assignments are with var = (foo bar) , with var = foo being short for var = (foo) (an array with one foo element) and var = short for var = () (an array with no element; use var = '' or var = ('') for an array with one empty element). In any case, blanks (space or tab) around = are allowed and optional. So in those shells those 4 commands are equivalent, and equivalent to var = (23) , which assigns an array with one element being 23 .
Fish
In fish , the variable assignment syntax is set var value1 value2 . Like in rc , variables are arrays. So the behaviour would be the same as with csh , except that fish won't let you run a command with a = in its name. If you have such a command, you need to invoke it via sh for instance: sh -c 'exec weird===cmd' . So both var=23 and var= 23 will give you an error, var =23 will call the var command with =23 as argument and var = 23 will call the var command with = and 23 as arguments. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/258727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154082/"
]
} |
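A quick Bourne-family demonstration of the forms from the entry above (bash shown; error wording varies by shell):
$ var=23; echo "$var"           # assignment; prints 23
23
$ var=23 sh -c 'echo "$var"'    # one-off environment assignment for a single command
23
$ var = 23                      # runs the command "var" with = and 23 as arguments
bash: var: command not found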
258,737 | How can I find all the files in a directory greater than a certain size, say, 15 KB, which have been modified in the last 10 days? | I'd do it like this: find /directory -mtime -10 -size +15k The /directory is the base directory where the search is performed (recursively by default). -mtime -10 means it will look for files modified in the last 10 days: -mtime n File's data was last modified n*24 hours ago. See the comments for -atime to understand how rounding affects the interpretation of file modification times. -size +15k means it will look for files larger than 15 kilobytes: -size n[cwbkMG] File uses n units of space, rounding up. The following suffixes can be used: `b' for 512-byte blocks (this is the default if no suffix is used) `c' for bytes `w' for two-byte words `k' for Kilobytes (units of 1024 bytes) `M' for Megabytes (units of 1048576 bytes) `G' for Gigabytes (units of 1073741824 bytes) The size does not count indirect blocks, but it does count blocks in sparse files that are not actually allocated. Bear in mind that the `%k' and `%b' format specifiers of -printf handle sparse files differently. The `b' suffix always denotes 512-byte blocks and never 1 Kilobyte blocks, which is different to the behaviour of -ls. The + and - prefixes signify greater than and less than, as usual, but bear in mind that the size is rounded up to the next unit (so a 1-byte file is not matched by -size -1M). If this is a homework question of some kind, please read the find(1) manual for your operating system by typing man find on your system, and really learn how it works. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154082/"
]
} |
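A hedged variant of the find invocation above that also prints details, and restricts matches to regular files:
find /directory -type f -mtime -10 -size +15k -exec ls -lh {} +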
258,785 | Brace expansion, e.g. {a,b,c} , is not defined by POSIX . I would like to run my shell in POSIX mode. With Debian this is simple enough:
$ bash -c 'echo {a,b,c}'
a b c
$ sh -c 'echo {a,b,c}'
{a,b,c}
However, Fedora behaves differently:
$ bash -c 'echo {a,b,c}'
a b c
$ sh -c 'echo {a,b,c}'
a b c
I tried using the --posix option, but it has no effect:
$ sh --posix -c 'echo {a,b,c}'
a b c
Can Bash be forced to operate in POSIX mode? | Bash can be told to disable brace expansion with set +B , which is the inverse of set -B : -B The shell will perform brace expansion (see Brace Expansion ). This option is on by default. You can also provide this on the command line when launching the shell:
$ bash +B -c 'echo {a,b,c}'
{a,b,c}
You can combine this with the --posix or set -o posix options to get closer to fully-POSIX behaviour. You also need to enable shopt -s xpg_echo at least. There will be other corners as well — many of the extensions are quite deeply-ingrained — and I don't think it's possible to get Bash to support only the behaviour that is actually mandated by POSIX. Even dash doesn't manage that. However, you may find dash (the default /bin/sh on Debian) more helpful if you're aiming to avoid extended behaviours, although it supports some extensions as well. There is also BusyBox's ash applet , which also has some extensions, but many can be disabled statically. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
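Putting the pieces of the answer above together, a sketch of one invocation that disables brace expansion, enters POSIX mode at startup and enables SysV echo semantics (bash-specific):
$ POSIXLY_CORRECT=1 bash +B -c 'shopt -s xpg_echo; echo {a,b,c}'
{a,b,c}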
258,787 | When editing multi-line commands with escaped newlines, I cannot move up lines. For example, suppose I enter echo \ on one line, then I press Enter , and then I want to edit the echo \ part of the command. Pressing Up doesn't move back to the first command line. This works for long commands which wrap, but not with escaped newlines:
_physical_up_line()   { zle backward-char -n $COLUMNS }
_physical_down_line() { zle forward-char -n $COLUMNS }
zle -N physical-up-line _physical_up_line
zle -N physical-down-line _physical_down_line
bindkey -M vicmd "R" physical-up-line
bindkey -M vicmd "N" physical-down-line
| When you press Enter ( accept-line command), the current line is parsed and scheduled for execution. If the line is syntactically incomplete (e.g. echo \ or for x in foo ), it isn't executed, but it's already stored. You can see that zsh is in this state because it shows the PS2 prompt instead of the usual PS1 . As far as I know, there's no built-in way to edit such stored lines. It should be doable by storing the current line without executing it and recalling the previous history line for editing. The easiest way to get at the previous line is to make sure that the current line is unfinished (e.g. type \ at the end), accept it (press Enter ), then cancel it (press Ctrl + C ). Then you can recall the whole stored command as a single multi-line buffer by pressing Up . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23994/"
]
} |
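zsh also ships a widget aimed at exactly the situation in the entry above: push-line-or-edit turns everything typed so far, PS2 continuation lines included, into one editable multi-line buffer. A sketch (the keybinding is an assumption; pick any free key):
bindkey '\eq' push-line-or-edit   # Esc-q, replacing the default push-line binding in emacs mode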
258,854 | My goal is to optimize my apache server. At first I want to disable some modules on it. I was surfing over the Internet and didn't find anything dedicated to apache which is installed on CentOS7. Here are what I have got from surfing: disable unneeded modules , enable apache modules from the command line and on.I can list Apache enabled modules using this httpd -t command. Also I know that modules that were compiled during the installation is lying in /etc/httpd/modules directory. So what is the right way of disabling and enabling apache modules on CentOS7? | On CentOS 7, the right way to do it is to go through /etc/httpd/conf.modules.d and find the appropriate conf files with the modules you want to disable. You can also check /etc/httpd/conf/httpd.conf , but you'll have better luck in the conf.modules.d folder. Simply comment them out, reload apache, and you're good to go. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137094/"
]
} |
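A hedged walkthrough of the cycle described above, using mod_status as the example module:
grep -Rn 'LoadModule status_module' /etc/httpd/conf.modules.d/   # find the LoadModule line
# comment that line out with a leading '#', then:
apachectl configtest && systemctl reload httpd
httpd -M | grep status        # should print nothing once the module is disabled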
258,856 | I'm trying to make rcssmonitor and I get the following error: /usr/bin/ld: cannot find -laudio I'm using Linux Mint 17.2 with gcc 4.8.4. | The linker is looking for the Network Audio System (NAS) library, which is what -laudio refers to, and cannot find it on your system. On Debian-based distributions such as Linux Mint, installing the NAS development package should provide it: sudo apt-get install libaudio-dev , then re-run make. If the library is instead installed somewhere non-standard, point the linker at it explicitly, e.g. make LDFLAGS="-L/path/to/libdir" . (The package name above is the usual one on Mint/Ubuntu; check apt-cache search libaudio if it differs on your release.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258856",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154193/"
]
} |
258,859 | I am practising and I am trying to: 1) create files with names of all combinations of r,w,x permissions = 512 files; 2) change the permissions of each file to match its name. I have created this script:
touch ./{r,-}{w,-}{x,-},{r,-}{w,-}{x,-},{r,-}{w,-}{x,-}
for i in *
  do
  syntax="${i//:}"
  u=${syntax:0:3}
  g=${syntax:3:3}
  o=${syntax:6:3}
  chmod u="$u",g="$g",o="$o" -- "$i"
  done
It changed a few permissions to match the name, but not all. What did I do wrong? | Two things. First, the file names contain commas, not colons, so syntax="${i//:}" removes nothing, and the fixed offsets then pick up the commas: for a name like rwx,r-x,--- , ${syntax:3:3} is ,r- rather than r-x . Slice around the commas instead: u=${i:0:3} , g=${i:4:3} , o=${i:8:3} . Second, - is not a valid permission letter for chmod, so an argument like u=r-x fails; strip the dashes before building the mode, e.g. chmod "u=${u//-},g=${g//-},o=${o//-}" -- "$i" (an empty value such as o= correctly clears that class). With those two changes, every file's permissions match its name. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154192/"
]
} |
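For reference, a corrected version of the loop from the entry above (a sketch; run it in a directory containing only the touched files, since the glob picks up everything):
touch ./{r,-}{w,-}{x,-},{r,-}{w,-}{x,-},{r,-}{w,-}{x,-}
for i in *; do
    u=${i:0:3} g=${i:4:3} o=${i:8:3}                 # slice around the commas
    chmod "u=${u//-},g=${g//-},o=${o//-}" -- "$i"    # drop the dashes; an empty value like o= clears the class
done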
258,889 | Most Linux distributions ship with a certain kernel version and only update it on point releases ( x.y.z to x.y.(z+1) ) and for security updates. On the other hand, I know that Linux has a very strict policy about not changing the kernel ABI and never breaking user space. In fact, Linus have had many public tantrums directed at developers who wanted to (intentionally or accidentally) change the kernel is non-backwards-compatible ways. I don't understand why distributions use "stable" kernels instead of always updating to the latest kernel. This is not a criticism, I'm just curious about the reason. | The Linux kernel's system call interfaces are very stable. But the kernel has other interfaces that aren't always compatible. /proc is mostly stable, but there have been a few changes in the past (e.g. some interfaces moving to /sys some time after /sys was created). A number of device-related interfaces have been removed in the past. /sys contains some stable interfaces (listed in Documentation/ABI/stable ) and some that aren't. You aren't supposed to use the ones that aren't, but sometimes people do, and a simple security and stability upgrade shouldn't break things for them. There have been incompatibilities with modutils in the past (newer kernels requiring a newer version of modutils), though I think it was quite a while ago. There have also been incompatibilities with respect to the boot process on some unusual configurations. Even increasing the size of the kernel could cause problems on some embedded systems. While the kernel's external interfaces are pretty stable, the internal interfaces are not. The rule for internal interfaces is that anyone can break them as long as they fix internal uses, but fixing third-party modules is the responsibility of the author of said modules. Overall quite a lot of installations run third-party modules: extra drivers for hardware that wasn't supported by the kernel (if the hardware is supported by the new kernel, that's fine, but what if it isn't), proprietary drivers (while the world would be a better place if all drivers were open source, this isn't the case; for example, if you want good 3D GPU performance, you're pretty much stuck with proprietary drivers), etc. Some people need to recompile their kernel, or some third-party modules. More recent kernels often can't be compiled with older compilers. All in all, the primary reason not to switch to a more recent kernel version is third-party modules. Some distributions nonetheless offer recent kernels as an option. For example, Debian makes kernels from testing available to users of the stable release through backports. Similarly, on Ubuntu LTS, kernels from more recent Ubuntu releases are available, but not used by default. This is mostly useful for new installations on hardware that wasn't supported yet when the distribution was finalized. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3491/"
]
} |
258,922 | Let the script below exemplify my quandary..
#!/bin/zsh
STUFF=( moose-hoof ovary clydsedale )
echo ${MINE=$(printf "MY-%s " $STUFF)}
echo ${MINE_EXP=${STUFF/^/MY-}}
MY-moose-hoof MY-ovary MY-clydsedale
moose-hoof ovary clydsedale
What are the right expansion flags to allow string concatenation on each element of the array? | Use $^array . It turns the array into a sort of brace expansion of the array. As in when a=(foo bar baz) , $^a would be a bit like {foo,bar,baz} .
$ a=(foo bar baz)
$ echo prefix${^a}suffix
prefixfoosuffix prefixbarsuffix prefixbazsuffix
For multiplexing arrays:
$ a=(1 2 3) b=(a b c)
$ echo $^a$^b
1a 1b 1c 2a 2b 2c 3a 3b 3c
Naturally, if the prefix or suffix contain shell special characters (like ; that separates commands or space that separates words, or $"'&*[?~ ...), they must be quoted: echo 'p r e f i x '$^a' s u f f i x' same as for csh 's (and bash, ksh, zsh's): echo 'p r e f i x '{foo,bar,baz}' s u f f i x' $^a itself must not be quoted; "foo${^a}bar" would expand as one word. One case where you would want $^array to be quoted, the same as for $array , is when you want to preserve empty elements. Then, you need to quote the array expansion and use the (@) flag or the "${array[@]}" syntax (reminiscent of the Bourne shell's "$@" ):
$ array=(x '')
$ printf '<%s>\n' $array              # empties removed
<x>
$ printf '<%s>\n' "$array"            # array elts joined with spaces
<x >
$ printf '<%s>\n' "${(@)array}"       # empties preserved
<x>
<>
$ printf '<%s>\n' "$array[@]"         # empties preserved
<x>
<>
$ printf '<%s>\n' $^array$^array      # empty removed
<xx>
<x>
<x>
$ printf '<%s>\n' "$^array$^array"    # concatenation of joined arrays
<x x >
$ printf '<%s>\n' "$^array[@]$^array[@]"   # multiplexing with empties preserved
<xx>
<x>
<x>
<>
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7310/"
]
} |
258,931 | I was working through a tutorial and saw use of both cat myfile.txt and cat < myfile.txt . Is there a difference between these two sequences of commands? It seems both print the contents of a file to the shell. | In the first case, cat opens the file, and in the second case, the shell opens the file, passing it as cat 's standard input. Technically, they could have different effects. For instance, it would be possible to have a shell implementation that was more (or less) privileged than the cat program. For that scenario, one might fail to open the file, while the other could. That is not the usual scenario, but mentioned to point out that the shell and cat are not the same program. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/258931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154239/"
]
} |
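A quick way to see the distinction from the entry above in practice: whoever opens the file is whoever reports the error (run as a regular user; exact wording depends on the shell):
$ cat /etc/shadow
cat: /etc/shadow: Permission denied      # cat tried (and failed) to open the file
$ cat < /etc/shadow
bash: /etc/shadow: Permission denied     # the shell failed first; cat never ran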
258,941 | I've a radeon r9 270x with four outputs: two DVI, one HDMI and one DisplayPort output. I'd like to configure the X server such that it has two screens; from a user's point of view it should provide DISPLAY 0.0 and 0.1. I tried with two Monitor, two Device and two Screen sections in /etc/X11/xorg.conf. This works if I don't specify "Screen" explicitly in the Device section, but then I end up with a single Screen (DISPLAY=0.0). I tried to explicitly set the Screen number in the Device section (like below), but this didn't work. If I select Screen number 0 for the first Device section and Screen number 1 for the second Device section, then the X server starts, but from /var/log/Xorg.0.log I see that the X server tries to use the DisplayPort and HDMI outputs, which are not connected. If I select Screen numbers 2 and 3 in the Device sections, then the X server refuses to start.
Section "Device"
    Identifier "Device0"
    Driver "radeon"
    # Screen 1    # doesn't work
EndSection
Any ideas how to get a dual screen set up with the radeon driver? This is debian unstable, Kernel 4.3 if it matters. | This kind of two-screen ("Zaphod mode") setup needs each Device instance to be told which output it owns, via Option "ZaphodHeads" (see radeon(4)); without it, every Device instance tries to claim all outputs, which matches what you saw in Xorg.0.log. A sketch (the output names DVI-0 and DVI-1 are assumptions; check the names your driver reports with xrandr or in the log):
Section "Device"
    Identifier "Device0"
    Driver "radeon"
    Option "ZaphodHeads" "DVI-0"
    Screen 0
EndSection
Section "Device"
    Identifier "Device1"
    Driver "radeon"
    Option "ZaphodHeads" "DVI-1"
    Screen 1
EndSection
Then bind Device0 and Device1 to two separate Screen sections (each with its own Monitor) and list both screens in a ServerLayout section, e.g. Screen 0 "Screen0" 0 0 and Screen 1 "Screen1" RightOf "Screen0" . With that in place, the server exposes DISPLAY=:0.0 and :0.1. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/258941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99434/"
]
} |
258,942 | I was pointed here from AskUbuntu because my question was about an unsupported Ubuntu derivate, here is the copy-pasted question: I am aware i am asking a duplicate question, but since the questions( q-1 , q-2 ) are unanswered i am still going to ask it. Please, do not flag as duplicate as this implies that no answers are needed, thus leaving yet another question unaswered. I upgraded my fresh install of Netrunner 17 Horizon (Ubuntu-based, screenfetch reports that the OS is Wily), and after the reboot i got no GUI except the splash-screen. Removing the quiet bootflag shows Starting version 225 after the splash-screen vanishes, this message does not disappear and there is no further output. I had this problem a day ago, so i did a clean reinstall and this time i copied the terminal output of the upgrade: The terminal output of apt-get upgrade exceeded the new question character limit (30.000) at least 4 times, so i dropped the output in here > pastebin/Jybu3aQB Upgraded packages: about-distrobind9-hostbinutilschromium-codecs-ffmpeg-extracups-browsedcups-filterscups-filters-core-driverscurldkmsdnsutilsdpkgdpkg-devffmpegfirefoxfirefox-locale-enfirefox-plasmaflashplugin-installergrub-commongrub-efi-amd64grub-efi-amd64-bingrub-efi-amd64-signedgrub2-commongtk2-engines-qtcurveinitscriptsisc-dhcp-clientisc-dhcp-commonkate5-datakde-config-gtk-style-previewkde-l10n-engbkde-style-oxygen-qt4kde-style-qtcurve-qt4kdelibs-binkdelibs5-datakdelibs5-pluginskdoctoolskiokpackagelauncherqmlksnapshotksshaskpassktexteditor-katepartkwinkwritedlibav-tools-linkslibavcodec-extralibavcodec-ffmpeg-extra56libavdevice-ffmpeg56libavfilter-ffmpeg5libavformat-ffmpeg56libavresample-ffmpeg2libavutil-ffmpeg54libbind9-90libcupsfilters1libcurl3libcurl3-gnutlslibdlrestrictions1libdns-export100libdns100libdpkg-perllibepoxy0libfontembed1libirs-export91libisc-export95libisc95libisccc90libisccfg-export90libisccfg90libkcmutils4libkde3support4libkdeclarative5libkdecore5libkdesu5libkdeui5libkdewebkit5libkdnssd4libkemoticons4libkf5iconthemes-binlibkf5js5libkf5notifyconfig-datalibkf5notifyconfig5libkf5parts-pluginslibkf5plotting5libkf5pty-datalibkf5pty5libkf5service-binlibkf5texteditor5-libjs-underscorelibkf5unitconversion-datalibkf5unitconversion5libkfile4libkhtml5libkidletime4libkio5libkjsapi4libkjsembed4libkmediaplayer4libknewstuff2-4libknewstuff3-4libknotifyconfig4libkntlm4libkparts4libkprintutils4libkpty4libkrosscore4libkrossui4libktexteditor4libldb1liblwres90libmysqlclient18libmysqlclient18:i386libnm-glib-vpn1libnm-glib4libnm-util2libnm0libnss3libnss3-nssdbliboxygenstyle5-5liboxygenstyleconfig5-5libperl5.20libplasma3libpng12-0libpng12-0:i386libpolkit-agent-1-0libpolkit-backend-1-0libpolkit-gobject-1-0libpostproc-ffmpeg53libpowerdevilui5libqt5clucene5libqt5concurrent5libqt5x11extras5libqtcurve-utils2libsmbclientlibsndfile1libsndfile1:i386libsolid4libswresample-ffmpeg1libswscale-ffmpeg3libthreadweaver4libvlc5libvlccore8libwbclient0libxml2libxml2:i386libxml2-utilslinux-firmwarelinux-libc-devmysql-client-core-5.6mysql-commonmysql-server-core-5.6nanonetrunner-artworknetrunner-default-settingsnetrunner-desktop-containmentnetwork-manageropenssh-clientopenssloxideqt-codecs-extraoxygen-soundsperlperl-baseperl-modulespolicykit-1python-aptpython-apt-commonpython-ldbpython-libxml2python-sambapython3-aptpython3-dbus.mainloop.pyqt5qml-module-org-kde-extensionpluginqml-module-qtgraphicaleffectsqtcurveqtcurve-l10nqtdeclarative5-kf5declarativeqtdeclarative5-kf5solidrootactions-servicemenursyncsambasamba-commonsamba-common-binsamba-dsdb-mod
ules samba-libs samba-vfs-modules sddm-theme-breeze smbclient sysv-rc sysvinit-utils thunderbird thunderbird-locale-en thunderbird-locale-en-us thunderbird-plasma unattended-upgrades virtualbox virtualbox-dkms virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11 virtualbox-qt vlc vlc-data vlc-nox vlc-plugin-notify vlc-plugin-pulse vlc-plugin-samba wine xserver-common xserver-xorg-core | A hang at "Starting version 225" (that line is just systemd printing its version) with the splash gone usually means that boot itself finished but the graphical stack (SDDM or the X server) failed to come up, which fits an upgrade that touched xserver-xorg-core, the SDDM theme and several DKMS modules. From a text console ( Ctrl+Alt+F1 , or recovery mode) the usual checklist is: finish any interrupted upgrade with sudo dpkg --configure -a && sudo apt-get -f install ; reinstall the display pieces with sudo apt-get install --reinstall sddm xserver-xorg-core ; if the machine is a VirtualBox guest, also reinstall virtualbox-guest-dkms virtualbox-guest-x11 so DKMS rebuilds the video driver against the new kernel; then look at journalctl -b , systemctl status sddm and /var/log/Xorg.0.log for the actual failure. This is general guidance: on an unsupported derivative, the exact broken package can only be pinned down from those logs. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/258942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136880/"
]
} |
258,951 | Imagine the following file structure:
foo.bar.1
blabla
moreblabla
relevant=yes
foo.bar.2
relevant=no
foo.bar.3
blablabla
foo.bar.4
relevant=yes
I want to retrieve all foo.bar lines where, within the block following them and before the next foo.bar, there is a line stating relevant=yes . So the output should be:
foo.bar.1
foo.bar.4
I could of course write a program/script iterating through the lines, remembering the foo.bars and printing them when there is a line saying relevant=yes following them and before the next foo.bar . But I thought there might be an out-of-the-box way using standard Unix utilities (grep/sed/awk)? Thanks for any hints! | If the input is processed line by line, then processing needs to go like this: if the current line is foo.bar , store it, forgetting any previous foo.bar line that wasn't enabled for output; if the current line is relevant=yes , this enables the latest foo.bar for output. This kind of reasoning is a job for awk. (It can also be done in sed if you like pain.) awk ' /^foo\.bar/ { foobar = $0 } /^relevant=yes$/ {if (foobar != "") {print foobar; foobar = ""}}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154258/"
]
} |
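For reference, the answer's awk program run against the question's sample input (saved here as input.txt, a name chosen for the example):
$ awk '/^foo\.bar/ { foobar = $0 }
       /^relevant=yes$/ { if (foobar != "") { print foobar; foobar = "" } }' input.txt
foo.bar.1
foo.bar.4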
258,955 | Out of: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/errno.h.html It's always pretty clear why an errno code is named a particular way except for this one.How does SRCH relate to No such process ? | POSIX kill documents the "search" connotation: [ESRCH] No process or process group can be found corresponding to that specified by pid. The previous issue (2004) gave in the rationale more information: Some implementations provide semantic extensions to the kill() function when the absolute value of pid is greater than some maximum, or otherwise special, value. Negative values are a flag to kill(). Since most implementations return [ESRCH] in this case, this behavior is not included in this volume of IEEE Std 1003.1-2001, although a conforming implementation could provide such an extension. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/258955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
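ESRCH is easy to provoke from the shell, since kill(2) on a nonexistent PID is the canonical failed search (99999 is assumed unused; message wording varies):
$ kill -0 99999
bash: kill: (99999) - No such process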
258,973 | I am not sure if I am using my argument correctly. I would like to pass in an argument which is a text file containing a string. When I run this script, it always enters the if statement, even when the number of characters of the string is below 32.
#!/bin/bash
if [ {$1} > 32 ]; then
    echo "Error: Password length invalid"
else
    echo "okay"
fi
| You can get the string length of a variable by using ${#variable} . And you should use -gt instead of > in the [ ] expression.
#!/bin/bash -
pass=$(cat < "$1") || exit
if [ "${#pass}" -gt 32 ]; then
    echo >&2 "Error: Password length invalid"
    exit 1
else
    echo "okay"
fi
That counts the number of characters (interpreted in the current locale's encoding), not bytes, in the file passed as first argument, excluding trailing newline characters, so for a file containing one line of text it gives you the number of characters on that line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/258973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154268/"
]
} |
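A quick check of the accepted script above, saved here as checkpw.sh (file names and contents are examples):
$ printf 'hunter2\n' > pw.txt
$ ./checkpw.sh pw.txt
okay
$ head -c 40 /dev/urandom | base64 > pw.txt && ./checkpw.sh pw.txt
Error: Password length invalid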
259,041 | I have a long-term running script and I forgot to redirect its output to a file. I can see it in a terminal, but can I save it to a file? I'm not asking for tee , output redirection (e.g. > , >> ) etc - the command has started, and I can't run it again. I need to save the already generated output. If I can see it on my display, it is somewhere stored/cached/buffered. Where? screendump , /dev/vcsX and so on allow me to save only the last screen of terminal output (not the current! - scrolling the terminal doesn't help). This is on a Linux virtual console, not an X11 terminal emulator like gnome-terminal with mouse and other goodies. | /dev/vcs[a]<n> will only get you the last screen-full even if you've scrolled up, but the selection ioctl() s as used by gpm will allow you to dump the currently displayed screen even when you've scrolled up. So you can do:
sleep 3; perl -e '
  require "sys/ioctl.ph";
  # copy:
  ioctl(STDIN, &TIOCLINUX, $arg = pack("CS5", 2, 1, 1, 80, 25, 2));
  # paste:
  ioctl(STDIN, &TIOCLINUX, $arg = "\3")'; cat > file
Adjust the 80 and 25 to your actual screen width and height. The sleep 3 gives you time to scroll up (with Shift+PageUp ) to the actual screen you want to dump. cat > file redirects the paste to file . Finish it with Ctrl+D . See console_ioctl(4) for details. If you have gpm installed and running, you can do that selection with the mouse. The Linux virtual console scrollback and selection are very limited and quite annoying (in that when you switch console, you lose the whole scrollback). Going forward, I'd suggest you use things like GNU screen or tmux within it (I personally use them in even more capable terminals). With them, you can have larger searchable scrollbacks and easily dump them to files (and even log all the terminal output, plus all the other goodies that come with those terminal multiplexers). As to automating the process to dump the whole scrollback buffer, it should be possible under some conditions, but quite difficult as the API is very limited. There is an undocumented ioctl (TIOCLINUX, subcode=13) to scroll the current virtual console by some offset (negative for scrolling up, positive for scrolling down). There is however no way (that I know) to know the current size of the scrollback buffer. So it's hard to know when you've reached the top of that buffer. If you attempt to scroll past it, the screen will not be shifted by as much and there's no reliable way to know by how much the screen has actually scrolled. I also find the behaviour of the scrolling ioctl erratic (at least with the VGA console), where scrolling by less than 4 lines works only occasionally. The script below seems to work for me on frame buffer consoles (and occasionally on VGA ones) provided the scrollback buffer doesn't contain sequences of identical lines longer than one screen plus one line. It's quite slow because it scrolls one line at a time, and needs to wait 10ms for eof when reading each screen dump. To be used as that-script > file from within the virtual console.
#! /usr/bin/perl
require "sys/ioctl.ph";
($rows,$cols) = split " ", `stty size`;
$stty = `stty -g`; chomp $stty;
system(qw(stty raw -echo icrnl min 0 time 1));
sub scroll {
  ioctl(STDIN, &TIOCLINUX, $arg = pack("Cx3l", 13, $_[0])) or die "scroll: $!";
}
sub grab {
  ioctl(STDIN, &TIOCLINUX, $arg = pack("CS5", 2, 1, 1, $cols, $rows, 2)) or die "copy: $!";
  ioctl(STDIN, &TIOCLINUX, $arg = "\3") or die "paste: $!";
  return <STDIN>;
}
for ($s = 0;;$s--) {
  scroll $s if $s;
  @lines = grab;
  if ($s) {
    last if "@lines" eq "@lastlines";
    unshift @output, $lines[0];
  } else {
    @output = @lines;
  }
  @lastlines = @lines;
}
print @output;
exec("stty", $stty);
| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148471/"
]
} |
259,045 | I'm trying to configure the network interface on embedded Linux using ifconfig: ifconfig eth0 192.168.0.101 netmask 255.255.255.0 but I don't know how to add the default gateway as an ifconfig parameter. Any ideas? | ifconfig is not the correct command to do that. You can use route , as in route add default gw 192.168.0.254 , for example. And if route is not present but ip is, you can use it like this: ip route add default via 192.168.0.254 dev eth0 , assuming that 192.168.0.254 is the IP of your gateway. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/259045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105594/"
]
} |
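For completeness, a full iproute2 bring-up matching the question's addressing (the gateway 192.168.0.254 is assumed, as in the answer above):
ip addr add 192.168.0.101/24 dev eth0
ip link set eth0 up
ip route add default via 192.168.0.254 dev eth0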
259,069 | I have Windows 10 HOME installed on my system. After I installed Windows 10 HOME, I installed Ubuntu 14.04 LTS on a separate partition so that I could dual boot. I removed Ubuntu 14.04 LTS by deleting the partition it was installed on. Now I am unable to start my system. At boot, my system stops at the Grub command line. I want to boot to my Windows 10 installation, which I haven't removed from my system. This is displayed at startup:
GNU GRUB version 2.02 beta2-9ubuntu1.3
minimal BASH-like editing is supported. for the first word, TAB lists possible commands completions. anywhere else TAB lists the possible device or file completion.
grub>
How can I boot my Windows partition from this grub prompt? | Just enter the command exit . It should take you to another menu that makes you select the Windows bootloader. Worked on a Lenovo Y50. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/259069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154351/"
]
} |
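If exit does not present such a menu, Windows can usually be chainloaded by hand from the same prompt (the partition numbers below are assumptions; use ls to find the one holding the Windows bootloader):
grub> ls                                   # list disks and partitions
grub> set root=(hd0,gpt1)                  # UEFI: the EFI system partition
grub> chainloader /EFI/Microsoft/Boot/bootmgfw.efi
grub> boot
# on a BIOS/MBR install instead: set root=(hd0,1); chainloader +1; boot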
259,088 | I am studying for a public exam and saw this question (originally in pt-BR). Before answering, I read about chmod and understood that the permissions are split into 3 groups (user, group, other), like this:
Level       u   g   o
Permission  rwx r-x ---
Binary      111 101 000
Octal       7   5   0
So, why are there more than 9 (3x3) characters in the permission string (-r--rwx-rw-)? | Because the string that ls -l prints is not only the nine permission bits. The first character is the file type: - for a regular file, d for a directory, l for a symlink, c / b for character/block devices, p for a FIFO and s for a socket, so a normal entry is 10 characters long, e.g. -rwxr-x--- . On top of that, some systems append an 11th character: GNU ls prints + when the file has an ACL or extended attributes, and . when it carries a SELinux security context. Also, within the nine bits themselves, s / S and t / T can appear in place of x to indicate the setuid, setgid and sticky bits, without changing the length. The exam's example string -r--rwx-rw- does not follow the rwx ordering in its last group, so it is most likely a typo; the point being tested is the extra leading file-type character (and possibly a trailing ACL/context marker). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/259088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154372/"
]
} |
259,170 | I have a bash script with a case statement in it:
case "$1" in
  bash)
    docker exec -it $(docker-compose ps -q web) /bin/bash
    ;;
  shell)
    docker exec -it $(docker-compose ps -q web) python manage.py shell
    ;;
  test)
    docker exec -it $(docker-compose ps -q web) python manage.py test "${@:2}"
    ;;
esac
On the test command, I want to pass the default argument of apps , but only if the user didn't pass any arguments other than test to the bash script. So, if the user runs the script like this: ./do test it should run the command docker exec -it $(docker-compose ps -q web) python manage.py test apps However, if they run the script like this: ./do test billing accounts it should run the command docker exec -it $(docker-compose ps -q web) python manage.py test billing accounts How can I test for the existence of arguments after the first argument? | I'd try to use bash variable substitution:
  test)
    shift
    docker exec -it $(docker-compose ps -q web) python manage.py test "${@-apps}"
    ;;
Another way is to check $* instead of $1 :
case $* in
  bash)
    ...
  test)
    docker exec -it $(docker-compose ps -q web) python manage.py test apps
    ;;
  test\ *)
    docker exec -it $(docker-compose ps -q web) python manage.py test "${@:2}"
    ;;
esac
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108027/"
]
} |
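A quick sanity check of the "${@-apps}" default used in the answer above (bash):
$ set -- billing accounts; echo "${@-apps}"
billing accounts
$ set --; echo "${@-apps}"
apps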
259,193 | I know many examples of block devices (HDDs, SSDs, files, ...), but I haven't heard a simple definition of it. Especially since files are apparently included in the definition I feel a bit confused... | Probably you will never be able to find a simple definition of this. But in the most general and simplistic way, if you compare a character device to a block device, you can say the character device gives you direct access to the hardware, as in you put in one byte, that byte gets to the hardware (of course it is not as simple as that in this day and age). Whereas, the block device reads from and writes to the device in blocks of different sizes. You can specify the block size but since the communication is a block at a time, there is a buffering time involved. Think of a block device as a hard disk where you read and write one block of data at a time and, the character device is a serial port. You send one byte of data and other side receives that byte and then the next, and so forth and so on. Again, it is not a very simple concept to explain. The examples I gave are gross generalizations and can easily be refuted for some particular implementation of each example. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/259193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33928/"
]
} |
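The kernel advertises the distinction from the entry above in the first column of ls -l on device nodes: b for block, c for character (output abbreviated):
$ ls -l /dev/sda /dev/ttyS0
brw-rw---- 1 root disk    8,  0 ... /dev/sda     # block device
crw-rw---- 1 root dialout 4, 64 ... /dev/ttyS0   # character device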
259,208 | The find command allows you to search by size, which you can specify using units spelled out in the man page: File uses n units of space. The following suffixes can be used: `b' for 512-byte blocks (this is the default if no suffix is used) `c' for bytes `w' for two-byte words `k' for Kilobytes (units of 1024 bytes) `M' for Megabytes (units of 1048576 bytes) `G' for Gigabytes (units of 1073741824 bytes) Is there a historical reason b is chosen for "block" rather than "byte", which I suspect would be the more common assumption? And why would block be the default rather than byte? When and why would someone ever want to use this unit? Converting to bytes/kilobytes involves a bit of math, it doesn't seem very convenient to be the default unit. | The first versions of Unix happened to use 512-byte blocks in their filesystem and disk drivers. Unix started out as a pretty minimalist and low-level system, with an interface that closely followed the implementation, and leaked details that should have remained abstracted away such as the block size. This is why today, “block” still means 512 bytes in many contexts, even though there can be different block sizes, possibly even different block sizes applying to a given file (one for the filesystem, one for the volume manager, one for the disk…). The implementation tracked disk usage by counting how many data blocks were allocated for a file, so it was easy to report the size of a file as a number of blocks. The disk usage and the size of a file can differ, not only because the disk usage is typically the size rounded up to a whole number of blocks, but also because sparse files have fewer blocks than the size would normally require. As far as I know, early Unix systems that implemented sparse files had find -size use the number of blocks used by the file, not the file size; modern implementations use the file size rounded up (there's a note to this effect in the POSIX specification ). The earliest find implementations only accepted a number of blocks after -size . At some point, find -size started accepting a c suffix to indicate a number of c haracters instead of blocks; I don't know who started it, but it was the case in 4.3BSD . Other suffixes appeared later, for example in FreeBSD it was release 6.2 that introduced k , M and other suffixes but not b which I think only exists in GNU and BusyBox find. Historically, many programs used “character” and “byte” interchangeably, and tended to prefer the term “character”. For example, wc -c counts bytes. Support for multibyte characters, and hence a character count that differs from the byte count, is a relatively recent phenomenon. In summary, there is no purpose. The 512-byte block size, the fact that it's the default unit, and the use of the letter b did not arise deliberately, but through historical happenstance. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72823/"
]
} |
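The rounding-up behaviour noted at the end of the answer above is easy to observe with GNU find (the file name is an example):
$ printf x > one-byte
$ find . -name one-byte -size -1M    # no match: 1 byte rounds up to one whole 1 MiB unit
$ find . -name one-byte -size -2k    # matches: one 1 KiB unit, and 1 < 2
./one-byte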
259,225 | This bloody error makes my headache grow bigger and bigger every day. I have never been in a situation quite like this one. After I authenticate over SSH successfully and do a few things, my SSH connection suddenly gets dropped!? Here is my error message: packet_write_wait: Connection to XXX.XX.XX.XXX: Broken pipe I wish my error message looked like this: Write Failed: broken pipe. A lot, believe me! I tried a ton of resolutions from the Internet, like adding ServerAliveInterval, ServerAliveCountMax, ClientAlive.... Someone said: turn your TCPKeepAlive to no, add ServerAlive, blah blah. I did that too, but still the same error. No luck for me so far. Any help will be appreciated. | Dear 2018 and later readers, let me show you a comment from MelBurslan: If you are in a corporate environment, check with your firewall admins and see if they were updating rules and/or restarting the firewall after some sort of a change when this happens. If it is happening to a personal server of yours, you need to provide more information on what you were doing on the sshd server side when this happened. Broken pipe generally means there was a network disconnect for some reason. So basically, if you are trying to use ssh [email protected] over a VPN (corporate environment), then this error will be with you over and over. The only solution I have found so far is mobile-shell (mosh); thanks to whoever created it. You will need to install mosh-server on your target (the server you want to ssh to) and mosh-client on your host machine. It will auto-reconnect when your packets are lost, which is pretty cool and suits all our needs, I think. Update 03/2020: If you can't install mosh-server on your servers, then you could use my script here: https://github.com/ohmybash/oh-my-bash/blob/master/tools/autossh.sh It will auto-reconnect whenever the SSH session dies. Happy ssh'ing! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102382/"
]
} |
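If mosh cannot be installed, autossh wraps plain ssh with automatic reconnection (a hedged sketch; it only needs to be installed client-side):
autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" user@host
# mosh itself, once mosh-server is present on the target:
mosh user@host      # needs UDP ports 60000-61000 reachable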
259,231 | What is the difference between /usr/bin and /usr/local/bin ? Why are there both directories and why do some executable programs exist in both directories ? | /usr/bin : contains executable programs that are part of the operating system and installed by its package manager /usr/local/bin : default location for executable programs not part of the operating system and installed there by the local administrator, usually after building them from source with the sequence configure;make;make install . The goal is not to break the system by overwriting a functional program by a dysfunctional or one with a different behavior. When the same program exists in both directories, you can select which ones will be called by default by rearranging the order of the directories in your PATH . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154468/"
]
} |
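To see which copy wins on a given system (python here is just a hypothetical program installed in both directories):
$ type -a python
python is /usr/local/bin/python
python is /usr/bin/python
$ echo "$PATH"      # /usr/local/bin being listed first is what gives it priority
/usr/local/bin:/usr/bin:/bin:...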
259,249 | I am using RHEL 7 with GNOME 3. As far as I know, I can only configure an ADSL connection ( via GUI ) with nm-connection-editor. But where can I find a clickable symbol for the nm-connection-editor in the GUI? It should be available, but digging around in network manager (with the mouse) doesn't bring up nm-connection-editor! Or is the only solution for configuring ADSL to press ALT+F2, then type nm-connection-editor? How user friendly is that? | The documentation of the NetworkManager project points out that it's the desktop environment authors' responsibility to integrate nm-connection-editor with their GUIs: Most desktops provide a control center or settings utility that integrates with NetworkManager. You can also use 'nm-connection-editor', 'nmcli' or 'nmtui' tools directly. This does not seem to have been done in RHEL and its derivatives (nor in most other distributions), and answers like this one on Superuser and the NetworkManager documentation from ArchLinux (who generally ship their packages as they are upstream, without alterations) suggest that it has been like that since GNOME 3 came out. Let's check to make sure:
$ locate nm-connection-editor.desktop
/usr/share/applications/nm-connection-editor.desktop
OK, so there is a launcher for that program deployed with the system, but...
$ tail -n 2 /usr/share/applications/nm-connection-editor.desktop
Categories=GNOME;GTK;Settings;X-GNOME-NetworkSettings;
NotShowIn=KDE;GNOME;
...the last line of that .desktop file clearly shows that someone on the DE or distribution level decided to not show that symbol in KDE and GNOME. In summary, yes, you will need to launch the program directly, either via the Alt-F2 prompt or via a terminal. Creating a launcher yourself will, of course, also work. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112826/"
]
} |
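Rather than launching it by hand every time, a per-user copy of the .desktop file can drop the NotShowIn restriction (a sketch; paths as in the answer above):
cp /usr/share/applications/nm-connection-editor.desktop ~/.local/share/applications/
sed -i '/^NotShowIn=/d' ~/.local/share/applications/nm-connection-editor.desktop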
259,261 | I'm trying to list all hidden dirs with the following command ls -lhAF1 | grep -E '^d.*[0-9]{2}:[0-9]{2} \.' which works perfectly fine Explanation for Regex: I'm trying to get all rows that have the following Format: d, then some text, then the timestamp, then a space, then the dot and then more text However when I try to color the ls output with this command: ls --color -lhAF1 | grep -E '^d.*[0-9]{2}:[0-9]{2} \.' It gives zero results, the output without --color is: drwxr-xr-x 1 User Group 4096 Feb 1 08:48 .invisible Why does ls / grep behave this way? | --color adds escape sequences for the color. You can see this if you redirect the output (of ls --color ) to a file. This is what it looks like: drwxr-xr-x 6 root root 4.0K Jan 9 08:23 ^[[01;34m.cabal^[[0m/ To account for this, try this instead: ls -lhAF1 --color | grep -E '^d.*[0-9]{2}:[0-9]{2} .*\.' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154502/"
]
} |
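Two hedged alternatives to the fix above, depending on whether colour is wanted in the final output:
ls -lhAF | grep --color=auto -E '^d.*[0-9]{2}:[0-9]{2} \.'    # let grep do the colouring
ls --color=always -lhAF | sed 's/\x1b\[[0-9;]*m//g' | grep -E '^d.*[0-9]{2}:[0-9]{2} \.'    # strip the escapes first (GNU sed)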
259,287 | This works
Normally, zsh 's tab completion works well.
$ touch foo-1-bar foo-2-bar
$ touch f<Tab>
$ touch foo--bar
         ^ cursor here
Pressing Tab again brings up a menu from which I can select files.
$ touch foo--bar
foo-1-bar  foo-2-bar
This doesn't
However, this doesn't seem to work with strings where the beginning and end match. For example:
touch foo-bar foo-foo-bar
touch f<Tab>
touch foo-bar
          ^ cursor here. <Tab> again.
touch foo-bar
          ^ cursor here. No menu is brought up, and there is no opportunity to select foo-foo-bar .
Is this expected behaviour or a bug? Is there a setting to make a menu appear in the latter scenario? I'm using oh-my-zsh . I attempted removing all the completion-related lines from ~/.zshrc , but this made no difference. | As per the comments, I tried disabling oh-my-zsh , which fixed this problem. I then went through the oh-my-zsh source, selectively disabling modules. I previously had CASE_SENSITIVE="true" , but commenting out this line fixed it for me. Apparently it's a known bug . To fix it, I could put the following line in ~/.zshrc after sourcing oh-my-zsh : zstyle ':completion:*' matcher-list 'r:|=*' 'l:|=* r:|=*' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
259,330 | Here's the context: on a samba server, I have some folders (that we'll call A, B, C, D) which are supposed to receive files from a network scanner. The scanner renders a PDF file named like this: YYYYMMDDHHmmss.pdf (Year, Month, Day, Hour, minute, seconds). I need those PDFs to be renamed the moment they appear in the folder, or within the minute (I'm thinking about crontab). The renaming must be something like "[prefix_specific_to_the_folder]_YYYY-MM-DD.pdf". I've seen that "date +%F" does what I want for the timestamp, and I just have to manually set my prefix in the script. I have the algorithm in mind; it must be something like "- read file.pdf - if the name of the file doesn't have [prefix] - then mv file.pdf [prefix]_[date].pdf - else never mind about that file." It's really hard for me to find the correct syntax for this. I would prefer to retrieve the system timestamp of file creation and rename the file with it instead of using the filename generated by the scanner. | Here is a solution built around the inotifywait utility. (You could use incron too, but you'd still need code similar to this.) Run this at boot time, for example from /etc/rc.local .
#!/bin/bash
#
cd /path/to/samba/folder

# Rename received files to this prefix and suffix
prefix="some_prefix"
suffix="pdf"

inotifywait --event close_write --format "%f" --monitor . |
    while IFS= read -r file
    do
        # Seconds since the epoch
        s=$(stat -c "%Y" "$file")

        # Convert to YYYY-MM-DD
        ymd="$(date --date "@$s" +'%Y-%m-%d')"

        # Rename the file. Mind the assumed extension
        mv -f "$file" "${prefix}_$ymd.$suffix"
    done
I'm not sure what you expect to happen if there are two or more files created on the same day. At the moment the most recently arrived (and processed) file will replace any earlier file from the same date. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154553/"
]
} |
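To test the watcher above without the scanner, any file copy fires the close_write event (paths as assumed in the script):
cp sample.pdf /path/to/samba/folder/20240101120000.pdf
ls /path/to/samba/folder      # expect some_prefix_2024-01-01.pdf a moment later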
259,335 | Currently, I have files which contain sections like this:
code statement1
code statement2
# BEGIN SOMENAME
some code
some other code
# END SOMENAME
code statement n +1
code statement n +1
What I want to do is to comment out what is between # BEGIN SOMENAME and # END so that in the end, it looks like this:
code statement1
code statement2
# BEGIN SOMENAME
# some code
# some other code
# END SOMENAME
code statement n +1
code statement n +1
Can I achieve this with awk or sed ? And can I reverse it easily with an operation that "comments in" again? What I want to avoid is making mistakes, so if the lines are already commented out they should be left alone. Also, in "comment in", it should not try to do something if the lines between the end and begin do not start with a # . Found a possible solution: awk ' BEGIN { i=0; line_with_no_comment_found=0 } /^# END/ { m=0; if ( line_with_no_comment_found == 1 ) { for (var in a) print "# "a[var] } else { for (var in a) print a[var] } delete a; i=0; line_with_no_comment_found=0; } /^# / { if (m==0) { print } else { a[i++]=$0; } } !/^# / { if (m==0) { print } else { a[i++]=$0; line_with_no_comment_found=1 } } /^# BEGIN ([a-zA-Z_])([1-9][0-9]*)*/ { m=1; } END { }'<<EOF | Yes, sed handles both directions (GNU sed syntax shown). To comment the block out, idempotently, so lines that already start with # are left alone: sed '/^# BEGIN SOMENAME$/,/^# END SOMENAME$/{/^# BEGIN/b;/^# END/b;/^#/!s/^/# /}' file And to comment it back in: sed '/^# BEGIN SOMENAME$/,/^# END SOMENAME$/{/^# BEGIN/b;/^# END/b;s/^# //}' file Within the address range, the b commands skip the marker lines themselves; /^#/! restricts the substitution to lines that are not already commented, and s/^# // only touches lines that do start with "# ", so running the comment step twice changes nothing. Add -i to edit the file in place, and substitute the real section name for SOMENAME. One caveat: a pre-existing comment inside the block gains an extra "# " layer when you comment and loses it when you uncomment, which keeps it intact as long as the two operations alternate. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120583/"
]
} |
259,361 | Excel files can be converted to CSV using: $ libreoffice --convert-to csv --headless --outdir dir file.xlsx Everything appears to work just fine. The encoding, though, is set to something wonky. Instead of a UTF-8 mdash (—) that I get if I do a "save as" manually from LibreOffice Calc, it gives me a \227 (�). Using file on the CSV gives me "Non-ISO extended-ASCII text, with very long lines". So, two questions: What on earth is happening here? How do I tell libreoffice to convert to UTF-8? The specific file that I'm trying to convert is here . | Apparently LibreOffice tries to use ISO-8859-1 by default, which is causing the problem. In response to this bug report , a new parameter --infilter has been added. The following command produces U+2014 em dash : libreoffice --convert-to csv --infilter=CSV:44,34,76,1 --headless --outdir dir file.xlsx I tested this with LO 5.0.3.2. From the bug report, it looks like the earliest version containing this option is LO 4.4. See also: https://ask.libreoffice.org/en/question/13008/how-do-i-specify-an-input-character-coding-for-a-convert-to-command-line-usage/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146087/"
]
} |
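The same filter works for batch conversion (a sketch; note that headless conversion tends to fail silently if another LibreOffice instance is already running):
for f in *.xlsx; do
    libreoffice --headless --infilter=CSV:44,34,76,1 --convert-to csv --outdir dir "$f"
done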
259,413 | From bash, I am spawning two processes. These two processes depend on each other. I want both to exit if either one dies. What is the cleanest way to do that? Currently I have the following:
# start process a
/bin/program_a;
a_pid=$!
# start process b
/bin/program_b;
b_pid=$!
# kill process b if process a exits
wait $a_pid
echo "a_pid died, killing process b"
kill -9 $b_pid
But this only helps process b exit if process a dies. How do I also make process a exit if process b dies? | With zsh :
pids=()
trap '
  trap - CHLD
  (($#pids)) && kill $pids 2> /dev/null' CHLD
sleep 2 & pids+=$!
sleep 1 & pids+=$!
sleep 3 & pids+=$!
wait
(here using sleep as test commands). With bash it would seem the CHLD trap is only run when the m option is on. You don't want to start your jobs under that option though, as that would run them in separate process groups. Also note that resetting the handler within the handler doesn't seem to work with bash. So the bash equivalent would be something like:
pids=()
gotsigchld=false
trap '
  if ! "$gotsigchld"; then
    gotsigchld=true
    ((${#pids[@]})) && kill "${pids[@]}" 2> /dev/null
  fi' CHLD
sleep 2 & pids+=("$!")
sleep 1 & pids+=("$!")
sleep 3 & pids+=("$!")
set -m
wait
set +m
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147914/"
]
} |
259,430 | I would like to enable OCSP stapling in my nginx server. I'm using nginx version nginx/1.6.2 on Debian, with a Let's Encrypt certificate. I'm really inexperienced in this matter, so it might be a trivial issue. Here is my nginx security config:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /etc/ssl/private/dhparams_4096.pem;
Here is my site/server security config:
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
# All files have been generated by Let's encrypt
ssl_certificate /etc/letsencrypt/live/myexample.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myexample.org/privkey.pem;
# Everything below this line was added to enable OCSP stapling
# What is that (generated file) and is that required at all?
ssl_trusted_certificate /etc/letsencrypt/live/myexample.org/chain.pem;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
I read that this would be sufficient to enable OCSP stapling. But if I test it using openssl s_client -connect myexample.org:443 -tls1 -tlsextdebug -status I get the following response:
TLS server extension "renegotiation info" (id=65281), len=1
0001 - <SPACES/NULS>
TLS server extension "EC point formats" (id=11), len=4
0000 - 03 00 01 02   ....
TLS server extension "session ticket" (id=35), len=0
TLS server extension "heartbeat" (id=15), len=1
0000 - 01   .
OCSP response: no response sent
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X1
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/CN=myexample.org
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X1
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X1
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---
[...]
Especially: OCSP response: no response sent What am I doing wrong? Certificate hierarchy:
DST Root CA X3
  Let's Encrypt Authority X1
    myexample.org
EDIT: OCSP: URI: http://ocsp.int-x1.letsencrypt.org/ CA-Issuer: URI: http://cert.int-x1.letsencrypt.org/ | I found the solution based on the tutorial I found there :
cd /etc/ssl/private
wget -O - https://letsencrypt.org/certs/isrgrootx1.pem https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem https://letsencrypt.org/certs/letsencryptauthorityx1.pem https://www.identrust.com/certificates/trustid/root-download-x3.html | tee -a ca-certs.pem > /dev/null
and add this to your site/server config:
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/private/ca-certs.pem;
Reload your config. IMPORTANT: Open your browser and access your webpage once. 
Then you can test your server locally with this cmd: openssl s_client -connect myexample.org:443 -tls1 -tlsextdebug -status You will most likely get a valid response like this OCSP response:======================================OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response Version: 1 (0x0) Responder Id: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X1 Don't worry if you get a Verify return code: 20 (unable to get local issuer certificate) at the bottom as well , the Let's encrypt certificate is not yet in the default trusted certificate stores.(I don't have much ssl experience, so I might be wrong) The error will not show up if you execute the following cmd on the server: openssl s_client -CApath /etc/ssl/private/ -connect myexample.org:443 -tls1 -tlsextdebug -status After that you can test your server using: https://www.digicert.com/help/ Be aware that right now OCSP reponses won't be picked up by the ssllabs tests. I assume this is because the Let's encrypt certificate is not yet in the default trusted certificate stores. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154630/"
]
} |
259,478 | I have a minimal Centos 7 Docker image, and I'm trying to get some man pages on it to help in debugging my Dockerfile. Out of the box, it doesn't have much: # man lsNo manual entry for ls Per this Serverfault answer , I installed the man-pages RPM, and that seemed to go fine: # yum install -y man-pagesLoaded plugins: fastestmirror, ovlLoading mirror speeds from cached hostfile * base: mirror.vtti.vt.edu * extras: centos.mbni.med.umich.edu * updates: centos.netnitco.netResolving Dependencies--> Running transaction check---> Package man-pages.noarch 0:3.53-5.el7 will be installed--> Finished Dependency ResolutionDependencies Resolved====================================================================================================== Package Arch Version Repository Size======================================================================================================Installing: man-pages noarch 3.53-5.el7 base 5.0 MTransaction Summary======================================================================================================Install 1 PackageTotal download size: 5.0 MInstalled size: 4.6 MDownloading packages:man-pages-3.53-5.el7.noarch.rpm | 5.0 MB 00:00:01 Running transaction checkRunning transaction testTransaction test succeededRunning transaction Installing : man-pages-3.53-5.el7.noarch 1/1 Verifying : man-pages-3.53-5.el7.noarch 1/1 Installed: man-pages.noarch 0:3.53-5.el7 Complete! However: # man lsNo manual entry for ls I used rpm to check that man-pages was supposed to include the ls man page, and it looks like it does: # rpm -ql man-pages | grep -w ls/usr/share/man/man1p/ls.1p.gz But it doesn't look like it was actually installed: # man 1p lsNo manual entry for ls in section 1p# ls -l /usr/share/man/man1p/total 0 And it doesn't seem to be anywhere else on the filesystem, either. # find / -name ls.1\*# I can create files in /usr/share/man/man1p/ , so it's probably not some Docker virtual filesystem weirdness. The best part of this is that what I really wanted right this minute was the man page for the useradd command, which isn't even in that RPM. It's in shadow-utils . # yum whatprovides /usr/share/man/man8/useradd.8.gzLoaded plugins: fastestmirror, ovlLoading mirror speeds from cached hostfile * base: mirror.vtti.vt.edu * extras: mirror.tzulo.com * updates: centos.netnitco.net2:shadow-utils-4.1.5.1-18.el7.x86_64 : Utilities for managing accounts and shadow password filesRepo : baseMatched from:Filename : /usr/share/man/man8/useradd.8.gz Which is already installed. # yum install shadow-utilsLoaded plugins: fastestmirror, ovlLoading mirror speeds from cached hostfile * base: mirror.vtti.vt.edu * extras: centos.mbni.med.umich.edu * updates: centos.netnitco.netPackage 2:shadow-utils-4.1.5.1-18.el7.x86_64 already installed and latest versionNothing to do And, in fact, the binaries (e.g. /usr/sbin/useradd ) are there. But not the man pages. # ls -l /usr/share/man/man8/useradd.8.gzls: cannot access /usr/share/man/man8/useradd.8.gz: No such file or directory So my questions are: Why can't I find any of the man pages that are supposed to be in the shadow-utils RPM, when I can find the binaries? Why doesn't (successfully) installing the man-pages RPM install the files that are supposed to be in that RPM? Update: Per Aaron Marasco's answer and msuchy's comment , I tried yum reinstall shadow-utils . As with yum install man-pages , this appears to complete successfully, but doesn't actually put any files in /usr/share/man/ . 
| Your image probably has the nodocs transaction flag set in the yum configuration (cf. /etc/yum.conf ). You can remove it globally (or at the yum command line) before (re-)installing the packages you want the man pages for. For example: yum --setopt=tsflags='' reinstall shadow-utils
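To make the fix permanent inside the image, delete the flag from /etc/yum.conf and then reinstall the affected packages; on stock CentOS images the offending line is usually tsflags=nodocs , but treat that exact spelling as an assumption and check first: grep tsflags /etc/yum.conf
sed -i '/tsflags=nodocs/d' /etc/yum.conf
After that, plain yum install and yum reinstall will ship documentation again.
| {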
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17155/"
]
} |
259,518 | upowerd is consuming nearly 100% of CPU. I am currently affected by this issue on both a Lenovo Thinkpad E520 and on a desktop PC running Core i7 and Asus Z170a motherboard. Both run Kubuntu 15.10. However, I have found reports of this problem on several different distros (from Fedora to Arch to Ubuntu) going back several years. I found these bug reports, but I don't find any workaround or solution: FS#40444 : [upower] upowerd 0.99.0-2 eats all resources https://bugs.archlinux.org/task/40444 Bug #861642 “upowerd uses 100% cpu till killed” : Bugs : upower package : Ubuntu https://bugs.launchpad.net/ubuntu/+source/upower/+bug/861642 Bug #876279 “Upowerd excessive CPU usage” : Bugs : upower package : Ubuntu https://bugs.launchpad.net/ubuntu/+source/upower/+bug/876279 | Do you have an iphone connected to the computer? This happens to me whenever I connect my iphone to it. Linux arjun-thinkpad 4.4.7-1-lts #1 SMP Thu Apr 14 17:26:39 CEST 2016 x86_64 GNU/Linux | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
259,538 | Is it possible to use grep to search a file and send an email based on the results? I have been using grep SEARCHSTRING /logs/error_log | mailx -s subject [email protected] But I don't want it to send an email when there is no match (no results found) | You can run mailx if the grep command returns success, i.e. a match is found: body="$(grep SEARCHSTRING /logs/error_log)" && echo "$body" | mailx -s subject [email protected] This saves the output of grep (if any) to the variable body ; if the grep command succeeds, mailx is run with $body as the body of the mail.
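If the moreutils package is available, its ifne utility (run a command only when stdin is not empty) does the same thing without an intermediate variable:
grep SEARCHSTRING /logs/error_log | ifne mailx -s subject [email protected]
| {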
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150130/"
]
} |
259,617 | I know what the difference between interactive/non-interactive and login/non-login shells are, but it seems like in practice there's never going to be a non-interactive login shell unless you have something like /bin/bash --login some-script.sh in a script (and even that seems a little odd). Is this correct or are they more common? | I assume you're talking about Bash's concept of login vs. non-login and interactive vs. non-interactive Bash shells as described in the Invocation section of the manpage. (This is different from the interpretation in James Youngman's answer of any arbitrary command used as the "shell" (or user command interpreter) in the passwd(5) file and whether or not that program accepts user input; some, such as /usr/sbin/nologin , obviously don't.) You are correct that /bin/bash --login some-script.sh will produce a non-interactive login Bash invocation, and this is perhaps a pathological example. There is one case, perhaps uncommon but not truly weird, that produces a non-interactive login shell: ssh somehost < some-file . Here sshd will start Bash with argv[0] set to -bash because it's not been given a command to run, causing Bash to consider itself a login shell, but, because stdin is not connected to a terminal, Bash will not set itself to interactive mode ( $- will not contain i ). (Personally, that case seems far more reasonable than the converse, ssh somehost somecommand , which is not considered a "login shell" even though it's a fresh login to somehost just as the above is.) I have recently done what I should have done long ago and put together a table of Bash's modes and what init files are run . If you're finding it confusing, take heart in that at least I do as well. It mystifies me what their original aim was with the rules about when .bashrc is executed.
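A quick way to check which mode a given shell is in (both the login_shell shopt and the i flag in $- are standard Bash features):
shopt -q login_shell && echo login || echo non-login
case $- in *i*) echo interactive ;; *) echo non-interactive ;; esac
Putting those two lines in a file and running ssh somehost < thatfile prints "login" and "non-interactive", which makes the combination described above visible.
| {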
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121116/"
]
} |
259,618 | I have the following prompt in bash which shows the current git branch: PS1+="$(git_prompt)" #git_prompt is a function in my .bashrc which works when I source the .bashrc, but not when I change the branch, so the PS1 var gets only evaluated when I source the .bashrc, but it should be evaluated every time a new prompt is displayed. How can this be accomplished with bash 4.3 ? | Your problem is that, inside double quotes, $(git_prompt) is evaluated to some constant string before it is added to $PS1 . You have to add the code itself, in single quotes, so the command substitution is deferred until each prompt is drawn: PS1+='$(git_prompt)'
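A minimal self-contained sketch (the function body and prompt format here are illustrative, not the asker's original):
git_prompt() { git branch --show-current 2>/dev/null; }
PS1='\u@\h:\w ($(git_prompt))\$ '
With the single quotes, Bash re-runs git_prompt every time the prompt is displayed, so the branch name follows checkouts. Note git branch --show-current needs git 2.22 or later; on older versions git symbolic-ref --short HEAD does the same job.
| {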
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154502/"
]
} |
259,630 | I'm debugging a binary stream that is coming from a device file. I would like to have the output printed out in real time as hex values. So far I've used tail -f /dev/ttyAPP2 | hexdump -C but after I started missing some bytes in the output I understood that this could be a bad choice because it doesn't flush the data until a newline character is found. There is an unofficial binary tail but I currently can't use that approach and am looking for a suggestion how to achieve this with other means? Example First the tty is set to raw mode. stty -F /dev/ttyAPP2 raw Here is what I get when listening to the device (this is real output) root@Vdevice:/dev# hexdump -C < /dev/ttyAPP200000000 55 00 21 00 02 26 00 02 0b 00 09 02 06 01 00 01 00000010 99 0c ec 45 4f 01 03 47 41 54 45 57 41 59 43 54 However, the expected package should be (this isn't a real output): root@Vdevice:/dev# hexdump -C < /dev/ttyAPP200000000 55 00 21 00 02 26 00 02 0b 00 09 02 06 01 00 01 00000010 99 0c ec 45 4f 01 03 47 41 54 45 57 41 59 43 54 00000020 52 4c 00 00 00 00 00 8b The other part of the package gets printed out on arrival of the second package (this is real output) root@Vdevice:/dev# hexdump -C < /dev/ttyAPP200000000 55 00 21 00 02 26 00 02 0b 00 09 02 06 01 00 01 00000010 99 0c ec 45 4f 01 03 47 41 54 45 57 41 59 43 5400000020 52 4c 00 00 00 00 00 8b 55 00 21 00 02 26 00 0200000030 0b 00 09 02 06 01 00 01 99 0c ec 45 4f 01 03 4700000040 41 54 45 57 41 59 43 54 52 4c 00 00 00 00 00 8b | You don't need to tail -f a tty. If it's sending you EOF, or, if it is line-buffering, then you need to configure it. stty -F/dev/ttyAPP2 raw Now you can... cat /dev/ttyAPP2 ...as needed... You might try... </dev/ttyAPP2 \dd bs=16 conv=sync | od -vtx1 ...which will sync out every successful read() from your device into 16-byte, null-padded blocks, and so will write line-buffered output (such as to your terminal) in real-time regardless of throughput, though any trailing nulls might distort your stream. With GNU stdbuf and a dynamically linked od : stdbuf -o0 od -vtx1 </dev/ttyAPP2 ...would write output in real-time regardless. You might also buffer to a temp file like... f=$(mktemp)exec 3<>"$f"; rm -- "$f"while dd >&3 of=/dev/fd/1 bs=4k count=1 [ -s /dev/fd/3 ]do od -An -vtx1 /dev/fd/3 echodone </dev/ttyAPP2 2>/dev/null ...which, though likely not nearly as efficient as the other recommendations, might be worth considering if you wanted to delimit reads from your device by EOF. I find the technique useful sometimes when working with ttys, anyway. It is also possible to force hexdump to print out less bytes by using the custom print format. The example below will print every time there are 4 bytes available: hexdump -e '4/1 "%02x " "\n"' < /dev/ttyAPP2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
259,640 | I have one server with net connectivity, where I can use "yum install $PACKAGE". I want some yum command, like yum cache-rpms $PACKAGE $DIRECTORY such that all required RPM files will be downloaded to $DIRECTORY, which will also have a file ( Install.sh ) stating the order in which to install these RPMs, on many other servers without net connectivity. Install.sh may even be a shell script, which has the same behaviour as yum install $PACKAGE , except that it will not use the network, but will only use $DIRECTORY . Possible? I am looking for a general solution where yum and RPM is available, but for specificity: It is on a set of CENTOS 6.7 servers. | Here's a specific example using "httpd" as the package to download and install. This process was tested on both CentOS6 and CentOS7. Install the stuff you need and make a place to put the downloaded RPMs: # yum install yum-plugin-downloadonly yum-utils createrepo# mkdir /var/tmp/httpd# mkdir /var/tmp/httpd-installroot Download the RPMs. This uses the installroot trick suggested here to force a full download of all dependencies since nothing is installed in that empty root. Yum will create some metadata in there, but we're going to throw it all away. Note that for CentOS7 releasever would be "7". # yum install --downloadonly --installroot=/var/tmp/httpd-installroot --releasever=6 --downloaddir=/var/tmp/httpd httpd Yes, that was the small version. You should have seen the size of the full-repo downloads! Generate the metadata needed to turn our new pile of RPMs into a YUM repo and clean up the stuff we no longer need: # createrepo --database /var/tmp/httpd# rm -rf /var/tmp/httpd-installroot Configure the download directory as a repo. Note that for CentOS7 the gpgkey would be named "7" instead of "6": # vi /etc/yum.repos.d/offline-httpd.repo[offline-httpd]name=CentOS-$releasever - httpdbaseurl=file:///var/tmp/httpdenabled=0gpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 To check the missing dependencies: # repoclosure --repoid=offline-httpd I haven't figured out why on CentOS7 this reports things like libssl.so.10(libssl.so.10)(64bit) missing from httpd-tools when openssl-libs-1.0.1e-51.el7_2.2.x86_64.rpm (the provider of that library) is clearly present in the directory. Still, if you see something obviously missing, this might be a good chance to go back and add it using the same yum install --downloadonly method above. When offline or after copying the /var/tmp/httpd repo directory to the other server set up the repo there: # vi /etc/yum.repos.d/offline-httpd.repo[offline-httpd]name=CentOS-$releasever - httpdbaseurl=file:///var/tmp/httpdenabled=0gpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6# yum --disablerepo=\* --enablerepo=offline-httpd install httpd Hopefully no missing dependencies! | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/259640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54246/"
]
} |
259,659 | I run free -m on a debian VM running on Hyper-V: total used free shared buffers cachedMem: 10017 9475 541 147 34 909-/+ buffers/cache: 8531 1485Swap: 1905 0 1905 So out of my 10GB of memory, 8.5GB is in use and only 1500MB is free (excluding cache). But I struggle to find what is using the memory. The output of ps aux | awk '{sum+=$6} END {print sum / 1024}' , which is supposed to add up the RSS utilisation is: 1005.2 In other words, my processes only use 1GB of memory but the system as a whole (excluding cache) uses 8.5GB. What could be using the other 7.5GB? ps: I have another server with a similar configuration that shows used mem of 1200 (free mem = 8.8GB) and the sum of RSS usage in ps is 900 which is closer to what I would expect... EDIT cat /proc/meminfo on machine 1 (low memory): MemTotal: 10257656 kBMemFree: 395840 kBMemAvailable: 1428508 kBBuffers: 162640 kBCached: 1173040 kBSwapCached: 176 kBActive: 1810200 kBInactive: 476668 kBActive(anon): 942816 kBInactive(anon): 176184 kBActive(file): 867384 kBInactive(file): 300484 kBUnevictable: 0 kBMlocked: 0 kBSwapTotal: 1951740 kBSwapFree: 1951528 kBDirty: 16 kBWriteback: 0 kBAnonPages: 951016 kBMapped: 224388 kBShmem: 167820 kBSlab: 86464 kBSReclaimable: 67488 kBSUnreclaim: 18976 kBKernelStack: 6736 kBPageTables: 13728 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 7080568 kBCommitted_AS: 1893156 kBVmallocTotal: 34359738367 kBVmallocUsed: 62284 kBVmallocChunk: 34359672552 kBHardwareCorrupted: 0 kBAnonHugePages: 0 kBHugePages_Total: 0HugePages_Free: 0HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBDirectMap4k: 67520 kBDirectMap2M: 10418176 kB cat /proc/meminfo on machine 2 (normal memory usage): MemTotal: 12326128 kBMemFree: 8895188 kBMemAvailable: 10947592 kBBuffers: 191548 kBCached: 2188088 kBSwapCached: 0 kBActive: 2890128 kBInactive: 350360 kBActive(anon): 1018116 kBInactive(anon): 33320 kBActive(file): 1872012 kBInactive(file): 317040 kBUnevictable: 0 kBMlocked: 0 kBSwapTotal: 3442684 kBSwapFree: 3442684 kBDirty: 44 kBWriteback: 0 kBAnonPages: 860880 kBMapped: 204680 kBShmem: 190588 kBSlab: 86812 kBSReclaimable: 64556 kBSUnreclaim: 22256 kBKernelStack: 10576 kBPageTables: 11924 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 9605748 kBCommitted_AS: 1753476 kBVmallocTotal: 34359738367 kBVmallocUsed: 62708 kBVmallocChunk: 34359671804 kBHardwareCorrupted: 0 kBAnonHugePages: 0 kBHugePages_Total: 0HugePages_Free: 0HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBDirectMap4k: 63424 kBDirectMap2M: 12519424 kB | I understand you're using Hyper-V, but the concepts are similar. Maybe this will set you on the right track. Your issue is likely due to virtual memory ballooning, a technique the hypervisor uses to optimize memory. See this link for a description I observed your exact same symptoms with my VMs in vSphere. A 4G machine with nothing running on it would report 30M used by cache, but over 3G "used" in the "-/+ buffers" line. Here's sample output from VMWare's statistics command. This shows how close to 3G is being tacked on to my "used" amount: vmware-toolbox-cmd stat balloon3264 MB In my case, somewhat obviously, my balloon driver was using ~3G I'm not sure what the similar command in Hyper-V is to get your balloon stats, but I'm sure you'll get similar results | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/259659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48590/"
]
} |
259,675 | In VIM, if, for example, I have the text http://a.com
http://b.com
is it possible to find all lines (the whole line) and replace it with something before and after it, such as: <a href="http://a.com">http://a.com</a>
<a href="http://b.com">http://b.com</a>
Note that the text from every line is repeated. Once for the href and another for the text. | :%s:.*:<a href="&">&</a>: Same as in ed/sed/perl: in the replacement, & stands for the whole matched text (here, the entire line), which is why it can appear twice, once for the href and once for the link text. Another less ex and more vim -like way would be: if you know how to do it once for a line, record it as a macro and then run :%normal @m where m is that macro. Like (in normal mode): qmS<a href="<Ctrl-R>""><Ctrl-R>"</a><Esc>q to record the macro. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67546/"
]
} |
259,718 | I have hundreds of *.txt files which have a common format. I can insert a comma at a specific position in one file, how can I generalize the below code to apply this operation at several places for all *.txt files in the directory? sed -i 's/^\(.\{4\}\)/\1,/' blank.txt For example inserting commas at positions 4, 8, 22 etc. Something like this perhaps? for i in *.txt; do sed -i 's/^\(.\{4\}\)/\1,/' $i
done | In a general way, you can just do: sed 's/./&,/4' <in >out That will append a comma on output to the 4th character of all input lines with at least that many characters. And, if you'll take my advice, you should generally not use the -i switch to any sed which offers one.
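To handle several positions and every *.txt file at once, one sketch (apply the positions highest-first so each offset still refers to the original columns, and let a temporary file stand in for -i ):
for f in *.txt; do
  sed 's/./&,/22; s/./&,/8; s/./&,/4' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
Adjust 22, 8 and 4 to the real column positions you need.
| {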
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259718",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153453/"
]
} |
259,719 | I have a bunch of files in a single directory that I would like to rename as follows, example existing names: 1937 - Snow White and the Seven Dwarves.avi
1940 - Pinocchio.avi
Target names: Snow White and the Seven Dwarves (1937).avi
Pinocchio (1940).avi
Cheers | A shell loop with parameter expansion handles the YEAR - TITLE.avi pattern directly. Roughly (prefix the mv with echo first to preview the renames before doing them for real):
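for f in [0-9][0-9][0-9][0-9]' - '*.avi; do
  year=${f%% - *}        # everything before the first " - "
  title=${f#* - }        # everything after it
  title=${title%.avi}    # drop the extension
  mv -- "$f" "$title ($year).avi"
done
The glob and the " - " separator are assumptions based on the two sample names; adjust them if your real files differ. Where the Perl rename (sometimes installed as prename ) is available, a one-liner also works: rename 's/^(\d{4}) - (.*)\.avi$/$2 ($1).avi/' *.avi (note the util-linux rename has a different syntax).
| {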
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154836/"
]
} |
259,735 | Imagine I have a path that doesn't exist: $ ls /foo/bar/baz/hello/world
ls: cannot access /foo/bar/baz/hello/world: No such file or directory
But let's say /foo/bar does exist. Is there a quick way for me to determine that baz is the breaking point in the path? I'm using Bash. | Given a canonical pathname, such as yours, this will work: set -f --; IFS=/
for p in $pathname
do [ -e "$*/$p" ] || break
   set -- "$@" "$p"
done; printf %s\\n "$*"
That prints through the last fully existing/accessible component of $pathname , and puts each of those separately into the arg array. The first nonexistent component is not printed, but it is saved in $p . You might approach it oppositely: until cd -- "$path" && cd -
do case $path in
   (*[!/]/*) path="${path%/*}";;
   (*) ! break
   esac
done 2>/dev/null && cd -
That will either return appropriately or will pare down $path as needed. It declines to attempt a change to / , but if successful will print both your current working directory and the directory to which it changes to stdout. Your current $PWD will be put in $OLDPWD as well.
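If util-linux is installed, its namei utility does this walk for you: it resolves the path one component at a time and reports the first one it cannot find, e.g. namei /foo/bar/baz/hello/world (no loop required).
| {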
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154846/"
]
} |
259,757 | Suppose i have a folderwith a lot of file namessome very strange and nonsenseI want to rename it like File-1File-2File-3.. I have tried this(echo is for tryng) for name in *; do echo mv $name File-`echo $(( RANDOM % (10 - 5 + 1 ) + 1 ))`;done But give me a lot of duplicates mv bio1 file-3mv memory23 file-1mv mernad file-3mv nio2 file-4mv nun3 file-4 | You could maybe use shuf (from the GNU coreutils package), which generates permutations rather than individual random samples - something like for f in *; do read i; echo mv -- "$f" "file-$i"; done < <(shuf -i 1-10) or (perhaps better) shuffle the filenames - and then simply rename them sequentially i=1; shuf -z -e -- * | while IFS= read -rd '' f; do echo mv -- "$f" "File-$((i++))"; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
259,759 | I am trying to have a kiosk style setup on raspberry pi where only one application is available via VNC e.g. gedit or firefox. Using a guide like this http://gpio.kaltpost.de/?page_id=84 the setup of headless X11 and VNC is easy enough but i am struggling to find a minimal windows manager that i can prevent a user from minimizing the applications main screen and also have a event hook so when the application is closed via it's menu, or crashed i guess, then the windows manager will exit as well, in turn ending the VNC server/session. Exiting from the application is OK as the x11/vncserver will be managed by a supervisord type process supervisor and will auto restart. The user should be isolated to the application so the windows manager should allow it's keybindings to be disabled if they allow multiple virtual desktops, or launching a shell, window re-sizing etc.. i.e i want a type of visual chroot. Can anyone suggest windows manager that supports this feature set and some examples of implementing please ? Thanks fLo | matchbox-window-manager is a common choice for exactly this kind of kiosk: it shows one fullscreen window at a time, has no virtual desktops or run dialog, and with the titlebar disabled there are no minimize/close buttons for the user to reach. You get the "exit hook" for free by making the application the client that the X/VNC session waits on, with the window manager in the background: when the application exits (cleanly or by crashing), the X session ends, the VNC server dies, and your process supervisor restarts the whole thing.
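A sketch of the session script (paths and the choice of firefox are illustrative):
#!/bin/sh
# started by the VNC/X session, e.g. as ~/.xinitrc or the Xvnc session command
matchbox-window-manager -use_titlebar no &
exec firefox
Because firefox is exec'ed as the last client, its exit terminates the whole session, which is the event hook you described; supervisord then brings the VNC server back up.
| {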
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43531/"
]
} |
259,791 | I am currently logged in into a CentOS server and I would like to change my home directory from /home/myuserName/ to /var/www/html/ I tried the below command : > sudo usermod -d /var/www/html myuserName But this gives me an error: usermod: user myUserName is currently logged in | short answer : you can't, not while the user is logged in. long answer : the HOME dir is set in /etc/passwd , 6th field. It is read upon login; your shell is started with this home dir. The proper way to change the home dir for joe is: have joe log off, then use usermod -d /new/home joe to change the home dir for subsequent sessions. If the session must keep running, you must do two things instead: edit $HOME to change the home dir for the current session (to be repeated in every active session), and use sudo vipw to edit the home dir for the next session. Also, be aware you might have an issue with permissions/ownership on /var/www/html .
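Concretely, that two-step workaround looks like this (run the export in each open shell):
export HOME=/var/www/html; cd              # current session only
sudo usermod -d /var/www/html myuserName   # for future logins, after the user logs off
If you also want the old home's contents moved, usermod accepts -m together with -d , though moving into an already-populated /var/www/html is probably not what you want here.
| {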
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135712/"
]
} |
259,820 | I have records in XML file like below. I need to search for <keyword>SEARCH</keyword> and if presentthen I need to take the entire record and write to another file.(starting from <record> to </record> ) Below is my awk code which is inside loop. $1 holds line by line value of each record. if(index($1,"SEARCH")>0){print $1>> "output.txt"} This logic has two problems, It is writing to output.txt file, only <keyword>SEARCH</keyword> element and not the whole record(starting from <record> to </record> ) SEARCH can also be present in <detail> tag. This code will even write that tag to output.txt XML File: <record category="xyz"><person ssn="" e-i="E"><title xsi:nil="true"/><position xsi:nil="true"/><names><first_name/><last_name></last_name><aliases><alias>CDP</alias></aliases><keywords><keyword xsi:nil="true"/><keyword>SEARCH</keyword></keywords><external_sources><uri>http://www.google.com</uri><detail>SEARCH is present in abc for xyz reason</detail></external_sources></details></record><record category="abc"><person ssn="" e-i="F"><title xsi:nil="true"/><position xsi:nil="true"/><names><first_name/><last_name></last_name><aliases><alias>CDP</alias></aliases><keywords><keyword xsi:nil="true"/><keyword>DONTSEARCH</keyword></keywords><external_sources><uri>http://www.google.com</uri><detail>SEARCH is not present in abc for xyz reason</detail></external_sources></details></record> | I'm going to assume that what you've posted is a sample, because it isn't valid XML. If this assumption isn't valid, my answer doesn't hold... but if that is the case, you really need to hit the person who gave you the XML with a rolled up copy of the XML spec, and demand they 'fix it'. But really - awk and regular expressions are not the right tool for the job. An XML parser is. And with a parser, it's absurdly simple to do what you want: #!/usr/bin/env perluse strict;use warnings;use XML::Twig; #parse your file - this will error if it's invalid. my $twig = XML::Twig -> new -> parsefile ( 'your_xml' );#set output format. Optional. $twig -> set_pretty_print('indented_a');#iterate all the 'record' nodes off the root. foreach my $record ( $twig -> get_xpath ( './record' ) ) { #if - beneath this record - we have a node anywhere (that's what // means) #with a tag of 'keyword' and content of 'SEARCH' #print the whole record. if ( $record -> get_xpath ( './/keyword[string()="SEARCH"]' ) ) { $record -> print; }} xpath is quite a lot like regular expressions - in some ways - but it's more like a directory path. That means it's context aware, and can handle XML structures. In the above: ./ means 'below current node' so: $twig -> get_xpath ( './record' ) Means any 'top level' <record> tags. But .// means "at any level, below current node" so it'll do it recursively. $twig -> get_xpath ( './/search' ) Would get any <search> nodes at any level. And the square brackets denote a condition - that's either a function (e.g. text() to get the text of the node) or you can use an attribute. e.g. //category[@name] would find any category with a name attribute, and //category[@name="xyz"] would filter those further. 
XML used for testing: <XML><record category="xyz"><person ssn="" e-i="E"><title xsi:nil="true"/><position xsi:nil="true"/><details><names><first_name/><last_name></last_name></names><aliases><alias>CDP</alias></aliases><keywords><keyword xsi:nil="true"/><keyword>SEARCH</keyword></keywords><external_sources><uri>http://www.google.com</uri><detail>SEARCH is present in abc for xyz reason</detail></external_sources></details></person></record><record category="abc"><person ssn="" e-i="F"><title xsi:nil="true"/><position xsi:nil="true"/><details><names><first_name/><last_name></last_name></names><aliases><alias>CDP</alias></aliases><keywords><keyword xsi:nil="true"/><keyword>DONTSEARCH</keyword></keywords><external_sources><uri>http://www.google.com</uri><detail>SEARCH is not present in abc for xyz reason</detail></external_sources></details></person></record></XML> Output: <record category="xyz"> <person e-i="E" ssn=""> <title xsi:nil="true" /> <position xsi:nil="true" /> <details> <names> <first_name/> <last_name></last_name> </names> <aliases> <alias>CDP</alias> </aliases> <keywords> <keyword xsi:nil="true" /> <keyword>SEARCH</keyword> </keywords> <external_sources> <uri>http://www.google.com</uri> <detail>SEARCH is present in abc for xyz reason</detail> </external_sources> </details> </person> </record> Note - the above just prints the record to STDOUT. That's actually... in my opinion, not such a great idea. Not least because - it doesn't print the XML structure, and so it isn't actually 'valid' XML if you've more than one record (there's no "root" node). So I would instead - to accomplish exactly what you're asking: #!/usr/bin/env perluse strict;use warnings;use XML::Twig; my $twig = XML::Twig -> new -> parsefile ('your_file.xml'); $twig -> set_pretty_print('indented_a');foreach my $record ( $twig -> get_xpath ( './record' ) ) { if ( not $record -> findnodes ( './/keyword[string()="SEARCH"]' ) ) { $record -> delete; }}open ( my $output, '>', "output.txt" ) or die $!;print {$output} $twig -> sprint;close ( $output ); This instead - inverts the logic, and deletes (from the parsed data structure in memory) the records you don't want, and prints the whole new structure (including XML headers) to a new file called "output.txt". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130895/"
]
} |
259,885 | I am using the following command to replace yyyymmdd to YYYYMMDDHH24MISS in my file: sed -e 's/\('yyyymmdd'\)/\('YYYYMMDDHH24MISS'\)/g' filename After I run the command in PuTTY, it displays the file with replaced values, but they do not reflect if I more the file. I tried using -i , but it says sed: illegal option -- i Can someone please suggest how do I replace the given code in multiple files and save them? | Try this: sed 's/yyyymmdd/YYYYMMDDHH24MISS/g' filename > changed.txt Or, to keep the same filename: sed 's/yyyymmdd/YYYYMMDDHH24MISS/g' filename > changed.txt && mv changed.txt filename
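Since your sed lacks -i , the same redirect-and-rename trick in a loop covers the "multiple files" part of the question:
for f in *.txt; do
  sed 's/yyyymmdd/YYYYMMDDHH24MISS/g' "$f" > "$f.new" && mv "$f.new" "$f"
done
( *.txt is a placeholder; use whatever glob matches your files.)
| {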
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129451/"
]
} |
259,898 | I try to download a PDF file while browsing with lynx. Unfortunately, lynx tries to open that file immediately using a PDF viewer (evince) though I do not have an X server running. How can I prevent lynx from doing that and have simply "download" the file instead? | Instead of following the link with Enter, put the cursor on it and press d (Download), which saves the file rather than handing it to a viewer. Lynx picks the external viewer from your mailcap entries, so if you want Enter to stop launching evince as well, remove or comment out the application/pdf entry in ~/.mailcap (or the system mailcap file); with no handler configured, lynx falls back to offering the save-to-disk dialog. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80227/"
]
} |
259,907 | At the startup I work, we're setting up a device for image capturing and analysis. It's a box with a camera, with Ubuntu Linux embedded, and, let's assume, we don't want to connect a monitor to this device for configuration. Some guys came with the solution of having a configuration webpage when connecting the device to a notebook through a network cable directly, just like you do with a router or modem, by accessing a well known IP. It sounds like a solution, but fact is that the device is not a router, and as I see it, it's quite a different context, the device won't be delegating an address to the notebook (making it part of a router's network, where it can have a well known address) since it's not a router. So I'm now looking for a solution that resembles the experience of configuring a router , but that's not a router, it's for a device that I should be able to access from a well know address. For that, I've dug a bit about zeroconf/APIPA but from zeroconf RFC 3927 the IP address must be one generated "using a pseudo-random number generator with a uniform distribution in the range from 169.254.1.0 to 169.254.254.255 inclusive". I think a random IP solution may still work, even though it's not a well know address, in case there's any means of discovering which IP this device has got. Besides this, this device should be using NetworkManager to handle connectivity through the many interfaces it's setup with. So, to sum up the problem situation: A device must be configured through a local network. This device is ON and using Network Manager to handle connectivity through many interfaces, let's say one interface connectivity gets down, it would be choosing another interface. We were thinking about having an eth0 alias to have eth0 both being handledby Network Manager (in context with other interfaces) as well as having fixed IP access through a non-managed (by Network Manager) alias. Not sure whether that's even possible. It's all about device discovering, I've also proposed using nmap to reach the device, but it has two drawbacks: scanning is slow on large networks and it's not a simple webpage access, a client using nmap must be built and used to do the discovery. If there's no means to have simple access in a well known IP, having a random one is also a solution, given that the device can be discovered like a printer in the network or something like this. It may be assumed that the solution can be one to configure a device directly connected to a notebook through a network cable and acquire access to it in a device's configuration webpage as well as one solution where the notebook gets connected to the local network the device is also connected, and be able to access the device discovering it in the network or accessing it through an alternative, exotic and fixed address. Notice that accessing the local network router or using nmap/arp scan is not an option. What matter should be studied to address this problem? Is there a common approach people use for this? In my experience I recall configuring my devices but none fitting the problem: Router: Provides an easy to access configuration webpage at a well known address, but it's the router, it's the gateway and it will be delegating my own address. Cubox-i: I have one of these devices, I had to discover it using nmap in my network and access its ssh. 
Printers: I have never owned one, so I don't know how its device discover/configuration works, but have used them on networks before, they were generally listed in the device settings on a Windows machine. I still have to take a look at "Avahi", "UPnP", "Zeroconf" and other names in the field which I never worked with. Maybe this is the kind of example that may fit the situation. If there's a simple tool I can run on my Arch Linux and have its IP discovered by other devices like my Android or my Windows notebook, I'd like to know. I've also thought about broadcasting but I'm not sure this would be OK in all LANs, where broadcasting could be blocked or unreliable (unsure regarding this). | The best way to do this is with avahi which implements multicast-dns (this is what Apple calls Bonjour). I would disable Network Manager and go with configuring networking in /etc/network/interfaces . The interfaces file supports the ipv4ll method, which uses avahi-autoipd to configure an interface with an IPv4 Link-Layer address (169.254.0.0/16 family). Next, set up a service in avahi to ensure the host advertises itself via Bonjour and add mDNS name resolution to /etc/nsswitch.conf . If the rest of your systems are configured to resolve mDNS names, it should all work like magic.
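As a concrete sketch (stanza and service names are illustrative), the link-local configuration in /etc/network/interfaces is just:
auto eth0
iface eth0 inet ipv4ll
and a service file such as /etc/avahi/services/config.service makes the box advertise its configuration page:
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h configuration</name>
  <service>
    <type>_http._tcp</type>
    <port>80</port>
  </service>
</service-group>
On the resolver side, the usual /etc/nsswitch.conf hosts line is hosts: files mdns4_minimal [NOTFOUND=return] dns , after which http://devicename.local reaches the device from any mDNS-aware client.
| {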
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42313/"
]
} |
259,911 | Is there a way to run one command before any terminal command gets executed? For example: > ls -ltr
> "Hello you ran ls -ltr" //this is what I would like to achieve
> ..output of ls -ltr will be here
Is it possible to make sure an echo runs before any command is executed? Thanks | You might want to look into setting a DEBUG trap, which allows you to set up what is effectively a pre-exec hook in a manner similar to zsh . See https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command .
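For the example in the question, the trap is a one-liner; $BASH_COMMAND holds the command about to run:
trap 'echo "Hello you ran: $BASH_COMMAND"' DEBUG
Note the trap fires before every simple command, including each part of a pipeline and anything in PROMPT_COMMAND , so expect it to be chatty.
| {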
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72931/"
]
} |
259,918 | My main objective is to copy the contents of a directory and send it to a file. Then cut out the directory location to just have the name. Then to organize its contents but most appeared. This is also homework and my restrictions are it has to be one command This is what I thought would do the job but it doesn't wc -l ~location/folder/folder/*.log > ~/log.info | cut -d "/" -f9 ~/log.info | sort My output 1 /s/s/s/s/location/folder/folder/a.log
1 /s/s/s/s/location/folder/folder/b.log
1 /s/s/s/s/location/folder/folder/c.log
3 /s/s/s/s/location/folder/folder/d.log
2 /s/s/s/s/location/folder/folder/e.log
What I want it to be 1 a
1 b
1 c
2 e
3 d | The core problem is that > and | don't combine the way you hope: once wc 's output is redirected into ~/log.info , nothing is left on stdout for the pipe, so the rest of the pipeline is reading a file that is still being written. Keep everything as one pipeline and redirect only at the end; awk can strip the path and the extension in the same step:
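wc -l ~/location/folder/folder/*.log | awk '$2 != "total" {n=split($2,a,"/"); sub(/\.log$/,"",a[n]); print $1, a[n]}' | sort -n > ~/log.info
Here awk drops the final "total" line that wc prints for multiple files, keeps each line count, reduces the pathname to its basename without .log , and sort -n orders by count, giving the 1 a ... 3 d output you want. (The path is taken from your example; filenames containing spaces would need a different approach.)
| {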
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154544/"
]
} |
259,922 | Sometimes I use the terminal (konsole) and all of a sudden the behaviour when mouse scrolling goes from scrolling through past output to scrolling through the history on the command prompt, and there is no way to see the past output anymore. (The scrollbar is still there but it is as if there were no past output) Since it only happens sometimes, and if I open a new subwindow it defaults back to scrolling through past output, I am guessing I am inadvertently hitting some shortcut that toggles between these modes, but I can't find out which shortcut it is. How can I get out of this weird scrolling through history mode? | Perhaps you ran a subshell from an editor, and it left the terminal in the alternate screen. You can test that by tput rmcup which would return to the normal display. While in the alternate screen, some terminals may override the scroll-wheel action by sending up/down cursor escapes. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/259922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/51034/"
]
} |
259,932 | I've been using ls -sh to check file sizes ever since 1997 or so, but today something strange happened: ninja@vm:foo$ ls -shtotal 98M1,0M app 64M app_fake_signed.sbp 800K loader 804K loader_fake_signed.sbp 1,0M web 32M web_fake_signed.sbp The app and web files were not supposed to be much smaller than their signed counterparts, and I spent several hours debugging the signing program. After finding nothing, by chance I happened to look at the files in a Samba share, to find them very similar in size. I checked again: ninja@vm:foo$ ls -lhtotal 98M-rw-rw-r-- 1 ninja ninja 63M lut 4 14:13 app-rw-rw-r-- 1 ninja ninja 64M lut 4 14:13 app_fake_signed.sbp-rw-rw-r-- 1 ninja ninja 800K lut 4 14:13 loader-rw-rw-r-- 1 ninja ninja 801K lut 4 14:13 loader_fake_signed.sbp-rw-rw-r-- 1 ninja ninja 31M lut 4 14:13 web-rw-rw-r-- 1 ninja ninja 32M lut 4 14:14 web_fake_signed.sbp I'm speechless? Why does ls -s show the app and web to be 1MB in size, while they are actually 63 and 32MB, respectively? This was Xubuntu 14.04 running in VirtualBox on Windows, if it makes any difference. Edit: The files app , web and loader are all created by a bash script (not of my design) which runs dd if=/dev/urandom of=app bs=$BLOCK count=1 seek=... in a loop. The signing program, written in C, takes these files and writes their signed versions to the disk, prepending and appending a binary signature to each. | You're using the -s option to ls . A file's size and the amount of disk space it takes up may differ. Consider for example, if you open new file, seek 1G into it, and write something, the OS doesn't allocate 1G (plus the space for something) on disk, it allocates only the same for something -- this is called a "sparse file". I wrote a small C program to create such a file: #include <stdio.h>#include <sys/types.h>#include <sys/stat.h>#include <fcntl.h>#include <unistd.h>int main(void){ int fd = open("/tmp/foo.dat", O_CREAT | O_WRONLY, 0600); if (fd > 0) { const off_t GIG = 1024 * 1024 * 1024; // Seek 1G into the file lseek(fd, GIG, SEEK_SET); // Write something write(fd, "hello", sizeof "hello"); close(fd); } return 0;} Running that program I get: $ ls -lh /tmp/foo.dat-rw------- 1 user group 1.1G Feb 4 15:25 /tmp/foo.dat But using -s , I get: $ ls -sh /tmp/foo.dat4.0K /tmp/foo.dat So a 4K block was allocated on disk to store "hello" (and 4K is the smallest unit of allocation for my filesystem). In your case, it looks like app and web are such sparse files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82253/"
]
} |
259,938 | I deleted dhcpcd off my arch install without realizing it was the IP provider for wireless. I need to get it re-installed, but I have no internet connection. I deleted the unused cache files with pacman -Sc after uninstalling dhcpcd so I cannot reinstall from the cache. I do have a flash drive with bootable Arch Linux that I used to install it originally. | If you have an Arch install disk, you can boot off it, mount your install partition and use pacstrap to install dhcpcd, similar to how you installed Arch in the first place. e.g. mount /dev/sda1 /mnt
pacstrap /mnt dhcpcd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155006/"
]
} |
259,972 | Commands like cd cannot have output piped to them in order to change directories--they require command-line arguments. Why does the cd command (and those similar to it, such as mv , cp , & rm ) not function like most other commands when it comes to reading in STDIN ? What's the logic behind preventing it from reading standard input to change directories? The best answer I could find stated: cd is not an external command - it is a shell builtin function. It runs in the context of the current shell, and not, as external commands do, in a fork/exec'd context as a separate process. However, to me the above answer does not really explain it at all: Why does cd handle STDIN different than many other commands who read in STDIN ? | The commands that read stdin are almost all of the filter family, i.e. programs that transform a flow of text data into another. cat , sed , awk , gzip and even sh are good examples of such "filters". The cited commands, cp , mv and rm are definitely not filters but commands that do things with the arguments passed, here files or directories. The cd command is similar to them, it expects an argument (or simulates a default one if not provided), and generally doesn't output anything on stdout , although it might output something on it in some cases like when using CDPATH . Even if one wanted to create a cd variant that takes the target directory from stdin, it wouldn't have any effect when used in a pipeline in the Bourne shell, dash and bash to name a few. Because the last component of a pipeline runs in a subshell in those shells, the change to a new directory wouldn't affect the current shell. e.g.: echo /tmp | cd would work with ksh93 but not bash , dash , zsh , sh , ... cd <(echo /tmp) would work with shells supporting process substitution (at least ksh , bash , zsh ) but wouldn't have any significant advantage compared to cd $(echo tmp) The only use case that might be of interest would be something like: echo tmp | (cd ; pwd) Finally, such a variant would need to distinguish the case where it is given no argument and should change to the user's home directory from the case where it is given no argument and should instead read the target directory from stdin. As there is no reliable way to decide between the two, this is doomed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/259972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155025/"
]
} |
260,056 | I run a bash command from a c++ code using the system function providedin cstdlib . My question is not about the c++ implementation, but the onething that is relevant to my question, is that I have to run multiple commandon one line. For instance, I run a gnuplot script contained in another directory. So mycommand is the following: cd some/path; gnuplot -e gnuplot_file.gp; cd - > /dev/NULL Both cd commands are important for personal reasons. I also want to "hide" the ouput in /dev/NULL . Here is my question : how do I know the exit status of the gnuplot command ?Knowing if it failed is enough. My problem is that I currently get the exit status of the last command, whichis true even if the gnuplot failed. I know that if I use && instead of ; , the last command won't be executed ifthe previous one fails and the exit status would then be false. But I need the last command to be executed... What is the workaround ? | Drop the gnuplot into a subshell and then it's the last command executed. You also no longer require the last cd because the change of directory at the beginning of the subshell affects only the gnuplot , and so the redirection to /dev/null is also moot. ( cd some/path; gnuplot -e gnuplot_file.gp ) Perhaps you intended the redirection to /dev/null to apply to the entire command? (That's not what you've written in your question, though.) ( cd some/path; gnuplot -e gnuplot_file.gp ) >/dev/null Finally, my preference for a snippet like this would be to run the gnuplot only if the initial cd succeeded. This would affect the exit status, in that you'd get a failed return if the change of directory failed, but is probably safer code ( cd some/path && gnuplot -e gnuplot_file.gp ) >/dev/null | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/260056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42623/"
]
} |
260,162 | I know that with ps I can see the list or tree of the current processes running in the system. But what I want to achieve is to "follow" the new processes that are created when using the computer. As analogy, when you use tail -f to follow the new contents appended to a file or to any input, then I want to keep a follow list of the process that are currently being created. Is this even posible? | If kprobes are enabled in the kernel you can use execsnoop from perf-tools : In first terminal: % while true; do uptime; sleep 1; done In another terminal: % git clone https://github.com/brendangregg/perf-tools.git% cd perf-tools% sudo ./execsnoopTracing exec()s. Ctrl-C to end.Instrumenting sys_execve PID PPID ARGS 83939 83937 cat -v trace_pipe 83938 83934 gawk -v o=1 -v opt_name=0 -v name= -v opt_duration=0 [...] 83940 76640 uptime 83941 76640 sleep 1 83942 76640 uptime 83943 76640 sleep 1 83944 76640 uptime 83945 76640 sleep 1^CEnding tracing... | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/260162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102926/"
]
} |
260,167 | Just wondering why this is not working #!/bin/bash
ls /bin
ls !$
I expect to run ls /bin twice, but the second one raises errors as !$ was not interpreted Did I miss something, or does !$ only work on the command line? I couldn't find the relevant part in man bash (on mac) | History and history expansion are disabled by default when the shell runs non-interactively. You need: #!/bin/bash
set -o history
set -o histexpand
ls /bin
ls !$
or: SHELLOPTS=history:histexpand bash script.sh Note that the exported SHELLOPTS will affect all bash instances that script.sh may run. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/260167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
260,204 | I have a file that contains information as so: 20 BaDDOg31 baddog42 badCAT43 goodDoG44 GOODcAT and I want to delete all lines that contain the word dog . This is my desired output: 42 badCAT44 GOODcAT However, the case of dog is insensitive. I thought I could use a sed command: sed -e "/dog/id" file.txt , but I can't seem to get this to work. Does it have something to do with me working on an OSX? Is there any other method I could use? | Try grep : grep -iv dog inputfile -i to ignore case and -v to invert the matches. If you want to use sed you can do: sed '/[dD][oO][gG]/d' inputfile GNU sed extends pattern matching with the I modifier, which should make the match case insensitive but this does not work in all flavors of sed . For me, this works: sed '/dog/Id' inputfile but it won't work on OS X. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/260204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121711/"
]
} |
260,323 | In bash , watch (e.g. watch -n 5 ls -l ) could be used to repeat the command at fixed intervals. This command seems to be missing on zsh. Is there an equivalent? | watch is not an internal command: $ type watch
/usr/bin/watch
so make sure it is installed on the system where you are running zsh .
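If you cannot install it, a crude stand-in works in any shell (the interval and command here are placeholders):
while sleep 5; do clear; ls -l; done
It lacks watch 's header and diff highlighting, but covers the basic "rerun every N seconds" use.
| {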
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/260323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155248/"
]
} |
260,373 | I need to log ssh passwords attempts. I read somewhere you could get this done by patching openssh but I don't know if that is the correct way to get this done. I am using Arch. EDIT: I want to get the passwords people tried to guess to gain access to my system.It could be with journalctl or get piled in a text file. If the person let's say types 1234 and tries to get access I want something like "ssh loggin attempt failed, tried user "admin" with password "1234" | What I think is a better answer is to download the LongTail SSH honeypot which is a hacked version of openssh to log username, password, source IP and port, and Client software and version. The install script is at https://github.com/wedaa/LongTail-Log-Analysis/blob/master/install_openssh.sh I also do analytics at http://longtail.it.marist.edu | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/260373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130244/"
]
} |
260,378 | I have to find how many times the word shell is used in a file. I used grep "shell" test.txt | wc -w in order to count how many times that word has been used, but the result comes out 4 instead of 3. The file content is: this is a test filefor shell_Ashell_Bshsheland shell_Cscript project | The wc command is counting the words in the output from grep, which includes "for": > grep shell test.txt
for shell_A
shell_B
shell_C
So there really are 4 words. If you only want to count the number of lines that contain a particular word in a file, you can use the -c option of grep, e.g., grep -c shell test.txt Neither of those actually count words , but could match other things which include that string . Most implementations of grep (GNU grep, modern BSDs as well as AIX, HPUX, Solaris) provide a -w option for words, however that is not in POSIX. They also recognize a regular expression, e.g., grep -e '\<shell\>' test.txt which corresponds to the -w option. Again, that is not in POSIX. Solaris does document this, while AIX and HPUX describe -w without mentioning the regular expression. These all appear to be consistent, treating a "word" as a sequence of alphanumerics plus underscore. You could use a POSIX regular expression with grep to match words (separated by blanks, etc), but your example has none which are just "shell": they all have some other character touching the matches. Alternatively, if you care only about alphanumerics (and no underscore) and do not mind matching substrings, you could do tr -c '[:alnum:]' '\n' < test.txt | grep -c shell The -o option suggested is non-POSIX, and since OP did not limit the question to Linux or BSDs, is not what I would recommend. In either case, it does not match words , but strings (which was OP's expectation). For reference: grep wc | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/260378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153081/"
]
} |
260,484 | After downloading MYSQL APT Repository at http://cdn.mysql.com//Downloads/MySQL-5.7/libmysqld-dev_5.7.11-1debian8_amd64.deb I ran the command dpkg -i libmysqld-dev_5.7.11-1debian8_amd64.deb and here is the result Selecting previously unselected package mysql-community-server.(Reading database ... 48773 files and directories currently installed.)Preparing to unpack mysql-community-server_5.7.11-1debian8_amd64.deb ...Unpacking mysql-community-server (5.7.11-1debian8) ...dpkg: dependency problems prevent configuration of mysql-community-server: mysql-community-server depends on mysql-common (= 5.7.11-1debian8); however: Package mysql-common is not installed. mysql-community-server depends on mysql-client (= 5.7.11-1debian8); however: Package mysql-client is not installed.dpkg: error processing package mysql-community-server (--install): dependency problems - leaving unconfiguredProcessing triggers for systemd (215-17+deb8u3) ...Processing triggers for man-db (2.7.0.2-5) ...Errors were encountered while processing: mysql-community-server Did I do anything wrong? How can I fix it? | You can see the Depends list inside the DEBIAN/control file of the binary package libmysqld-dev_5.7.11-1debian8_amd64.deb , then download and install the ones your system doesn't have. Example > wget http://cdn.mysql.com//Downloads/MySQL-5.7/libmysqld-dev_5.7.11-1debian8_amd64.deb
> ar x libmysqld-dev_5.7.11-1debian8_amd64.deb
> tar xf control.tar.gz
> cat control | grep Depends
Depends: libmysqlclient-dev (= 5.7.11-1debian8)
If you have too many uninstalled dependencies, I recommend installing the GPG key of that debian repository and adding the source to /etc/apt/sources.list as described by the provider of that binary package . A Quick Guide to Using the MySQL APT Repository This is the line that you should add to /etc/apt/sources.list or any .list file inside /etc/apt/sources.list.d/ : deb http://repo.mysql.com/apt/debian jessie mysql-5.7 To install the MySQL GPG Public key you can run: > gpg --recv-keys 5072E1F5
> gpg --export 5072E1F5 > /etc/apt/trusted.gpg.d/5072E1F5.gpg
After running apt-get update you should be able to install the package you want using dpkg -i and even running apt-get install libmysqld-dev | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/260484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90942/"
]
} |
260,506 | I want to keep only header and trailer records in a UNIX file. | With sed: sed -n '1p;$p' file Suppress automatic printing of pattern space ( -n ) but print first ( 1p ) and last line ( $p ) of pattern space. If you want to edit your file "in place" use sed's option -i . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/260506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155361/"
]
} |
260,513 | What is the difference between Sha1sum , Sha256sum and Md5sum ?and how to check all these for some iso file?and how to create md5sum.txt file in ubuntu ? | If you look at the man page for each of those, you'll see that they say: md5sum - compute and check MD5 message digest
sha1sum - compute and check SHA1 message digest
sha256sum - compute and check SHA256 message digest
That tells you that they all create a message digest , which is a one-way function that takes arbitrarily sized data as its argument and returns a fixed size hash. A hash is considered impossible (within the bounds of practicality) to reverse and to find two different messages with the same hash (called a collision). The difference between the three is the algorithm used to generate this hash. MD5 was invented in the early 1990s and is considered flawed and obsolete by now. SHA1 was also developed in the early 1990s. It is considered stronger than MD5, but not strong enough. Its use is currently being withdrawn from the digital signature on X.509 digital certificates. SHA256 is the currently recommended hash function. Unless you have a reason to use the weaker algorithms, then SHA256 is the way to go. To create the text file, simply redirect the output to the file. For example, if you have a Ubuntu ISO image you want to hash: md5sum Ubuntu.iso > md5sum.txt Of course, that works with the other variants too. You can then (for example) distribute that file over the Internet and the recipient can check the hash again with: md5sum Ubuntu.iso That will print the MD5 hash which the recipient can compare with the content of the md5sum.txt file that you will have published. If they are the same, the file hasn't been tampered with (or corrupted in transit). Of course, it would be better to use sha256sum than md5sum . You'll often find a selection of these hashes published ( md5sum.txt , sha1sum.txt and/or sha256sum.txt ) with an ISO to allow for the fact that some systems might not have all of these utilities.
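The *sum tools can also do the comparison for you via their check mode ( -c ), which is handy when one sums file lists several ISOs:
sha256sum Ubuntu.iso > sha256sum.txt   # publisher side
sha256sum -c sha256sum.txt             # recipient side, prints "Ubuntu.iso: OK" on success
| {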
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/260513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150850/"
]
} |