source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
245,409 | I want to list the subdirectories of a directory using tree command. But I don't want to print indentation lines. I only want to have the whitespaces instead. I couldn't find the correct parameter in man page. Maybe I can pipe the output of tree to sed to remove the lines. | So you want something like this: tree | sed 's/├\|─\|│\|└/ /g' It replaces all those "line" characters with spaces. See: $ tree.├── dir1│ ├── file1│ └── file2└── dir2 ├── file1 └── file22 directories, 4 files$ tree | sed 's/├\|─\|│\|└/ /g'. dir1 file1 file2 dir2 file1 file22 directories, 4 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11219/"
]
} |
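A hedged, reusable form of the sed approach above, wrapped in a small shell function (the name treews is made up) so it can live in ~/.bashrc; it assumes GNU sed and the UTF-8 drawing characters that tree emits by default:

```sh
# treews: run tree, then blank out the indentation-line characters.
treews() {
    tree "$@" | sed 's/├\|─\|│\|└/ /g'
}
```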
245,417 | I run my command in a loop. I wrote the loop directly to a bash command line: $ while true; do mycommand; done mycommand is a command doing a lot of io and waiting for events so it does not consume much processor time. How can I interrupt the loop? When I press ctrl-c or ctrl-\ then mycommand is terminated but it is started again immediately. When I log in to another terminal and kill the command then the situation is the same. Is it possible to interrupt the loop without killing the terminal session? | A simple enough way to kill loops started interactively would be to stop the command ( Ctrl - Z ), which should print a job number, say [2]+ Stopped , which you can then kill using kill %2 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99979/"
]
} |
245,433 | I have the following line to execute in a script and that delete two lines above it. But now my $TEST variable contain values like '/DATA/test10' and my variable ID contain '10'. How can i use this with sed? sed -i ":a;N;s/\n/&/2;Ta;/path = $TEST\/$ID/s/.*//;P;D" /collection.txt | If you simply want to treat a string as a literal in sed there's already an answer for that: escaped_testx="$(sed -e 's/[\/&]/\\&/g' <<< "$TEST"; echo x)"escaped_test="${escaped_testx%x}" The extra x is to be able to handle trailing newlines , which would otherwise be removed by the command substitution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236696/"
]
} |
245,497 | I have a fresh desktop install of CentOS7 with Gnome 3. I need to use this machine with a KVM but for some reason CentOS7 cannot detect my monitor through the KVM I have, so it defaults to "Unknown Display" at a 1024x768 resolution (everything detects and works at higher resolutions if I connect the monitor directly to the system). How can I manually configure things so I can use larger resolutions? I tried editing monitors.xml with a new resolution, but upon reboot CentOS7 rejected the change, saying it could not detect, and went back to 1024x768. | I found the base of the solution here: https://askubuntu.com/questions/186288/how-to-detect-and-configure-an-output-with-xrandr In modern Linux distributions, including CentOS, the xrandr library is responsible for things such as screen resolution, rotation and so on. Since your system does not autodetect, you have to manually tell it about the mode your monitor is capable of. I had the same problem with a KVM, and the sample output is from my computer: Step 1: Find the name of your port. This will be something like VGA1, HDMI1 or so. You maybe able to find it from /var/log/Xorg.0.log, or you can use the xrandr utility: > xrandrScreen 0: minimum 8 x 8, current 1024 x 768, maximum 32767 x 32767DP1 disconnected (normal left inverted right x axis y axis)HDMI1 disconnected (normal left inverted right x axis y axis)VGA1 connected primary 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.00* 800x600 60.32 56.25 848x480 60.00 640x480 59.94 VIRTUAL1 disconnected (normal left inverted right x axis y axis) My KVM is connected to the VGA port called VGA1. Because the KVM blocks auto-detection, xrandr only saw the 1024x768 resolution. Step 2: Tell xrandr about the new mode. Modes are simply strings that have video display parameters attached to them. Step 2.1 Find the display parameters you need. I wanted 1600x900 @ 60 Hz: > gtf 1600 900 60 -x# 1600x900 @ 60.00 Hz (GTF) hsync: 55.92 kHz; pclk: 119.00 MHz Modeline "1600x900_60.00" 119.00 1600 1696 1864 2128 900 901 904 932 -HSync +Vsync Step 2.2 Create the new mode with xrandr using the values from the gtf command: > xrandr --newmode "1600x900" 119.00 1600 1696 1864 2128 900 901 904 932 -HSync +Vsync The first parameter is the name of the new mode - you could actually call it anything you like, just use the same name in the subsequent steps. Step 3 Tell xrandr that VGA1 understands the mode called 1600x900: > xrandr --addmode VGA1 1600x900 Step 4 Tell xrandr to switch to the new mode. > xrandr --output VGA1 --mode 1600x900 Note: if you made a mistake and your monitor does not actually understand the new mode, you will get a blank screen! If you do get a blank screen, you can probably recovery by blindly typing: > xrandr --output VGA1 --mode 1024x768 Another way around that is to connect from another computer via SSH, and executing this command via SSH instead of on the console. Step 5 Create a script that automates the newmode, addmode and output commands, since they will not be preserved during a reboot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144799/"
]
} |
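Step 5 of the answer asks for a script; a minimal sketch is below. The port name VGA1 and the 1600x900 modeline are taken from the example output above and are assumptions — substitute the values from your own xrandr and gtf runs.

```sh
#!/bin/sh
# fix-kvm-resolution.sh (hypothetical name) — re-apply the manual mode after a reboot.
xrandr --newmode "1600x900" 119.00 1600 1696 1864 2128 900 901 904 932 -HSync +Vsync
xrandr --addmode VGA1 1600x900
xrandr --output VGA1 --mode 1600x900
```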
245,548 | My PS1 in my ~/.bash_profile : export PS1="\\n\[\033[38;5;246m\]\u@\[\033[38;5;245m\]\h\[\033[38;5;15m\] \[\033[38;5;28m\]\w\[\033[38;5;15m\]\[\033[38;5;2m\]`__git_ps1`\[\033[38;5;15m\] \[\033[38;5;90m\]\t\[\033[38;5;15m\] \[\033[38;5;232m\]\[\033[38;5;15m\] \n\[\033[38;5;0m\]\\$ " (sorry, I don't have any aliases for my color codes, I created this prompt with an online editor) Which is a bit messy but produces a very nice prompt: But the current branch displayed is always wrong if I switch I'm not sure why this would happen. If I run the command by itself, I get the correct value. $ echo `__git_ps1`(dev) If I source the .bash_profile the new value will come in (but will be wrong next time I switch). Am I doing something wrong? | export PS1="…`__git_ps1`…" With `__git_ps1` inside double quotes, this command runs the command __git_ps1 and assigns its output (and other surrounding text) to the variable PS1 . Thus your prompt is showing the branch that was determined when your .bash_profile was executed. You need to run __git_ps1 each time bash displays a prompt. (Actually you don't need to run it again until the git information has changed, but that's difficult to detect.) There are two ways to do that. Include the literal text `__git_ps1` in the PS1 variable. Make sure that bash is configured to perform shell expansions on the prompt string, with the promptvars option turned on; that's the case by default but it can be turned off with shopt -u promptvars . PS1='\n\[…\]$(__git_ps1)\[…\]\$ ' Update the prompt content by a command run from the PROMPT_COMMAND variable. update_PS1 () { PS1="\\n\\[…\\]$(__git_ps1)\[…\]\\$ "}shopt -u promptvarsPROMPT_COMMAND=update_PS1 By the way, the prompt is a shell configuration, not a global setting, so you should set it in ~/.bashrc , not in ~/.bash_profile . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29679/"
]
} |
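For reference, a compact sketch of the first option (a literal $(__git_ps1) left unexpanded in PS1), with the colour codes stripped out; it assumes git's git-prompt.sh has already been sourced so __git_ps1 exists, and it belongs in ~/.bashrc:

```sh
shopt -s promptvars                  # on by default; shown here for clarity
# Single quotes keep $(__git_ps1) literal, so it is re-run at every prompt.
PS1='\n\u@\h \w$(__git_ps1) \t\n\$ '
```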
245,674 | Why is it that using bash and suspending a while loop, the loop stops after being resumed? Short example below. $ while true; do echo .; sleep 1; done..^Z[1]+ Stopped sleep 1$ fgsleep 1$ I'm familiar with signals, and I'm guessing this may be the natural behaviour of bash here, but I'd like to better understand why it behaves in this particular way. | This looks like a bug in several shells, it works as expected with ksh93 and zsh . Background: Most shells seem to run the while loop inside the main shell and Bourne Shell suspends the whole shell if you type ^Z with a non-login shell bash suspends only the sleep and then leaves the while loop in favor of printing a new shell prompt dash makes this command unsuspendable With ksh93 , things work very different: ksh93 does the same, while the command is started the first time, but as sleep is a buitin in ksh93, ksh93 has a handler that causes the while loop to fork off the main shell and then suspend at the time when you type ^Z. If you in ksh93 later type fg , the forked off child that still runs the loop is continued. You see the main difference when comparing the jobcontrol messages from bash and ksh93: bash reports: [1]+ Stopped sleep 1 but ksh93 reports: ^Z[1] + Stopped while true; do echo .; sleep 1; done zsh behaves similar to ksh93 With both shells, you have a single process (the main shell) as long as you don't type ^Z, and two shell processes after you typed ^Z. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13644/"
]
} |
245,681 | I have multiple directories on the server whose names starts with tomcat_Under this there is a file for example managetest.class Now I need to get only the modified date and time for that particular file in the below format Folder name modifieddate | This looks like a bug in several shells, it works as expected with ksh93 and zsh . Background: Most shells seem to run the while loop inside the main shell and Bourne Shell suspends the whole shell if you type ^Z with a non-login shell bash suspends only the sleep and then leaves the while loop in favor of printing a new shell prompt dash makes this command unsuspendable With ksh93 , things work very different: ksh93 does the same, while the command is started the first time, but as sleep is a buitin in ksh93, ksh93 has a handler that causes the while loop to fork off the main shell and then suspend at the time when you type ^Z. If you in ksh93 later type fg , the forked off child that still runs the loop is continued. You see the main difference when comparing the jobcontrol messages from bash and ksh93: bash reports: [1]+ Stopped sleep 1 but ksh93 reports: ^Z[1] + Stopped while true; do echo .; sleep 1; done zsh behaves similar to ksh93 With both shells, you have a single process (the main shell) as long as you don't type ^Z, and two shell processes after you typed ^Z. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144924/"
]
} |
245,709 | I saw in my syslog kernel.perf_event_max_sample_rate get changed. I was wondering if I could write a quick script to log this variable every few minutes. Currently it is: sysctl -a | grep kernel.perf_event_max_sample_rate In the man page sysctl says sysctl - configure kernel parameters at runtime Does that mean that my script would get the parameter as it was set when the kernel starts? Would it pick up changes? | So one of the big things about learning to Unix is reading the bloody man page: I'm not just being a get off my lawn grumpy old man, there REALLY IS valuable information in there. In this case: DESCRIPTIONsysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Procfs isrequired for sysctl support in Linux. You can use sysctl to both read and write sysctl data. So we can: $sudo sysctl -a | grep kernel.perf_event_max_sample_ratekernel.perf_event_max_sample_rate = 50000sysctl: reading key "net.ipv6.conf.all.stable_secret"sysctl: reading key "net.ipv6.conf.default.stable_secret"sysctl: reading key "net.ipv6.conf.enp3s0.stable_secret"sysctl: reading key "net.ipv6.conf.lo.stable_secret"sysctl: reading key "net.ipv6.conf.wlp1s0.stable_secret" By reading the manpage we learn that -a is "display all values currently available", but we also can see: SYNOPSIS sysctl [options] [variable[=value]] [...] sysctl -p [file or regexp] [...] which means we can shorten the above command to: $ sudo sysctl kernel.perf_event_max_sample_ratekernel.perf_event_max_sample_rate = 50000 Or we can: $ more /proc/sys/kernel/perf_event_max_sample_rate 50000 So, TL;DR: Yes, you can write a script to log this variable every few minutes, but if it's going to show up in the logs when it changes, why would you? It would probably be more efficient to read the value right out of /proc/sys/kernel/perf_event_max_sample_rate than to use sysctl, and it would be more efficient to ask for the specific value from sysctl than to use grep. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144946/"
]
} |
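If you still want the periodic log the question asks about, a minimal sketch along the lines suggested — reading /proc directly and timestamping the value — could be run from cron every few minutes. The log path is only an example:

```sh
#!/bin/sh
# log-sample-rate.sh (hypothetical): append one timestamped reading per run.
printf '%s %s\n' "$(date '+%F %T')" \
    "$(cat /proc/sys/kernel/perf_event_max_sample_rate)" \
    >> /var/log/perf_event_max_sample_rate.log
```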
245,715 | After I successfully upgraded OpenBSD, I wanted to upgrade my sources. I want to know, is it safer to delete /usr/src and then extract src.tar.gz , or is it better to preserve the already existing /usr/src and then extract src.tar.gz ? Will there be any repercussions caused by deleting /usr/src , or can I simply replace it? | So one of the big things about learning to Unix is reading the bloody man page: I'm not just being a get off my lawn grumpy old man, there REALLY IS valuable information in there. In this case: DESCRIPTIONsysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Procfs isrequired for sysctl support in Linux. You can use sysctl to both read and write sysctl data. So we can: $sudo sysctl -a | grep kernel.perf_event_max_sample_ratekernel.perf_event_max_sample_rate = 50000sysctl: reading key "net.ipv6.conf.all.stable_secret"sysctl: reading key "net.ipv6.conf.default.stable_secret"sysctl: reading key "net.ipv6.conf.enp3s0.stable_secret"sysctl: reading key "net.ipv6.conf.lo.stable_secret"sysctl: reading key "net.ipv6.conf.wlp1s0.stable_secret" By reading the manpage we learn that -a is "display all values currently available", but we also can see: SYNOPSIS sysctl [options] [variable[=value]] [...] sysctl -p [file or regexp] [...] which means we can shorten the above command to: $ sudo sysctl kernel.perf_event_max_sample_ratekernel.perf_event_max_sample_rate = 50000 Or we can: $ more /proc/sys/kernel/perf_event_max_sample_rate 50000 So, TL;DR: Yes, you can write a script to log this variable every few minutes, but if it's going to show up in the logs when it changes, why would you? It would probably be more efficient to read the value right out of /proc/sys/kernel/perf_event_max_sample_rate than to use sysctl, and it would be more efficient to ask for the specific value from sysctl than to use grep. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
245,753 | I'm on OSX 10.11.1 and occasionally my bash terminal gets mangled. It often happens when I accidentally cat a binary file. The result can be seen on the image below. The output becomes weird, and I can't type ascii characters anymore. Even though this happens occasionally, I couldn't find a way to consistently reproduce the issue. Online search recommends doing cat /bin/* , but that works sporadically, only after a couple dozen tries. I want to do this so I can find an easy solution how to handle this in tmux . How do I consistently get bash to a "mangled" state? Is there maybe a magic unicode character that can do this? | This looks like the DEC special graphics character set . Reading the xterm control sequences docs , it sounds like the terminal uses those when receiving ESC ( 0 . So you should be able to reproduce using printf '\033(0' or printf '\033(0' > corrupt-my-terminalcat corrupt-my-terminal And get back using printf '\033(B' which according to the same page selects USASCII. Other ways to restore the state include tput sgr0 # resets all terminal attributes to their defaults and reset # reinitializes the terminal You could tput sgr0 in your PROMPT_COMMAND (bash), or precmd (zsh) to ensure it always gets reset automatically. Or you could just make sure to use less , vim , or anything other than cat to view a file. To make less act like cat and automatically quit if the file is under one page long, run less -FX , or do export LESS=-FX . Or if you don't want to always use those less options, make a new alias, e.g. alias c='less -FX' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
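The PROMPT_COMMAND and alias suggestions from the answer, spelled out as ~/.bashrc lines (the alias name c is arbitrary):

```sh
PROMPT_COMMAND='tput sgr0'   # reset terminal attributes before each prompt, as suggested above
alias c='less -FX'           # a cat-like viewer that quits on short files and leaves the terminal alone
```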
245,768 | Can one user, or maybe root, control another user's user level systemd services? I've tried sudo -u <some user> systemctl --user restart <some service> , but it complains about dbus: Failed to get D-Bus connection: Connection refused . | I had the same problem when I remotely logged into my gentoo box via ssh. In my case this was because the XDG_RUNTIME_DIR and DBUS_SESSION_BUS_ADDRESS environment variables were missing. Run the following commands and try again: export XDG_RUNTIME_DIR="/run/user/$UID"export DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus" If this helps, you can put those commands into your .bashrc. I guess there must be a more elegant solution than .bashrc but that depends on your distro. Here is where I found that solution. Edit: logged in as root, I managed to successfully run systemctl --user as another user using su as follows: su -c 'XDG_RUNTIME_DIR="/run/user/$UID" DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus" systemctl --user status' username or using sudo (note, I had to explicitely add the respective users UID (1000) to the path '/run/user/', but if you are running it from a bash script you can use $SUDO_UID instead): sudo -u username XDG_RUNTIME_DIR="/run/user/1000" DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus" systemctl --user status | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/245768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26423/"
]
} |
245,772 | Sample command: drush cc all works, but this: sudo drush cc all gives me: sudo: drush: command not found Why? How to fix this? | When you sudo , you get a preconfigured $PATH , which is (supposed to be) something like the root user's default path. Your program is not in that list of directories identified by $PATH . See for example How to make sudo preserve $PATH? Why are PATH variables different when running via sudo and su? sudoers(5) (see the various settings which relate to PATH ) sudo(8) sudo tries to be safe when executing external commands. There are two distinct ways to deal with environment variables. By default, the env_reset sudoers option is enabled. This causes commands to be executed with a minimal environment containing TERM , PATH , HOME , SHELL , LOGNAME , USER and USERNAME in addition to variables from the invoking process permitted by the env_check and env_keep sudoers options. There is effectively a whitelist for environment variables. If you cannot configure sudo to preserve your $PATH , the usual workaround is to specify the complete pathname of the program. That may not work well with scripts that call other executables in the (not-accessed) directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139365/"
]
} |
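Two common workarounds for the stripped-down PATH, assuming drush is on your own PATH but not in sudo's secure_path:

```sh
sudo "$(command -v drush)" cc all      # hand sudo the full pathname
sudo env "PATH=$PATH" drush cc all     # or carry your current PATH across explicitly
```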
245,780 | But they give instructions like cd downloaded_program./configuremake install This creates the ELF that is needed, and probably some .so files. Why not put those inside a zip file for download, like with windows apps? Is there any reason why they need to be compiled by the user? | Let's analyse the factors... Analysis : DEPENDENCIES ACCORDING TO PLATFORM : There are some issues that arise in an environment where developers are creating and maintaining several architecture-specific variants of an application: Different source code is required for different variants — Different UNIX-based operating systems may use different functions to implement the same task (for example, strchr(3) vs. index(3)). Likewise, it may be necessary to include different header files for different variants (for example, string.h vs. strings.h). Different build procedures are required for different variants — The build procedures for different platforms vary. The differences might involve such particulars as compiler locations, compiler options, and libraries. Builds for different variants must be kept separate — Since there is a single source tree, care must be taken to ensure that object modules and executables for one architecture do not become confused with those for other architectures. For example, the link editor must not try to create an IRIX–5 executable using an object module that was built for SunOS–4. Every operating system has its own linking management scheme and must prepare the ELF (Executable and Linking Format) file as it needs it. The compiler will generate a build that is a sequence of instructions , and distinct architectures mean different instruction sets ( Comparison of Instruction Set Architectures ). So, the output of the compiler is distinct for each architecture (Ex: x86, x86-64, ARM, ARM64, IBM Power ISA, PowerPC, Motorola's 6800, MOS T 6502, and so many others ) SECURITY : If you download a binary, you can not be sure if it does what it says it does, but you can try to audit the source code and use a self compiled binary in your system. In spite of this, the user Techmag made a good point in his comment, auditing the code requires knowledgeable and competent coders to assess code and is not a safety guarantee. MARKET : In this section there are a lot of factors, but i'll try to resume it: Not every company aims to reach all the platforms, it depends on the market and the platforms popularity and what they want to sell. Free software have the spirit of making software as widely available as possible, but it doesn't imply that the software is designed for every platform, it depends on the community who supports it. Conclusion : Not every software is designed for every platform. Providing binaries for all the architectures and platforms implies to compile it, test it, and maintain it for all the platforms. That's more work that is sometimes just too expensive, and can be avoided if the user compiles it in its own platform. Also, the user will be aware of what he's executing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/245780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144984/"
]
} |
245,801 | I know that * references all files excluding hidden files, how to reference all files including hidden files whose names begin with a . in bash? | bash has a dotglob option that makes * include names starting with . : echo * # let's see some filesshopt -s dotglob # enable dotglobecho * # now with dotfilesshopt -u dotglob # disable dotglob againecho * # back to the beginning | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145001/"
]
} |
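A one-shot variant that avoids toggling the option in the current shell — the option change stays inside the subshell:

```sh
( shopt -s dotglob; echo * )
```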
245,804 | When I have a linux mint bootable usb drive in my computer, it asks if I want to install linux mint, or install linux mint configuration (I think that's that, I don't feel like replacing ubuntu with mint) in the grub. When I have ubuntu on the drive, it gives me the option to run it without installing. This is not included with linux mint. Instead of risking all my files being gone (and then taking the time to restore them), I decided to ask the experts. | bash has a dotglob option that makes * include names starting with . : echo * # let's see some filesshopt -s dotglob # enable dotglobecho * # now with dotfilesshopt -u dotglob # disable dotglob againecho * # back to the beginning | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145002/"
]
} |
245,849 | I've got a machine (Debian jessie) used to "jump" to another machines, with different domains.... actually many domains. As man resolv.conf tell us, search list for host-name lookup is limited up to 6 domains or 256 characters. How can I increase the number of domains lookup? Thanks in advance. | From the man page for resolv.conf In glibc 2.25 and earlier, the search list is limited to six domains with a total of 256 characters. Since glibc 2.26, the search list is unlimited. As such, upgrading glibc should resolve this issue.For Debians Buster and after, along with Ubuntus 17.10 and after, the package version of glibc is at or above 2.26, and only requires an apt update. It is possible to upgrade by hand if necessary otherwise. RHEL8 is baselined on glibc version 2.28 so no update is required; (and unreasonable for RHEL7 and earlier). Per distrowatch , Fedora 27 was the first to implement glibc 2.26. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140217/"
]
} |
245,909 | I have this bash script: gunzip -c /var/log/cisco/cisco.log-$(date +%Y%m%d).gz | awk '/ath_bstuck_tasklet/ { print $4 }' | sort | uniq -c > /tmp/netgear_beacon.txtecho "There are `wc -l /tmp/netgear_beacon.txt | awk '{print $1}'` Stuck beacon; resetting" >> /tmp/netgear_beacon.txtgunzip -c /var/log/cisco/cisco.log-`date +%Y%m%d`.gz | awk '/Virtual device ath0 asks to queue packet/ { print $4 }' | sort | uniq -c > /tmp/netgear_buffer_queue.txtecho "There are `wc -l /tmp/netgear_buffer_queue.txt | awk '{print $1}'` routers with 'Virtual device ath0 asks to queue packet' errors" >> /tmp/netgear_buffer_queue.txtgunzip -c /var/log/cisco/cisco.log-`date +%Y%m%d`.gz | awk '/CMS_MSG_DNSPROXY_RELOAD/ { print $4 }' | sort | uniq -c > /tmp/netgear_dns.txtecho "There are `wc -l /tmp/netgear_dns.txt | awk '{print $1}'` routers with 'DNS Proxy Issue' errors" >> /tmp/netgear_dns.txtgunzip -c /var/log/cisco/cisco.log-$(date +%Y%m%d).gz | awk '/beacon/ { print $4 }' | sort | uniq -c > /tmp/netgear_beacon_frame.txtecho "There are `wc -l /tmp/netgear_beacon_frame.txt | awk '{print $1}'` routers with beacon frame errors" >> /tmp/netgear_beacon_frame.txtgunzip -c /var/log/cisco/cisco.log-$(date +%Y%m%d).gz | awk '/ACK/ { print $4 }' | sort | uniq -c | awk -v x=50 '$1 >= x' > /tmp/netgear_ACK.txtecho "There are `wc -l /tmp/netgear_ACK.txt | awk '{print $1}'` routers with more than 50 ACK" >> /tmp/netgear_ACK.txt I would try to not repeat the gunzip command every time. I would run it just once and use it for all steps. I was thinking a variable, but is it the best practice? | There are no "best practices". Only things that make sense and make things easier. Extracting the common parts and parameterizing the rest is one such thing: lines="`gunzip -c /var/log/cisco/cisco.log-$(date +%Y%m%d).gz`"#gunzip would always output the same thing on the same day, so #just run it once and store the results in a variablegrepAndLog(){ local regex="$1" file="$2" msg="$3" filter="${4:-cat}" #^names for positional parameters printf "%s\n" "$lines" | grep "$regex" | cut -d' ' -f4 | sort | uniq -c | eval "$filter" > "/tmp/$file" local count=`wc -l < "/tmp/$file"` echo "There are $count "" $msg" >> "/tmp/$file"}grepAndLog ath_bstuck_tasklet netgear_bacon.txt \ 'Stuck beacon; resetting'grepAndLog netgear_buffer_queue netgear_buffer_queue.txt \ "routers with 'Virtual device ath0 asks to queue packet' errors"grepAndLog CMS_MSG_DNSPROXY_RELOAD netgear_dns.txt \ " routers with 'DNS Proxy Issue' errors"grepAndLog ath_bstuck_tasklet netgear_bacon.txt \ " routers with beacon frame errors"grepAndLog ACK netgear_ACK.txt \ " routers with more than 50 ACK" 'awk -v x=50 "\$1 >= x"' It's still a mainly-shell solution. But IMO more readable and over 40% shorter. About the code: I'm using grep "$regex" | cut -d' ' -f4 instead of the awk expression.Other than that the grepAndLog function is a generalization of what you do in each line of your script:You have some input (the output of gunzip), you grep that for an expression (the $regex parameter), and output the resulting lines, sorted and prefixed with count into a $file . Then you append the line count(I do wc -l < "$file" instead of wc -l "$file" | awk ... ) wrapped in a message whose beginning is constant and whose end varies ( $msg ). 
In your last line you don't simply grep, but you use another filter on top of that. Instead of creating an if branch for that in the function, I simply use cat as an implicit default additional filter in the normal cases where no fourth parameter exists ( local filter="${4:-cat}" means create a function-local variable filter whose content is the fourth parameter given to the function, or cat if no fourth parameter is provided). cat gets overridden if a fourth parameter is given to grepAndLog . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138120/"
]
} |
245,920 | I was wondering whether using wget it was possible to download an RPM and then pipe it through sudo rpm -i to install it, in a single line? I realize I could just run: wget -c <URL>sudo rpm -i <PACKAGE-NAME>.rpm to install the package but I was wondering whether it might be possible to do this in a single line using the quiet and write to standard output options of wget. I have tried using: wget -cqO- <URL> | sudo rpm -i but it returned: rpm: no packages given for install | RPM has native support to download a package from a URL. You can do: sudo rpm -i <URL> There is no need to download the RPM manually. If this support didn't exist, you could use bash 's process substitution. sudo bash -c 'rpm -i <(wget -O - <URL>)' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/245920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
245,937 | I'm trying to re-encode video streams from a Matroska file to save space, while keeping all the subtitles as-is, using ffmpeg. I want to write a generic command that works without me having to specify exact stream numbers. Now I can't figure out how to let ffmpeg pick its default video stream and default audio stream and then all subtitles. The current input file I'm working with has these streams, but other files will have different streams. [lavf] stream 0: video (mpeg2video), -vid 0 [lavf] stream 1: audio (ac3), -aid 0, -alang eng, Surround 5.1 [lavf] stream 2: audio (ac3), -aid 1, -alang fre, Surround 5.1 [lavf] stream 3: audio (ac3), -aid 2, -alang ita, Surround 5.1 [lavf] stream 4: audio (ac3), -aid 3, -alang spa, Surround 5.1 [lavf] stream 5: audio (ac3), -aid 4, -alang eng, Stereo [lavf] stream 6: subtitle (dvdsub), -sid 0, -slang eng [lavf] stream 7: subtitle (dvdsub), -sid 1, -slang fre [lavf] stream 8: subtitle (dvdsub), -sid 2, -slang ita [lavf] stream 9: subtitle (dvdsub), -sid 3, -slang spa [lavf] stream 10: subtitle (dvdsub), -sid 4, -slang ara [lavf] stream 11: subtitle (dvdsub), -sid 5, -slang dan [lavf] stream 12: subtitle (dvdsub), -sid 6, -slang dut [lavf] stream 13: subtitle (dvdsub), -sid 7, -slang fin [lavf] stream 14: subtitle (dvdsub), -sid 8, -slang ice [lavf] stream 15: subtitle (dvdsub), -sid 9, -slang nor [lavf] stream 16: subtitle (dvdsub), -sid 10, -slang por [lavf] stream 17: subtitle (dvdsub), -sid 11, -slang swe [lavf] stream 18: subtitle (dvdsub), -sid 12, -slang fre [lavf] stream 19: subtitle (dvdsub), -sid 13, -slang ita [lavf] stream 20: subtitle (dvdsub), -sid 14, -slang spa Commands I have tried: ffmpeg -i IN.mkv -c:v libx264 -threads 4 -speed 1 -f matroska OUT.mkv Result: One video stream, one audio stream, no subtitle streams . ffmpeg -i IN.mkv -c:v libx264 -threads 4 -speed 1 -f matroska -c:s copy OUT.mkv Result: One video stream, one audio stream, one subtitle stream . ffmpeg -i IN.mkv -c:v libx264 -threads 4 -speed 1 -f matroska -map 0 OUT.mkv Result: All video , all audio , all subtitles. ffmpeg -i IN.mkv -c:v libx264 -threads 4 -speed 1 -f matroska -c:s copy -map 0:s OUT.mkv Result: No video , no audio , all subtitles. As far as I can tell from the manual, -c:s copy is supposed to copy all the streams, not just the default one, but it won't. Perhaps it's a bug? To clarify, what I'm after is the result: one v ideo, one a udio and all s ubtitles. | The stream selection default behavior only selects one stream per type of stream , so inputs with multiple audio streams will create an output with one audio stream. To disable this behavior and manually choose desired streams use the -map option . These examples use -c copy to stream copy (re-mux) from the input to to the output. No re-encoding occurs. Stream copy all streams ffmpeg -i input -map 0 -c copy output 1st video stream, 2nd audio stream, all subtitles ffmpeg -i input -map 0:v:0 -map 0:a:1 -map 0:s -c copy output 3rd video stream, all audio streams, no subtitles This example uses negative mapping to exclude the subtitles. ffmpeg -i input -map 0:v:2 -map 0:a -map -0:s -c copy output Choosing streams from multiple inputs All video from input 0, all audio from input 1: ffmpeg -i input0 -i input1 -map 0:v -map 1:a -c copy output | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/245937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145074/"
]
} |
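For the exact combination asked about (re-encode the first video stream, keep the first audio stream, copy every subtitle), a hedged sketch built from the map specifiers above; the trailing ? on 0:s makes that mapping optional for inputs with no subtitles:

```sh
ffmpeg -i IN.mkv -map 0:v:0 -map 0:a:0 -map 0:s? \
       -c:v libx264 -c:a copy -c:s copy OUT.mkv
```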
245,946 | I read the following in the grymoire : A simple example is changing "day" in the "old" file to "night" in the "new" file: sed s/day/night/ <old >new Or another way (for UNIX beginners), sed s/day/night/ old >new Why might the author consider the first form more advanced? I mean, what are the advantages of using this form over the "beginner's" syntax? | One advantage to allowing the shell to do the open() like: utility <in >out as opposed to allowing the named utility to do the open() like: utility in >out ...is that the file-descriptor is secured before the named utility is called, or else if there is an error encountered during the open() , the utility is never called at all. This is the best way to guard against side-effects of possible race conditions - as can happen from time to time when working with streams and the stream editor . If a redirection fails, the shell short-circuits the call to the utility and writes an error message to stderr - the shell's stderr and not whatever you might have temporarily directed it to for the utility (well, that depends on the command-line order of redirections as well) - in a standard diagnostic format. The most simple way to test if you can open a file is to open it, and < does that implicitly before anything else. Probably the most obvious race condition indicated in the commands in your question involves the out redirection. In both forms the shell does the > write open as well and this happens regardless of whether sed can successfully open the readfile in the second form. So out gets truncated - and possibly needlessly. That could be bad if you only wanted to write your output if you could successfully open your input. That's not a problem, though, if you always open your input first, as is done in the first form. Otherwise, there are at least 10 numerically referenced file descriptors that can be manipulated with shell redirection syntax in that way, and these combinations can get kind of hairy. Also, when the shell does the open, the descriptor does not belong to the called command - as it does with the second version - but to the shell, and the called command only inherits it. It inherits in the same way any other commands called in the same compound command might be, and so commands can share input that way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55127/"
]
} |
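A small demonstration of the truncation point made above, assuming no-such-file does not exist and out already holds data (bash processes redirections left to right and stops at the first failure):

```sh
echo 'keep me' > out
sed 's/day/night/' no-such-file > out     # the shell truncates out first, then sed fails
cat out                                   # out is now empty

echo 'keep me' > out
sed 's/day/night/' < no-such-file > out   # the < open fails, so > is never processed
cat out                                   # still prints: keep me
```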
245,957 | I recently installed Ubuntu 14.04 on my laptop. I already had a single Windows 7 partition and two linux partitions. The problem is that Ubuntu overwrote the grub bootloader, and now there is no option to boot to my encrypted debian install Here is my disk layout Windows partition /dev/sda1 Extended partition /dev/sda4 Ubuntu / and /boot on /dev/sda5 /boot for Debian (ext3) /dev/sda2 LUKS volume LVM ROOT-FS / for Debian SWAP-FS swap for Debian I want to be able to boot to the encrypted debian install, Ubuntu and Windows from the grub boot screen. How can I do that? I don't want to use paid or closed-source software. Bonus: move the debian /boot and grub to a usb stick and boot from that. | One advantage to allowing the shell to do the open() like: utility <in >out as opposed to allowing the named utility to do the open() like: utility in >out ...is that the file-descriptor is secured before the named utility is called, or else if there is an error encountered during the open() , the utility is never called at all. This is the best way to guard against side-effects of possible race conditions - as can happen from time to time when working with streams and the stream editor . If a redirection fails, the shell short-circuits the call to the utility and writes an error message to stderr - the shell's stderr and not whatever you might have temporarily directed it to for the utility (well, that depends on the command-line order of redirections as well) - in a standard diagnostic format. The most simple way to test if you can open a file is to open it, and < does that implicitly before anything else. Probably the most obvious race condition indicated in the commands in your question involves the out redirection. In both forms the shell does the > write open as well and this happens regardless of whether sed can successfully open the readfile in the second form. So out gets truncated - and possibly needlessly. That could be bad if you only wanted to write your output if you could successfully open your input. That's not a problem, though, if you always open your input first, as is done in the first form. Otherwise, there are at least 10 numerically referenced file descriptors that can be manipulated with shell redirection syntax in that way, and these combinations can get kind of hairy. Also, when the shell does the open, the descriptor does not belong to the called command - as it does with the second version - but to the shell, and the called command only inherits it. It inherits in the same way any other commands called in the same compound command might be, and so commands can share input that way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144306/"
]
} |
245,962 | I have a large text file where a part of it looks like this (edited values): JULIANA XXXX006060 LI1033322 THC BRL 730.00XXXX006296 AA1004737 THC BRL 1,740.00SANTOS JULIANA XXXX006668 AA1004786 THC BRL 8,150.00SANTOS JULIANA CABINDA XXXX006697 AA1004777 THC BRL 2,325.00SANTOS JULIANA XXXX006699 AA1004790 THC BRL 2,325.00JULIANA BATA XXXX006141 CCC012946 THC BRL 1,460.00JULIANA BATA XXXX006153 CCC013054 THC BRL 870.00JULIANA XXXX006269 CCC013105 THC BRL 870.00JULIANA XXXX006295 CCC013083 THC BRL 870.00JULIANA BATA XXXX006305 CCC013043 THC BRL 1,460.00 I want to always grab (with a cut or awk or something else) the string that starts with XXXX00 , but it's never in the same field number. How can I do that in a shell-script? | Just grep for it: grep -oE 'XXXX00[0-9]*' file -o : Prints only the matching part. -E : Activates extended regular expressions. [0-9]* : After the string to search, only numbers should appear. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/245962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94943/"
]
} |
245,989 | I'm trying to list all the variables with a certain prefix, but that prefix is dynamically determined. For example: prefix=apple_apple_one=a1apple_two=a2 If I simply do: echo ${!apple_@} I can get the variable names that start with apple_. However, I would like to do this by using the variable prefix somehow. Everything I've tried leads to a bad substitution. For example I cannot: echo ${!${prefix}@} | Bash doesn't parse nested variable expansions. You can do what you want with eval : build the shell snippet ${!apple_@} , then use it in a command that you execute with eval . prefix=apple_eval 'vars=(${!'"$prefix"'@})' Make sure that prefix contains only valid identifier characters, otherwise the snippet could result in anything. eval is usually necessary to work with variables whose name is calculated dynamically, but in this specific case there's a completely different way to do the same thing: use the completion system. Bash's completion system works even from a script. The compgen builtin lists one completion per line, which is ambiguous when completions can contain newlines, but this isn't the case here — they don't even contain wildcard characters, so the output can be split with a simple unquoted expansion. This will safely output nothing if the prefix contains characters that aren't valid in identifiers. vars=($(compgen -v "$prefix")) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/245989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19912/"
]
} |
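Putting the compgen approach together with indirect expansion to list both the matching names and their values — a minimal sketch:

```sh
prefix=apple_
apple_one=a1; apple_two=a2
vars=($(compgen -v "$prefix"))          # -> apple_one apple_two
for v in "${vars[@]}"; do
    printf '%s=%s\n' "$v" "${!v}"       # apple_one=a1, apple_two=a2
done
```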
246,026 | In Bourne like shell which support array variable, we can use some parsing to check if variable is an array. All commands below were run after running a=(1 2 3) . zsh : $ declare -p atypeset -a aa=( 1 2 3 ) bash : $ declare -p adeclare -a a='([0]="1" [1]="2" [2]="3")' ksh93 : $ typeset -p atypeset -a a=(1 2 3) pdksh and its derivative: $ typeset -p aset -A atypeset a[0]=1typeset a[1]=2typeset a[2]=3 yash : $ typeset -p aa=('1' '2' '3')typeset a An example in bash : if declare -p var 2>/dev/null | grep -q 'declare -a'; then echo array variablefi This approach is too much work and need to spawn a subshell. Using other shell builtin like =~ in [[ ... ]] do not need a subshell, but is still too complicated. Is there easier way to accomplish this task? | I don't think you can, and I don't think it actually makes any difference. unset aa=xecho "${a[0]-not array}" x That does the same thing in either of ksh93 and bash . It looks like possibly all variables are arrays in those shells, or at least any regular variable which has not been assigned special attributes, but I didn't check much of that. The bash manual talks about different behaviors for an array versus a string variable when using += assignments, but it afterwards hedges and states that the the array only behaves differently in a compound assignment context. It also states that a variable is considered an array if any subscript has been assigned a value - and explicitly includes the possibility of a null-string. Above you can see that a regular assignment definitely results in a subscript being assigned - and so I guess everything is an array. Practically, possibly you can use: [ 1 = "${a[0]+${#a[@]}}" ] && echo not array ...to clearly pinpoint set variables that have only been assigned a single subscript of value 0. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38906/"
]
} |
246,048 | I'm looking for something to concatenate all files with given extension within a directory, except one. Like: cat *.txt !(DISCARD.txt) > catKEPT This should concatenate all *.txt files in directory, except DISCARD.txt. | find . -maxdepth 1 -iname '*.txt' -not -name 'DISCARD.txt' -exec cat {} +>catKEPT | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
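If you would rather keep the glob syntax hinted at in the question, bash's extglob can express "every .txt except DISCARD.txt" directly — a sketch, assuming the files sit in the current directory:

```sh
shopt -s extglob
cat !(DISCARD).txt > catKEPT    # !(DISCARD) matches any basename except DISCARD
```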
246,052 | I have a mirrored rpool: NAME USED AVAIL REFER MOUNTPOINTrpool 72.1G 1.22G 39.5K /rpoolrpool/ROOT 67.9G 1.22G 31K legacyrpool/ROOT/solaris 67.9G 1.22G 19.8G /rpool/ROOT/solaris/var 48.0G 1.22G 47.8G /varrpool/dump 1.25M 1.22G 1.02M -rpool/export 53.9M 1.22G 32K /exportrpool/export/home 53.8M 1.22G 33K /export/homerpool/export/home/m 53.8M 1.22G 53.7M /export/home/mrpool/swap 4.13G 1.35G 4.00G - My /var used a lot of space, probably some logs from samba I've read Solaris 11 and zfs, i don't understand space used , but I haven't any snapshot for /var (I've already deleted auto-snapshots): root@myhost:~# zfs list -t allNAME USED AVAIL REFER MOUNTPOINTrpool 72.1G 1.22G 39.5K /rpoolrpool@zfs-auto-snap_hourly-2015-11-27-19h04 19K - 39.5K -rpool@zfs-auto-snap_hourly-2015-11-28-10h08 19K - 39.5K -rpool@zfs-auto-snap_hourly-2015-11-28-11h08 0 - 39.5K -rpool/ROOT 67.9G 1.22G 31K legacyrpool/ROOT/solaris 67.9G 1.22G 19.8G /rpool/ROOT/solaris@install 106M - 2.99G -rpool/ROOT/solaris/var 48.0G 1.22G 47.8G /varrpool/ROOT/solaris/var@install 188M - 304M -rpool/dump 1.25M 1.22G 1.02M -rpool/export 53.9M 1.22G 32K /exportrpool/export/home 53.8M 1.22G 33K /export/homerpool/export/home/m 53.8M 1.22G 53.7M /export/home/mrpool/export/home/m @zfs-auto-snap_hourly-2015-11-28-10h08 94K - 53.7M -rpool/export/home/m @zfs-auto-snap_hourly-2015-11-28-11h08 34K - 53.7M -rpool/swap 4.13G 1.35G 4.00G - it seems to me, the space is used by current files in /var, but when I check root@myhost:/var# du -sh 14G . I cannot find the half of my space... UPDATE: Okay, I've restarted samba service # svcadm restart cswsamba And now root@myhost:/var# zfs list -t allNAME USED AVAIL REFER MOUNTPOINTrpool 39.3G 34.0G 39.5K /rpoolrpool@zfs-auto-snap_hourly-2015-11-27-19h04 19K - 39.5K -rpool@zfs-auto-snap_hourly-2015-11-28-10h08 19K - 39.5K -rpool@zfs-auto-snap_hourly-2015-11-28-12h08 0 - 39.5K -rpool/ROOT 35.1G 34.0G 31K legacyrpool/ROOT/solaris 35.1G 34.0G 19.8G /rpool/ROOT/solaris@install 106M - 2.99G -rpool/ROOT/solaris/var 15.1G 34.0G 15.0G /varrpool/ROOT/solaris/var@install 188M - 304M -rpool/ROOT/solaris/var@zfs-auto-snap_hourly-2015-11-28-12h08 2.47M - 14.8G -rpool/dump 1.25M 34.0G 1.02M -rpool/export 54.0M 34.0G 32K /exportrpool/export/home 53.9M 34.0G 33K /export/homerpool/export/home/m 53.9M 34.0G 53.7M /export/home/mrpool/export/home/m @zfs-auto-snap_hourly-2015-11-28-10h08 94K - 53.7M -rpool/export/home/m @zfs-auto-snap_hourly-2015-11-28-11h08 80K - 53.7M -rpool/export/home/m @zfs-auto-snap_hourly-2015-11-28-12h08 66K - 53.7M -rpool/swap 4.13G 34.2G 4.00G - What happened and how can I keep clear of this error? | find . -maxdepth 1 -iname '*.txt' -not -name 'DISCARD.txt' -exec cat {} +>catKEPT | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145148/"
]
} |
246,082 | I have a bash script containing a group of commands in curly braces { ... } . This group contains some initial echo commands and then one loop . At each iteration the loop executes various slow commands (basically with curl and some extra parsing). Each iteration is slow (because of network interaction) but it prints one line (of python code); as far as I can see, there should be no buffering issue coming from the commands themselves because they terminate their job and leave. The whole group of commands is piped to python -u (I also tried with tail -f in order to check) and obviously the whole loop is executed before anything is read by python -u or tail -f . I know how to unbuffer (when possible) one command with various tools like stdbuf but I don't think it can help here because it looks like the issue comes from the command-grouping rather than from such or such command. Any hint? | (Note to future readers: the tone of exasperation here is not for the question, but for the mistakes I made trying to answer it and the multiple edits they entailed.) Oh, for pity's sake. The problem is in tail -f . This works just fine: #!/bin/bashprintf 'hi\n'{ for i in 1 2 3 4; do sleep 0.5 /bin/echo $i done;} | catprintf 'bye\n' It's not the pipe, it's not the group. It's tail . As in, chasing our own tails! So, tail -f failed because it doesn't output right away for some reason. Not sure why python -u is failing, but I don't think it's anything in the script. Maybe try unbuffer with it. Try your script with cat , at least, and verify that it's unbuffered in that case. Earlier failed attempt intentionally left here so future readers can make sense of the comments. This script exhibits the same kind of buffering problem you're getting: #!/bin/bashprintf 'hi\n'{ for i in 1 2 3 4; do sleep 0.5 printf '%s\n' $i done;} | tail -fprintf 'bye\n' This one does not. Output inside the group is redirected to stderr, then stderr from the whole group is piped to the command. Since it's stderr, it's unbuffered. #!/bin/bashprintf 'hi\n'{ for i in 1 2 3 4; do sleep 0.5 printf '%s\n' $i 1>&2 done;} |& tail -fprintf 'bye\n' Adapted from Wang HongQin's answer in this question . The difficulty was in finding a way to unbuffer the pipe with braces rather than an explicit command. Had to fiddle around a while to get the redirection working properly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66810/"
]
} |
246,104 | I have a list of IP addresses in file: 72.204.55.250 72.204.55.250 72.204.55.250 72.204.55.250 72.204.55.250 96.41.51.202 208.115.113.91 178.137.94.166 178.137.94.166 208.115.113.91 96.41.51.202 141.8.143.179 141.8.143.179 Now I am going to sort them and call uniq -c service and I get: 2 141.8.143.179 2 178.137.94.166 2 208.115.113.91 5 72.204.55.250 2 96.41.51.202 Now I will sort them by most frequent ( sort -rn ) but my problem is to also sort them by IP address when the number of repetitions is same in descending order. I've found a sort command for only IP address which works: sort -rn -t . -k1,1 -k2,2 -k 3,3 -k4,4 but as I mentioned it above, I don't know how to combine it with first column (number of repetitions) to get these expected results: 5 72.204.55.250 2 208.115.113.91 2 178.137.94.166 2 141.8.143.179 2 96.41.51.202 How can I achieve that? Thanks in advance for any help. | If your sort can do a stable sort , e.g. GNU sort with the -s or --stable option, lines with fields unrelated to the sort keys will not be sorted by those fields when there are ties, but will stay in their same relative positions. $ sort -n -t. -k1,1 -k2,2 -k3,3 -k4,4 | uniq -c | sort -n -r -s 5 72.204.55.250 2 96.41.51.202 2 141.8.143.179 2 178.137.94.166 2 208.115.113.91 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246104",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145190/"
]
} |
246,143 | When I try stepping through a program, gdb throws this error std::ostream::operator<< (this=0x6013c0 <std::cout@@GLIBCXX_3.4>, __n=2)at /build/gcc/src/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/ostream.tcc:110110 /build/gcc/src/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/ostream.tcc: No such file or directory. This is the program I am trying to debug. #include <iostream>int printPrime(int, int);int main(){ int t, c; std::cin >> t; c = t; int m[t], n[t]; while (t--) { std::cin >> m[t] >> n[t]; } while (c--) { printPrime(m[c], n[c]); std::cout << std::endl; } return 0;}int printPrime(int m, int n){ do { int c = m; int lim = c>>2; if (c <= 1) continue; while (c-- && c>lim) { if (m%c == 0) { if (c == 1) { std::cout << m << std::endl; break; } break; } } } while(m++ && m<=n);} There is no problem with the program code as it runs correctly. I guess it is a problem with my install of GDB on Arch. The error is shown when it encounters cin or cout . This error doesn't show when I tried running it in my Ubuntu VM | I've filled a bug report against this issue: https://bugs.archlinux.org/task/47220 This happens because the ostream source file cannot be found. Workaround 1 You can strip the libstdc++ library: sudo strip /usr/lib/libstdc++.so.6 And then gdb will not try to open the source file and the error will not appear anymore. You can switch back to the unstripped version by reinstalling it with: sudo pacman -S gcc-libs Workaround 2 You can add a substitution rule in gdb: gdb tst(gdb) set substitute-path /build/gcc/src/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include /usr/include/c++/5.2.0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95393/"
]
} |
246,167 | I have tried to run this script a few times now and it keeps hanging. I have 60,000 images and it goes through about a 1000 and then stops. I have been running the top command and the process pngquant dissapears it seems. Any help would be much appreciated. The last output I got was Optimized PNG: ./12/thumbnail/112.pngOptimized PNG: ./12/thumbnail/photography.pngOptimized PNG: ./12/thumbnail/Christmas-01.pngOptimized PNG: ./12/thumbnail/OA1920.pngerror: cannot open ./12/thumbnail/The for readingOptimized PNG: ./12/thumbnail/Theerror: cannot open Official for readingOptimized PNG: Officialerror: cannot open AndreasCY for readingOptimized PNG: AndreasCY And then it stopped. The script looks like this. #!/bin/bashcd /var/www/html/wp-content/uploads/2014/for PNG in $(find . -name '*.png'); do pngquant --ext .png --force 256 ${PNG} echo "Optimized PNG: ${PNG}"done | Rui’s answer contains some good advice,although he seems to have missed the actual causeof the failure of your command. It appears from the output that you show that you have a filenamed ./12/thumbnail/The Official AndreasCY … . png . When you say for PNG in $(find . -name '*.png') it’s like saying for PNG in ./12/thumbnail/112.png ./12/thumbnail/photography.png ./12/thumbnail/Christmas-01.png \ ./12/thumbnail/OA1920.png ./12/thumbnail/The Official AndreasCY … . png … with the result that ./12/thumbnail/The , Official , and AndreasCY get treated as if they were separate filenames —and, since they aren’t, you get the error you got. You should be very careful of using $(…) to generate a list of filenames —by which I mean, avoid doing that if at all possible. find … -exec is a much better approach. However, the command Rui gave won’t work: -exec "Optimized PNG: " will fail because there’s no such command as "Optimized PNG: " . I suggest that you keep the following partsof your original answer and Rui’s answer: #!/bin/shcd /var/www/html/wp-content/uploads/2014 (You don’t really need the “/” at the end.) find . -name '*.png' -type f … and then finish it with one of the following: -exec pngquant --ext .png --force 256 {} \; -exec echo "Optimized PNG:" {} \; -exec pngquant --ext .png --force 256 {} \; -printf "Optimized PNG: %p\n" -exec sh -c 'pngquant --ext .png --force 256 "$1"; echo "Optimized PNG: $1"' sh {} \; , or -exec myscript {} \; where myscript is #!/bin/shpngquant --ext .png --force 256 "$1"echo "Optimized PNG: $1" Note that you should always quote shell variables(e.g., "$PNG" and "$1" ) unless you have a good reason not to,and you’re sure you know what you’re doing. By contrast, while braces (e.g., ${PNG} ) can be important,they’re not as important as quotes. And, on general principles, I suggest that you use #!/bin/sh unless you're explicitly writing a script that depends on bash features,and will not work with a POSIX-compliant shell — and I advise against that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145228/"
]
} |
246,170 | I can't figure out how to write ! symbol in bash scripts when putting it in double quotes strings. For example: var="hello! my name is $name! bye!" Something crazy happens: $ age=20$ name='boda'$ var="hello! my name is $name! bye!" When I press enter at last command the command repeats itself (types itself) without the last ! : $ var="hello! my name is $name! bye" If I press enter again $ var="hello! my name is $name bye" If i press enter again it disappears nothing gets output $ If I try this: $ echo "hello\! my name is $name\! bye\!" Then it outputs: hello\! my name is boda\! bye\! If i use single quotes then my name doesn't get expanded: $ echo 'hello! my name is $name! bye!' Outputs are: hello! my name is $name! bye! I have it working this way: $ echo "hello"'!'" my name is $name"'!'" bye"'!' But it's one big mess with " and ' impossible to understand/edit/maintain/update. Can anyone help? | Rui’s answer contains some good advice,although he seems to have missed the actual causeof the failure of your command. It appears from the output that you show that you have a filenamed ./12/thumbnail/The Official AndreasCY … . png . When you say for PNG in $(find . -name '*.png') it’s like saying for PNG in ./12/thumbnail/112.png ./12/thumbnail/photography.png ./12/thumbnail/Christmas-01.png \ ./12/thumbnail/OA1920.png ./12/thumbnail/The Official AndreasCY … . png … with the result that ./12/thumbnail/The , Official , and AndreasCY get treated as if they were separate filenames —and, since they aren’t, you get the error you got. You should be very careful of using $(…) to generate a list of filenames —by which I mean, avoid doing that if at all possible. find … -exec is a much better approach. However, the command Rui gave won’t work: -exec "Optimized PNG: " will fail because there’s no such command as "Optimized PNG: " . I suggest that you keep the following partsof your original answer and Rui’s answer: #!/bin/shcd /var/www/html/wp-content/uploads/2014 (You don’t really need the “/” at the end.) find . -name '*.png' -type f … and then finish it with one of the following: -exec pngquant --ext .png --force 256 {} \; -exec echo "Optimized PNG:" {} \; -exec pngquant --ext .png --force 256 {} \; -printf "Optimized PNG: %p\n" -exec sh -c 'pngquant --ext .png --force 256 "$1"; echo "Optimized PNG: $1"' sh {} \; , or -exec myscript {} \; where myscript is #!/bin/shpngquant --ext .png --force 256 "$1"echo "Optimized PNG: $1" Note that you should always quote shell variables(e.g., "$PNG" and "$1" ) unless you have a good reason not to,and you’re sure you know what you’re doing. By contrast, while braces (e.g., ${PNG} ) can be important,they’re not as important as quotes. And, on general principles, I suggest that you use #!/bin/sh unless you're explicitly writing a script that depends on bash features,and will not work with a POSIX-compliant shell — and I advise against that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145231/"
]
} |
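A quick interactive sanity check of the set +H route described above (run it in a terminal, not in a script, since scripts do not perform history expansion anyway):
set +H                               # disable history expansion for this session
name=boda
var="hello! my name is $name! bye!"
printf '%s\n' "$var"                 # hello! my name is boda! bye!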
246,312 | So I'm trying to get a handle on how Linux's mount namespace works. So, I did a little experiment and opened up two terminals and ran the following: Terminal 1 root@goliath:~# mkdir a broot@goliath:~# touch a/foo.txtroot@goliath:~# unshare --mount -- /bin/bashroot@goliath:~# mount --bind a broot@goliath:~# ls bfoo.txt Terminal 2 root@goliath:~# ls bfoo.txt How come the mount is visible in Terminal 2? Since it is not part of the mount namespace I expected the directory to appear empty here. I also tried passing -o shared=no and using --make-private options with mount , but I got the same result. What am I missing and how can I make it actually private? | If you are on a systemd-based distribution with a util-linux version less than 2.27, you will see this unintuitive behavior. This is because CLONE_NEWNS propogates flags such as shared depending on a setting in the kernel. This setting is normally private , but systemd changes this to shared . As of util-linux 2.27, a patch was made that changes the default behaviour of the unshare command to use private as the default propagation behaviour as to be more intuitive. Solution If you are on a systemd system with util-linux prior to version 2.27, you must remount the root filesystem after running the unshare command: # unshare --mount -- /bin/bash# mount --make-private -o remount / If you are on a systemd system with util-linux version 2.27 or later, it should work as expected in the example you gave in your question, verbatim, without the need to remount. If not, pass --propagation private to the unshare command to force the propagation of the mount namespace to be private. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48419/"
]
} |
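A compact sketch of the workaround, using the same directories as the question; the only liberty taken is --make-rprivate, the recursive form, which also covers any submounts:
sudo unshare --mount -- /bin/bash
mount --make-rprivate /    # keep mount events from propagating back to the parent namespace
mount --bind a b
ls b                       # foo.txt is visible here, but not in other terminals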
246,315 | I have a script that fails to detect zero length strings,the script uses [ -n $value ] in a bash conditional expression, i.e. #!/usr/bin/env bashvalue=""if [ -n $value ]then echo "value is non-zero"fi result value is non-zero If I use [[ -n $value ]] it works, i.e. #!/usr/bin/env bashvalue=""if [[ -n $value ]]then echo "value is non-zero"fi using [[ produces no output as expected. From the man page: [[ expression ]] Return a status of 0 or 1 depending on the evaluation of the conditional expression expression. Expressions are composed of the pri‐ maries described below under CONDITIONAL EXPRESSIONS. Word splitting and pathname expansion are not performed on the words between the [[ and ]]; tilde expansion, parameter and variable expansion, arithmetic expansion, command substitution, process substitution, and quote removal are performed. Conditional operators such as -f must be unquoted to be recognized as primaries. I can't make out an explanation for the behaviour from this. Why does [[ detect zero length strings but [ doesn't? | this is because [[ takes an expression , and [ takes arguments which it translates into an expression . [[ is syntax - it isn't a builtin command as [ is, but rather [[ is a compound command and is more akin to { or ( than it is to [ . in any case, because [[ is parsed alongside $expansions , it has no difficulty understanding the difference between a null-valued expansion and a missing operand. [ , however, is a routine run after all command-line expansions have already taken place, and by the time it evaluates its expressions, $null_expansion has already been expanded away into nothing, and so all it receives is [ -n ] , which may not be a valid expression. [ is spec'd to return true for a not-null single-argument case - as it does for -n here - but the very same spec goes on to say... The two commands: test "$1" test ! "$1" could not be used reliably on some historical systems. Unexpected results would occur if such a string expression were used and $1 expanded to ! , ( , or a known unary primary (such as -n ) . Better constructs are: test -n "$1" test -z "$1" there are upsides and downsides to both forms. [ expressions can be constructed out of expansions, and so: [ "-${z:-n}" "$var" ] ...could be a perfectly valid way to build a test, but is not doable with: [[ "-${z:-n}" "$var" ]] ...which is a nonsense command. The differences are entirely due to the command-line parse-point at which the test is run. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
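A two-line demonstration of the practical consequence: quoting the expansion gives [ its operand, so the old-style test behaves correctly as well.
value=""
[ -n "$value" ] && echo non-empty || echo empty   # prints "empty": test sees -n plus an empty argument
[ -n $value ]   && echo non-empty || echo empty   # prints "non-empty": the line collapses to [ -n ]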
246,338 | On 2013-01-10 Glenn Fowler posted this to the ast-users mailing list : As has been pointed out several times on the AST and UWIN lists, AT&T gives very little support to OpenSouce software, which is why we have so few people involved with our rather large collection of AST software. In spite of this, ksh , nmake , vczip , UWIN and other AST tools continue to be used in several AT&T projects. It turns out that software isn't the only thing lacking support: both dgk (David Korn) (AT&T fellow, 36 years of service) and gsf (Glenn Fowler) (AT&T fellow, 29 years of service) have been terminated, effective October 10. Our third major partner, Phong Vo (AT&T fellow, 32 years of service), left a few months ago for Google. The UWIN maintainer, Jeff Fellin, is still with AT&T and provides UWIN support for some critical operations. Both dgk and gsf will continue to work on AST software, and might actually have more time (at least in the short run) to focus on it. The download site and mail groups will remain within AT&T for at least the next several months. Our AT&T colleague, dr.ek, AST user and bug detector, will maintain the site. We have secured the astopen.org domain and are investigating non-AT&T hosting options, including a repository with bug tracking. The process of change will take time; the patience of the user community will be greatly appreciated. Its quite a shock to have 3 weeks to plan personal, career, and hacking futures after working in an environment that has essentially been stable for almost 30 years. The user groups will be informed as plans solidify. Korn's own wikipedia page says he worked for AT&T Labs Research until 2013..., but he is now working for Google citation needed . A dgkorn github user account was created in November 2014, but it has been the source of exactly 0 public contributions since that time, and subscribes to as many repos. Since 2013, the related mailing-lists have grown progressively less active. For example, the fourth-quarter ast-developers list for 2013 had posted 156 messages by 2013-12-01, but the same list for fourth-quarter 2015 lists only three messages, and this is the last of them: Subject: Re: [ast-developers] Transitioning ast to GitHub Is there any intention to transition the ast codebase to a source code repository like GitHub? That would make it much easier for the community to contribute. I'm concerned that without such a collaborative environment, ast-related development will stall as bug reports and source-code patches get lost in the ether. Does anyone have a full git repo they can publish somewhere (repo.or.cz, github, whatever)? Git server is down for ages, now even www2.research.att.com (204.178.8.28) went down. This makes one wonder about the future of Kornshell. Has it died? Are we to see no more releases? And, indeed, though AT&T lists all of the AST links at their labs research landing page, none of these seem to work. These are the same dead links listed at kornshell.com for download. Even if the current server state should prove only temporary for now, the dried-up mailing-list doesn't seem to bode well. And so, is the korn shell now kaput? Or is there more activity along these lines elsewhere? | NO tldr: github.com/att/ast and github.com/att/uwin On Jan 19-20, 2016 the following ( 1 | 2 ) messages were posted to the ast-users mailing-list : (and I consider the dgk has some patches comment especially encouraging) Wed, Jan 20 2016; From Glenn Fowler : Thanks Lefty for all the work getting this up and running. 
I know dgk has some patches in the works. He may be offline for the next few weeks. Tue, Jan 19, 2016; From Eleftherios Koutsofios : hi AST and UWIN users. as many of you noticed, the download site on www.research.att.com went off the air shortly before the end of the year due to some security issue. the timing was unfortunate because several people including me were on vacation so it's been down for a long time. but we've finally managed to move most of that software on GitHub. you can find the AST and UWIN software packages at: https://github.com/att/uwin and https://github.com/att/ast (btw. the /att tree on GitHub hosts a lot of open source software developed by the AT&T Research group. feel free to browse. I'll be putting up some of my code there soon) . /att/ast corresponds to the ast-open package. it includes the software that was also available under individual packages, like ast-ksh, ast-dss, etc., so I decided to only create this one. it has 3 branches, matching the old structure: master (i.e. official), alpha, and beta. beta is the most recent one. it includes the last package I had gotten from Glenn and Dave with some minor fixes to get it to compile on some new OS versions, like Centos 7 and Ubuntu 14. /att/uwin is the source code for the UWIN system. it has a master and a beta branch. I don't have an environment to build and test this on, so I don't know how well it builds. cloning either of these git repos is equivalent to downloading the INIT and ast-open (or INIT and uwin) packages from the old site and then running: ./bin/package read so the next step after the clone step is to run: ./bin/package make vanilla build, where no previous version of NMAKE is available should still work and on some systems that was actually the way to go for me. as an example, to get and compile the beta branch of AST: git clone --branch beta \https://github.com/att/ast.gitcd ast./bin/package make very little of the documentation from the old site has moved to the GitHub site, I'll try to migrate the rest later, I just wanted to get the software up again. thanks lefteris | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/246338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52934/"
]
} |
246,410 | I have the following in my .bashrc file I use for a log: function log(){ RED="\e[0;31m" RESET="\e[0m" echo -e "${RED}$(date)" "${RESET}$*" >> "$HOME"/mylog.txt} But when I do something with an apostrophe in it, it comes up with some sort of prompt and does not log it properly. How do I escape all the text being input into the file? Example: $ log this is a testing's post> hello> > ^C$ Thanks. | The problem you have has nothing to do with echo -e or your log() function. The problem is with apostrophes: log this is a testing's post The shell (bash, in your case) has special meanings for certain characters. Apostrophes (single quotes) are used to quote entire strings, and prevent most other kinds of interpolation. bash expects them to come in pairs, which is why you get the extra prompt lines until you type the second one. If you want a literal single quote in your string, you need to tell bash, by escaping it via \' , like so: log this is a testing\'s post Again, log is beside the point. You can try this out with plain old echo if you like: echo this is a testing\'s post See Which characters need to be escaped in bash for more info. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117923/"
]
} |
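Equivalently, you can avoid escaping altogether by quoting the whole message when calling the function; a single quote is literal inside double quotes, and variables still expand:
log "this is a testing's post"
log "it's $USER's post"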
246,419 | I have a started vim . I suspended it ( Ctrl + Z ). So I was in terminal session and I would like to open some file with the same vim session (new tab). I can open the file in new vim session. Actually, path to file could be constructed. For example , vim `find $PWD -name build.xml` Is it possible to open this file inside the running vim session from hosting terminal session? | The problem with Ctrl - Z When you suspend a process with Ctrl - Z , the process gets a SIGTSTP signal, and all execution will stop (i.e., no more CPU cycles), until a SIGCONT signal comes along. You will not be able to send vim any commands or input while it is suspended. In other words, don't use Ctrl - Z . Yet if you have vim compiled with the clientserver feature enabled, you can make use of the --servername and --remote-* options: Use vim --remote When starting your vim session for the first time, use vim --servername VIM [filename ...] (filename is optional if you want to start with a blank session). Leave it running in your terminal. Now you can control it from any other terminal window, tab, machine, etc., via vim --remote commands. To open a file (e.g., file.txt in a new tab of your existing vim session: vim --remote-tab file.txt To use vim 's internal :tabfind functionality (see :help find for more information): vim --remote-send ":tabfind filename.txt<CR>" To use your system's find(1) program instead, as you asked in your question: vim --remote-tab `find $PWD -name build.xml` Multiple sessions You can also specify a different --servername , which is useful if you want multiple vim sessions. In that case, you need to supply the --servername argument every time: vim --servername HAMBURGER # Start new session named "HAMBURGER"vim --servername HAMBURGER --remote-tab `find $PWD -name BACON` Of course you can roll this all into a shell script or two to save yourself some typing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39370/"
]
} |
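If you do roll it into a script as the answer suggests, it could look roughly like this; the script name vt and the server name VIM are just placeholders, and it assumes vim was built with +clientserver and that an X session is available:
#!/bin/sh
# vt: open arguments as tabs in a running vim server, or start one if none exists
if vim --serverlist | grep -qx VIM; then
    exec vim --servername VIM --remote-tab "$@"
else
    exec vim --servername VIM "$@"
fi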
246,436 | I would like to have a dynamic motd, but I can't figure out how to do it. I tried what I found, adding /etc/update-motd.d/00-header , 10-sysinfo , 90-footer , and symlinking to /etc/motd /var/run/motd.dynamic , /run/motd.dynamic , /run/motd or /var/run/motd . I've got these lines in /etc/pam.d/sshd : # Print the message of the day upon successful login.# This includes a dynamically generated part from /run/motd.dynamic# and a static (admin-editable) part from /etc/motd.session optional pam_motd.so motd=/run/motd.dynamicsession optional pam_motd.so noupdate I'm also confused with systemd. Is there a way to do this? Could someone provide a example with a simple fortune? | I am able to test simple dynamic-motd with fortune example on my Debian Jessie 8.2 host as below and found the issue to be related to a buggy behavior. mkdir /etc/update-motd.dcd /etc/update-motd.d Created two test files as below and made them executable root@debian:/# cd /etc/update-motd.d/root@debian:/etc/update-motd.d# ls -l total 8-rwxr-xr-x 1 root root 58 Dec 1 23:21 00-header-rwxr-xr-x 1 root root 41 Dec 1 22:52 90-fortuneroot@debian:/etc/update-motd.d# cat 00-header #!/bin/bashechoecho 'Welcome !! This is a header'echoroot@debian:/etc/update-motd.d# cat 90-fortune #!/bin/bashecho/usr/games/fortuneecho However at this time, there was no change in motd. So i strace'd sshd process.From that trace (interesting parts shown below), you can see that newly created motd.new file is renamed to /var/run/motd. However it's later trying to read from /run/motd.dynamic - which was never created 20318 rename("/var/run/motd.new", "/var/run/motd") = 020318 open("/run/motd.dynamic", O_RDONLY) = -1 ENOENT (No such file or directory)20318 open("/etc/motd", O_RDONLY) = 8 The issue seem to be related to inconsistencies with pam_motd module. See bug report https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=743286;msg=2 Simply changing motd file location from /run/motd.dynamic to /run/motd in /etc/pam.d/sshd - makes it work for me root@debian:/etc/pam.d# grep pam_motd sshd#session optional pam_motd.so motd=/run/motd.dynamicsession optional pam_motd.so motd=/run/motdsession optional pam_motd.so noupdate Here is the sample MOTD seen during ssh login ... Welcome !! This is a header* Culus fears perl - the language with optional errorsThe programs included with the Debian GNU/Linux system are free software;the exact distribution terms for each program are described in theindividual files in /usr/share/doc/*/copyright.Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extentpermitted by applicable law.You have new mail.Last login: Tue Dec 1 23:49:57 2015 from x.x.x.x | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145386/"
]
} |
246,489 | I'm trying to write up a little helper script that will change permissions and ownership to some sites on a server. Right now, I can either pass in 1 site, or do all via simply skipping that argument. What I am finding is that I need to be able to apply to multiple sites on the server, but not all, so I tried making an attempt at passing an array via: SLIST=("my.site.com" "your.site.com")./website-perms 644 755 kevin "${SLIST[@]}" However, it only does the first item in the array my.site.com How do I fix this so I can pass in an array of sites? FULL CODE #!/bin/bash# Done nightly, and copied to /usr/bin/if [[ $EUID -ne 0 ]]; then echo "This script must be run as root" 1>&2 exit 1fiif [ $# -eq 0 ]; then echo "usage: ./website-perms.sh fileperm folderperm owner (1|2) (1=kevin,2=www-data) (optional) Site Array" exit 1fifunction chg_perms() { echo "find $1 -type f -exec chmod $2 {} \\"; echo "find $1 -type d -exec chmod $3 {} \\"; find $1 -type f -exec chmod $2 {} \; find $1 -type d -exec chmod $3 {} \; echo "-- chg_perms done";}function chg_owner() { echo "chown -R $2:www-data $1"; chown -R $2:www-data $1; echo "-- chg_owner done";}SITES=$4;if [ -z $SITES ]; then for dir in /var/www/httpdocs/*/ do dir=${dir%*/} chg_perms "/var/www/httpdocs/${dir##*/}" $1 $2 chg_owner "/var/www/httpdocs/${dir##*/}" $3 done;else for dir in "${SITES[@]}" #ONLY DOES THE FIRST ITEM do chg_perms "/var/www/httpdocs/$dir" $1 $2 chg_owner "/var/www/httpdocs/$dir" $3 donefi; | Nice script. Usually I would use all remaining arguments as the list of sites. Something like this (I have not tested these mods): if [ $# -lt 3 ]; then echo >&2 "usage: $0 fileperm folderperm owner [site ...]" exit 1fi [ . . . ] fileperm="$1"folderperm="$2"owner="$3"shift 3 # shift first 3 args off of listif [ $# -eq 0 ]; then for dir in /var/www/httpdocs/*/ #stackexchange syntax coloring fix*/ do dir="${dir%/*}" chg_perms "/var/www/httpdocs/${dir##*/}" "$fileperm" "$folderperm" chg_owner "/var/www/httpdocs/${dir##*/}" "$owner" done;else for dir # step through positional args ($1,$2,...,$N) do chg_perms "/var/www/httpdocs/$dir" "$fileperm" "$folderperm" chg_owner "/var/www/httpdocs/$dir" "$owner" donefi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108824/"
]
} |
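With the argument handling above, the call site from the question needs no changes; the array simply expands into the trailing positional parameters:
SLIST=("my.site.com" "your.site.com")
./website-perms 644 755 kevin "${SLIST[@]}"   # loops over the two named sites
./website-perms 644 755 kevin                 # no sites given: every directory under /var/www/httpdocs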
246,535 | I have a file with .rar extension, ex: foo.rar I want to extract content from that file, how do I extract it? | You can install unrar - "Unarchiver for .rar files" or unp - "unpack (almost) everything with one command" To unrar a file: unrar x <myfile> To unp a file: unp <myfile.rar> Since unrar is not open source, some distros might not have it in their package manager already. If it's not, try unrar-free . Notice that unrar x <myfile> will preserve directory structure in archive, in difference with unrar e <myfile> which will flatten it | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/246535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119746/"
]
} |
246,555 | I have 10 files in current directory: 10test1test2test3test4test5test6test7test8test9test I want to remove all file except 2test and 3test , but I run command rm !(2test|3test) doesn't work. I get the following error: zsh: no matches found: !(2test|3test) | !(pattern) is ksh glob syntax, in zsh , you use ^(pattern) to negate the matching when extendedglob enabled: setopt extendedglobprint -rl -- ^(2test|3test) If you want to use ksh syntax, you need to enable kshglob : setopt kshglobprint -rl -- !(2test|3test) You can also use the and-not / except operator: setopt extendedglobprint -rl -- *test~[23]* ( *test files except those that start with 2 or 3 ). Also not that unless the nobareglobqual option is enabled or you use | s within them, trailing (...) glob grouping operators conflict with glob qualifiers. For example, in !(foo) or ^(foo) , the foo would be treated as a glob qualifier. You'd need ^foo or !(foo)(#q) (the (#q) adds a non-bare (explicit) glob qualifier). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112720/"
]
} |
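Applied to the rm from the question, with a dry run first so you can see exactly what the negated glob matches:
setopt extendedglob
print -rl -- ^(2test|3test)   # preview what would be deleted
rm -- ^(2test|3test)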
246,622 | How can I list the named destinations in a PDF file? Named destinations are the formal name for what you might call anchors. Major browsers jump to the named destination foo when you follow a link to http://example.com/some.pdf#foo . I have documents where I can see anchors working, but I can't seem to find a way to list the anchors. Evince, okular and xpdf will jump to them when instructed but don't seem to have an interface that lists them. pdftk dump_data lists bookmarks, but that's not the same thing (that's table of content entries, which may well be at the same position as named destinations but can't be used as anchors). I'm looking for a command line solution (suitable, for example, for use in a completion function after the likes of evince -n ). Inasmuch as this is meaningful, I'd like to list the destinations in the order in which they appear in the document. Bonus: show the target page number and other information that helps figure out approximately where the destination is. See also View anchors in a PDF document on Software Recommendations for a GUI viewer. | Poppler's pdfinfo command-line utility will provide you with page number, position, and name for all named destinations in a PDF. You need at least version 0.58 of Poppler. $ pdfinfo -dests input.pdfPage Destination Name 1 [ XYZ null null null ] "F1" 1 [ XYZ 122 458 null ] "G1.1500945" 1 [ XYZ 79 107 null ] "G1.1500953" 1 [ XYZ 79 81 null ] "G1.1500954" 1 [ XYZ null null null ] "P.1" 2 [ XYZ null null null ] "L1" 2 [ XYZ null null null ] "P.2"(...) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
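If you only need the destination names (for example for a shell completion function, as the question mentions), a rough post-filter on the output above works, assuming the names themselves contain no whitespace:
pdfinfo -dests input.pdf | awk 'NR > 1 { gsub(/"/, "", $NF); print $NF }'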
246,630 | Let's say I open a file with less and look for a pattern such as /time=32 , then exit with q , and maybe run other commands. Then, ifI open the same file again with less and hit n to repeat the lastpattern sought, less remembers it from the first time less wascalled. How is the pattern saved after exiting less for the first time?where is it saved? PS: I am using bash on Ubuntu 12.04 | The pattern is saved in $HOME/.lesshst which contains something like: .less-history-file:.search"journal"67"link A command is preceded by ' . ' (dot) an the argument is preceded by ' " ' (double quotes), so if I edit by hand the .lesshst file to append for example the string "TEST : .less-history-file:.search"journal"67"link"TEST the next time I'll open a file and press n key, less will search for the string "TEST". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125765/"
]
} |
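As an aside, if you would rather less forget searches between runs entirely, the history file can be disabled through the environment; per the less man page, a value of - (or /dev/null) means no history file is used:
export LESSHISTFILE=-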
246,672 | This is a question about a problem I could fix but I do not know why the fix works. I wanted to be able to adjust the backlight from keyboard and the only thing which fixed that was changing a line in the grub as GRUB_CMDLINE_LINUX_DEFAULT="acpi_osi=" does anyone know why this fixes the backlight issue? I am using Debian 8. | During operating system boot, the OS obtains various ACPI tables from the BIOS and interprets the "tables", which really look more like program code . One quite popular table is the DSDT , but it's not alone. The ACPI tables are created in the form of textual source code (see the example linked above), and are compiled into binary form using a tool called iasl (by Intel). The tables are stored in the BIOS and processed by the OS in the binary form (like an intermediate byte-code or some such), but can be "disassembled" back into source code, if needed. Which is sometimes used by Linux tinkerers to correct bugs or ACPI "version mismatches": the original table is disassembled, possibly corrected in the source code, recompiled by a current version of the IASL, and provided to the Linux kernel as a custom replacement... The ACPI tables (including the DSDT) contain conditional branches - and the ACPI table being interpretted by the booting OS can test for the OS version using a method called _OSI. The host OS, interpretting the table, provides an "OS version string" to the _OSI method. Such as, for some reason, an _OSI string "Windows 2009" refers to "Windows 7" in our reality. Note that this is allegedly not the purpose originally intended for the _OSI method, but never mind :-) In other words, the "program" embodied in an ACPI table (while being interpretted by a host OS) can test, under what Windows version it is running, and modify its behavior based on that. It can init the hardware and various BIOS service interfaces/structures, based on the Windows version detected. Linux has its own assigned _OSI ident, and so does MacOS for instance... yet, as the BIOSes in x86 motherboards typically get tested against contemporary Windows versions, you may actually have better luck if you try to make the ACPI table believe that it's being interpretted by some particular Windows version, rather than by Linux. (Or to try to avoid hitting the "default case" in the branching ACPI code, which may not be well defined.) Which is what the kernel cmdline argument of acpi_osi="some string" is good for. The details of this and other related arguments are somewhat documented in the Linux "kernel parameters" guide . Apart from display backlight, the acpi_osi string can influence miscellaneous other aspects of the BIOS and OS behavior during boot. As an example, just at this moment I'm playing with an old Acer Extensa 5220 laptop (C2D, i965GM north bridge) and in default config, it often fails to resume from suspend (ACPI S3, suspend to RAM). On resume, it would freeze with a black screen and backlight on, or it would perform two restarts and then boot from scratch. I updated the BIOS which alone did not help, but it gave me a certainty that this BIOS update (1.35) was intended to work well with Windows 7. So after trying a number of other things, I finally have pretty good results with acpi_osi=! acpi_osi="Windows 2009" The first part, acpi_osi=! , means "forget any acpi_osi strings that you know at this point" (it's actually a list of strings, rather than just one string, apparently - duh). 
So we first clear whatever the interpretter originally used, and then set the one desired string. To make it work with a modern Linux kernel, it might be a good idea to specify the most modern Windows version that the BIOS nominally supports. Note that you need quotation marks around "Windows 2009", because the string contains a blank character (ASCII 'space'). Which turns out to be a problem if this cmdline arg needs to be entered into a shell variable in some distro-specific config file, such as /etc/default/grub in Debian and friends (Ubuntu).In that case, just use acpi_osi=\"Windows 2009\" , i.e. use a backslash to "escape" each quotation mark that should make it to the kernel command line. If you then run update-grub (again Debian/Ubuntu), the backslashes get stripped, and the quotation marks end up verbatim in /boot/grub/grub.cfg. Interestingly, if you later check with cat /proc/cmdline , you'll probably find out that the first quotation mark has moved to the very start of the argument: "acpi_osi=Windows 2009" which looks slightly bizzarre :-) I've found out that I should NOT mess with acpi_os_name or acpi_sleep (which otherwise also look promising). YMMV. Note that this is yet another incarnation of a general backward compatibility problem. Cross-compatibility between two different pieces of software, created very far apart in time. Speaking of suspend and resume, Linux has lost support for the old and simple APM BIOS call known as "set power state", so the only interface you can use for suspend+resume is ACPI, which itself is pretty complex, has evolved through several major versions, and very modern Linux versions are no longer thoroughly tested on very old hardware (and its BIOS), and the BIOS probably wasn't free of bugs even when it was new... and even ACPI is now getting superseded by UEFI, which builds on ACPI and brings further peculiarities of its own... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140154/"
]
} |
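On a Debian/Ubuntu-style system, the answer's example ends up in /etc/default/grub roughly like this (quiet splash is just a typical pre-existing default; keep whatever is already in the line and append the acpi_osi parts):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=! acpi_osi=\"Windows 2009\""
sudo update-grub    # regenerate /boot/grub/grub.cfg with the new command line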
246,686 | I'm trying to make the WLAN USB stick connect to a wireless network. There was an official Linux driver available for download (v4.0.2_9000.20130911, which supports my Linux kernel version) and I used wifi-radar Both had no success in making it work. Probably the driver is not compatible with my Oracle Linux (based on Red Hat Enterprise Linux 6) # lsusb | grep WLANBus 002 Device 017: ID 0bda:8178 Realtek Semiconductor Corp. RTL8192CU 802.11n WLAN Adapter The problem is that the device still can't be detected even after the driver installation runs to the end. I don't know how to check if it was actually installed, or where it is mounted. # cd RTL8188C_8192C_USB_linux_v4.0.2_9000.20130911/driver/rtl8188C_8192C_usb_linux_v4.0.2_9000.20130911# make(no errors)# make install(no errors)# /sbin/modprobe 8192cu# ifconfig wlan0 upwlan0: unknown interface: No such device# /sbin/iwconfigvirbr0-nic no wireless extensions.eth0 no wireless extensions.eth1 no wireless extensions.virbr0 no wireless extensions.lo no wireless extensions. Is it possible to somehow specify it manally in wifi-radar or what steps should I take? | During operating system boot, the OS obtains various ACPI tables from the BIOS and interprets the "tables", which really look more like program code . One quite popular table is the DSDT , but it's not alone. The ACPI tables are created in the form of textual source code (see the example linked above), and are compiled into binary form using a tool called iasl (by Intel). The tables are stored in the BIOS and processed by the OS in the binary form (like an intermediate byte-code or some such), but can be "disassembled" back into source code, if needed. Which is sometimes used by Linux tinkerers to correct bugs or ACPI "version mismatches": the original table is disassembled, possibly corrected in the source code, recompiled by a current version of the IASL, and provided to the Linux kernel as a custom replacement... The ACPI tables (including the DSDT) contain conditional branches - and the ACPI table being interpretted by the booting OS can test for the OS version using a method called _OSI. The host OS, interpretting the table, provides an "OS version string" to the _OSI method. Such as, for some reason, an _OSI string "Windows 2009" refers to "Windows 7" in our reality. Note that this is allegedly not the purpose originally intended for the _OSI method, but never mind :-) In other words, the "program" embodied in an ACPI table (while being interpretted by a host OS) can test, under what Windows version it is running, and modify its behavior based on that. It can init the hardware and various BIOS service interfaces/structures, based on the Windows version detected. Linux has its own assigned _OSI ident, and so does MacOS for instance... yet, as the BIOSes in x86 motherboards typically get tested against contemporary Windows versions, you may actually have better luck if you try to make the ACPI table believe that it's being interpretted by some particular Windows version, rather than by Linux. (Or to try to avoid hitting the "default case" in the branching ACPI code, which may not be well defined.) Which is what the kernel cmdline argument of acpi_osi="some string" is good for. The details of this and other related arguments are somewhat documented in the Linux "kernel parameters" guide . Apart from display backlight, the acpi_osi string can influence miscellaneous other aspects of the BIOS and OS behavior during boot. 
As an example, just at this moment I'm playing with an old Acer Extensa 5220 laptop (C2D, i965GM north bridge) and in default config, it often fails to resume from suspend (ACPI S3, suspend to RAM). On resume, it would freeze with a black screen and backlight on, or it would perform two restarts and then boot from scratch. I updated the BIOS which alone did not help, but it gave me a certainty that this BIOS update (1.35) was intended to work well with Windows 7. So after trying a number of other things, I finally have pretty good results with acpi_osi=! acpi_osi="Windows 2009" The first part, acpi_osi=! , means "forget any acpi_osi strings that you know at this point" (it's actually a list of strings, rather than just one string, apparently - duh). So we first clear whatever the interpretter originally used, and then set the one desired string. To make it work with a modern Linux kernel, it might be a good idea to specify the most modern Windows version that the BIOS nominally supports. Note that you need quotation marks around "Windows 2009", because the string contains a blank character (ASCII 'space'). Which turns out to be a problem if this cmdline arg needs to be entered into a shell variable in some distro-specific config file, such as /etc/default/grub in Debian and friends (Ubuntu).In that case, just use acpi_osi=\"Windows 2009\" , i.e. use a backslash to "escape" each quotation mark that should make it to the kernel command line. If you then run update-grub (again Debian/Ubuntu), the backslashes get stripped, and the quotation marks end up verbatim in /boot/grub/grub.cfg. Interestingly, if you later check with cat /proc/cmdline , you'll probably find out that the first quotation mark has moved to the very start of the argument: "acpi_osi=Windows 2009" which looks slightly bizzarre :-) I've found out that I should NOT mess with acpi_os_name or acpi_sleep (which otherwise also look promising). YMMV. Note that this is yet another incarnation of a general backward compatibility problem. Cross-compatibility between two different pieces of software, created very far apart in time. Speaking of suspend and resume, Linux has lost support for the old and simple APM BIOS call known as "set power state", so the only interface you can use for suspend+resume is ACPI, which itself is pretty complex, has evolved through several major versions, and very modern Linux versions are no longer thoroughly tested on very old hardware (and its BIOS), and the BIOS probably wasn't free of bugs even when it was new... and even ACPI is now getting superseded by UEFI, which builds on ACPI and brings further peculiarities of its own... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137750/"
]
} |
246,734 | After writing this answer , I did some googling to find out if I could make the sed y// command be applied only to the match and not to the entire line. I wasn't able to find anything relevant. $ echo clayii | sed -e '/clayii/ y/clayk/kieio/'kieiii In other words, if the search word (clayii) is just one of many words in the input line, I want the y// command to be applied only to that word and not to the remainder of the line. i.e. I don't want this: $ echo can sed ignore everything but the matching word - clayii | sed -e '/clayii/ y/clayk/kieio/'ken sed ignore everithing but the metkhing word - kieiii Is that possible in sed ? or do i need to use something more capable like perl ? | No, the y command applies to all matching characters in the pattern space. Per the POSIX sed documentation (emphasize mine): [ 2addr ]y/ string1 / string2 / Replace all occurrences of characters in string1 with the corresponding characters in string2 . OSX/BSD man page: [2addr]y/ string1 / string2 / Replace all occurrences of characters in string1 in the pattern space with the corresponding characters from string2 . and GNU sed info page: y/ source-chars / dest-chars / Transliterate any characters in the pattern space which match any of the source-chars with the corresponding character in dest-chars . Sure, you could use the hold buffer to save current pattern space then retain only the match, transliterate and restore the pattern space replacing the initial match with the result e.g. compare sed 'y/words/evles/' <<<'words whispered by the drows' with sed 'h;s/.*\(drows\).*/\1/;y/words/evles/;G;s/\(.*\)\n\(.*\)drows\(.*\)/\2\1\3/' <<<'words whispered by the drows' but as soon as you start adding patterns/requirements it gets complicated. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7696/"
]
} |
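To answer the question's closing aside: yes, perl can do this in one step, because a substitution can apply tr/// to just the matched text. The r flag used here needs perl 5.14 or newer:
echo 'can sed ignore everything but the matching word - clayii' | perl -pe 's/clayii/$& =~ tr{clayk}{kieio}r/e'
which prints the sentence unchanged apart from the last word, now kieiii.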
246,751 | I'm using zsh 5.0.8 version in iterm2 on OSX. I start my computer and printenv shows me the $PATH variable: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/git/bin from my understanding, zsh will source the following file in order: /etc/zshenv~/.zshenv/etc/zshrc~/.zshrc I checked, I don't have the first 3 files, and my .zshrc is basically empty, nothing related to the $PATH variable. Then where is the $PATH variable set??? | I literally just had a battle with this today. On OS X Yosemite PATH is built up in a rather roundabout way. I believe that, as cremefraiche says, ZSH has a built-in $PATH that it uses if nothing else is set, but that's not where yours is coming from. First of all there is a file, /etc/paths , that contains a list of directories. There is also a directory, /etc/path.d that contains more files that contain directories. The program /usr/libexec/path_helper takes these lists of directories, merges them with the existing $PATH variable (if there is one), removes any duplicates, and outputs the result, with the /etc/paths directories listed first. You can try running it yourself, it doesn't do any harm. Here's the output from mine: $ /usr/libexec/path_helper PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/MacGPG2/bin:/Users/alan/.local/bin:/Users/alan/src/go/bin"; export PATH; On it's own, this doesn't do anything, but it's called from, on my machine, /etc/zprofile : if [ -x /usr/libexec/path_helper ]; then eval `/usr/libexec/path_helper -s`fi This might vary on your machine, as it seems Apple have moved this code around a bit in different versions of OS X. Here's the list of all the files that ZSH reads in OS X, in the order they are evaluated: /etc/zshenv ~/.zshenv /etc/zprofile ~/.zprofile /etc/zshrc ~/.zshrc /etc/zlogin ~/.zlogin ~/.zlogout /etc/zlogout Some of these files aren't evaluated in certain circumstances, like when run as non-interactive shell scripts, but I'm not going to discuss that here. It's in the ZSH man page if you're interested. $ man zsh It's worth noting that /etc/zprofile is run after ~/.zshenv , so if you follow the ZSH guidelines and set your $PATH in .zshenv, it's probably going to be clobbered by path_helper . If you're running into this problem it might be worth renaming /etc/zprofile as /etc/zshenv so the system $PATH will be set as early as possible. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112720/"
]
} |
246,755 | I found a strange directory in my home directory on Linux Mint 17.2 Cinnamon (though I'm pretty confident it's on all Linux distros) called .gnupg. It has no access given to ANYONE other than root. So I have three questions: What is this directory? What does it contain? Why is it placed in the user's home directory yet doesn't give them any access? Will it do any harm by just entering the directory as root? | Most dot files have a name that resembles the application that uses it. Unsurprisingly, .gnupg is used by GnuPG . GnuPG (also known as GPG) is a program that encrypts and signs files. As soon as you invoke it for the first time, it will create a .gnupg directory in your home directory and a few files in it. This directory contains a lot of private information (e.g. who your contacts are), so it's accessible only to the owner. This could happen, for example, if someone sends you a signed email; if your email client supports PGP email then it will attempt to verify the signature (and fail since you don't have the sender's public key in your GPG keyring). The real question here is why this directory in your home directory is owned by root. The answer is that you ran GPG as root, but with HOME set to your own home directory. Or, more precisely, you ran a program which ran GPG under the hood. One such program is APT: package management tools ( apt-get , apt , aptitude , etc.) use GPG to verify that the packages that you download are genuine. If you ran something like sudo apt-get install SOMEPACKAGE , this would create a .gnupg directory in your home directory, since sudo doesn't change the home directory by default. The fix is to remove the .gnupg directory, then create it under your user. You can just remove the root-owned directory ( sudo rm -r ~/.gnupg ): any file under your home directory is fair game for you. You could alternatively move it to root's home directory ( sudo mv ~/.gnupg /root ), but it doesn't contain anything important anyway. Then run a GPG command such as gpg --list-keys ; this will populate your ~/.gnupg directory with empty keyring files. Just entering a directory is always harmless. Listing files and viewing their content is usually harmless, but it can be harmful in some configurations because terminals parse escape sequences in what applications print . Under Linux, plain ls or ls -l is fine but ls -N is potentially risky. Plain cat filename is risky but less filename is fine (whereas less +R filename is risky). In the .gnupg directory, there's nothing harmful. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139098/"
]
} |
246,798 | I am trying to create script which can let me to put number of port as parameter and then find the name of service for this port.Here is the script: #!/bin/bashgrep -E "\W$1\/" /etc/services | awk '{ print $1 }'if [ $? -eq 0 ]; then echo "Service(s) is(are) found correctly!"elseecho "There is(are) no such service(s)!"fi Everything works perfectly but there is one problem. If I type such port as 99999 (or another fiction port) - "exit status" for grep -E "\W$1\/" /etc/services | awk '{ print $1 }' also will be 0. In this way all results of my scripts will be correct and else statement in my script won't work. What am I going to do to find the solution to this problem and let my else works fine with "exit status"? | You don't need grep here at all, awk can do the pattern match on the port number. awk can also keep track of whether the port was found or not and exit with an appropriate exit code. For example: $ port=23$ awk '$2 ~ /^'"$port"'\// {print $1 ; found=1} END {exit !found}' /etc/services telnet$ echo $?0$ port=99999$ awk '$2 ~ /^'"$port"'\// {print $1 ; found=1} END {exit !found}' /etc/services $ echo $?1 The exit !found works because awk variables default to zero (or true) if they haven't previously been defined - exit !0 is exit 1 . So if we set found=1 when we match then exit !found in the END block is exit 0 . Here's how to use that awk script with your if/then/else. #!/bin/bashawk '$2 ~ /^'"$1"'\// {print $1 ; found=1} END {exit !found}' /etc/services if [ $? -eq 0 ]; then echo "Service(s) is(are) found correctly!"else echo "There is(are) no such service(s)!"fi You can also do it like this: if awk '$2 ~ /^'"$1"'\// {print $1;found=1} END{exit !found}' /etc/services ; then echo "Service(s) is(are) found correctly!"else echo "There is(are) no such service(s)!"fi Or even like this: awk '$2 ~ /^'"$1"'\// {print $1 ; found=1} END {exit !found}' /etc/services \ && echo "Service(s) is(are) found correctly!" \ || echo "There is(are) no such service(s)!" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137094/"
]
} |
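As an alternative to parsing /etc/services yourself, glibc systems can do the lookup through the C library; getent treats a numeric key as a port number and exits non-zero when nothing matches, so it drops straight into the same if/else:
getent services 23      # telnet 23/tcp
getent services 99999   # no output, non-zero exit status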
246,825 | What's the right command to copy the content from file X and File Y to create a new File XY with all letters being lowercase? | Use a combination of cat and dd : cat LIST_OF_FILES | dd of=OUTPUT_FILE conv=lcase An example: $ cat file1.txt I am File 1.$ cat file2.txt Here is File 2!$ cat file1.txt file2.txt | dd of=file12.txt conv=lcase0+1 records in0+1 records out29 bytes (29 B) copied, 0,000301417 s, 96,2 kB/s$ cat file12.txt i am file 1.here is file 2! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145659/"
]
} |
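An alternative to dd conv=lcase, named here simply as another common idiom, is tr; the file names X, Y and XY are just the ones from the question:
cat X Y | tr '[:upper:]' '[:lower:]' > XY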
246,827 | I'm using Mate 1.2.0 in Linux Mint 13. The two text editors i use are Gedit and Geany (v0.25): i use Geany for all my coding as i prefer the syntax highlighting and some other interface features. One thing that bugs me though is this behaviour: open a file in Geany in workspace 1 go to workspace 2 double click a file to open it (in Geany) the desktop switches to workspace 1 again and opens the file in Geany. When i do this in Gedit, it opens a new instance of Gedit in that workspace, which suits my style of working perfectly, where i have different projects open in each workspace. I can start another instance of Geany from the programs menu, and move one into the other workspace, but it doesn't change the behaviour: I then see this: open a file in Geany in workspace 1 go to workspace 2 start a new instance of Geany from the program menu (so i now have one per workspace) double click a file to open it (in Geany) the desktop switches to workspace 1 again and opens the file in the first instance of Geany. So it's like it always opens a file in the "primary" Geany, and switches to whatever workspace that happens to be in. Is there a way i can change this behaviour? I'd like it to be like so: On opening a file: is there a Geany running in this workspace? yes: open the file in that Geany no: open a new Geany in this workspace and open the file in that. I can't see an option relating to this in the settings. Any advice appreciated! thanks | Use this batch to open Geany. This will open a separate socket specific to each workspace. For example, in Thunar, use 'open with other application' and point to this batch file. #!/bin/shsocket=`xprop -root _NET_CURRENT_DESKTOP`socket=${socket##* }if [ "$socket" ]then if [ "$DISPLAY" ] then socket="${DISPLAY%.*}-$socket" socket=${socket#*:} else socket="NODISPLAY-$socket" fi exec geany --socket-file "/tmp/geany_socket_$socket" "$@"else exec geany "$@"fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27368/"
]
} |
246,846 | I've been trying to set my locale to en_US.UTF-8 without any success. Based off of other answers around the internet, I should first generate the locale with sudo locale-gen en_US.UTF-8 And then apply it with sudo dpkg-reconfigure locales However, running locale-gen does something weird: user@Host /home/user $ sudo locale-gen en_US.UTF-8Generating locales (this might take a while)... en_US.ISO-8859-1... doneGeneration complete. As you see, it never actually generates UTF-8, but instead keeps falling back to ISO-8859-1. I can never manage to set LC_ALL to en_US.UTF-8 , probably because it can't generate. Am I doing something wrong? I'm running Debian 8.1. | You've tried to apply a recipe for Ubuntu under Debian. That usually works, but in this specific case it doesn't. Ubuntu is derived from Debian, and doesn't change much apart from the installer and the GUI. The locale-gen command is one of those few other things that it changes. I don't know why. Under Debian, the locale-gen command takes no arguments and regenerates the compiled locale definitions according to the configured list of locales. To modify the selection of locales that you want to use, edit the file /etc/locale.gen then run the locale-gen command. Alternatively, run dpkg-reconfigure locales as root, select the additional locales you want (and deselect the ones you don't want), and press OK. Under Ubuntu, if you run the locale-gen command without arguments, it regenerates the compiled locale definitions according to the configured list of locales. But if you pass some arguments, they're added to the list and generated immediately. The list of locales is kept in /var/lib/locales/supported.d/local . Running dpkg-reconfigure locales just regenerates the compiled locales without giving you an opportunity to modify the selection. In summary, to add en_US.UTF-8 to the list of usable locales: Debian, interactive: dpkg-reconfigure locales Debian, automated: sed -i 's/^# *\(en_US.UTF-8\)/\1/' /etc/locale.gen && locale-gen Ubuntu, automated: locale-gen en_US.UTF-8 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/246846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86192/"
]
} |
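Once the locale has been generated, it is worth verifying it and, if desired, making it the system default; update-locale is Debian's helper for writing /etc/default/locale:
locale -a | grep -i en_US               # should now include en_US.utf8
sudo update-locale LANG=en_US.UTF-8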
246,859 | I'm running Debian Jessie 8.2 from a live boot, from a USB stick. It works; I can boot and use it normally, but I can't connect to WiFi. I've read that I probably need non-free drivers, (but right now i just want it to work, and it doesn't matter if the drivers are non-free). I've been using this tutorial , which I've been following pretty well. My network controller is Broadcom Corporation bcm4313 802.11bgm wireless network adapter. The first weird thing was that the sources.list file that was mentioned didn't have the line exactly as in the article -- the difference was that it said http instead of ftp.us -- all the rest of that line was the same. Would the fact that I'm in Argentina be a reason for this? I saved the changes, and then as said in the article, ran apt-get update in the terminal. Question: Why is there a # before the instruction in the tutorial? The terminal responded with output saying failed to fetch and could not resolve . Also, how do I copy text from Debian? By the way, to understand this process better, we aren't trying to download things, right? Because that's my problem: Debian has no Internet connection. | You've tried to apply a recipe for Ubuntu under Debian. That usually works, but in this specific case it doesn't. Ubuntu is derived from Debian, and doesn't change much apart from the installer and the GUI. The locale-gen command is one of those few other things that it changes. I don't know why. Under Debian, the locale-gen command takes no arguments and regenerates the compiled locale definitions according to the configured list of locales. To modify the selection of locales that you want to use, edit the file /etc/locale.gen then run the locale-gen command. Alternatively, run dpkg-reconfigure locales as root, select the additional locales you want (and deselect the ones you don't want), and press OK. Under Ubuntu, if you run the locale-gen command without arguments, it regenerates the compiled locale definitions according to the configured list of locales. But if you pass some arguments, they're added to the list and generated immediately. The list of locales is kept in /var/lib/locales/supported.d/local . Running dpkg-reconfigure locales just regenerates the compiled locales without giving you an opportunity to modify the selection. In summary, to add en_US.UTF-8 to the list of usable locales: Debian, interactive: dpkg-reconfigure locales Debian, automated: sed -i 's/^# *\(en_US.UTF-8\)/\1/' /etc/locale.gen && locale-gen Ubuntu, automated: locale-gen en_US.UTF-8 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/246859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133124/"
]
} |
246,888 | I would like to convert some Linux man pages to HTML without using groff. My prejudice against groff is due to some PNG rendering issues it is giving me that seems to be localized to Sabayon (as these issues do not seem to occur on my VirtualBox VMs for other distros). I realize this is a bug, but a solution seems to not be in the near future so I would like to ask if there are other ways to convert Linux man pages to HTML. Using the HTML pages at http://linux.die.net/man is not an acceptable solution as some of the man pages I am interested in are not there (e.g., emerge(1) is not there). | There are plenty of alternatives such as roffit , troff , man2html . There's also perl based online manpage browsers, such as manServer . My favorite is pandoc , though sadly it doesn't seem to support ROFF input by default (though you can probably use it if you need to chain multiple transformation filters together. man2html example: zcat /usr/share/man/man1/dd.1.gz \ | man2html \ | sudo tee /var/www/html/dd.html roffit example: git clone git://github.com/bagder/roffit.gitcd roffitzcat /usr/share/man/man1/dd.1.gz \ | perl roffit \ | sudo tee /var/www/html/dd-roffit.html Other tools: troffcvt does about the same thing. The 'real' troff - Gonna try out http://heirloom.sourceforge.net/doctools.html . I suspect schily has OpenSolaris and friends in mind :-). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
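If the goal is a whole section of the manual rather than a single page, the man2html example above extends naturally to a loop; the output directory here is just an example location:
mkdir -p ~/man-html
for f in /usr/share/man/man1/*.1.gz; do
    zcat "$f" | man2html > ~/man-html/"$(basename "$f" .1.gz)".html
done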
246,894 | I'm running Redhat 6.6 and experienced a power failure over the holiday weekend. The / partition is showing 100% full. How do I check to see which files are actually causing the overusage? [root@sms1 ~]# df -HFilesystem Size Used Avail Use% Mounted on/dev/mapper/vg_sms1-lv_root 53G 51G 0 100% /tmpfs 34G 0 34G 0% /dev/shm | Use du to see where the space has gone. Start at the top of the filesystem and stay on the root filesystem ( -x ) so other mounts are not counted: du -xh --max-depth=1 / | sort -h Then descend into whichever directory is largest (for example du -xh --max-depth=1 /var ) until you reach the offending files; on servers the usual culprits are /var/log , /var/cache and application data directories. Two checks are worth doing after an unclean shutdown in particular. If du accounts for noticeably less space than df reports, the difference is often files that were deleted while a process still holds them open; lsof +L1 (or lsof | grep deleted ) lists them, and restarting or stopping the owning process releases the space. Also look in /lost+found , since fsck may have placed recovered data there after the power failure. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145696/"
]
} |
246,900 | How can I check how many white spaces (' ', \t ) there are in the first line of a file? | A straightforward way would be to select just the first line, drop non-whitespace characters from it, and count how many characters are left: head -n 1 | tr -cd ' \t' | wc -c | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102176/"
]
} |
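A quick worked example of the pipeline above:
printf 'a b\tc d\nsecond line\n' | head -n 1 | tr -cd ' \t' | wc -c   # prints 3: two spaces and one tab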
246,909 | On my system I'm unable to install the recommended graphics driver, so something must be wrong with my installation. The GPU chipset is ATI ES1000, but the recommended driver is NVIDIA NVS300 downloaded from the server vendor's site. The maximum graphics resolution of the onboard graphics controller ATI ES1000 with the native driver of Microsoft Windows 2012 is 1280 x 1024. ATI has not planned to support ATI ES1000 graphics chip with Windows 2012. So there"s no OEM driver available which could be installed on PRIMERGY TX100 S3 or TX100 S3p with Microsoft Windows 2012. For higher graphics resolutions on PRIMERGY TX100 S3 or TX100 S3p, the PCIe graphics controller NVIDIA® Quadro® NVS 300 can be used. Before installation I switched to runlevel 3 ( init 3 ) and blacklisted nouveau driver ( echo blacklist nouveau > /etc/modprobe.d/nvidia.conf ). None of the conflicting drivers is present: # lsmod | grep -e nouveau -e rivafb -e nvidiafb(empty) These are all steps that should be needed, what else can be wrong on my Oracle Linux (based on Red Hat Enterprise Linux 6.7, Kernel Linux 3.8.13-118.2.1.el6uek.x86_64, GNOME 2.28.2), I was thinking incompatible kernel or some GPU driver conflict? List of OS supported by the driver: Red Hat Enterprise Linux 6.6 (x86_64)Red Hat Enterprise Linux 6.7 (x86_64)Red Hat Enterprise Linux 7 GA (x86_64)Red Hat Enterprise Linux 7.1 (x86_64)SUSE Linux Enterprise Server 11 SP3 (x86_64)SUSE Linux Enterprise Server 11 SP4 (x86_64) The main error: ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if a driver such as rivafb, nvidiafb, or nouveau is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA graphics device(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release. Output from /var/log/nvidia-installer.log : -> Kernel module compilation complete.-> Unable to determine if Secure Boot is enabled: No such file or directoryERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if a driver such as rivafb, nvidiafb, or nouveau is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA graphics device(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release.Please see the log entries 'Kernel module load error' and 'Kernel messages' at the end of the file '/var/log/nvidia-installer.log' for more information.-> Kernel module load error: insmod: error inserting './kernel/nvidia.ko': -1 No such device-> Kernel messages:survey done event(5c) band:0 for wlan0==>rtw_ps_processor .fw_state(8)==>ips_enter cnts:5===> rtw_ips_pwr_down...................====> rtw_ips_dev_unload...usb_read_port_cancelusb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)usb_read_port_complete()-1284: RX Warning! 
bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)usb_write_port_cancel ==> rtl8192cu_hal_deinit bkeepfwalive(0)card disble without HWSM...........<=== rtw_ips_pwr_down..................... in 29msusb 2-1.2: USB disconnect, device number 7usb 2-1.2: new low-speed USB device number 8 using ehci-pciusb 2-1.2: New USB device found, idVendor=093a, idProduct=2510usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0usb 2-1.2: Product: USB Optical Mouseusb 2-1.2: Manufacturer: PixArtinput: PixArt USB Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/input/input7hid-generic 0003:093A:2510.0005: input,hidraw1: USB HID v1.11 Mouse [PixArt USB Optical Mouse] on usb-0000:00:1d.0-1.2/input0NVRM: No NVIDIA graphics adapter found!NVRM: NVIDIA init module failed!ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com. | A straightforward way would be to select just the first line, drop non-whitespace characters from it, and count how many characters are left: head -n 1 | tr -cd ' \t' | wc -c | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137750/"
]
} |
246,915 | For all files in a directory, I want to replace the underscores in the filename with spaces. I tried this solution, which does the opposite of what I want: https://stackoverflow.com/questions/1806868/linux-replacing-spaces-in-the-file-names But switched the space with the underscore. That does not work, giving the error ´x´ is not a directory Where x is the last word in the filename, for example hello_world_x What is the correct command to replace underscores with spaces for all files in a directory? | After you cd to the correct directory, this script will reliably solve your need (not portable because of the ${var//pat/str} expansion): #!/bin/bashset -- *_*for file; do mv -- "$file" "${file//_/ }"done *_* The glob *_* will select all files that have an _ in their names. set -- Those names (even including spaces or new-lines) will be reliably set to the positional parameters $1 , $2 , etc. with the simple command set -- "list" for file; Then, each positional parameter will be (in turn) assigned to the var file. do ... done contains the commands to execute (for each $file ). mv -- "$file" "${file//_/ }" will move (rename) each file to the same name with each (all) _ replaced by (space). Note : You may add the -i (interactive) option to avoid overwriting already existing files. If the file exist, mv will ask. With a caveat: there needs to be an interactive shell where mv could communicate with the user. mv -i -- "$file" "${file//_/ }" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/246915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23751/"
]
} |
246,935 | I'm working on a systemd .service script that is supposed to start after a CIFS network location is mounted via /etc/fstab to /mnt/ on boot-up. The script waits for an OpenVPN dependency script to launch first, but I also want it to wait for mount to complete. /etc/systemd/system/my-daemon.service : [Unit]Description=Launch My DaemonAfter=network.target vpn-launch.serviceRequires=vpn-launch.service I tried to add systemd.mount to the line: After=network.target vpn-launch.service systemd.mount , but it didn't give the results I was hoping for. | a CIFS network location is mounted via /etc/fstab to /mnt/ on boot-up. No, it is not. Get this right, and the rest falls into place naturally. The mount is handled by a (generated) systemd mount unit that will be named something like mnt-wibble.mount . You can see its actual name in the output of systemctl list-units --type=mount command. You can look at it in detail just like any other unit with systemctl status . Very simply, then: you have to order your unit to be started after that mount unit is started. After=network.target vpn-launch.service mnt-wibble.mount Further reading https://unix.stackexchange.com/a/236968/5132 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/246935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136753/"
]
} |
246,950 | Here's a layup for someone... I'm having to run a command repeatedly: $ wp input csv MyCSV01.csv directory_name$ wp input csv MyCSV02.csv directory_name$ wp input csv MyCSV03.csv directory_name The only change is the filename is incrementing. How can I run all these back to back? Perhaps find all the files that start with MyCSV* and then run them in order? And/or specify a range of the files to run MyCSV03.csv through MyCSV05.csv ? Ideally, the solution is short enough for the command line, but it could be a script. | for i in {01..20}; do #replace with your own range echo \ wp input csv "MyCSV$i.csv" directory_namedone Comment out the echo line if it gives you the results you want. zsh , which you tagged your question with, has a shorter form: for i (MyCSV{01..20}.csv) wp input csv $i directory_name Or you could use its zargs function: autoload zargs # best in ~/.zshrczargs -i MyCSV{01..20} -- wp input csv {} directory_name | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19617/"
]
} |
246,990 | I'm working in HDFS and am trying to get the entire line where the 4th column starts with the number 5: 100|20151010|K|5001695|20151010|K|1010309|20151010|R|5005410|20151010|K|5001107|20151010|K|1062652|20151010|K|5001 Hence should output: 100|20151010|K|5001309|20151010|R|5005410|20151010|K|5001652|20151010|K|5001 | The simplest approach would probably be awk : awk -F'|' '$4~/^5/' file The -F'|' sets the field separator to | . The $4~/^5/ will be true if the 4th field starts with 5 . The default action for awk when something evaluates to true is to print the current line, so the script above will print what you want. Other choices are: Perl perl -F'\|' -ane 'print if $F[3]=~/^5/' file Same idea. The -a switch causes perl to split its input fields on the value given by -F into the array @F . We then print if the 4th element (field) of the array (arrays start counting at 0) starts with a 5 . grep grep -E '^([^|]*\|){3}5' file The regex will match a string of non- | followed by a | 3 times, and then a 5 . GNU or BSD sed sed -En '/([^|]*\|){3}5/p' file The -E turns on extended regular expressions and the -n suppresses normal output. The regex is the same as the grep above and the p at the end makes sed print only lines matching the regex. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/246990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145760/"
]
} |
247,004 | I'm looking for an elegant one-liner (eg, awk ) that will shorten a string of a Unix path using the first character of each parent/intermediate level, but the full basename. Easier to show by examples: /path/to/file → /p/t/file /tmp → /tmp /foo/bar/.config/wizard_magic → /f/b/./wizard_magic /foo/bar/.config/wizard_magic → /f/b/.c/wizard_magic In light of good points by @MichaelKjörling and @ChrisH below, this example shows how we might show the first two characters when the first character is a dot. | For this test file: $ cat path/path/to/file/tmp/foo/bar/.config/wizard_magic The abbreviations can be generated with this awk code: $ awk -F/ '{for (i=1;i<NF;i++) $i=substr($i,1,1)} 1' OFS=/ path/p/t/file/tmp/f/b/./wizard_magic Edit1: Using two characters for dot-names This version abbreviates directory names to one character except for names that start with . which are abbreviated to two characters: $ awk -F/ '{for (i=1;i<NF;i++) $i=substr($i,1,1+($i~/^[.]/))} 1' OFS=/ path/p/t/file/tmp/f/b/.c/wizard_magic How it works -F/ This tells awk to use a slash as the field separator on input. for (i=1;i<NF;i++) $i=substr($i,1,1) This loops over each field, except the last, and replaces it with just its first character. EDIT1: In the revised version, we make the length of the substring 2 when the field starts with . . 1 This tells awk to print the revised line. OFS=/ This tells awk to use a slash as the field separator on output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/69080/"
]
} |
247,050 | I have a perl script $ cat ~/script.plsub main { my ($file) = @_; <STUFF> }}foreach (@ARGV) { main($_);} I want to execute ~/script.pl on every .txt file under the directory ~/foo . I can get the list of .txt files under ~/foo with the command $ find ~/foo -type f -name \*.txt Can I somehow use this command to pass these files to my script? | Yes, there are several ways to accomplish this with the find command. I'll list some in the order that I think is important for understanding in your situation. Your script appears to accept multiple filename arguments, so the most efficient and nearly universal way to accomplish this using the find command is: find ~/foo -type f -name \*.txt -exec perl ~/script.pl {} + This executes your script with as many found filename arguments as possible. Your script will be called multiple times if necessary to process all filenames. Note the + at the end of the line. This is the original and most universal method. This is less efficient for your situation because it invokes perl once for each file found. This usage has been available since the earliest days of Unix . Note the escaped semi-colon ( \; ) at the end of the line (as opposed to the + above). find ~/foo -type f -name \*.txt -exec perl ~/script.pl {} \; Before the -exec ... + syntax was added to find , the xargs command was invented to help increase efficiency when processing lists of filenames or other arguments. This works almost identically to the -exec ... + example above: find ~/foo -type f -name \*.txt -print | xargs perl ~/script.pl If your implementation supports it, you should use the -print0 option of find , along with the -0 argument to xargs . This causes find to print null characters between argument strings and prevents xargs from splitting arguments on anything but the null character. This helps to prevent xargs from splitting arguments incorrectly, in the event that your filenames contain whitespace or some other special characters. Using the -exec ... + syntax is generally a better idea because find then puts the filenames directly into your script's argument list, eliminating a process, and avoiding any interpretation that might happen by piping to xargs . However, xargs might have advantages if you need more control over the process. See the xargs man page. You can also check out the find2perl command which takes the same arguments as find and prints a perl program to do the same thing. You could then incorporate the generated perl code into your script. In the generated script below you would modify the next to the last line to call your function instead of print . $ find2perl foo -type f -name \*.txt # /*#[some preamble code removed for brevity]# Traverse desired filesystemsFile::Find::find({wanted => \&wanted}, 'foo');exit;sub wanted { my ($dev,$ino,$mode,$nlink,$uid,$gid); (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) && -f _ && /^.*\.txt\z/s && print("$name\n");} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/247050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
247,096 | This is the command I am using: system_profiler | sed -n -e '/SATA/SATA Express:/,/Software:/ p' It gives me this error sed: 1: "/SATA/SATA Express:/,/S ...": invalid command code S I don't see why it doesn't work especially when this almost identical command works: system_profiler | sed -n -e '/Hardware Overview:/,/Installations:/ p' I don't know? There's no reason to escape S. | Escape the / between SATA and SATA Express: to make it literal: system_profiler | sed -n -e '/SATA\/SATA Express:/,/Software:/ p' or use a different delimiter using this syntax: system_profiler | sed -n -e '\|SATA/SATA Express:|,/Software:/ p' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
247,108 | I need to automate a process of verification which Unicode characters have actual glyphs defined for them in a True Type Font file. How do I go around doing that? I can't seem to find information on how to make sense of the numbers I seem to be getting when I open a .ttf file in a text editor. | I found a python library, fonttools ( pypi ) that can be used to do it with a bit of python scripting. Here is a simple script that lists all fonts that have specified glyph: #!/usr/bin/env python3from fontTools.ttLib import TTFontimport syschar = int(sys.argv[1], base=0)print("Looking for U+%X (%c)" % (char, chr(char)))for arg in sys.argv[2:]: try: font = TTFont(arg) for cmap in font['cmap'].tables: if cmap.isUnicode(): if char in cmap.cmap: print("Found in", arg) break except Exception as e: print("Failed to read", arg) print(e) First argument is codepoint (decimal or hexa with 0x) and the rest is font files to look in. I didn't bother trying to make it work for .ttc files (it requires some extra parameter somewhere). Note: I first tried the otfinfo tool, but I only got basic multilingual plane characters (<= U+FFFF). The python script finds extended plane characters OK. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145841/"
]
} |
247,165 | I've been using dd in Linux to overwrite an external USB hard drive. When I use the default block size of 512 bytes, with this command: # dd if=/dev/zero of=/dev/sdb throughout the whole operation, the hard drive (/dev/sdb) is being read from and written to alternately, approximately 1GB at a time. I.e. read 1GB ... write 1GB ... read 1GB ... write 1GB etc. As much data is read from the hard drive as is written to it. I know this is happening because it's showing in my custom Conky panel ( diskio_read , diskio_write ), which I know to be a 100% reliable indicator of disk I/O activity. I've repeated this using a different external hard drive on a different computer. It happens via both USB 2.0 and USB 3.0. In contrast, when I do the same thing, but use a block size of 1MB instead, with this command: # dd if=/dev/zero of=/dev/sdb bs=1M apart from a small amount of reading at the start, the hard drive is not read from at all during the operation. Given that this phenomenon has happened on two different computers and two different hard drives of mine, using a standard Linux distro (Xubuntu 14.04), anyone who wants to should presumably be able to replicate it on their own computer. Can someone please explain what is happening here? | If you specify a block size (512 bytes) of less than the block size of the disk (often 4096 bytes, but nowadays maybe more), the block will be partially written, so that the contents of the rest of the block must be preserved before writing. This is because disk blocks cannot be written to with only 512 bytes, but you have to write a full block at once (4096 or larger). When you write this (4096) amount or more, there is no partial write, so it does not have to read. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
247,187 | I want to combine multiple conditions in a shell if statement, and negate the combination. I have the following working code for a simple combination of conditions: if [ -f file1 ] && [ -f file2 ] && [ -f file3 ] ; then # do stuff with the filesfi This works fine. If I want to negate it, I can use the following working code: if ! ( [ -f file1 ] && [ -f file2 ] && [ -f file3 ] ) ; then echo "Error: You done goofed." exit 1fi# do stuff with the files This also works as expected. However, it occurs to me that I don't know what the parentheses are actually doing there. I want to use them just for grouping, but is it actually spawning a subshell? (How can I tell?) If so, is there a way to group the conditions without spawning a subshell? | You need to use { list;} instead of (list) : if ! { [ -f file1 ] && [ -f file2 ] && [ -f file3 ]; }; then : do somethingfi Both of them are Grouping Commands , but { list;} executes commands in current shell environment. Note that, the ; in { list;} is needed to delimit the list from } reverse word, you can use other delimiter as well. The space (or other delimiter) after { is also required. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/247187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
247,311 | I get that I can use mount to set up / directories and that I can use the /etc/fstab to remount them on reboot. Testing the fstab file is also fun with mount -faV . When I'm looking at the fstab file, the number of space is disconcerting. I would have expected one space (like a separator between command parameters) or four spaces (like a tab). I'm seeing seven spaces at a time, almost as convention. My question is: What are all the spaces in the /etc/fstab for? (Perhaps also - Will it matter if I get the wrong number?) | The number of spaces is a way to cosmetically separate the columns/fields. It has no meaning other than that. I.e. no the amount of white space between columns does not matter . The space between columns is comprised of white space (including tabs), and the columns themselves, e.g. comma-separated options, mustn't contain unquoted white space. From the fstab(5) man page: [...] fields on each line are separated by tabs or spaces. and If the name of the mount point contains spaces these can be escaped as `\040'. Example With the following lines alignment using solely a single tab becomes hard to achieve. In the end the fstab without white space looks messier than what you consider disconcerting now. /dev/md3 /data/vm btrfs defaults 0 0/var/spool/cron/crontabs /etc/crontabs bind defaults,bind//bkpsrv/backup /mnt/backup-server cifs iocharset=utf8,rw,credentials=/etc/credentials.txt,file_mode=0660,dir_mode=0770,_netdev Can you still see the "columns"? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/247311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112231/"
]
} |
247,319 | I'm trying in a heredoc to set its output to a local variable as follows : REMOTE_OUTPUT=$(ssh remote@server /bin/bash << EOF find my/path/ -type f -not -path my/path/*/ -type f -mtime -0 | while read filename; do if grep "ERROR" $filename; then filenamebase=$(basename "$filename") echo -e "\n----------------------------------------------------------\n\n$filenamebase failure:\n" grep -n "ERROR" "$filename" | sed G fi doneEOF) But the variable stays null even though the find&grep loop is correct and should indeed return an output. (Otherwise I would also be interested in writing the output of the heredoc into a local file.) | You need to quote the EOF marker, eg <<\EOF or <<'EOF' to stopyour $filename variable from being evaluated before it is passed to the remote. You can see the effect with say /bin/bash -v instead of /bin/bash . I also needed to have the actual EOF marker on a line of its own, with the final ) on the next line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145988/"
]
} |
247,329 | In vim editor, I want to replace a newline character ( \n ) with two new line characters ( \n\n ) using vim command mode. Input file content: This is my first line.This is second line. Command that I tried: :%s/\n/\n\n/g But it replaces the string with unwanted characters as This is my first line.^@^@This is second line.^@^@ Then I tried the following command :%s/\n/\r\r/g It is working properly. Can you explain why it is working fine with second command? | Oddly enough, \n in vim for replacement does not mean newline, but null. ASCII nul is ^@ ( Ctrl + @ ). Historically, vi replaces ^M ( Ctrl + M ) as the line-ending, which is the newline. vim added an extension \r (like the C language) to mean the same as ^M , but the developers chose to make \n mean null when replacing text. This is inconsistent with its use in searches , which find a newline. Further reading: Search and replace (Vim wiki) vim replace character to \n | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/247329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53844/"
]
} |
247,367 | Unfortunately I will need to install this OS on my new PC. Since I only have Linux right now, I'll need to create media to boot from on my PC. I'd prefer it to be USB, since currently I have no access to disc drive. How can I create bootable USB stick with Windows 10, given I have the ISO file? I'm using Gentoo Linux, if you want to see if the tool is available in repos for me. | The easiest way would be to direct write the iso to your usb stick. This can be achieved with this command: dd bs=4M if=/path/to/win10.iso of=/dev/sdx && sync Where /path/to/win10.iso is the location of your Windows 10 iso file and /dev/sdx is the location of your usb drive (you can identify that with the lsblk command). However dd might cause issues with the usb drive, if you want to reuse it for something else. An alternative way would be by creating a new GPT partition table on the disk in something like gparted and giving it the "boot" flag. You'd then need to mount the iso and copy the contents over to a new NTFS partition like this: mkdir Win10mount -o loop /path/to/win10.iso Win10cd Win10cp -a * /mount/usb Where /mount/usb is your mounted partition. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
247,378 | What is the purpose of the exit code in rc.local ? I seem to execute just fine without it. The header shows: #!/bin/sh -e## rc.local - executed at the end of each multiuser runlevel## Make sure that the script will "exit 0" on success or any other# value on error. Who checks the return code? Does it default to 0 anyways? | It doesn't say "always exit 0 ". Read it again without the line break. Make sure that the script will "exit 0" on success or any other value on error. To indicate success, exit 0. To indicate error, exit any other value. There isn't necessarily anything that will check its status, but some init systems will display "[OK]" or "[FAIL]" on screen for the user. In any case, it's good practice to make sure your scripts exit with a meaningful return code. The default exit status, like any other script, will be the exit status of the last command run in the script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
247,385 | I have a huge Folder with thousands of Files.Some Files have some Characters in it, which are not allowed. (UTF-8 signs)So I have a white list of allowed characters and a beginning of a bash-script to get a list of files with the path to it, which have some characters not on that white list. #!/bin/bashregex="^[a-zA-Z0-9._- ]+$"while IFS= read -r -d $'\0'; do filename=`echo "$REPLY" | rev | cut -d/ -f1| rev` filepath=`echo "$REPLY" | rev | cut -d/ -f2- | rev` if ! [[ "$filename" =~ "$regex" ]] then echo "$filepath $filename" fidone < <(find /path/to/folder -type f -print0) This is another beginning of a script find /path/to/folder -type f -regextype posix-extended ! -iregex "\/([A-Z0-9\-\_\.\ \/]*)" And here are some Files in that storage /symlnks/data/DATEN_EINGANG/DATENLIEFERUNG/Aestetico_19-11-2015/Probenbox_Probenkästen.pdf/symlnks/data/DATEN_EINGANG/DATENLIEFERUNG/Aestetico_19-11-2015/Probenbox_final.pdf/symlnks/data/DATEN_EINGANG/DATENLIEFERUNG/Aestetico_19-11-2015/._Probenbox_final.pdf | It doesn't say "always exit 0 ". Read it again without the line break. Make sure that the script will "exit 0" on success or any other value on error. To indicate success, exit 0. To indicate error, exit any other value. There isn't necessarily anything that will check its status, but some init systems will display "[OK]" or "[FAIL]" on screen for the user. In any case, it's good practice to make sure your scripts exit with a meaningful return code. The default exit status, like any other script, will be the exit status of the last command run in the script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146047/"
]
} |
247,418 | So I'm setting up an nginx server with SSL enabled with a server definition something like: server { listen :80; listen [::]:80; server_name example.org; root /foo/bar; ssl on; ssl_certificate /path/to/public/certificate; ssl_certificate_key /path/to/private/key; ...} You get the idea (please forgive any typos). Anyway, what I'm wondering is; if I renew my certificate(s), is there a way to install them without having to restart nginx? For example, if I were to use symbolic links from /path/to/public/certificate and /path/to/private/key , pointing to my current certificate(s), would I still need to restart nginx if I were to simply change these to point to new (renewed) certificates? Are there alternatives? | You will need to RELOAD Nginx in order for the renewed certificates to display the correct expiration date (read the clarification below and the other comments for an explanation of the difference between RELOADING and RESTARTING Nginx). After reloading Nginx, a simple cache-clearing and browse should allow you to view this the updated expiration dates on the SSL cert. Or if you prefer cli, you could always use the old trusty OpenSSL command: echo | openssl s_client -connect your.domain.com:443 | openssl x509 -noout -dates That would give you the current dates on the certificate. In your case the port would be 80 instead of 443 (it was later stated by OP that the ports 80 in the question should have actually been 443, but Nginx will listen on HTTP or HTTPS on whatever ports you give it, as long as they are not currently in use by another process). Many times nginx -s reload does not work as expected. On many systems (Debian, etc.), you would need to use /etc/init.d/nginx reload . Edit to update and clarify this answer: On modern systems with systemd , you can also run systemctl reload nginx or service nginx reload . All of these reload methods are different from restart by the fact that they send a SIGHUP signal that tells Nginx to reload its configuration without killing off existing connections (which would happen with a full restart and would almost certainly be user-impacting). If for some reason, Nginx does not reload your certificate, you can restart it, but note that it will have much more of an impact than reload . To restart Nginx, you would simply run systemctl restart nginx , or on systems without systemd , you would do nginx -s stop && nginx -s start . If all else fails (for whatever reason), just kill the Nginx PID(s), and you can always start it up manually by specifying the configuration file directly using nginx -c /path/to/nginx.conf . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/247418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38840/"
]
} |
247,429 | I have a directory with a few hundred files (real files, no symlinks, no subdirectories). When I use ls -la and sum up the sizes in Excel I get 287190 bytes(?). When I use du -b or du --apparent-size --block-size=1 I get 422358 bytes(?). I thought those two things mean the same thing, why the difference? | du gives disk usage, which is not the same as the sum of all the file sizes. For example, a du -b file will give a different output than making a directory "dir", placing the same file in "dir" and doing du -b dir . On my system that's 30 extra bytes for the "overhead" of a directory. Depending on the contents of the directory, I imagine the directory size would change (but I would be surprised if it was perfectly linear). Also, the relative size of the difference implies that you might have missed a hidden directory with quite a few files in it, or that you might have missed a lot of hidden files (even though you did use the -a flag). In addition, there may be symlinks which cause differences if one tool follows them while the other doesn't. Finally, with some file systems, if the file's contents are small enough they might get inlined into the file system INode, and with many file systems, a single block is reserved to hold the contents of the file even if that block is not fully used. These variations add extra noise when attempting to compare the two. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30206/"
]
} |
247,433 | I made a Postfix + SpamAssassin setup following the instructions most websites recommend for this kind os setup. Basically I edited my master.cf to add: smtp inet n - - - - smtpd -o content_filter=spamassassinspamassassin unix - n n - - pipe user=debian-spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient} SpamAssassin actually works fine and my email gets filtered, however I noticed the following. In the past, before SpamAssassin when an email was sent to my sever the headers would show something like: Return-path: <[email protected]>Envelope-to: <[email protected]>Delivery-date: Wed, 02 Dec 2015 12:37:13 +0100Received: from mail.sender-server.dev ... by mail.my-server.dev After SpamAssassin they show: Return-path: <[email protected]>Envelope-to: <[email protected]>Delivery-date: Wed, 02 Dec 2015 12:37:13 +0100Received: from mail.my-server.dev ... by mail.my-server.dev Looks like at enabling SpamAssassin the Received: from is changed from the original server from where the email really came to my own server... Why does this happen? Can't this be fixed in a way my email is filtered by the correct headers are displayed? Thank you. | This was my final solution after research and help from @tarleb My mail delivery was happening over sendmail program, which was adding some additional headers to my email. I could use a mitter (mail filter) to filter incoming email and drop the sendmail usage, however I decided to change to Dovecot LDA for the delivery. My original filter was, at the beginning of Postfix's master.cf : smtp inet n - - - - smtpd -o content_filter=spamassassin And at the end of the file: spamassassin unix - n n - - pipe user=debian-spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient} I changed the end of the file to use the Dovecot local delivery by: spamassassin unix - n n - - pipe flags=DROhu user=vmail:vmail argv=/usr/bin/spamc -f -e /usr/lib/dovecot/deliver -f ${sender} -d ${user}@${nexthop} Now edit Postfix's main.cf and add (optional, check (3) bellow): spamassassin_destination_recipient_limit = 1 Now your email will be delivered via Dovecot LDA without header changes. For the curious ones, here are some details on my config: This config can be used with plus-addressing / sub-addressing / recipient delimiters (emails addressed to [email protected] will be delivered into [email protected] inbox) - That's why I added -d ${user}@${nexthop} this will remove the + and everything until the domain. To enable this feature, also be sure to add recipient_delimiter = + into main.cf ; My flags flags=DROhu , they don't add anything abnormal but they can be understood here: http://www.postfix.org/pipe.8.html ; spamassassin_destination_recipient_limit = 1 is required to make sure that every recipient gets individually processed by spamassassin. This is required due to the D flag above (Includes X-Original-To header). If you've the D flag and you don't set spamassassin_destination_recipient_limit = 1 email with multiple destinations won't be delivered! If you don't care about this header you can remove the flag and this isn't needed. Edit: Bonus Content - Move your SPAM to the Junk folder! You can also configure Dovecot to move email detected as SPAM to the Junk IMAP folder. This will make your life easier for sure. 
Just follow this: Edit /etc/dovecot/conf.d/15-mailboxes.conf and uncomment / add the Junk folder with (should be on the namespace inbox section near mailbox Trash ): mailbox Junk { special_use = \Junk} Install dovecot-sieve with apt-get install dovecot-sieve ; Edit /etc/dovecot/conf.d/90-sieve.conf and comment the line: #sieve = ~/.dovecot.sieve Edit /etc/dovecot/conf.d/90-plugin.conf as: plugin { sieve = /etc/dovecot/sieve/default.sieve} Edit /etc/dovecot/conf.d/15-lda.conf and /etc/dovecot/conf.d/20-lmtp.conf to match: protocol lda/lmtp { # do not copy/paste this line! mail_plugins = $mail_plugins sieve} WARNING : You might have another settings under the protocol selections, keep them. The line protocol lda/lmtp changes in the files, keep the original. Create folder /etc/dovecot/sieve/ Create file /etc/dovecot/sieve/default.sieve with this content: require "fileinto";if header :contains "X-Spam-Flag" "YES" { fileinto "Junk";} Change folder permissions to your virtual email user and group like: chown vmail:vmail /etc/dovecot/sieve/ -R . If you miss this dovecot will complain! Restart everything: service postfix restart; service dovecot restart; service spamassassin restart Try to send an email to some email on the server (from an external server), first a normal email and then another one with this subject: XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X . The second email should to into the Junk folder and the first to your inbox. If this doesn't work at your first try, look at the logs: tail -f /var/log/mail.log and send the email while tail is running. A good working setup should report stored mail into mailbox 'INBOX' or stored mail into mailbox 'Junk' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23085/"
]
} |
247,436 | I am (probably obviously) a relatively new Linux user, so I'm already bracing for the barrage of "why aren't you doing it this way instead..." comments. I'd love to hear them...but I would also really like to fundamentally understand why this isn't working as is. The details: I'm running CentOS 7+ I'm attempting to modify the read-ahead values on my blockdev configs (for a database server) I'm able to implement the changes from cmd line, but I cannot persist them after reboot. Yes, I have rebooted. A lot. In an attempt to persist the changes I've modified the rc.local file. The rc.local file is being implemented like this: #!/bin/bashtouch /var/lock/subsys/local/sbin/blockdev --setra 128 /dev/sda/sbin/blockdev --setra 128 /dev/dm-1/sbin/blockdev --setra 128 /dev/dm-0 | Forget about rc.local . You're using CentOS 7. You have systemd. /etc/rc.local is a double backwards compatibility mechanism in systemd, because it is a backwards compatibility mechanism for a mechanism that was itself a compatibility mechanism in System 5 rc . And as shown by the mess in the AskUbuntu question hyperlinked below, using /etc/rc.local can go horribly wrong. So make a proper systemd service unit. First, create a template service unit . For the sake of example, let's call it /etc/systemd/system/[email protected] : [Unit]Documentation=https://unix.stackexchange.com/questions/247436/Description=Set custom read-ahead on storage device %IBindsTo=dev-%i.device[Service]Type=oneshotExecStart=/sbin/blockdev --setra 128 /dev/%I Arrange for that service unit to be started by the plug-and-play device manager (udev) when the appropriate devices arrive. Your rule, which you'll have to tailor to your specific needs, will look something like: SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd[a-z]", ENV{SYSTEMD_WANTS}="custom-readahead@%k" The SYSTEMD_WANTS setting causes udev to start the named service — an instantiation of the template against the device %k . This service then runs blockdev . There is apparently another way of doing this, which relies on udev's ability to set these settings directly. For this, you don't need the systemd template unit or instantiated services. Instead, simply instruct udev directly in its rule: SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{bdi/read_ahead_kb}="128" Notice the difference between == and = . There is no rc.local involved anywhere, either way. Further reading https://unix.stackexchange.com/a/200281/5132 https://unix.stackexchange.com/a/211927/5132 Milosz Galazka (2015-05-11). How to enforce read-only mode on every connected USB storage device . sleeplessbeastie. https://unix.stackexchange.com/a/71409/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145906/"
]
} |
247,497 | I want to do something like specmd file5.awespecmd file6.awespecmd file7.awespecmd file8.awespecmd file9.awespecmd file10.awespecmd file11.awespecmd file12.awe Is there a good way to do this? The best way I can think of is something like ruby -e "10.upto(15){|i| puts i}" | xargs -I {} specmd file{}.awe Which obviously isn't a very good way to do it since it depends on ruby and feels like ruby should be unneccesary in this case. Note: there are more files (eg: file4.awe , file13.awe ) which I don't want, so any globbing (probably?) won't do what I want. | In bash, you can create loops using the builtin command for iterating through a range: for i in {5..12}do specmd file${i}.awedone There are more options in for for other similar situations, I will leave here a link for that. http://www.cyberciti.biz/faq/bash-for-loop/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/247497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38485/"
]
} |
247,560 | I found how to print everything before a slash, however I need to print everything after a slash. I have a string like blablabal/important and I need to print only important . How should I modify the line below to get printed everything after a slash and not before? sed 's:/[^/]*$::' | Since you mentioned in your comment that the file only has one slash, why not just use awk ? Example: ❱ echo "blablabal/important" | awk -F'/' '{print $1}'blablabal ❱ echo "blablabal/important" | awk -F'/' '{print $2}'important | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/247560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146150/"
]
} |
247,576 | I have an USER variable in my script, and I want to see his HOME path based on the USER variable. How can I do that? | There is a utility which will lookup user information regardless of whether that information is stored in local files such as /etc/passwd or in LDAP or some other method. It's called getent . In order to get user information out of it, you run getent passwd $USER . You'll get a line back that looks like: [jenny@sameen ~]$ getent passwd jennyjenny:*:1001:1001:Jenny Dybedahl:/home/jenny:/usr/local/bin/bash Now you can simply cut out the home dir from it, e.g. by using cut, like so: [jenny@sameen ~]$ getent passwd jenny | cut -d: -f6/home/jenny | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/247576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
247,589 | From https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html The basic form of parameter expansion is ${parameter} . ... If the first character of parameter is an exclamation point ( ! ), a level of variable indirection is introduced. Bash uses the value of the variable formed from the rest of parameter as the name of the variable; this variable is then expanded and that value is used in the rest of the substitution, rather than the value of parameter itself. This is known as indirect expansion. The exceptions to this are the expansions of ${!prefix} and ${!name[@]} described below. The exclamation point must immediately follow the left brace in order to introduce indirection. ... ${!prefix*} ${!prefix@} Expands to the names of variables whose names begin with prefix , separated by the fi rst character of the IFS special variable. When ‘@’ is used and the expan- sion appears within double quotes, each variable name expands to a separate word. ${!name[@]} ${!name[*]} If name is an array variable, expands to the list of array indices (keys) assigned in name. If name is not an array, expands to 0 if name is set and null otherwise. When ‘@’ is used and the expansion appears within double quotes, each key expands to a separate word. Can you give some examples for the quoted paragraphs? I have no clue what they mean. | We need to compare (and distinguish between): "${Var}" # Plain variable"${!Var}" # Indirect expansion"${!Var@}" # Prefix expansion"${!Var[@]}" # Array keys expansion"${Var[@]}" # Plain array expansion There are also the * expansions which are very similar but have a small difference. Indirection Example for indirection: $ varname=var_one$ var_one=a-value$ echo "${varname}"var_one$ echo "${!varname} and ${var_one}"a-value and a-value Prefix Example for prefix: $ head_one=foo$ head_two=bar$ printf '<%s> ' "${!head@}"<head_one> <head_two>$ printf '<%s> ' "${!head*}"<head_one head_two> Note that the variables are glued together by the first character of IFS, which by default is an space (as IFS is Space Tab NewLine by default). Plain Array Example of Array (no ! used) to show the small (but important) difference of @ and * : $ Array[1]=This$ Array[2]=is$ Array[3]=a$ Array[4]=simple$ Array[5]=test.$ printf '<%s> ' "${Array[@]}"<This> <is> <a> <simple> <test.>$ printf '<%s> ' "${Array[*]}"<This is a simple test.> The same comment about IFS apply here. Note that I did not assign the index 0 (on purpose) of Array. Note that a simpler way to assign the Array is: $ Array=( "" This is a simple test.) But here the index 0 must be used, and I used an empty value (which is not the same as an un-set value as above). Array LIST For this, a simple indexed array (with numbers) is not so fun: $ Array=( "" A simple example of an array.)$ printf '<%s> ' "${!Array[@]}"<0> <1> <2> <3> <4> <5> <6> $ printf '<%s> ' "${!Array[*]}"<0 1 2 3 4 5 6> But for a Associative array, things become more interesting $ unset Array # erase any notion of variable array.$ declare -A Array # make it associative$ Array=([foo]=one [bar]=two [baz]=three) # give it values.$ printf '<%s> ' "${Array[@]}"<two> <three> <one> # List of values.$ printf '<%s> ' "${!Array[@]}"<bar> <baz> <foo> # List of keys$ printf '<%s> ' "${Array[*]}"<two three one> # One string of list of values.$ printf '<%s> ' "${!Array[*]}"<bar baz foo> # One string of list of keys. Please note that the order is not the same as when assigned. 
Note: All the uses I presented are quoted "${!Array[@]}" , both the unquoted values ${!Array[@]} and ${!Array[*]} work exactly equal, give the same output (in Bash). But are affected by shell splitting on the IFS value. And the ugly, always problematic "Pathname expansion". Not so useful in general. Or to be used very carefully, in any case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
247,599 | I want to know why an extra echo will print in my shell. I'm using bash version 4.2.46(1). echo `echo `echo $SHELL`` A interesting issue is if I replace '``' with $() it doesn't print extra echo: echo $(echo `echo $SHELL`) And I found that it prints extra echos in odd echo command numbers: echo `echo `echo `echo `echo $SHELL```` | Your two versions: echo `echo `echo $SHELL`` and echo $(echo `echo $SHELL`) are not equivalent. The first backtick command substitution ends as soon as another backtick is seen: When the old-style backquote form of substitution is used, [...] The first backquote not preceded by a backslash terminates the command substitution. Your first version is actually equivalent to: echo $(echo )echo $SHELL$() which is why you get an "echo" in the output - the command you end up running (after substitutions and with extra whitespace removed) is just: echo echo /bin/bash so the output is "echo /bin/bash", just like if you wrote that command out directly. If you must nest backticks, you can backslash the inner pairs to escape them. Your first command could be written correctly as: echo `echo \`echo $SHELL\`` although I wouldn't recommend it — $( ... ) is made for nesting. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37507/"
]
} |
247,604 | If I have this shell script str="A tremor in the Force.The last time I felt it was in the presence of my old master."cat <<< "$str" My understanding is that the command cat <<< "$str" tells the shell to invoke the program /bin/cat and pass it the argument $str - where the double quotes around $str ensure that the shell will pass the argument unaltered. So if the cat program gets the $str variable unaltered - then it must be aware of the variables value?My question is, does the shell pass variables declared in its environment to other system programs that it invokes? | Does the shell pass variables declared in its environment to other system programs that it invokes? Yes, but not in the case of cat <<< "$str" . In Unix-like operating systems, most new programs are executed as a result of the execve() system all: int execve(const char *filename, char *const argv[], char *const envp[]); The shell uses execv to execute programs, and by default, the shell passes all environment variables in envp , similar to the way the command line arguments are passed in argv . Conceptually, envp is an array strings each with the format NAME=value . man 2 execve explains the entire process very clearly and includes working example code. In the case of cat <<<"$str" , the shell supplies "$str" to cat on cat 's standard input, so cat never sees the variable named str , unless it was previously exported by the shell, for example by invoking export str , but cat has no way of knowing that the two are related. <<< is a "Here String" and a shell extension. From the bash man page: The word undergoes brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, and quote removal. Pathname expansion and word splitting are not performed. The result is supplied as a single string to the command on its standard input. The Here String is similar to the traditional idiom: echo "$str" | cat which I recommend. Both, in fact, will add a newline at the end of standard input. I believe both methods are identical, providing that echo does not modify the output in any way (which it can depending on the contents of string, so a safer version is printf '%s' "$str" instead of echo ). In your situation, I would use printf '%s' "$str" | cat because it's the most portable, universal, and easily understood, without locking your script into execution with only one of many shells. If you use the POSIX sh man page for your shell script documentation, you will have less material to master, and the material you learn has application to all Bourne shell derivatives. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
247,612 | Every time I ssh onto my remote server, I need to provide the password. I copied my public key (id_dsa.pub) to the remote server using: ssh-copy-id -i id_dsa.pub user@server I checked it was correctly added to authorized_keys. All the file/directory permissions are correct: ~user 755~user/.ssh 700~user/.ssh/authorized_keys 640~user/.ssh/id_dsa.pub 644 The PasswordAuthentication field in /etc/ssh/sshd_config is set to yes. I put the sshd in debug mode and added the verbose switch to the ssh command. I get the impression that the server did not try to use id_pub.dsa because of the line Skipping ssh-dss key: ........... not in PubkeyAcceptedKeyTypes There is no encrypted disc on server side. Any ideas how to progress?Here is the ssh daemon debug info: sudo /usr/sbin/sshd -d====debug1: sshd version OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014debug1: key_parse_private2: missing begin markerdebug1: read PEM private key done: type RSAdebug1: private host key: #0 type 1 RSAdebug1: key_parse_private2: missing begin markerdebug1: read PEM private key done: type DSAdebug1: private host key: #1 type 2 DSAdebug1: key_parse_private2: missing begin markerdebug1: read PEM private key done: type ECDSAdebug1: private host key: #2 type 3 ECDSAdebug1: rexec_argv[0]='/usr/sbin/sshd'debug1: rexec_argv[1]='-d'Set /proc/self/oom_score_adj from 0 to -1000debug1: Bind to port 22 on 0.0.0.0.Server listening on 0.0.0.0 port 22.debug1: Bind to port 22 on ::.Server listening on :: port 22.debug1: Server will not fork when running in debugging mode.debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8debug1: inetd sockets after dupping: 3, 3Connection from xxx port 63521 on yyy port 22debug1: Client protocol version 2.0; client software version OpenSSH_7.1debug1: match: OpenSSH_7.1 pat OpenSSH* compat 0x04000000debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3debug1: permanently_set_uid: 115/65534 [preauth]debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 [preauth]debug1: SSH2_MSG_KEXINIT sent [preauth]debug1: SSH2_MSG_KEXINIT received [preauth]debug1: kex: client->server [email protected] <implicit> none [preauth]debug1: kex: server->client [email protected] <implicit> none [preauth]debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]debug1: SSH2_MSG_NEWKEYS sent [preauth]debug1: expecting SSH2_MSG_NEWKEYS [preauth]debug1: SSH2_MSG_NEWKEYS received [preauth]debug1: KEX done [preauth]debug1: userauth-request for user damian service ssh-connection method none [preauth]debug1: attempt 0 failures 0 [preauth]debug1: PAM: initializing for "damian"debug1: PAM: setting PAM_RHOST to "freebox-server.local"debug1: PAM: setting PAM_TTY to "ssh"Connection closed by xxxx [preauth]debug1: do_cleanup [preauth]debug1: monitor_read_log: child log fd closeddebug1: do_cleanup Here is the ssh verbose output: $ ssh -v user@serverOpenSSH_7.1p1, OpenSSL 1.0.2d 9 Jul 2015debug1: Connecting to server [xxxx] port 22.debug1: Connection established.debug1: key_load_public: No such file or directorydebug1: identity file /home/user/.ssh/id_rsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/user/.ssh/id_rsa-cert type -1debug1: identity file /home/user/.ssh/id_dsa type 2debug1: key_load_public: No such file or directorydebug1: identity file /home/user/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/user/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or 
directorydebug1: identity file /home/user/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/user/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/user/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.1debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3 pat OpenSSH_6.6.1* compat 0x04000000debug1: Authenticating to server:22 as 'user'debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: server->client [email protected] <implicit> nonedebug1: kex: client->server [email protected] <implicit> nonedebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:v4BNHM0Q33Uh6U4VHenA9iJ0wEyi8h0rFVetbcXBKqAdebug1: Host 'server' is known and matches the ECDSA host key.debug1: Found key in /home/user/.ssh/known_hosts:2debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickey,passworddebug1: Next authentication method: publickeydebug1: Trying private key: /home/user/.ssh/id_rsadebug1: Skipping ssh-dss key /home/user/.ssh/id_dsa for not in PubkeyAcceptedKeyTypesdebug1: Trying private key: /home/user/.ssh/id_ecdsadebug1: Trying private key: /home/user/.ssh/id_ed25519debug1: Next authentication method: passworduser@server's password: | The new openssh version (7.0+) deprecated DSA keys and is not using DSA keys by default (not on server or client). The keys are not preferred to be used anymore, so if you can, I would recommend to use RSA keys where possible. If you really need to use DSA keys, you need to explicitly allow them in your client config using PubkeyAcceptedKeyTypes +ssh-dss Should be enough to put that line in ~/.ssh/config , as the verbose message is trying to tell you. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/247612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146170/"
]
} |
247,643 | From https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html , - (A hyphen.) Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option). I am not sure what it says. Can you give some examples for the meaning of a hyphen parameter? | You can echo $- to see the currently-enabled shell options: $ echo $-himBH Those are options you can provide with a - on shell invocation — bash -h -B — or later on using set . The flags are defined in the documentation for set . My options above are: -h Locate and remember (hash) commands as they are looked up for execution. This option is enabled by default. -m Job control is enabled (see Job Control ). All processes run in a separate process group. When a background job completes, the shell prints a line containing its exit status. -B The shell will perform brace expansion (see Brace Expansion ). This option is on by default. -H Enable ‘!’ style history substitution (see History Interaction ). This option is on by default for interactive shells. and i for an interactive shell . If I change the active options, say with set -x , I'll get a new value of $- : $ set -x +h$ echo $-imxBH | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
247,655 | I'm trying to figure out how to make a function that can take an array as a parameter and sort it. I think it is done with positional variables, but I'm not sure. | Sort the easy way with sort , tr : arr=($(for i in {0..9}; do echo $((RANDOM%100)); done))echo ${arr[*]}| tr " " "\n" | sort -n | tr "\n" " " Into a new array: arr2=($(echo ${arr[*]}| tr " " "\n" | sort -n)) Without help by tr / sort , for example bubblesort: #!/bin/bash sort () { for ((i=0; i <= $((${#arr[@]} - 2)); ++i)) do for ((j=((i + 1)); j <= ((${#arr[@]} - 1)); ++j)) do if [[ ${arr[i]} -gt ${arr[j]} ]] then # echo $i $j ${arr[i]} ${arr[j]} tmp=${arr[i]} arr[i]=${arr[j]} arr[j]=$tmp fi done done}# arr=(6 5 68 43 82 60 45 19 78 95)arr=($(for i in {0..9}; do echo $((RANDOM%100)); done))echo ${arr[@]}sort ${arr[@]}echo ${arr[@]} For 20 numbers, bubblesort might be sufficient. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140356/"
]
} |
247,755 | I have two systemd scripts that I want to run on boot up. The first systemd script starts OpenVPN, which then executes a shell script that writes the IP of the connection to the file vpn.env . The second systemd script starts Transmission and should bind to the IP adress in vpn.env . My problem seems to be that the execution of the 2nd systemd script is too "quick", and is completed before OpenVPN can start and write vpn.env . Question: Is there some way to add a delay to the second script, perhaps a few seconds, and have it wait for the environment file to be written? systemd OpenVPN script [Unit]Description=VPN Custom Launch ConnectionAfter=network.target[Service]Type=simpleExecStart=/usr/sbin/openvpn --cd /etc/openvpn --config /etc/openvpn/vpn.conf[Install]WantedBy=multi-user.target OpenVPN .sh script, executed when program starts printenv > /etc/openvpn/vpn.env systemd Transmission script [Unit]Description=Transmission BitTorrent Daemon Under VPNAfter=network.target vpn.serviceRequires=vpn.service[Service]User=transmissionType=notifyEnvironmentFile=/etc/openvpn/vpn.envExecStart=/usr/bin/transmission-daemon -f --log-error --config-dir /opt/transmission --bind-address-ipv4 $ifconfig_local --rpc-bind-address 0.0.0.0 --no-portmapExecReload=/bin/kill -s HUP $MAINPID[Install]WantedBy=multi-user.target | Your problem is the Type=simple in the description of the VPN service. The Arch wiki clarifies the manual page , a little: Type=simple (default): systemd considers the service to be started up immediately. The process must not fork. Do not use this type if other services need to be ordered on this service, unless it is socket activated. You probably can make this work by changing the type: Type=oneshot : this is useful for scripts that do a single job and then exit. You may want to set RemainAfterExit=yes as well so that systemd still considers the service as active after the process has exited. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136753/"
]
} |
247,788 | When creating a VNC connection via a tunneled SSH connection, I get an error: channel 3: open failed: administratively prohibited: open failed I have found that this happens only when I'm not logged into the host locally as the username on the host I'm trying to connect to using a tunneled VNC connection. SSH Tunnel: ssh -p 6000 -L 5901:127.0.0.1:5901 [email protected] VNC connection: vncviewer localhost:1 I've tried adjusting the settings in /etc/ssh/sshd_config using AllowTunnel yes and without the setting. (I did restart ssh after each change: service ssh restart ) However, the error goes away if I have a local session running on the remote host (i.e. I'm logged in as username locally.) Is anyone else seeing this behavior? It seems like I should be able to start a VNC server remotely and access it without having to be logged in locally as well. | The option you are looking for is not AllowTunnel (it is for VPN and layer-3 forwarding using tun devices). You are looking for AllowTcpForwarding , which handles local and remote port forwarding of TCP traffic in ssh. Have a look at what value is set on your server and change it to yes : AllowTcpForwarding yes | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146287/"
]
} |
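To check and change the option named in the answer above, something along these lines can be used on the server; the sed command is only an illustrative sketch and assumes a stock /etc/ssh/sshd_config.

# See what is currently configured (commented or not)
grep -i 'AllowTcpForwarding' /etc/ssh/sshd_config

# Enable TCP forwarding and restart the SSH daemon
sudo sed -i 's/^[#[:space:]]*AllowTcpForwarding.*/AllowTcpForwarding yes/' /etc/ssh/sshd_config
sudo service ssh restart    # or: sudo systemctl restart sshd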
247,794 | What command could I create that will list the first 4 lines of all the files in a given directory? | [root@xxx httpd]# head -n 4 /var/log/httpd/*==> /var/log/httpd/access_log <==xxxx - - [06/Dec/2015:22:22:45 +0100] "GET / HTTP/1.1" 200 7 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36 Vivaldi/1.0.303.52"xxxx - - [06/Dec/2015:22:22:46 +0100] "GET /favicon.ico HTTP/1.1" 404 291 "http://195.154.165.63:8001/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36 Vivaldi/1.0.303.52"==> /var/log/httpd/access_log-20151018 <==xxxx - - [12/Oct/2015:14:05:42 +0200] "GET /git HTTP/1.1" 404 281 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"xxxx - - [12/Oct/2015:14:05:42 +0200] "GET /favicon.ico HTTP/1.1" 404 289 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"xxxx - - [12/Oct/2015:14:05:43 +0200] "GET /favicon.ico HTTP/1.1" 404 289 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"xxxx - - [12/Oct/2015:14:06:24 +0200] "GET /git HTTP/1.1" 502 465 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"==> /var/log/httpd/access_log-20151115 <==xxxx - - [14/Nov/2015:18:56:04 +0100] "GET / HTTP/1.1" 200 7 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"xxxx - - [14/Nov/2015:18:56:05 +0100] "GET /favicon.ico HTTP/1.1" 404 291 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"xxxx - - [14/Nov/2015:18:56:05 +0100] "GET /favicon.ico HTTP/1.1" 404 291 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"xxxx - - [14/Nov/2015:18:58:28 +0100] "GET /phpmyadmin HTTP/1.1" 403 294 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0" It's a sample of my httpd directory with the command head -n 4 /var/log/httpd/* for the first 4 lines. Replace head -n 4 by head -n 1 for the first lines. And you can replace the directory /var/log/httpd/* with your directory for example, /my/directory/* but don't forget the wildcard at the end ( * ).This wildcard permits to tell that we want all the (non-hidden) files in the directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144534/"
]
} |
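If the files are spread over subdirectories as well, a find-based variant of the same idea could look like this (a sketch; /my/directory is the example path from the answer).

# Print the first 4 lines of every regular file under /my/directory,
# with the same "==> file <==" headers head prints for multiple files
find /my/directory -type f -exec head -n 4 {} +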
247,924 | I wanted to delete a package file in my home folder, but the filename was too long ( google-chrome-stable_current_i386.deb ). So, I decided to use the command ls|grep chrome|rm to pipe the files to grep to filter out the chrome file, and then remove it. It didn't work, so I would like to see how I can do this. | This almost made me wince. You might want to stop pointing that shotgun at your foot. Basically any kind of parsing of ls is going to be more complicated and error-prone than established methods like find [...] -exec or globs . Unless someone installed a troll distro for you, your shell has Tab completion. Just type rm google and press Tab . If it doesn't complete immediately, press Tab again to see a list of matching files. Type more characters of the filename to narrow it down until it does complete, then run the command. Pipes != parameters . Standard input is a binary data stream which can be fed to a command asynchronously. Parameters are space-separated strings which are passed once and only once to a command when running it. These are very rarely interchangeable. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/247924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146386/"
]
} |
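For completeness, a few hedged examples of approaches that do work instead of piping ls into rm; the glob assumes the package sits in the current directory.

# Let the shell expand the name directly
rm -- google-chrome-stable_current_i386.deb

# Or match it with a glob
rm -- *chrome*.deb

# Or pass matching names to rm as arguments via find
find . -maxdepth 1 -name '*chrome*.deb' -exec rm -- {} +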
247,981 | On a fresh Arch Linux installation, I had difficulties with the graphics drivers. In the process, I have installed a lot of different drivers to somehow get the display working. Now I need to determine which driver X is using. How to do that? The installed packages include xf86-video-intel , xf86-video-nouveau , nvidia , xorg-drivers . To solve a strange issue to launch any graphical desktop manager i had to replace nividia-libgl with mesa-libgl . Graphics: Intel HD Graphics 4000 / Nvidia GT 750M The content of Xorg.0.log is: http://pastebin.com/YwiMZmG6 | You can check the Xorg startup log file, usually /var/log/Xorg.0.log and look at which modules it is loading. By default Xorg can try to autodetect but you can manually force a driver by putting a Device stanza in an Xorg conf file. Here is what the Xorg startup log will look like for an nvidia card and the nvidia proprietary driver. [ 3702.470] (II) xfree86: Adding drm device (/dev/dri/card0)[ 3702.472] (--) PCI:*(0:3:0:0) 10de:1184:3842:3774 rev 161, Mem @ 0xfa000000/16777216, 0xd8000000/134217728, 0xd6000000/33554432, I/O @ 0x0000cc00/128, BIOS @ 0x????????/524288[ 3702.472] (II) LoadModule: "glx"[ 3702.473] (II) Loading /usr/lib64/opengl/nvidia/extensions/libglx.so[ 3702.476] (II) Module glx: vendor="NVIDIA Corporation"[ 3702.476] compiled for 4.0.2, module version = 1.0.0[ 3702.476] Module class: X.Org Server Extension[ 3702.476] (II) NVIDIA GLX Module 355.11 Wed Aug 26 16:02:11 PDT 2015[ 3702.476] (II) LoadModule: "nvidia"[ 3702.476] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so[ 3702.476] (II) Module nvidia: vendor="NVIDIA Corporation"[ 3702.476] compiled for 4.0.2, module version = 1.0.0[ 3702.476] Module class: X.Org Video Driver[ 3702.476] (II) NVIDIA dlloader X Driver 355.11 Wed Aug 26 15:38:55 PDT 2015[ 3702.476] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs[ 3702.476] (++) using VT number 7 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/247981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146284/"
]
} |
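A small sketch for pulling just the loaded video driver modules out of the log the answer refers to; the grep pattern is a heuristic, and on newer rootless-X setups the log may live under ~/.local/share/xorg/ instead.

# List the driver modules Xorg reported loading
grep -E 'Loading.*modules/drivers' /var/log/Xorg.0.log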
247,999 | There are many questions on SE that show how to recover from terminal broken by cat /dev/urandom . For those that are unfamiliar with this issue - here what it is about: You execute cat /dev/urandom or equivalent (for example, cat binary_file.dat ). Garbage is printed. That would be okay... except your terminal continues to print garbage even after the command has finished! Here's a screenshot of a misrendered text that is in fact g++ output: I guess people were right about C++ errors sometimes being too cryptic! The usual solution is to run stty sane && reset , although it's kind of annoying to run it every time this happens. Because of that, what I want to focus on in this question is the original reason why this happens, and how to prevent the terminal from breaking after such command is issued. I'm not looking for solutions such as piping the offending commands to tr or xxd , because this requires you to know that the program/file outputs binary before you actually run/print it, and needs to be remembered each time you happen to output such data. I noticed the same behavior in URxvt, PuTTY and Linux frame buffer so I don't think this is terminal-specific problem. My primary suspect is that the random output contains some ANSI escape code that flips the character encoding (in fact, if you run cat /dev/urandom again, chances are it will unbreak the terminal, which seems to confirm this theory). If this is right, what is this escape code? Are there any standard ways to disable it? | No: there is no standard way to "disable it", and the details of breakage are actually terminal-specific, but there are some commonly-implemented features for which you can get misbehavior. For commonly-implemented features, look to the VT100-style alternate character set, which is activated by ^N and ^O (enable/disable). That may be suppressed in some terminals when using UTF-8 mode, but the same terminals have ample opportunity for trashing your screen (talking about GNU screen, Linux console, PuTTY here) with the escape sequences they do recognize. Some of the other escape sequences for instance rely upon responses from the terminal to a query (escape sequence) by the host. If the host does not expect it, the result is trash on the screen. In other cases (seen for instance in network devices with hardcoded escape sequences for the Linux console), other terminals will see that as miscoded, and seem to freeze. So... you could focus on just one terminal, prune out whatever looks like a nuisance (as for instance, some suggest removing the ability to use the mouse for positioning in editors), and you might get something which has no apparent holes. But that's only one terminal. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/247999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41991/"
]
} |
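To see the alternate-character-set mechanism mentioned above in action, and to undo it, without cat'ing random data, here is a sketch; behaviour varies by terminal, and some UTF-8 terminals ignore these controls.

# Emit SO (^N): many terminals switch to the VT100 alternate character set
printf '\016'    # text now renders as line-drawing glyphs

# Emit SI (^O): switch back to the normal character set
printf '\017'

# Heavier artillery if the terminal is still confused
stty sane; tput reset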
248,012 | From The GNU Bash Reference Manual , section 3.6 Redirections : Bash handles several filenames specially when they are used in redirections, as described in the following table: /dev/fd/ fd If fd is a valid integer, file descriptor fd is duplicated. /dev/stdin File descriptor 0 is duplicated. /dev/stdout File descriptor 1 is duplicated. /dev/stderr File descriptor 2 is duplicated. What does "duplicated" mean here? Can you give some examples? (The above excerpt is from Edition 4.3, last updated 2 February 2014, of The GNU Bash Reference Manual , for Bash , Version 4.3.) | Redirections are implemented via the dup family of system functions. dup is short for duplication and when you do e.g.: 3>&2 you duplicate ( dup2 ) file descriptor 2 onto file descriptor 3, possibly closing file descriptor 3 if it's already open (which won't do a thing to your parent process, because this happens in a forked-off child (if it does not (redirections on shell functions in certain contexts), the shell will make it look as if it did)). When you do: 1<someFile it'll open someFile on a new file descriptor (that's what the open syscall normally does) and then it'll dup2 that file descriptor onto 1. What the manual says is that if one of the special dev files listed takes the place of someFile , the shell will skip the open-on-a-new-fd step and instead go directly to dup2-ing the matching file descriptor (i.e., 1 for /dev/stdout, etc.) onto the target (the file descriptor on the left side of the redirection). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
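A tiny bash demonstration of the duplication being described; the file names are made up for the example.

# 2>&1 asks for dup2(1, 2): fd 2 becomes a copy of fd 1, so errors follow stdout
ls /nonexistent > out.log 2>&1     # both streams end up in out.log

# Using one of the special names: bash dup2()s the existing fd instead of open()ing a file
echo "hello" > /dev/stdout          # same effect as a plain echo "hello"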
248,014 | I'm pretty sure I used to be able to create lowercase pi characters using Compose p i (as described on for example fsymbols.com ), but it no longer works. My compose key works for other characters (like Compose a a for “å”), so what could be wrong? I don't have /usr/share/X11/locale/en_GB.utf8/Compose (or ~/.XCompose ), is that something which should have been installed/generated? There is a /usr/share/X11/locale/en_US.utf8/Compose ; would it be sufficient to symlink that from /usr/share/X11/locale/en_GB.utf8/Compose to fix this? $ localeLANG=en_GB.utf8LC_CTYPE="en_GB.utf8"LC_NUMERIC="en_GB.utf8"LC_TIME="en_GB.utf8"LC_COLLATE="en_GB.utf8"LC_MONETARY="en_GB.utf8"LC_MESSAGES="en_GB.utf8"LC_PAPER="en_GB.utf8"LC_NAME="en_GB.utf8"LC_ADDRESS="en_GB.utf8"LC_TELEPHONE="en_GB.utf8"LC_MEASUREMENT="en_GB.utf8"LC_IDENTIFICATION="en_GB.utf8"LC_ALL= Spin-off question . | The X11 compose list is typically under /usr/share/X11/locale/ (this location may vary between distributions though), but the Compose file is not necessarily in the directory named after your LC_CTYPE setting. There is a stage of translation of locale names via the file /usr/share/X11/locale/compose.dir . This translation allows many locales to share the same compose file. (Symbolic links would have been another way, but a text file is easier to distribute and works on platforms that don't have symbolic links — X11 exists outside of Unix.) Most locales that use the UTF-8 encoding for a language written in the Latin alphabet use the compose file for en_US.UTF-8 , located in en_US.UTF-8/Compose . In en_US.UTF-8/Compose , the only way to generate U03C0 (GREEK SMALL LETTER PI) is <dead_greek> <p> . There is no <Multi_key> ( Compose ) sequence. Among the keyboard layouts distributed with X.org, the only one that defines a dead_greek key is the BÉPO layout (a French analog of DVORAK). So there's no way to type π using the Compose key with the default configuration. And the default UK layout doesn't include a way to type π , not even in an XKB variant (a US Mac layout ( us(mac) ) will give you π on AltGr + P however). As far as I can tell, there's never been a standard Compose sequence to insert π on Xorg. If you remember one, you might have been an input method other than X11's built-in mechanism. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/248014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
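If a Compose sequence for π is wanted anyway, a hedged ~/.XCompose sketch follows; the include line pulls in the system defaults, and applications that use X11's built-in input method should pick the sequence up after being restarted.

# ~/.XCompose
include "%L"
<Multi_key> <p> <i> : "π"    # Compose p i -> U+03C0 GREEK SMALL LETTER PI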
248,033 | Hi, I'm trying to set up an old laptop as a 'server' for testing purposes. As such, I don't want the screen on all day; however, I do want the CPU running 24x7. Can the 'lid close' switch be configured to simply turn off the screen while the laptop otherwise keeps running as normal? FYI: I'm running CoreOS, but I'm willing to switch to another Docker container OS if it makes life easier. | I'm not sure how you missed it in the docs, because when I looked it was plainly there. Place this in logind.conf : HandleLidSwitch=ignore | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/248033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171516/"
]
} |
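Spelled out a little more, assuming a systemd-based distro with the standard file layout:

# /etc/systemd/logind.conf
[Login]
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore   # optional: also ignore the switch when docked

# then make logind re-read its configuration
# systemctl restart systemd-logind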
248,160 | I am currently working in a network that uses LDAP for authentication. Having set zsh as my login shell, I ran into a problem of gaining remote access through ssh to one of the machines on the network that, apparently, doesn't have zsh installed. The login fails with Dec 8 19:16:11 abert sshd[20649]: User sorokine not allowed because shell /bin/zsh does not exist So the question basically is: How can I tell the remote machine to use a different login shell than the one that was configured in LDAP? OpenSSH_6.0p1 Debian-4+deb7u2, OpenSSL 1.0.1e | If your login shell can't be executed on some machine, then you can't log into it over SSH, or by most other methods for that matter. The SSH server always executes your login shell. If you pass a command on the ssh command line then the login shell is executed with -c and the command string¹ as arguments; otherwise the login shell is executed as a login shell with no argument. If there was a way to bypass the login shell, that would be a security hole. An account can be configured as a restricted account by making its login shell a program that only performs one specific task; for example, the login shell could be git-shell to allow only access to a git repository, or rssh , etc. To log in to that machine, you'll need to either arrange for /bin/zsh to be present, or change your login shell to something that is present. What I recommend in a heterogeneous environment like this is to stick to /bin/sh as your login shell, because it's present everywhere. Set the SHELL environment variable to /bin/zsh if it's present, that way you'll get zsh as an interactive shell. if [ -x /bin/zsh ]; then export SHELL=/bin/zshfi While you're at it, this lets you avoid hard-coding the path to zsh . if SHELL=$(command -v zsh); then export SHELLelse unset SHELLfi To get zsh to run automatically for a text mode login, invoke it from your .profile . If you want to use .zprofile to set things up, make it a login shell (but then you won't get the same environment on machines where zsh isn't present, so I don't recommend this). Do this only if this is an interactive login, not when your .profile is executed by a script, during GUI mode login, etc. if case $- in *i*) true;; *) false;; esac && # interactive shell [ -z "$ZSH_VERSION" ] && # not running zsh yet type zsh >/dev/null 2>/dev/null; then # zsh is present exec zshfi ¹ The SSH client concatenates its non-option arguments with spaces in between, and sends the resulting string through the connection. The SSH protocols defines the command as a string, not a list of strings. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68161/"
]
} |
248,164 | So,Being new to GIT and thus extremely rusty in my bash commands and scripting I've been looking around for different syntax and scripting help. Now, I've found a lot of help and have been able to create the scripts and alias that will make my Git experience more pleasant. However, I came across some nuances that seem to confuse me, specifically related to the "if" command. if [ -z $1 ] ; #<- Zero length stringif [[ -z $1 ]] ; #<- Also Zero Length Stringif [[ "$1" == -* ]] ; #<- Starts with - (hyphen)if [ -z $1 ] && [ -z $2 ] ; #<- both param 1 & 2 are zero lengthif [[ -z $1 ]] && [[ -z $2 ]] ; #<- Also both param 1 & 2 are zero lengthif [[ "$1" == -* ]] || [[ "$2" == -* ]] ; #<- Either param 1 or 2 starts with -if [ "$1" == -* ] || [ "$2" == -* ] ; #<- Syntax Failure, "bash: ]: too many arguments" Why the discrepancy? How to know when the [[ (double) is required and when a [ (single) will do? ThanksJaeden "Sifo Dyas" al'Raec Ruiner | First off, note that neither type of bracket is part of the syntax for if . Instead: [ is another name for the shell built-in test ; [[ ... ]] is a separate built-in, with different syntax and semantics. Here are excerpts from the bash documentation: [ / test test: test [expr] Evaluate conditional expression. Exits with a status of 0 (true) or 1 (false) depending on the evaluation of EXPR. Expressions may be unary or binary. Unary expressions are often used to examine the status of a file. There are string operators and numeric comparison operators as well. The behavior of test depends on the number of arguments. Read the bash manual page for the complete specification. File operators: -a FILE True if file exists. (...) [[ ... ]] [[ ... ]]: [[ expression ]] Execute conditional command. Returns a status of 0 or 1 depending on the evaluation of the conditional expression EXPRESSION. Expressions are composed of the same primaries used by the `test' builtin, and may be combined using the following operators: ( EXPRESSION ) Returns the value of EXPRESSION ! EXPRESSION True if EXPRESSION is false; else false EXPR1 && EXPR2 True if both EXPR1 and EXPR2 are true; else false EXPR1 || EXPR2 True if either EXPR1 or EXPR2 is true; else false When the `==' and `!=' operators are used, the string to the right of the operator is used as a pattern and pattern matching is performed. When the `=~' operator is used, the string to the right of the operator is matched as a regular expression. The && and || operators do not evaluate EXPR2 if EXPR1 is sufficient to determine the expression's value. Exit Status: 0 or 1 depending on value of EXPRESSION. More simply said, [ requires one to take the normal care needed forbash expressions, quote to avoid interpolation, etc. So a proper wayof testing for $foo being the empty string, or being unset, would be: [ -z "$foo" ] or [[ -z $foo ]] It's important to quote in the first case, because setting foo="a b" and then testing [ -z $foo ] would result in test -z receiving twoarguments, which is incorrect. The language for [[ .. ]] is different, and properly knows aboutvariables, much in the way one would expect from a higher-levellanguage than bash. For this reason, it is much less error-prone thanclassic [ / test . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146450/"
]
} |
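A compact bash demonstration of the quoting difference described in the answer above (a sketch; foo is an example variable):

foo="a b"

[ -z $foo ]   && echo empty    # unquoted: test receives three arguments and complains
[ -z "$foo" ] && echo empty    # quoted: correct; prints nothing since foo is non-empty
[[ -z $foo ]] && echo empty    # [[ ]] handles the variable itself, no quoting needed

[[ $foo == a* ]] && echo "starts with a"   # pattern matching works inside [[ ]]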
248,173 | Out of curiosity, is this possible nowadays? I remember some old Slackware versions did support a FAT root partition, but I am not sure if this is possible with modern kernels and if there are any distros offering such an option. I am interested in pure DOS FAT (without long-name support), VFAT 16/32 and exFAT. PS: Don't tell me I shouldn't, I am not going to use this in production unless necessary :-) | OK, I tried it. First problem, right from the beginning: NO support for hard and symbolic links. That means I had to copy each file, duplicating it and wasting space. Second problem: no special file support at all. This means things like /dev/console are unavailable at boot time to init, before /dev is even remounted as tmpfs. Third problem: you will lose permission enforcement. But apart from that, there were no issues. My own system booted successfully on a vfat volume. Normally I would not do that either. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
248,191 | I made an Arch Linux ISO USB drive, and I'm trying to restore it usingthe directions from the Arch Linux wiki. After running: $ sudo dd count=1 bs=512 if=/dev/zero of=/dev/sde && sync parted only recognizes one sector with 512 bytes: $ sudo parted /dev/sde -s printError: /dev/sde: unrecognised disk labelModel: (file)Disk /dev/sde: 512BSector size (logical/physical): 512B/512BPartition Table: unknownDisk Flags: And I'm unable to create new partitions: $ sudo parted /dev/sde -s mklabel msdos$ sudo parted /dev/sde -s mkpart primary fat32 0% 100%Error: Can't have the end before the start! (start sector=1 length=0)Error: Unable to satisfy all constraints on the partition. Leaving out the count and bs flags for dd result in only 10MBbeing written, and not the whole disk: $ sudo dd if=/dev/zero of=/dev/sde && syncdd: writing to ‘/dev/sde’: No space left on device20481+0 records in20480+0 records out10485760 bytes (10 MB) copied, 0.0177212 s, 592 MB/s$ sudo parted /dev/sde -s printError: /dev/sde: unrecognised disk labelModel: (file)Disk /dev/sde: 10.5MBSector size (logical/physical): 512B/512BPartition Table: unknownDisk Flags: | OK, I tried it. First two problems from the beginning: NO support for hard and symbolic links. It means that I had to copy each file, duplicating it and wasting space. Second problem: no special file support at all. This means things like /dev/console are unavailable at boot time to init before even /dev is remounted as tmpfs. Third problem: you will loose permissions enforcing. But out of this, there were no issues. My own system was booted successfully on a vfat volume. Normally I would not do that, too. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133093/"
]
} |
248,226 | As a simple example, I have a bunch of source code files. I want to store the "head" command output in a variable for all these files. I tried: output=$(head $file) but what happened is that this automatically trimmed all \n characters when storing the output to a variable. How do I store the command output as is, without removing \n characters? | It is a known flaw of "command expansion" $(...) or `...` that trailing newlines are trimmed. If that is your case: $ output="$(head -- "$file"; echo x)" ### capture the text with an x added. $ output="${output%?}" ### remove the last character (the x). Will correct the value of output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146625/"
]
} |
248,237 | I want to get at the prefix of a variable-length filename. In: some_filename_123-uniq.tar.gzOut: some_filename_123In: some_filename_1-uniq.tar.gzOut: some_filename_1 This does the exact opposite of what I want, -uniq.tar.gz : result=$(echo *-uniq.tar.gz | rev | cut -c-12 | rev)echo $result There will only ever be one -uniq.tar.gz suffixed item. I attempted using parameter substitution which seemed like the easiest way to go but get syntax errors: ${"some_filename_123-uniq.tar.gz"//"-uniq.tar.gz"}bash: ${"some_filename_123-uniq.tar.gz"/""/"-uniq.tar.gz"}: bad substitution | It is a known flaw of "command expansion" $(...) or `...` that the last newline is trimmed. If that is your case: $ output="$(head -- "$file"; echo x)" ### capture the text with an x added.$ output="${output%?}" ### remove the last character (the x). Will correct the value of output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/248237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
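The parameter-expansion syntax the last question was reaching for, sketched with the example filename from the question:

file="some_filename_123-uniq.tar.gz"

# Strip the literal suffix "-uniq.tar.gz" from the end of the value
prefix="${file%-uniq.tar.gz}"
echo "$prefix"                 # -> some_filename_123

# The same expansion applied to a glob match
for f in *-uniq.tar.gz; do
    echo "${f%-uniq.tar.gz}"
done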