source_id | question | response | metadata
---|---|---|---|
271,956 | I have multiple devices connected to 3 USB ports. I can list them, with some identification numbers, through the following commands: # cd /sys/bus/usb/drivers/usb # ls 1-1 1-1.2 1-1.3 1-1.4 bind uevent unbind usb1 usb2 # cat 1-1.2/product My Connected Device I want to put a label on the physical port of "My Connected Device", like "Connect Here", and I need to ensure that the following unbind/bind command can be sent only to that port and that specific device: echo -n "1-1.2" > /sys/bus/usb/drivers/usb/unbind If the hardware is the same, is it reliable to assume that after a reboot the system will always assign 1-1.2 to the same physical USB port? | You won't find a single ISO image, although you could probably build one. The closest you'll get with existing downloads is three Blu-ray disk images, which you'll need to use jigdo to download; see http://cdimage.debian.org/debian-cd/current/amd64/jigdo-bd/ for details. Building a partial mirror is probably more sensible; you can use apt-mirror for that. A full mirror is overkill for your situation. It's doable of course, but it would take up approximately 300GB (for sources, all and amd64 packages)... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/271956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112201/"
]
} |
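Since the sysfs path in the question encodes bus topology, a cheap guard is to verify the product string before unbinding. A minimal sketch built only from the question's own paths (the port 1-1.2 and the product name are the question's examples):

    #!/bin/sh
    # Unbind only if the expected device really sits at the expected port.
    port="1-1.2"
    expected="My Connected Device"
    actual=$(cat "/sys/bus/usb/drivers/usb/$port/product" 2>/dev/null)
    if [ "$actual" = "$expected" ]; then
        printf '%s' "$port" > /sys/bus/usb/drivers/usb/unbind
    else
        echo "Device at $port is '$actual', not '$expected'; aborting." >&2
        exit 1
    fi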
272,004 | I need to write a script that figures out if a reboot has occurred after an RPM has been installed. It is pretty easy to get the epoch time for when the RPM was installed: rpm -q --queryformat "%{INSTALLTIME}\n" glibc | head -1 , which produces output that looks like this: 1423807455 . This cross-checks with rpm -q --info . # date -d@`rpm -q --queryformat "%{INSTALLTIME}\n" glibc | head -1` Fri Feb 13 01:04:15 EST 2015 # sudo rpm -q --info glibc | grep "Install Date" | head -1 Install Date: Fri 13 Feb 2015 01:04:15 AM EST Build Host: x86-022.build.eng.bos.redhat.com But I am getting stumped trying to figure out how to get the epoch time from uptime or from cat /proc/uptime . I do not understand the output of cat /proc/uptime , which on my system looks like this: 19496864.99 18606757.86 . Why are there two values? Which should I use, and why do these numbers have a decimal in them? UPDATE: thanks techraf; here is the script that I will use ... #!/bin/sh now=`date +'%s'` rpm_install_date_epoch=`rpm -q --queryformat "%{INSTALLTIME}\n" glibc | head -1` installed_seconds_ago=`expr $now - $rpm_install_date_epoch` uptime_epoch=`cat /proc/uptime | cut -f1 -d'.'` if [ $installed_seconds_ago -gt $uptime_epoch ]; then echo "no need to reboot"; else echo "need to reboot"; fi I'd appreciate any feedback on the script. Thanks | As manuals (and even Wikipedia ) point out: /proc/uptime shows how long the system has been on since it was last restarted. The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds. On multi-core systems (and some Linux versions) the second number is the sum of the idle time accumulated by each CPU. The decimal point separates seconds from fractions of a second. To calculate the point in time the system booted up using this metric, you would have to subtract the number of seconds the system has been up (the first number) from the current time in epoch format, rounding up the fraction. For example, the boot time stamp in Ruby (accurate to 100ms, limited by /proc/uptime ): `date +%s.%3N`.split[0].to_f - `cat /proc/uptime`.split[0].to_f | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31667/"
]
} |
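The arithmetic described in the answer fits in a few lines of shell; a minimal sketch (glibc is just the question's example package):

    now=$(date +%s)
    up=$(cut -d. -f1 /proc/uptime)              # whole seconds of uptime
    boot_epoch=$((now - up))                    # approximate boot time, epoch seconds
    installed=$(rpm -q --queryformat '%{INSTALLTIME}\n' glibc | head -1)
    if [ "$installed" -gt "$boot_epoch" ]; then
        echo "need to reboot"                   # installed after the last boot
    else
        echo "no need to reboot"
    fi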
272,061 | I am creating an executable which will be run from a /bin/sh or /bin/bash script. I have a config file which contains a structure like the one below; there will be only one #start and one #end tag in the config file, and I want to replace the text in between those tags: ... #start FirewallRuleSet global { FirewallRule allow tcp to google.com FirewallRule allow tcp to facebook.com #more rules } #end FirewallRuleSet known-users { FirewallRule allow to 0.0.0.0/0 } ... The desired output will be: ... #start FirewallRuleSet global { FirewallRule allow tcp to google.com FirewallRule deny tcp to facebook.com FirewallRule deny tcp to twitter.com FirewallRule allow tcp to example.com #more rules } #end FirewallRuleSet known-users { FirewallRule allow to 0.0.0.0/0 } ... How can I replace the whole text between #start and #end with some new text? I just want to add or remove rules in this config file and modify the URLs allowed inside that part of it. | Use sed '/#start/,/#end/ replace_command ' For example, if the file is called myconfig , and you want to replace "allow" with "deny" in that section, you could say sed '/#start/,/#end/s/allow/deny/' myconfig That would leave the file untouched, and display on the standard output what the file would look like after the modification. You should probably do that first, to verify that you've got the command right. If you want to actually change the file, add the -i option: sed -i '/#start/,/#end/s/allow/deny/' myconfig If you want to replace the whole text ( all the text) between those two lines, you can do something slightly simpler than Lucas's answer : sed '/#start/,/#end/c\New text line 1\New text line 2\ ︙ \New text line n -1\New text line n (last)' ← Close quote; no backslash here c is the change command in sed (and ed ); it means "replace entire line(s)". You cannot simply leave the #start and #end lines untouched. If you want to keep them, you must re-insert them: sed -i '/#start/,/#end/c\#start\FirewallRuleSet global {\ FirewallRule allow tcp to google.com\ FirewallRule deny tcp to facebook.com\ ︙ \\#more rules\}\#end' myconfig /#start/,/#end/ specifies a range — the lines from the first line that contains #start through the first line after that that contains #end . If you need to find lines that contain those strings and nothing else, use /^#start$/,/^#end$/ . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89849/"
]
} |
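If the replacement text lives in a file, awk can do the same swap while keeping the marker lines themselves. A sketch offered as an alternative, where new.txt is a hypothetical file holding the new rules:

    awk '/#start/ { print; while ((getline l < "new.txt") > 0) print l; skip=1; next }
         /#end/   { skip=0 }
         !skip' myconfig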
272,140 | I want to install mongodb in debian jessie, but the debian repo only has mongodb 2.4 and I need mongodb 3.x. The mongodb web page's instructions for installing the latest mongodb on debian only support wheezy : sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.2 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list sudo apt-get update sudo apt-get install -y mongodb-org Can I install it this way despite it being for wheezy, or would it be better to install it from the tar.gz and deal with unmet dependencies by hand? | You can follow the instructions on the official MongoDB site. First, import the MongoDB public GPG key: sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 Then add a sources.list configuration file. Debian 7 (wheezy): echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.2 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list Debian 8 (jessie): echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.2 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list Reload the apt cache: sudo apt-get update and finally install the desired version of MongoDB: sudo apt-get install -y mongodb-org=3.2.10 mongodb-org-server=3.2.10 mongodb-org-shell=3.2.10 mongodb-org-mongos=3.2.10 mongodb-org-tools=3.2.10 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8705/"
]
} |
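Pinning =3.2.10 at install time does not stop a later apt-get upgrade from moving the packages forward; holding them does. A sketch:

    for p in mongodb-org mongodb-org-server mongodb-org-shell \
             mongodb-org-mongos mongodb-org-tools; do
        echo "$p hold" | sudo dpkg --set-selections
    done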
272,161 | By looking at a particular line of a text file (say, the 1123rd, see below), it seems that there is a non-breaking space, but I am not sure: $ cat myfile.csv | sed -n 1123p | cut -f2 Lisztes feher $ cat myfile.csv | sed -n 1123p | cut -f2 | od -An -c -b L i s z t e s 302 240 f e h e r \n 114 151 163 172 164 145 163 302 240 146 145 150 145 162 012 However, the ASCII code in octal indicates that a non-breaking space is 240. So what does the 302 correspond to? Is it something particular to this given file? I am asking the question in order to understand. I already know how to use sed to fix my problem, following this answer : $ cat myfile.csv | sed -n 1123p | cut -f2 | sed 's/\xC2\xA0/ /g' | od -An -c -b L i s z t e s f e h e r \n 114 151 163 172 164 145 163 040 146 145 150 145 162 012 For information, the original file is in the .xlsx ( Excel ) format. As my computer runs Xubuntu , I opened it with LibreOffice Calc (v5.1). Then, I saved it as "Text CSV" with "Character set = Unicode (UTF-8)" and tab as the field separator: $ file myfile.csv myfile.csv: UTF-8 Unicode text | It's the UTF-8 encoding of the U+00A0 Unicode character: $ unicode U+00A0 U+00A0 NO-BREAK SPACE UTF-8: c2 a0 UTF-16BE: 00a0 Decimal: &#160; Octal: \0240 Category: Zs (Separator, Space) Bidi: CS (Common Number Separator) Decomposition: <noBreak> 0020 $ locale charmap UTF-8 $ printf '\ua0' | od -to1 0000000 302 240 0000002 UTF-8 is an encoding of Unicode with a variable number of bytes per character. Unicode as a charset is a superset of iso8859-1 (aka latin1), itself a superset of ASCII. While in iso8859-1 the non-breaking-space character (code point 0xa0 in iso8859-1, as in Unicode) would be expressed as a single 0xa0 byte, in UTF-8 only code points 0 to 127 are expressed as one byte (which makes UTF-8 a superset of ASCII, or in other words, ASCII files are also UTF-8 files). Code points from 128 up are encoded with several bytes per character. See Wikipedia for the details of the UTF-8 encoding algorithm. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34919/"
]
} |
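The two bytes follow directly from the UTF-8 packing rule for code points U+0080 to U+07FF (110xxxxx 10xxxxxx); a quick check, assuming bash 4.2+ for \u in printf:

    # 0xA0 = 000 1010 0000 -> 11000010 10100000 = 0xC2 0xA0
    printf '\u00a0' | od -An -tx1    # expect: c2 a0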
272,197 | mv can't move a directory to a destination with a same-name directory: $ mv fortran/ imperative_PLs/ mv: cannot move ‘fortran/’ to ‘imperative_PLs/fortran’: Directory not empty Why does mv not work in this case? Can that be explained from the system calls mv invokes? (Compare to rsync , which can.) Why is mv designed to not work in this case? What is the rationale or point? | mv doesn't work in this case because it's not been designed to do so. The system calls are (probably) either Move to same filesystem: rename (originally link and unlink ) Move across filesystems: recursive file copy followed by recursive unlink Opinion: I think it's not so much that it was designed not to work, as it wasn't designed to handle this use case. For a "simple" tool that's intended to do one thing well, you'd need to provide a set of switches to indicate to mv which of these action paths to take: To bail with an error, as in the current implementation To merge, bailing with an error if a file already exists To merge, replacing any target files that already exist If the merge/replace action is what you want, you can implement it easily enough with cp followed by rm , or by using one of the file tree copying utilities tar , pax , etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
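The merge-then-delete fallback the answer mentions can be sketched in one line (GNU cp assumed; the trailing /. copies the directory's contents, and -a preserves attributes):

    cp -a fortran/. imperative_PLs/fortran/ && rm -rf fortran/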
272,260 | ${$(git rev-parse HEAD):0:5} bash: ${$(git rev-parse HEAD):0:5}: bad substitution git rev-parse HEAD returns the hash id, but how do I make a substring out of it? If I divide it into two lines, it works: x=$(git rev-parse HEAD) echo ${x:0:5} But how do I do it in one line? | Using the --short option: $ git rev-parse --short=5 HEAD 90752 $ x=$(git rev-parse --short=5 HEAD) $ printf '%s\n' "$x" 90752 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137196/"
]
} |
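Where --short isn't available, a portable sketch with cut does the same trimming:

    git rev-parse HEAD | cut -c1-5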
272,264 | I want to mount my home folder on a Synology NAS. I can SSH to the NAS but when I try sshfs , I get this error ' read: Connection reset by peer '. I used the following command: sshfs [email protected]:/volume1/homes/john /my_nas -p 1919 I also tried this path: /var/services/homes/john , but had no success. How can I find/debug the problem? | Just enable the SFTP Service in Control Panel->File Services. Then mount with sshfs username@machine:/homes/username /directory | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138220/"
]
} |
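To find or debug the problem, as the question asks, sshfs can be run in the foreground with its own and ssh's debugging enabled; a sketch reusing the question's path and port (user@nas is a placeholder):

    sshfs -o debug,sshfs_debug user@nas:/volume1/homes/john /my_nas -p 1919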
272,271 | I changed to my home directory and typed ls *bash* , expecting it to show a list of files containing "bash". However, it did not. Typing ls .bash* works, though. According to the documentation, * stands for any character, right? But it seems that it doesn't match a leading . in this case. What is wrong? | The dotglob shell option controls this: $ shopt -s dotglob $ ls *bash* .bash_history .bash_logout .bashrc It is not enabled by default, probably as a usability/safety measure, since most end users don't have to worry about dotfiles and could very easily delete critical home directory files (such as .config , .ssh ) by accident. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31374/"
]
} |
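If dotglob shouldn't stay on for the whole session, it can be scoped to a subshell; a small sketch:

    ( shopt -s dotglob; ls *bash* )   # the option dies with the subshell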
272,288 | I have a script which, when executed, will produce some files. I modified the script by adding the following two lines at its top: mkdir ABC cd ABC so that the new files will be created inside the directory ABC . My question is: how can I add the current date to this directory name ABC so that it becomes ABC_mar_26 (no specific criteria on date format, ABC_03_26 is also okay) if I run the script on March 26th. | To get ABC_03_26 : mkdir "ABC_$(date +'%m_%d')" If you want the month name: mkdir "ABC_$(LC_ALL=C date +'%b_%d')" Note that %b gives you the locale's abbreviated month name, but with the first letter capitalized. With zsh , you can: mkdir "ABC_${(L):-$(LC_ALL=C date +'%b_%d')}" or using prompt expansion : $ LC_ALL=C; print -rl -- ${(L)${(%):-%D{%b_%d}}} mar_26 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116246/"
]
} |
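In bash 4+ the lowercase month name can be had without zsh, using the ,, case conversion; a sketch:

    m=$(LC_ALL=C date +'%b_%d')
    mkdir "ABC_${m,,}"    # e.g. ABC_mar_26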
272,317 | Consider this: $ cd /tmp $ echo "echo YES" >> prog/myprog $ chmod +x prog/myprog $ prog/myprog YES $ myprog myprog: command not found I can temporarily modify PATH to call myprog by name like this: $ PATH="$PATH":$(readlink -f prog) myprog YES ... however I cannot chain commands with this approach: $ PATH="$PATH":$(readlink -f prog) myprog && myprog YES myprog: command not found ... the modified PATH apparently didn't propagate to the second invocation. I'm aware I could do this: $ PATH="$PATH":$(readlink -f prog) bash -c "myprog && myprog" YES YES ... but then I have to invoke an extra bash process - and even worse, I have to quote. Is there any way to append to PATH temporarily for chained commands in a one-liner, without having to invoke an extra bash and quote? Tried backticks; they don't work: $ PATH="$PATH":$(readlink -f prog) `myprog && myprog` myprog: command not found | How about using a subshell: $ (PATH="$PATH:$(readlink -f prog)"; myprog && myprog) YES YES | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8069/"
]
} |
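For repeated use, the subshell trick can be wrapped once; a hypothetical helper (the name withpath is illustrative) that runs a single command with an augmented PATH:

    withpath() {    # usage: withpath DIR CMD [ARGS...]
        ( PATH="$PATH:$1"; shift; "$@" )
    }
    withpath "$(readlink -f prog)" myprog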
272,329 | I recently picked up two Mellanox ConnectX-2 10GBit NICs for dirt cheap. I'm trying to get these to be recognized by a pfSense box. The BIOS recognizes the NICs, no problem whatsoever. However, pfSense definitely doesn't. It's not showing up at all. I've heard that I can install the driver myself or at least throw some kind of magical peanut butter some place to make it work out. Sadly, the only evidence of this I have is from this post about FreeNAS . This wasn't applicable to my install though. I also managed to find a post on NAS4Free , but I could not find the ISO mentioned in the post, nor what version of BSD it was based off of. Maybe I missed out on something on the NAS4Free site, but it's not there. I've tried a few snapshots and the current 2.2.6 official release. No luck. Can anyone point me in the right direction with this? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149218/"
]
} |
272,353 | I am writing an installation script that will be run as /bin/sh . There is a line prompting for a file: read -p "goat may try to change directory if cd fails to do so. Would you like to add this feature? [Y|n] " REPLY I would like to break this long line into many lines so that none of them exceed 80 characters. I'm talking about the lines within the source code of the script; not about the lines that are to be actually printed on the screen when the script is executed! What I've tried: First approach: read -p "oat may try to change directory if cd fails to do so. " \ "Would you like to add this feature? [Y|n] " REPLY This doesn't work since it doesn't print Would you like to add this feature? [Y|n] . Second approach: echo "oat may try to change directory if cd fails to do so. " \ "Would you like to add this feature? [Y|n] " read REPLY Doesn't work as well. It prints a newline after the prompt. Adding the -n option to echo doesn't help: it just prints: -n goat oat may try to change directory if cd fails to do so. Would you like to add this feature? [Y|n] # empty line here My current workaround is printf '%s %s ' \ "oat may try to change directory if cd fails to do so." \ "Would you like to add this feature? [Y|n] " read REPLY and I wonder if there is a better way. Remember that I am looking for a /bin/sh compatible solution. | First of all, let's decouple the read from the text line by using a variable: text="line-1 line-2" ### Just an example. read -p "$text" REPLY In this way the problem becomes: How to assign two lines to a variable. Of course, a first attempt to do that, is: a="line-1 \ line-2" Written as that, the var a actually gets the value line-1 line-2 . But you do not like the lack of indentation that this creates; well, then we may try to read the lines into the var from a here-doc (be aware that the indented lines inside the here-doc need a tab, not spaces, to work correctly): a="$(cat <<-_set_a_variable_ line-1 line-2 _set_a_variable_ )" echo "test1 <$a>" But that would fail as actually two lines are written to $a . A workaround to get only one line might be: a="$( echo $(cat <<-_set_a_variable_ line 1 line 2 _set_a_variable_ ) )" echo "test2 <$a>" That is close, but creates other additional issues. Correct solution. All the attempts above will just make this problem more complex than it needs to be. A very basic and simple approach is: a="line-1" a="$a line-2" read -p "$a" REPLY The code for your specific example is (for any shell whose read supports -p ): #!/bin/dash a="goat can try change directory if cd fails to do so." a="$a Would you like to add this feature? [Y|n] " # absolute freedom to indent as you see fit. read -p "$a" REPLY For all the other shells, use: #!/bin/dash a="goat can try change directory if cd fails to do so." a="$a Would you like to add this feature? [Y|n] " # absolute freedom to indent as you see fit. printf '%s' "$a"; read REPLY | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/272353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128489/"
]
} |
272,412 | I want to install the latest version of GCC (GNU compiler collection) in Linux Mint 17.3. Currently g++ --version returns 4.8.4, whereas the latest stable release is 5.3. | Your Linux Mint comes pre-installed with a GCC package, so first check whether the package is already present on your system by typing the following command in a terminal: apt-cache search gcc If you don't have the package, first add the following repository: sudo add-apt-repository ppa:ubuntu-toolchain-r/test then run the next commands: sudo apt-get update sudo apt-get install g++-4.7 c++-4.7 There is always a basic skill we should learn; take it as a prerequisite for Linux: learn to search, and try a more thorough search before asking. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56755/"
]
} |
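The same PPA also carries the 5.x series the question actually asks about; a sketch, with the package names assumed from the toolchain PPA's usual naming:

    sudo apt-get install gcc-5 g++-5
    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 60 \
        --slave /usr/bin/g++ g++ /usr/bin/g++-5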
272,433 | On this system: $ cat /etc/issue Ubuntu 14.04.4 LTS \n \l $ uname -a Linux mypc 3.19.0-56-generic #62~14.04.1-Ubuntu SMP Fri Mar 11 11:03:33 UTC 2016 i686 i686 i686 GNU/Linux $ apt-show-versions -r moreutils liblist-moreutils-perl:i386/trusty 0.33-1build3 uptodate moreutils:i386/trusty 0.50 uptodate ... I can use date with nanosecond precision, no problem: $ date +'%Y%m%d%H%M%S' 20160327133441 $ date +'%Y%m%d%H%M%S%N' 20160327133446582969969 ... however, if I use the same format string with moreutils ' ts , the nanosecond precision fails: $ ping google.com | ts '%Y%m%d%H%M%S%N' 20160327133829%N PING google.com (216.58.209.110) 56(84) bytes of data. 20160327133829%N 64 bytes from arn06s07-in-f14.1e100.net (216.58.209.110): icmp_seq=1 ttl=54 time=32.0 ms 20160327133830%N 64 bytes from arn06s07-in-f14.1e100.net (216.58.209.110): icmp_seq=2 ttl=54 time=31.6 ms ^C Any way to get ts to show nano- (or micro-) second precision when prepending timestamps to stdin? | Starting from moreutils 0.31, the %.S specifier is available; use it instead of %S : ping google.com | ts '%Y%m%d-%H:%M:%.S' 20160327-15:01:11.361885 PING google.com (216.58.209.206) 56(84) bytes of data. 20160327-15:01:11.362056 64 bytes from bud02s22-in-f206.1e100.net (216.58.209.206): icmp_seq=1 ttl=57 time=26.3 ms 20160327-15:01:12.314243 64 bytes from bud02s22-in-f206.1e100.net (216.58.209.206): icmp_seq=2 ttl=57 time=26.2 ms 20160327-15:01:13.315651 64 bytes from bud02s22-in-f206.1e100.net (216.58.209.206): icmp_seq=3 ttl=57 time=26.3 ms | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8069/"
]
} |
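Where only an older ts is available, a timestamp can be bolted on in the shell, at the cost of one date fork per line (so sub-millisecond accuracy is not guaranteed); a sketch:

    ping google.com | while IFS= read -r line; do
        printf '%s %s\n' "$(date +'%Y%m%d-%H:%M:%S.%N')" "$line"
    done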
272,459 | (Edit: This applies to much more than postfix; it's just where I noticed/debugged it) I've installed postfix, but when it starts up and creates its chroot, it gets an empty copy of /etc/resolv.conf , which means it can't resolve any domains. I added some logging in various network scripts to see when resolv.conf is being wiped/re-populated and when postfix is starting... Here's the log from a boot: Sun Mar 27 19:12:30 UTC 2016 EXECUTE: root + /sbin/resolvconf2 -d eth0 -f Sun 27 Mar 19:12:31 UTC 2016 Postfix startup script Sun Mar 27 19:12:37 UTC 2016 EXECUTE: root + /sbin/resolvconf2 -a eth0 Note there are 7 seconds between resolvconf being called to wipe the config and then re-populate it. For this period, /etc/resolv.conf is effectively empty. It's between these calls that postfix (and many other services) starts up. It seems strange that services are being started in this huge gap between resolvconf being cleared/recreated. This is a clean install of Raspbian with Postfix installed and no other changes. EDIT : Looking in syslog, there are actually tons of things failing due to no DNS in the period between dhcpcd starting and finishing. Seems flawed that other services are trying to start concurrently? | Ok, after many wasted hours, I found this in raspi-config... So it seems this is broken-by-design. The default of "fast boot" comes at the expense of random failure. On a clean Raspbian install, even without postfix installed, my syslog contains lots of DNS errors from various scripts during the DHCP process. So, the fix is to set this to "Slow" boot, which creates a script that waits for the network at boot. Edit : You can script the call to raspi-config like this: sudo raspi-config nonint do_wait_for_network Slow This fixes both the postfix issue that I noticed, and also clears up a ton of DNS-related errors normally written to syslog at boot. I think as default behaviour, this is crazy. I've posted feedback on GitHub . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84496/"
]
} |
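On systemd-based images the wait can also be scoped to a single service instead of the whole boot; a drop-in sketch (the unit name postfix.service is assumed):

    # /etc/systemd/system/postfix.service.d/wait-online.conf
    [Unit]
    Wants=network-online.target
    After=network-online.target

    # then: sudo systemctl daemon-reload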
272,476 | I'm trying to create a symbolic link to /home/user/www from /var/www/html so I can access the directory in my home folder from the URL: http://localhost/www/ I did: ln -s /home/user/www /var/www/html but when I access the URL above, the server returns 403 Forbidden . The directory /home/user/www has permissions 775. I'm on Ubuntu 14.04. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163033/"
]
} |
272,491 | I make a list of all files on my PC: FILES=$(find . -type f -name '*' -printf "%s/%f\n" | sort -n) The output should be ("size/file_name"): 56872/file.txt 98566/test 1000254/foo My PC gives me the error find: -printf: unknown primary or operator Any solutions? | The -printf option is not in POSIX find . It is a feature of GNU find , e.g., on Linux. The particular implementation you are using is not shown; it might be POSIX without extensions. For instance, it is not in FreeBSD , or OSX . Without that, you can use some alternative, e.g., this (which will not handle embedded blanks, etc., but makes few assumptions about your tools): find . -type f -exec ls -ld {} \; | awk '{ gsub("^.*/","",$9); printf "%s/%s\n", $5, $9; }' With more information about the available tools, it is (usually) possible to improve the solution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163039/"
]
} |
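On FreeBSD or OS X specifically, BSD stat can replace the ls/awk pipeline; a sketch (%z is the size and %N the name, printed as the full path rather than the basename):

    find . -type f -exec stat -f '%z/%N' {} + | sort -n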
272,492 | I have the following pxelinux config: DEFAULT vesamenu.c32 PROMPT 0 MENU TITLE In The Moon Network LABEL install1404server MENU LABEL Install Ubuntu 14.04.1 Server AMD64 include ubuntu-installer/amd64/boot-screens/menu.cfg default ubuntu-installer/amd64/boot-screens/vesamenu.c32 All mentioned files are accessed by tftpd. When I do a network boot, my menu appears. When I select the (single) item, the following error message appears: Failed to load COM32 file ubuntu-installer/amd64/boot-screens/vesamenu.c32 Booting happens on a virtual machine. What is happening to cause this error message? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28089/"
]
} |
272,496 | Hello, I just installed Debian and have only one user, since it's my personal computer. I would like all software (X, Xfce, terminal, vim, shell) to always have the same configuration, without me having to manually configure everything twice, once for the user and once for root. Is there a convenient way to have the same configuration for both accounts? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149443/"
]
} |
272,506 | man ssh says: SSH_ASKPASS If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. I'd like SSH to use an askpass program even if it was run from a terminal. On occasion, I have to connect to servers where there's some delay in showing a password prompt (maybe due to network issues, maybe due to attempted reverse DNS lookups, …). I get annoyed and switch to something else, and forget about the attempted connection. (Insert joke about attention span of a goldfish.) When I finally get back to it, the prompt's timed out and even a correct password would just result in a closed connection. Keys would be one solution, but not every system I use has my usual SSH keys. However, I usually use Ubuntu systems, and Ubuntu has an SSH askpass program installed by default. If an askpass window popped up, however, I'd be immediately aware of it. That's a good enough compromise for me, if only I can get it to work. | This will be a bit more complicated, but a combination of several pieces will make it work: Explanation To force ssh to use the $SSH_ASKPASS program, you can't allow ssh to see the real tty . That is simply the precondition. This can be done by using setsid and the -n switch to ssh . This initiates the connection, but you would not be able to interact with the shell, which is probably also your requirement ;) (and it also breaks your local TTY). But you can give up the "first session". You should also add the -N switch, which will suppress the remote command and will do just the authentication . Also, the possible output "junk" can be redirected to &> /dev/null if you are not interested in it. Set up ControlMaster in ssh_config . It is a cool feature, and once the connection is established, you can "fire up" sessions pretty fast. This snippet in ~/.ssh/config should do that: ControlPath ~/.ssh/controlmasters/%r@%h:%p ControlMaster auto ControlPersist 5m You can add that into some host block listing your "slow candidates", or just everywhere. It is almost no overhead. Final line Then you should be able to connect in this way to the host you expect will take a while: setsid ssh -nN host # wait, insert password in the X11 prompt ssh host # will bring you directly to your session The whole process might be simplified by an alias or bash function doing both in one step, but that is left to the reader's imagination. Only command-line arguments You can join both things together on the command line without the ssh_config part: setsid ssh -nNMS ~/.ssh/masters/%C host # wait, insert password in the X11 prompt ssh -S ~/.ssh/masters/%C host # will bring you directly to your session The following function should work when SSH options aren't specified: ssh() { if ! command ssh -o PasswordAuthentication=no "$1" true; then setsid -w ssh -fnN "$1"; fi; command ssh "$@"; } -f instructs SSH to go to the background just before program execution, which is after it has got the password. -w tells setsid to wait for the program to end. In this case, that happens when SSH goes to the background. Combined with ssh -f , the manual wait between the two SSH commands can be eliminated. The function assumes the first argument is the hostname. The test is just to prevent unnecessary SSH connections. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70524/"
]
} |
272,541 | If I have a pre-existing ENV variable, is it possible to declare a new local variable yet have the child process use the original value? Keep in mind, I don't know which variables the child process will use, so saving the value of the original and doing var=original mycmd -with -args won't be sufficient. # print.sh: echo "from print.sh: $MY_VAR" # local.sh: run () { declare MY_VAR="should not be seen" bash print.sh } MY_VAR="original" run The above prints: from print.sh: should not be seen instead of: from print.sh: original even if I use local , declare , etc. in the function body. I was hoping declare/local/typeset had some options, but I haven't found any that say: set a value for this variable locally, but let child processes use the original. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42107/"
]
} |
272,580 | For a class on IT security, I want to demonstrate privilege escalation to the students. To do so, I looked through the exploit/linux/local list in the Metasploit Framework, finding (among others) exploit/linux/local/sock_sendpage from August 2009. I set up a VM with 32-bit Ubuntu Server 9.04 ( http://old-releases.ubuntu.com/releases/9.04/ubuntu-9.04-server-amd64.iso ) from April 2009. uname -r gives me 2.6.28-11-generic . According to the exploit's description: All Linux 2.4/2.6 versions since May 2001 are believed to be affected: 2.4.4 up to and including 2.4.37.4; 2.6.0 up to and including 2.6.30.4 So it seems that the Ubuntu server that I set up should be suitable for the demonstration. However, I was not able to get it to work. I added a (regular) user on the server and SSH access works. From within the Metasploit Framework, I can create an SSH session using auxiliary/scanner/ssh/ssh_login . However, when I run the exploit, I get [*] Writing exploit executable to /tmp/mlcpzP6t (4069 bytes) [*] Exploit completed, but no session was created. I don't get any further information, not even when setting DEBUG_EXPLOIT to true. /tmp is writable, also from within the Metasploit SSH session: $ sessions -c "touch /tmp/test.txt" [*] Running 'touch /tmp/test.txt' on shell session 1 ([redacted]) $ sessions -c "ls -l /tmp" [*] Running 'ls -l /tmp' on shell session 1 ([redacted]) total 0 -rw-r--r-- 1 [redacted] [redacted] 0 2016-03-28 09:44 test.txt I also tried setting WriteableDir to the user's home directory on the server, but without any changes. What am I missing here? Is this version of Ubuntu server (that I have deliberately not updated!) not vulnerable? | The 9.04 release was supported until October 23, 2010. The vulnerability you found was reported in August 2009. It seems reasonable that, since the release was still current and supported at the time, the ISO was patched and what you downloaded was a version that is no longer vulnerable. Furthermore, you seem to have demonstrated quite nicely that it isn't vulnerable. After all, you tried the exploit and it looks like it failed. Why don't you try a newer exploit? Something like CVE-2013-2094 , which should also affect Ubuntu , for example. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98962/"
]
} |
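If the newer CVE is the route taken, Metasploit ships a local module for it; a sketch of the msfconsole steps, with the module name perf_swevent stated from memory and worth confirming via search first:

    msf > search 2013-2094
    msf > use exploit/linux/local/perf_swevent
    msf exploit(perf_swevent) > set SESSION 1
    msf exploit(perf_swevent) > run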
272,602 | I'm trying to install some packages on my server (Debian 8) and for some of them I always have dependency problems. For example, I have executed this command: apt-get install jetty9 And it answers that I've asked for an impossible situation and says that the dependency libjetty9-extra-java is not satisfied. I can't understand why. Can someone tell me why I get such errors? Here is the error : Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: jetty9 : Depends: libjetty9-extra-java (>= 9.2.14-1~bpo8+1) but it is not going to be installed E: Unable to correct problems, you have held broken packages. And this is my sources.list : #deb http://debian.mirrors.ovh.net/debian/ jessie main #deb-src http://debian.mirrors.ovh.net/debian/ jessie main deb http://security.debian.org/ jessie/updates main deb-src http://security.debian.org/ jessie/updates main # jessie-updates, previously known as 'volatile' deb http://debian.mirrors.ovh.net/debian/ jessie-updates main deb-src http://debian.mirrors.ovh.net/debian/ jessie-updates main # jessie-backports, previously on backports.debian.org deb http://debian.mirrors.ovh.net/debian/ jessie-backports main deb-src http://debian.mirrors.ovh.net/debian/ jessie-backports main deb http://debian.mirrors.ovh.net/debian/ jessie main contrib non-free deb-src http://debian.mirrors.ovh.net/debian/ jessie main contrib non-free | Your jetty9 package is using the backports, as can be seen from the bpo8 string. As you already have jessie-backports configured in sources.list, do: sudo apt-get update sudo apt-get -t jessie-backports install jetty9 The -t jessie-backports option is a hint to apt to use the jessie-backports repository. Also check: https://packages.debian.org/jessie-backports/jetty9 backports.debian.org "Backports are packages taken from the next Debian release (called "testing"), adjusted and recompiled for usage on Debian stable. Because the package is also present in the next Debian release, you can easily upgrade your stable+backports system once the next Debian release comes out." | {
"source": [
"https://unix.stackexchange.com/questions/272602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163116/"
]
} |
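Before installing, apt-cache can show which repository would satisfy each package, which makes unmet-dependency errors like this one easier to read; a sketch:

    apt-cache policy jetty9 libjetty9-extra-java   # candidate versions per repo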
272,611 | I mistakenly entered chsh -s /usr/bin instead of chsh -s /bin/bash and now I can't log into a root shell; how do I start a bash shell as root manually? | While root does not have access, a user in the sudo group can still run privileged commands - it seems the error is not in sudo, but elsewhere in the sudo chsh command (e.g. chsh error). As such your sudo is apparently working. The passwd file can be edited with: sudo vipw And the root shell changed manually. (first line of /etc/passwd usually) root:x:0:0:root:/root:/bin/bash From man vipw
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/272611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149443/"
]
} |
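Since sudo still works, the broken shell can also be repaired without editing passwd by hand; two equivalent sketches:

    sudo chsh -s /bin/bash root       # set root's shell back directly
    sudo usermod -s /bin/bash root    # same effect via usermod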
272,617 | I am writing a program that will be sending external emails using postfix with the mail -s command, and I need to verify that the email was sent to the specified address. In which case, I am curious whether postfix reports an error right away, as a return code I can check, if an email fails to send, or whether postfix will report success as long as a valid email ([email protected]) was entered and only later record in a log file that the email could not be sent. Also, if the network is down, will postfix still return success as long as a valid email address is used, or will a failure be reported right away? | | {
"source": [
"https://unix.stackexchange.com/questions/272617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72302/"
]
} |
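A general postfix note, stated as an assumption about default behaviour rather than a verified answer: mail normally exits 0 as soon as the message is accepted into the local queue, so delivery failures surface later, in the queue and the mail log. A sketch of where to look (the log path varies by distribution):

    mailq                                     # anything still queued or deferred?
    grep 'status=' /var/log/mail.log | tail   # sent / bounced / deferred per message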
272,662 | I spent a few days writing a python script and creating a systemd unit file for it. During testing, the script logged a lot of errors to journald. I would like to clear those errors from journald now that I'm done. There are several ways to clear the entire journal, as described here: How to clear journalctl including using journalctl --vacuum-time=2d , using journalctl --vacuum-size=500M , and temporarily setting SystemMaxUse= in /etc/systemd/journald.conf to a very low value. All of these appear to clear the entire journal, affecting all units. I just need to clear the entries for a single unit. Is this possible? | Use my Python 3 program copy_journal.py on the journal files in /var/log/journal from which you want to remove entries. For instance, to make a copy of system.journal without log entries for NetworkManager.service : $ journalctl --file=system.journal | wc 167 1934 18825 $ journalctl --file=system.journal | grep -v NetworkManager | wc 77 881 8421 $ python3 copy_journal.py --remove-unit=NetworkManager.service system.journal system-without-nm.journal $ journalctl --file=system-without-nm.journal | wc 77 881 8421 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91667/"
]
} |
272,698 | I try to initialize an array in bash 4.2 in the following way: ring=() ls -las | tail -n +4 | while read line > do > ring+=("$line") > echo ${ring[-1]} > done 3924 -rw-r--r-- 1 username group 4015716 Mar 23 15:14 script.jar 4 -rw-r--r-- 1 username group 9 Feb 29 12:40 rec.lst 5541 -rw-r--r-- 1 username group 5674226917 Mar 28 15:25 debug.out 8 -rw-r--r-- 1 username group 6135 Mar 25 12:16 script.class 8 -rw-r--r-- 1 username group 6377 Mar 25 11:57 script.java 8 -rwxr-xr-x 1 username group 4930 Mar 8 15:21 script-0.0.0.sh 8 -rwxr-xr-x 1 username group 6361 Mar 28 15:27 script-0.0.1.sh echo ${ring[0]} echo "${ring[0]}" echo "${ring[@]}" What is wrong? Why do I get an empty array after the loop finishes? | Your problem is that in a pipeline ( command1 | command2 | command3 ... ) the commands are run in subshells. Variables are not shared between subshells, or between subshells and the main shell. The ring in the while loop is different from the ring in the main shell. One way to overcome this is to use process substitution: while read line; do ring+=("$line"); echo ${ring[-1]}; done < <(ls -las | tail -n +4) The <(command) syntax is called process substitution and will redirect the output of the command to a named pipe, which is then redirected with the familiar < as if it were a file. When you use < , there is no subshell, so the ring variable will be set. Note that there is a shell builtin command to fill an array from the lines of a file: mapfile -t ring < <(ls -las | tail -n +4) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163160/"
]
} |
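Since the question targets bash 4.2, shopt -s lastpipe is another fix: it runs the last pipeline stage in the current shell (job control must be off, as it is in non-interactive scripts). A sketch:

    #!/bin/bash
    shopt -s lastpipe
    ring=()
    ls -las | tail -n +4 | while read -r line; do ring+=("$line"); done
    printf '%s\n' "${ring[@]}"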
272,765 | I want to extract the access day of a file in a shell script. I tried writing the following script, but it doesn't work. When I echo accessday , the output is "staff". file=$1 accessday=$(ls -lu $file | cut -d ' ' -f 6) | Don't parse ls . This is a job for stat . To get the last access time in human-readable format: stat -c '%x' file.txt In seconds since the epoch: stat -c '%X' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150680/"
]
} |
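The "staff" output suggests a BSD userland (macOS), where ls columns differ and GNU stat -c is unavailable; the BSD stat equivalents, as a sketch:

    stat -f '%Sa' file.txt   # human-readable access time
    stat -f '%a' file.txt    # access time in seconds since the epoch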
272,779 | Last weekend we had to change the time from 02:00 to 03:00. Question: what would happen if there was a cronjob at 02:30? crond is a very old solution for scheduling, so it should probably handle this, but I don't know how. | It probably depends on your cron implementation, but the popular Vixie cron states in the manual: cron then wakes up every minute, examining all stored crontabs, checking each command to see if it should be run in the current minute. and Special considerations exist when the clock is changed by less than 3 hours, for example at the beginning and end of daylight savings time. If the time has moved forwards, those jobs which would have run in the time that was skipped will be run soon after the change. Conversely, if the time has moved backwards by less than 3 hours, those jobs that fall into the repeated time will not be re-run. Only jobs that run at a particular time (not specified as @hourly, nor with '*' in the hour or minute specifier) are affected. Jobs which are specified with wildcards are run based on the new time immediately. Since the DST change was less than 3 hours, your program would run shortly after 3:00 AM. I am not sure if this is Vixie cron specific behaviour; I seem to recall this is how my PDP-11 worked back in the 80s as well, but I am not sure. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/272779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112826/"
]
} |
272,850 | With logical I mean everything legal in the command ip link , as in, for instance: ip link add link dum0 name dum0.200 type vlan protocol 802.1Q id 200 where the logical type would be "vlan". All valid types are, to quote the man page: vlan | veth | vcan | dummy | ifb | macvlan | macvtap | can | bridge | ipoib | ip6tnl | ipip | sit | vxlan | gre | gretap | ip6gre | ip6gretap | vti Note that this clearly is not the physical device type (like ethernet, wifi, ppp etc.) as asked in this question , which does contain a gem of a reference to the physical type which led me to test for it : find /sys/class/net ! -type d | xargs --max-args=1 realpath | while read d; do b=$(basename $d) ; n=$(find $d -name type) ; echo -n $b' ' ; cat $n; done dum0.200 1 dum0.201 1 dum1.300 1 dum1.301 1 dummy0 1 ens36 1 ens33 1 lo 772 dum0 1 dum1 1 wlan0 1 But that apparently finds dummy, vlan and wlan devices alike to be of type ARPHRD_ETHER. Does somebody know more? Thanks in advance. ==== Revising this in 2022 . It's from a system with two real ethernet interfaces, one wifi, docker installed but inactive, and libvirt with two networks and five virtual machines. The jq is from stedolan.github.io/jq, commonly installed with a decent package manager. $ ( sudo ip -details -j l | jq -r '.[]|"@", .ifname, .link_type, .linkinfo.info_data.type, .linkinfo.info_kind, .linkinfo.info_slave_kind' | tr '\n' ' ' | tr '@' '\n' ; echo ) | column -t lo loopback null null null enp43s0 ether null null null wlp0s20f3 ether null null null docker0 ether null bridge null virbr2 ether null bridge null virbr1 ether null bridge null enx00e04c680108 ether null null null vnet0 ether tap tun bridge vnet1 ether tap tun bridge vnet2 ether tap tun bridge vnet3 ether tap tun bridge vnet4 ether tap tun bridge vnet5 ether tap tun bridge vnet6 ether tap tun bridge vnet7 ether tap tun bridge vnet8 ether tap tun bridge vnet9 ether tap tun bridge | A simpler solution: ip -details link show For virtual devices, the device type is shown on the third line. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/272850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44864/"
]
} |
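The same detail is available per interface, machine-readably, through the JSON output the 2022 addendum already uses; a sketch:

    ip -details -j link show dev dum0.200 | jq -r '.[0].linkinfo.info_kind'   # -> vlan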
272,858 | If I want to move a file called longfile from /longpath/ to /longpath/morepath/ , can I do something like mv (/longpath)/(longfile) $1/morepath/$2 , i.e. can I let bash know that it should remember a specific part of the input so I can reuse it later in the same input? (In the above example I use an imaginary remembering command that works by enclosing parts in parentheses, and an imaginary reuse command $ that inserts the content of the groups.) | You could do this: mv /longpath/longfile !#:1:h/morepath/ See https://www.gnu.org/software/bash/manual/bashref.html#History-Interaction !# is the current command :1 is the first argument in this command :h is the "head" -- think dirname /morepath/ appends that to the head and you're moving a file to a directory, so it keeps the same basename. If you want to alter the "longfile" name, say add a ".txt" extension, you could mv /longpath/longfile !#:1:h/morepath/!#:1:t.txt Personally, I would cut and paste with my mouse. In practice I never get much more complicated than !! or !$ or !!:gs/foo/bar/ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/272858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106476/"
]
} |
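For this particular move, brace expansion avoids retyping the common prefix without any history expansion at all; a sketch:

    mv /longpath/{longfile,morepath/}   # expands to: mv /longpath/longfile /longpath/morepath/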
272,868 | How can I use youtube-dl to download videos from a playlist URL in mp4 format only, instead of .mkv or .webm ? I use this command to download videos: youtube-dl -itcv --yes-playlist https://www.youtube.com/playlist?list=.... The results of this command are videos with extension .mp4 , .mkv or .webm | To list the available formats type: youtube-dl -F url Then you can choose to download a certain format-type by entering the number for the format code (in the sample below, 11 ): youtube-dl -f 11 url Example from webupd8: youtube-dl -F http://www.youtube.com/watch?v=3JZ_D3ELwOQ sample output: [youtube] Setting language [youtube] 3JZ_D3ELwOQ: Downloading webpage [youtube] 3JZ_D3ELwOQ: Downloading video info webpage [youtube] 3JZ_D3ELwOQ: Extracting video information [info] Available formats for 3JZ_D3ELwOQ: format code extension resolution note 171 webm audio only DASH webm audio , audio@ 48k (worst) 140 m4a audio only DASH audio , audio@128k 160 mp4 192p DASH video 133 mp4 240p DASH video 134 mp4 360p DASH video 135 mp4 480p DASH video 136 mp4 720p DASH video 137 mp4 1080p DASH video 17 3gp 176x144 36 3gp 320x240 5 flv 400x240 43 webm 640x360 18 mp4 640x360 22 mp4 1280x720 (best) You can choose the best and type youtube-dl -f 22 http://www.youtube.com/watch?v=3JZ_D3ELwOQ To get the best video quality (1080p DASH - format "137") and best audio quality (DASH audio - format "140"), you must use the following command: youtube-dl -f 137+140 http://www.youtube.com/watch?v=3JZ_D3ELwOQ EDIT You can get more options here Video Selection: --playlist-start NUMBER Playlist video to start at (default is 1) --playlist-end NUMBER Playlist video to end at (default is last) --playlist-items ITEM_SPEC Playlist video items to download. Specify indices of the videos in the playlist separated by commas like: "--playlist-items 1,2,5,8" if you want to download videos indexed 1, 2, 5, 8 in the playlist. You can specify range: "--playlist-items 1-3,7,10-13", it will download the videos at index 1, 2, 3, 7, 10, 11, 12 and 13. --match-title REGEX Download only matching titles (regex or caseless sub-string) --reject-title REGEX Skip download for matching titles (regex or caseless sub-string) --max-downloads NUMBER Abort after downloading NUMBER files --min-filesize SIZE Do not download any videos smaller than SIZE (e.g. 50k or 44.6m) --max-filesize SIZE Do not download any videos larger than SIZE (e.g. 50k or 44.6m) --date DATE Download only videos uploaded on this date --datebefore DATE Download only videos uploaded on or before this date (i.e. inclusive) --dateafter DATE Download only videos uploaded on or after this date (i.e. inclusive) --min-views COUNT Do not download any videos with less than COUNT views --max-views COUNT Do not download any videos with more than COUNT views --match-filter FILTER Generic video filter (experimental). Specify any key (see help for -o for a list of available keys) to match if the key is present, !key to check if the key is not present, key > NUMBER (like "comment_count > 12", also works with >=, <, <=, !=, =) to compare against a number, and & to require multiple matches. Values which are not known are excluded unless you put a question mark (?) after the operator. For example, to only match videos that have been liked more than 100 times and disliked less than 50 times (or the dislike functionality is not available at the given service), but which also have a description, use --match-filter "like_count > 100 & dislike_count <? 50 & description" . --no-playlist Download only the video, if the URL refers to a video and a playlist. --yes-playlist Download the playlist, if the URL refers to a video and a playlist. --age-limit YEARS Download only videos suitable for the given age --download-archive FILE Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it. --include-ads Download advertisements as well (experimental) | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/272868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150122/"
]
} |
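For the question as asked, forcing mp4 across a whole playlist, youtube-dl's format selector expresses it directly; a sketch using the documented selector syntax:

    youtube-dl --yes-playlist \
        -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' \
        'https://www.youtube.com/playlist?list=...'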
272,871 | I have a file that looks like this: foo03a foo02b quux01a foo01a foo02a foo01b foo03b quux01b I'd like it ordered by the last character (so a and b appear together), then by the preceding number, and then by the prefix (though this is not essential), so that it results in: foo01a quux01a foo02a foo03a foo01b quux01b foo02b foo03b It actually doesn't particularly matter where quux01a and quux01b appear, as long as they're in the relevant group -- they can appear as shown, before foo01b , or after foo03b . Why? These are server names used in a blue/green deployment, so I want the 'A' servers together, then the 'B' servers. I found the -k switch to GNU sort, but I don't understand how to use it to specify a particular character, counting from the end of the string. I tried cat foos | rev | sort | rev , but that sorts foo10a and foo10b (when we count up that far) into the wrong place. | | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/272871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46851/"
]
} |
272,881 | I recently tried to install OpenBSD to my Soekris net4526, but the 64MB onboard storage is too small. Is there any way to make OpenBSD smaller, because even the smallest configuration (bsd and baseXX.tgz only) doesn't fit. I tried with OpenBSD 3.9. Can you give me some links? | The good news is it can be done, but you'll need to know what you're doing and you won't be able to ask for any help on the openbsd mailing lists. You'll need:
- a more powerful build machine than your soekris
- a list of things to delete, which will be based on whatever compromises you're willing to make. (You haven't given any detail about what you're planning to use this machine for.)
I just downloaded the latest base.tgz snapshot. It's 148M in size. Here are some ideas about things you could look to remove from base:
- if you can live without Perl, removing it will save you 54.5M
- without perl, you may as well delete the pkg_* tools and /etc/signify/openbsd-*-pkg.pub files. You can also delete some other odds and ends like fw_update, libexec/security, etc.
- the terminfo database, 5.6M
- /usr/bin/spell, /usr/bin/deroff (only kept around because it's used by spell) and /usr/share/dict will save 3.5M
- prune the zoneinfo, 3M
- /etc/firmware will save 2.3M
- maybe you don't need /sbin/isakmpd which will save 1.8M
- /usr/share/man/ will save 1.3M (a select few man pages are installed in base not in the man set). Without man pages, you may as well delete /usr/bin/man, /usr/bin/mandoc, /etc/examples/man.conf
- you can also probably delete libsqlite for 3M
- delete dig, host, nslookup for 1.4M
- /usr/share/misc will save 1.2M
- cvs will save 0.7M
- /usr/bin/file and /etc/magic will save 0.6M
- texinfo will save 0.5M
- /usr/mdec will save 0.3M
- /var/sysmerge/etc.tgz will save 0.2M
At this point you'll be close. Maybe around 70M of usage, so you'll have to start deleting things you wouldn't use. For example, in /usr/sbin do you need pppd? Do you need httpd? You probably don't need installboot, etc, etc. You will need to go through with a fine tooth comb based on your use case. One other thing you can experiment with is to compile your system with -Os instead of -O2. It might be worth checking if it saves space too. But note -Os is not a well tested gcc code-path on OpenBSD. It would not surprise me if you run into compiler bugs by doing this. So I think the point is, it can be done if you're willing to sink enough time into this as a project. Only you can decide if you want to create such a stripped down version of OpenBSD. And again, don't expect any help from the openbsd mailing lists. People will laugh at you if you ask for help with this project over there.
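As a very rough illustration of the pruning step — the paths here are assumptions from memory, so verify each one against your release before deleting — thinning a staged copy of the base set might look like:
# DESTDIR is a hypothetical staging directory holding the extracted baseXX.tgz
rm -rf "$DESTDIR"/usr/libdata/perl5        # Perl
rm -f  "$DESTDIR"/usr/sbin/pkg_*           # package tools
rm -rf "$DESTDIR"/usr/share/terminfo       # terminfo database
rm -rf "$DESTDIR"/usr/share/zoneinfo       # zoneinfo
rm -rf "$DESTDIR"/usr/share/man            # man pages
Repack the result and test-boot after each batch of removals rather than all at once. | {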
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/272881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157655/"
]
} |
273,001 | I am trying to automate these steps so that I don't need to do this on every machine manually. I need to install the latest app server software (abc.tar.gz) on all the unix boxes. I need to do this on around 12 machines - "machine1" ... "machine12" . I have a master machine "machine0" which has the abc.tar.gz file, so I was thinking to run my script from this machine only and install the abc.tar.gz software by following the below steps on all those 12 machines one by one. My unix account id is david and my process runs as the golden user. These are the steps I follow if I am installing abc.tar.gz on "machine1":
david@machine1:~$ sudo scp david@machine0:/home/david/abc.tar.gz .
david@machine1:~$ sudo stop test_server
david@machine1:~$ sudo su - golden
golden@machine1:~$ cd /opt/process
golden@machine1:/opt/process$ rm -rf *
golden@machine1:/opt/process$ exit
david@machine1:~$ sudo cp abc.tar.gz /opt/process
david@machine1:~$ sudo chown -R golden /opt/process
david@machine1:~$ sudo su - golden
golden@machine1:~$ cd /opt/process
golden@machine1:/opt/process$ tar -xvzf abc.tar.gz
golden@machine1:/opt/process$ exit
david@machine1:~$ sudo start test_server
How can I automate this so that it can do the same steps on all 12 machines one by one? I mean install abc.tar.gz on machine1 first, then install on machine2 and so on.. I want to run this script from machine0 only. Is there any way to automate this? | I would recommend a provisioning tool such as Fabric or Ansible for this. Both are fairly easy to set up, and will allow you to do more than just the command you need in this case, since you'll be able to build off your initial playbooks or fabfile depending on which tool you go with. A sketch of what this could look like is below. Docs for Fabric can be found here: http://docs.fabfile.org/en/1.10/ Docs for Ansible can be found here: http://docs.ansible.com/ansible/intro_getting_started.html You'll need some comfort with SSH keys and SSH configurations for both tools, which you can find in the docs as well.
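For a rough idea, a hypothetical Ansible playbook mirroring the steps from the question might look like this (the appservers group name, paths and service name are assumptions — adapt them to your inventory; this is a sketch, not a tested deployment):
---
- hosts: appservers          # the 12 machines, listed in your inventory file
  remote_user: david
  become: yes
  serial: 1                  # one machine at a time, as requested
  tasks:
    - name: stop the app server
      service: name=test_server state=stopped
    - name: wipe the old release
      shell: rm -rf /opt/process/*
    - name: copy and unpack the new release, owned by golden
      unarchive: src=/home/david/abc.tar.gz dest=/opt/process owner=golden
    - name: start the app server
      service: name=test_server state=started
You would then run it from machine0 with something like ansible-playbook -i hosts deploy.yml. | {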
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102842/"
]
} |
273,059 | I'm using bash version 4.3.42(1)-release in an ArchLinux/Gnome environment. When I type my commands some of the written characters get transformed into some weird ones. Overall, all displayed text looks strange. My character encoding is set to Unicode (UTF-8). I also checked my input language, which is correct. Since it worked before I assume it has something to do with an update but I'm not sure. The following picture shows the output of bash -version and at the bottom the two words minus and moreover where you can see the strange behavior. How can I fix this? The output of my locale:
$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Following, my set font in /etc/vconsole.conf:
KEYMAP=de
FONT=lat9w-16 | For terminal emulators, you should choose a monospace (a.k.a. fixed width) font. The letters are then positioned in a grid, rather than spaced according to the width of each individual letter. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64491/"
]
} |
273,097 | I have a 1400x1400 image in which I want to trim 4 pixels to the left, 1 at the bottom and, to keep square proportions, 3 from the top. The problem is, whenever I perform a crop with jpegtran , pixels get removed from the bottom-right, no matter what I do. For instance, testing just the left part, I tried:
jpegtran -perfect -crop 1396x1400+0+0 -outfile crop.jpg image.jpg
but that just removes 4 pixels from the right;
jpegtran -perfect -crop 1396x1400-4+0 -outfile crop.jpg image.jpg
and that again removes 4 pixels from the right;
jpegtran -perfect -crop 1396x1400+4+0 -outfile crop.jpg image.jpg
and that does not remove any pixels at all; more in general,
jpegtran -perfect -crop 1396x1400+x+0 -outfile crop.jpg image.jpg
jpegtran -perfect -crop 1396x1400-x+0 -outfile crop.jpg image.jpg
with x between 0 and 4, respectively removes 4-x and x pixels from the right. With x higher than 5, it obviously gives an error. I couldn't achieve any left trim. Can anyone help me? I'm using jpegtran from libjpeg-turbo version 1.4.2 (build 20151205) on an Arch Linux x86_64 system. | jpegtran can't losslessly cut at any finer increment than the Minimum Coded Unit size, which varies depending on the channel and the chroma sampling mode. It's going to be 8×8, 16×8, or 16×16. This means lossless cuts can only start on offsets that are a multiple of 8 or 16 pixels, depending on how the image was encoded. The exceptions are the right and bottom edge when the image dimensions are not an even multiple of 8 or 16px. In that case, you have partial blocks on the right and/or bottom edges. For example, a 17×17 pixel image could have 1px cut from its right or bottom edges, but not 2px from those edges, nor 1px from the top and left edges. Anything else requires re-encoding of the cut blocks. Quoting from the manual: ...lossless crop is restricted by the current JPEG format: the upper left corner of the selected region must fall on an iMCU boundary. If this does not hold for the given crop parameters, we silently move the upper left corner up and/or left to make it so, simultaneously increasing the region dimensions to keep the lower right crop corner unchanged. (Thus, the output image covers at least the requested region, but may cover more.)
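So a lossless 4px left trim is simply not possible on this file. As a sketch of the nearest aligned alternative (assuming 16×16 iMCUs, i.e. a typical 4:2:0 JPEG — verify against your image), you could trim a whole iMCU column instead:
jpegtran -crop 1384x1400+16+0 -outfile crop.jpg image.jpg
For the exact 4/1/3-pixel trim you would have to accept a lossy re-encode, e.g. with ImageMagick's convert and its -crop option. | {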
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54566/"
]
} |
273,118 | Is there a way to pipe the output of a command and direct it to stdout as well? So for example, fortune prints a fortune cookie to stdout and also pipes it to the next command:
$ fortune | tee >(?stdout?) | pbcopy
"...Unix, MS-DOS, and Windows NT (also known as the Good, the Bad, and the Ugly)."
(By Matt Welsh) | Your assumption:
fortune | tee >(?stdout?) | pbcopy
won't work because the fortune output will be written to standard out twice, so you will double the output to pbcopy . In OSX (and other systems supporting /dev/std{out,err,in} ), you can check it:
$ echo 1 | tee /dev/stdout | sed 's/1/2/'
2
2
It outputs 2 twice instead of 1 and 2 . tee outputs twice to stdout , and the tee process's stdout is redirected to sed by the pipe, so all these outputs run through sed and you see the double 2 here. You must use other file descriptors, for example standard error through /dev/stderr :
$ echo 1 | tee /dev/stderr | sed 's/1/2/'
1
2
or use tty to get the connected pseudo terminal:
$ echo 1 | tee "$(tty)" | sed 's/1/2/'
1
2
With zsh and the multios option set, you don't need tee at all:
$ echo 1 >/dev/stderr | sed 's/1/2/'
1
2 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159858/"
]
} |
273,182 | I wanted to install the command locate , which is available via sudo apt-get install mlocate . However, I first ran sudo apt-get install locate which seems to have installed something else. Typing the command locate <package> however seems to call upon mlocate . What is the package locate , and can (should) it be safely removed? | The locate package is the implementation of locate from GNU findutils . The mlocate package is another implementation of the same concept called mlocate . They implement the same basic functionality: quick lookup of file names based on an index that's (typically) rebuilt every night. They differ in some of their functionality beyond basic usage. In particular, GNU locate builds an index of world-readable files only (unless you run it from your account), whereas mlocate builds an index of all files but only lets the calling user see files that it could access. This makes mlocate more useful in most circumstances, but unusable in some unusual installations where it isn't run by the system administrator (because mlocate has to be setuid root), and it is a security risk. Under Debian and derivatives, if you install both, locate will run the mlocate implementation, and you need to run locate.findutils to run the GNU implementation. This is managed through alternatives . If you have both installed, they'll both spend time rebuilding their respective index, but other than that they won't conflict with each other.
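If you keep both, you can inspect and switch which implementation the locate name points at via the alternatives system — something like the following (Debian-family commands; the alternative group name is assumed to be locate):
$ update-alternatives --display locate
$ sudo update-alternatives --config locate | {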
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/273182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136753/"
]
} |
273,185 | I've got a problem: my laptop loads the Nvidia driver despite it having been added to /etc/modprobe/blacklist.conf as blacklist nvidia , as well as in /etc/default/grub , and as rdblacklist nvidia in GRUB_CMDLINE_LINUX . This leads to the machine running hot and not-so-smooth on battery. Why is Fedora not obeying my blacklist configuration? What can be done? Update. Files:
[0] % cat /etc/modprobe.d/bumblebee.conf
blacklist nvidia
blacklist nouveau
options bbswitch load_state=0 unload_state=0
[0] % cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap nouveau.modeset=0 rd.driver.blacklist=nouveau,nvidia rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
EDIT: lsmod|grep nvidia
[1] % lsmod|grep nvidia
nvidia               8642560  1
drm                   335872  12 i915,drm_kms_helper,nvidia | The module might be loaded from the initramfs on boot. You must regenerate the initramfs so that it picks up your modifications to /etc/modprobe.d/*. Run the following to regenerate your initramfs:
dracut -f /boot/your-initramfs
On reboot, the driver should not be loaded automatically.
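On Fedora the usual invocation targets the image of the currently running kernel, for example (check the exact image name under /boot first):
sudo dracut -f /boot/initramfs-$(uname -r).img $(uname -r) | {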
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93959/"
]
} |
273,207 | If I want files ending with .fas I know I can do the following with ls command : ls -l /users/jenna/bio/*.fas and this lists out all the files ending with .fas . But how can I get files ending with either .fas or .pep ? ls -l /users/jenna/bio/(*.fas | *.pep) doesn't seem to work ? I know there might be other commands that can do this, but is there a way to make it work with ls ? Thanks for any help ! | Use this: ls -l /users/jenna/bio/*.{fas,pep} Or this: find /users/jenna/bio -iname '*.pep' -or -iname '*.fas' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154835/"
]
} |
273,222 | I was wondering if the power management is the same across all linux distros? If no, what are the specific differences? I.e. can changing the distro possibly solve power management issues? Background : I have a Lenovo X240 laptop (I thought this is a safe choice :)) and have several issues with the power management on linux Mint, which I cannot solve despite extensive research on stackoverflow sites and elsewhere. The issues are: The laptop lid does only sometimes lead to suspend to ram; sometimes the laptop freezes after wakeup from hibernation; the battery drain is really fast in suspend to ram. Even if I describe quite specific problems in a specific configuration I believe this question has a broader scope. There is another question that asks for battery life across distros which is a different question although being close. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4831/"
]
} |
273,290 | In Linux, I can get last month by using date -d "last month" '+%Y%m' or date -d "1 month ago" '+%Y%m' But say, today is the 31st of March; if I run the command at top, it shows 201603, but I want to get last month regardless of which day I'm on now; how can I do so? I can achieve that by using a workaround like getting the first day/last day of the previous month, but I wonder is there any more elegant way to do so?
date -d "-$(date +%d) days" '+%Y%m'   # get last day of previous month | The usual wisdom is to use the 15th of this month, then subtract 1 month:
$ nowmonth=$(date +%Y-%m)
$ date -d "$nowmonth-15 last month" '+%Y%m'
201602
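To see why pinning the day matters, compare the two forms on a month-end date (GNU date arithmetic; "February 31" does not exist, so it normalizes forward into March):
$ date -d "2016-03-31 last month" '+%Y%m'
201603
$ date -d "2016-03-15 last month" '+%Y%m'
201602 | {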
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158235/"
]
} |
273,301 | I have 6 text files (each corresponds to a specific sample) and each file looks like this:
Gene_ID Gene_Name Strand Start End Length Coverage FPKM TPM
ENSMUSG00000102735 Gm7369 + 4610471 4611406 936 0 0 0
ENSMUSG00000025900 Rp1 - 4290846 4409241 10926 0 0 0
ENSMUSG00000104123 Gm37483 - 4363346 4364829 1484 0 0 0
ENSMUSG00000102175 Gm6119 - 4692219 4693424 1206 0.328358 0.015815 0.008621
I want to collect all the elements from the 1st & 2nd columns in one file and corresponding tpm values (9th column) for each sample in a new file, so wherever there is no tpm value enter 0. My output file should look like this:
gene_id gene_name sample1_tpm sample2_tpm sample3_tpm ......sample6_tpm | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163563/"
]
} |
273,316 | I have created an Ansible playbook to create a user and set a password. But it is giving me an error:
ERROR: password is not a legal parameter of an Ansible Play
---
- hosts: all
  user: root
  vars:
  password: jbJe1oRlSZKQ6
  tasks:
  - user: name=testuser password={{password}} | First: you need to indent password: in your playbook, because you want it to be a variable:
vars:
  password: hashed_password
If it's not indented then Ansible considers it a play parameter and throws an error because password is not. Second: unless you are setting the password for a user on OSX, you need to provide a hashed value of a password. Follow the detailed instructions , but basically you need to provide the output of:
mkpasswd --method=SHA-512
Or install passlib with:
pip install passlib
and run:
python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.encrypt(getpass.getpass())"
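Putting both fixes together, a corrected play might look like this (the hash value is a placeholder — substitute the mkpasswd/passlib output):
---
- hosts: all
  user: root
  vars:
    password: "$6$somesalt$hashedpasswordgoeshere"
  tasks:
    - user: name=testuser password={{ password }} | {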
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163575/"
]
} |
273,326 | I have this variable in a bash script on ubuntu 12.04 which is set to show the available MB on the root partition:
AV_MB=$(df -m |awk NR==2 |awk '{print $4}')
Is there an elegant way to combine these two awk expressions into one? Or a shorter way with sed or grep or cut? | Awk works on the "pattern {action}" model, so you can combine those two processes into a single, correct one:
df -m | awk 'NR==2 {print $4}'
This, however, is fragile as the second record could change (on my systems, the root record is the third row), so you can match on the final field of the record for the root filesystem, like so:
df -m | awk '$NF == "/" {print $4}'
which ensures your pattern matches wherever df prints / . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137597/"
]
} |
273,341 | I have the following problem, my shell script contains something like:
mydir=''
# config load
source $mydir/config.sh
.... execute various commands
My script is placed in my user dir.. let's say /home/bob/script.sh If I'm inside the /home/bob dir and run ./script.sh everything works fine. If I'm outside and want to use the absolute path /home/bob/script.sh the config.sh file is not loaded properly. What value should I assign to $mydir in order to make the script runnable from every path without struggle?
mydir=$(which command?)
PS: as a bonus please also provide an alternative if the script dir is inside the $PATH | The $0 variable contains the script's path:
$ cat ~/bin/foo.sh
#!/bin/sh
echo $0
$ ./bin/foo.sh
./bin/foo.sh
$ foo.sh
/home/terdon/bin/foo.sh
$ cd ~/bin
$ foo.sh
./foo.sh
As you can see, the output depends on the way it was called, but it always returns the path to the script relative to the way the script was executed. You can, therefore, do:
## Set mydir to the directory containing the script
## The ${var%pattern} format will remove the shortest match of
## pattern from the end of the string. Here, it will remove the
## script's name, leaving only the directory.
mydir="${0%/*}"
# config load
source "$mydir"/config.sh
If the directory is in your $PATH , things are even simpler. You can just run source config.sh . By default, source will look for files in directories in $PATH and will source the first one it finds:
$ help source
source: source filename [arguments]
    Execute commands from a file in the current shell.
    Read and execute commands from FILENAME in the current shell. The
    entries in $PATH are used to find the directory containing FILENAME.
    If any ARGUMENTS are supplied, they become the positional parameters
    when FILENAME is executed.
If you are sure your config.sh is unique or, at least, that it is the first one found in $PATH , you can source it directly. However, I suggest you don't do this and stick to the first method instead. You never know when another config.sh might be in your $PATH . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143935/"
]
} |
273,362 | I have recently started working with Shell scripting. So, I am facing one issue that when I am taking a filename as an input from the user, then if I give spaces it does not get handled by my code. Any answers for the same? My code looks like this:
echo "---------------- Please provide the filename -------------------------"
read filename
if [[ $filename =~ [A-Za-z0-9]+[a-zA-Z0-9_.]*+$ ]]; then
 printf "some code "
else
 printf "some code"
fi
Can anyone please help me with how I can handle the space character so that if a space is provided in the input parameter then it should give some error. Thanks | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163610/"
]
} |
273,366 | I have a file uca.xml (Thunar configuration for custom actions):
<?xml encoding="UTF-8" version="1.0"?>
<actions>
<action>
 <!--some code-->
</action>
</actions>
<--blank line here-->
Mind that the file ends with a blank line. I want a bash command/script to insert a file customAction.txt containing:
<action>
 <!--custom configuration-->
</action>
so it will end looking like:
<?xml encoding="UTF-8" version="1.0"?>
<actions>
<action>
 <!--some code-->
</action>
<action>
 <!--custom configuration-->
</action>
</actions>
I tried a method given by jfgagne:
sed -n -i -e '/<\/actions>/r customAction.txt' -e 1x -e '2,${x;p}' -e '${x;p}' uca.xml
but it works only if there is at least one character (not a blank line) inserted below the tag. My temporary workaround is a script:
echo "someDummyText" >> uca.xml
sed -n -i -e '/<\/actions>/r customaction.txt' -e 1x -e '2,${x;p}' -e '${x;p}' uca.xml
sed -i 's/someDummyText//' uca.xml
sed -i '${/^$/d;}' uca.xml
but I believe there is a more elegant, one-line solution with sed. Thanks in advance. Note: this is my very first post to the U&L and StackExchange community, so please be lenient with me and do not hesitate to correct me verbosely if I did something wrong. I read the SE FAQ and tried to search for an answer elsewhere with not much luck. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163580/"
]
} |
273,424 | One of my favorite tricks in Bash is when I open my command prompt in a text editor. I do this (in vi mode) by pressing ESC v . When I do this, whatever is in my command prompt is now displayed in my $EDITOR of choice. I can then edit the command as if it were a document, and when I save and exit everything in that temp file is executed. I'm surprised that none of my friends have heard of this tip, so I've been looking for docs I can share. The problem is that I haven't been able to find anything on it. Also, the search terms related to this tip are very common, so that doesn't help when Googling for the docs. Does anyone know what this technique is called so I can actually look it up? | In the bind -p listing, I can see the command is called edit-and-execute-command , and it is bound to C-x C-e in emacs mode.
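You can confirm this on your own shell with something like:
$ bind -p | grep edit-and-execute
"\C-x\C-e": edit-and-execute-command
Searching the bash man page for edit-and-execute-command (it appears under Readline's miscellaneous commands) is then the quickest route to citable documentation. | {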
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54004/"
]
} |
273,437 | I'm trying to run an rsync between two servers. I'm basing things off of this post: How to rsync files between two remotes? What I find missing is how to facilitate the rsync (via ssh) when a key (the same key) is required for logging into each server. Here's the closest I've got:
ssh -i ~/path/to/pem/file.pem -R localhost:50000:SERVER2:22 ubuntu@SERVER1 'rsync -e "ssh -p 50000" -vur /home/ubuntu/test localhost:/home/ubuntu/test'
It seems like the initial connection works properly, however I can't seem to figure out how to specify the key and username for SERVER2. Any thoughts? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163655/"
]
} |
273,495 | I would like to perform the same aggregation operations on each of several bunches of files where each bunch is matched by one glob pattern. What I would not like to do is pipe each file name into the aggregation function separately. My first attempt failed because the file names got globbed in the outer loop, flattening the whole collection into a flat list and not treating them as separate batches:
for fileglob in /path/to/bunch*1 /path/to/bunch*2 ... ; do
  stat "$fileglob" | awk [aggregation]
done
So I hid the * from the loop by escaping it, then unescaped it for the function:
for fileglob in /path/to/bunch\*1 /path/to/bunch\*2 ... ; do
  realglob=`echo "$fileglob" | sed 's/\\//g'`
  stat "$realglob" | awk [aggregation]
done
There has to be a better way. What is it? GNU bash, version 3.2.51 | This requires a careful use of quotes:
for fileglob in '/path/to/bunch*1' '/path/to/bunch*2' ... ; do
  stat $fileglob | awk [aggregation]
done
But that may fail on filenames with spaces (or newlines). Better to use this:
fileglobs=("/path/to/bunch*1" "/path/to/bunch*2")
for aglob in "${fileglobs[@]}" ; do
  set -- $aglob
  stat "$@" | awk [aggregation]
done
The glob gets correctly expanded and placed in the positional parameters with:
set -- $aglob
Then, each parameter is placed as an argument to stat in:
stat "$@"
And the output of stat goes (as one output) to awk . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8119/"
]
} |
273,496 | I want to display all the characters in a file between strings "xxx" and "yyy" (the quotes are not part of the delimiters). How can I do that? For example, if I have input "Hello world xxx this is a file yyy", the output should be " this is a file " | You can use pattern matching in sed as follows:
echo "Hello world xxx this is a file yyy" | sed 's/.*xxx \(.*\)yyy/\1/'
So .*xxx will match from the beginning up to xxx . \1 is a 'remember pattern' that remembers everything that is within \(.*\) — so everything from xxx up to yyy , but not yyy itself. Finally the remembered string is printed. The same match can be visualized with grep, as shown below.
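A hypothetical way to see the same match with GNU grep (PCRE mode; \K discards everything matched before it, and the lookahead stops before yyy):
$ echo "Hello world xxx this is a file yyy" | grep -oP 'xxx\K.*?(?=yyy)'
 this is a file
Note that the match includes the spaces around the phrase, which is exactly the output the question asks for. | {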
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150680/"
]
} |
273,527 | I have a script that users will prefix, rather than append, arguments to, i.e. they might call command C , command B C , command A B C , and so on. I'd like to be able to simply shift over these arguments from the right, the same way you might shift them from the left with shift . I'm imagining a shift-right command that behaves like so:
echo "$@"   # A B C
shift-right
echo "$@"   # A B
shift-right
echo "$@"   # A
shift-right
echo "$@"   #
echo "$#"   # 0
Is there a clean way to accomplish this? I know I can work around it, but a shift -like solution would be much nicer and simpler. In response to the XY-problem comment, my specific use case is a command that takes either a port or a host and port, e.g. command 123 or command remotehost 123 . I don't want users to have to specify these in reverse order (port then host). It would be fairly clean to say something like (untested, obviously):
port=${@: -1}
shift-right
host=${1:-localhost}
Really though, I'm curious about the question in general, even if there's a better way to solve this specific example. Here's one reasonably clean way to handle the two-argument case without shift-right , just for reference:
port=${@: -1}
host=${2:+$1}
host=${host:-localhost}
But hopefully you can appreciate how that becomes more kludgy as the number of arguments increases. | If the list of positional parameters is:
$ set -- 0wer 1wdfg 2erty 333 4ffff 5s5s5
Then this will print the arguments without the last:
$ echo "${@:1:$#-1}"
0wer 1wdfg 2erty 333 4ffff
Of course, the parameters could be set to that as well:
$ set -- "${@:1:$#-1}"
$ echo $@
0wer 1wdfg 2erty 333 4ffff
That works in bash version 2.0 or above. For other simpler shells, you need a (somewhat tricky) loop to remove the last parameter:
unset b; for a; do set -- "$@" ${b+"$b"}; shift; b="$a"; done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19157/"
]
} |
273,529 | Bash has the PROMPT_DIRTRIM option, e.g. when I set PROMPT_DIRTRIM=3 , then a long path like:
user@computer: /this/is/some/silly/path
would show instead as:
user@computer: .../some/silly/path
Does a similar option exist for zsh ? | To get a similar effect like in bash , that is including the ... , try using:
%(4~|.../%3~|%~)
in your PROMPT variable (which might also be named PS1 in your configuration) in place of %~ . This checks if the path is at least 4 elements long ( %(4~|true|false) ) and, if true, prints some dots with the last 3 elements ( .../%3~ ), otherwise the full path is printed ( %~ ). I noticed that bash seems to shorten paths in the home directory differently, for example:
~/.../some/long/path
For a similar effect, you may want to use:
%(5~|%-1~/…/%3~|%4~)
This checks whether the path is at least 5 elements long, and in that case prints the first element ( %-1~ ), some dots ( /…/ ) and the last 3 elements. It is not exactly the same, as paths that are not in your home directory will also have the first element at the beginning, while bash just prints dots in that case. So
/this/…/some/silly/path
instead of
.../some/silly/path
But this might not necessarily be a bad thing. Instead of %~ you can also use %d (or your current PROMPT might already use %d ). The difference is that %d shows full absolute paths, while %~ shows shorthands for "named directories": e.g. /home/youruser becomes ~ and /home/otheruser becomes ~otheruser . If you prefer to use the full path as the basis for the shortening, just replace any occurrence of ~ with d .
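For instance, a complete prompt line in ~/.zshrc using the home-aware variant might look like this (the pieces around the path are just an illustration):
PROMPT='%n@%m %(5~|%-1~/…/%3~|%4~) %# ' | {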
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58056/"
]
} |
273,579 | At the time, I check my system like this:
rkhunter --check --enable all --disable none --report-warnings-only
Or the shorter version:
rkhunter -c --enable all --disable none --rwo
But this gives me warning messages like this one:
Warning: The command '/usr/bin/GET' has been replaced by a script:
/usr/bin/GET: a /usr/bin/perl -w script text executable
Which - to me - is of no importance; I want to check only for rootkits, and nothing else. But I can't seem to find the proper option to make this work. How to make rkhunter only scan for rootkits? | Running
rkhunter --enable rootkits --rwo
will run only the rootkit tests. This can also be set up in the configuration file. The README file has all the details.
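The configuration-file equivalent would be roughly the following (typically in /etc/rkhunter.conf; check the option name against your README):
ENABLE_TESTS=rootkits
after which a plain rkhunter --check --rwo runs only those tests. | {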
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54880/"
]
} |
273,588 | Received timestamp=1459434658969:
ABC: Field id=0 double 11.4
DEF: Field id=1 string >def<
GHI: Field id=2 string >g_hi<
I would like to read a file that contains input in the above format and would like to output the following data into an xml file:
ABC: 11.4
DEF: def
GHI: g_hi | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163797/"
]
} |
273,614 | I want to perform some mathematical operations in the shell. For example:
5+50*3/20 + (19*2)/7
I tried:
#!/bin/bash
read equ
echo "scale=3; $equ" | bc -l
Expected output: 17.929 My output: 17.928 | bc is truncating; try this instead:
printf "%.3f\n" $(echo "$equ" | bc -l)
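A quick demonstration of truncation versus rounding:
$ echo "scale=3; 2/3" | bc -l
.666
$ printf "%.3f\n" "$(echo "2/3" | bc -l)"
0.667 | {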
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74105/"
]
} |
273,624 | I need to search for a keyword using awk, but I want to perform a case-insensitive (non case sensitive) search. I think the best approach is to capitalize both the search term ("key word") and the target line that awk is reading at the same time. From this question I learned how to use toupper to print in all uppercase, but I don't know how to use it in a match because that answer just shows printing and doesn't leave the uppercase text in a variable. Here is an example, given this input:
blablabla
&&&Key Word&&&
I want all these text and numbers 123
and chars !"£$%&
as output
&&&KEY WORD&&&
blablabla
I'd like this output:
I want all these text and numbers 123
and chars !"£$%&
as output
This is what I have, but I don't know how to add in toupper :
awk "BEGIN {p=0}; /&&&key word&&&/ { p = ! p ; next } ; p { print }" text.txt | Replace your expression to match a pattern (i.e. /&&&key word&&&/ ) by another expression explicitly using $0 , the current line:
tolower($0) ~ /&&&key word&&&/
or
toupper($0) ~ /&&&KEY WORD&&&/
so you have
awk 'tolower($0) ~ /&&&key word&&&/ { p = ! p ; next }; p' text.txt
You need single quotes because of the $0 , the BEGIN block can be removed as variables are initialised by default to "" or 0 on first use, and {print} is the default action, as mentioned in the comments below. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153322/"
]
} |
273,660 | I assigned a var like this:
MYCUSTOMTAB='   '
But using it in echo, both:
echo $MYCUSTOMTAB"blah blah"
or
echo -e $MYCUSTOMTAB"blah blah"
just return a single space and the rest of the string:
 blah blah
How can I print the full string untouched? I want to use it for a custom indent because \t is too wide for my tastes. | Put your variable inside double quotes to prevent field splitting , which ate your spaces:
$ MYCUSTOMTAB='   '
$ echo "${MYCUSTOMTAB}blah blah"
   blah blah | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/273660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143935/"
]
} |
273,696 | I have a directory full of logs named in the following style:
info.log00001
info.log00002
info.log00003
...
info.log09999
info.log
My current output (using grep -c): I need to analyze the frequency of a particular error that happens occasionally, so I go to that directory and use
grep -crw . -e "FooException BarError" | sort -n | less
obtaining something like:
./info.log00001: 1
./info.log00002: 0
./info.log00003: 42
...
./info.log09999: 25
./info.log: 0
Then, I can ls -lt to see their modification date and analyze when the error happened the most. My desired output (with count and date): Anyway, I'd like to find a way to get an output with the count and the date in the same line. That would make my analysis easier. I would like something like:
2015-09-31 10:00 ./info.log00001: 1
2015-09-31 10:15 ./info.log00002: 0
2015-09-31 10:30 ./info.log00003: 42
...
2016-04-01 13:20 ./info.log09999: 25
2015-09-31 13:27 ./info.log: 0
Additional info: Ideally, I'd like to accomplish this with only one command, but first throwing grep 's output to a file and then processing that file would do it, too. Also, I really don't care about the date format or whether the date is at the end or at the beginning of the line. All I want is to have the files sorted by date starting with the oldest (which is also the file with the lowest number in its name). I found a way to accomplish something similar with awk , but in my case it would not work, since it parses the filename from grep 's output, and in my case, grep 's output has more text than just the path to the file. I'd really appreciate any feedback on this. | If you have gnu find - and assuming none of your file names contains newlines - you could use find 's -printf to output the mtime in the desired format + the file name, then run grep to get the count:
find . -type f -printf '%TY-%Tm-%Td %TH:%TM %p: ' -exec grep -cw "whatever" {} \; | sort -k1,1 -k2,2
Alternatively, with zsh you could glob and sort by modification time (via glob qualifiers - . selects regular files, Om sorts in descending order by mtime ) and then for each file print the mtime using the stat module, the file name and then, again, get the count via grep :
zmodload zsh/stat
for f in ./**/*(.Om)
do
printf '%s %s\t%s %s: ' $(zstat -F '%Y-%b-%d %H:%M' +mtime -- $f) $f
grep -cw "whatever" $f
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163855/"
]
} |
273,697 | I was wondering if there was a program like GNU/screen or tmux that would allow me to attach or detach a session with a running process but would not provide all of the other features such as windows and panes. Ideally the program would be able to run in a dumb terminal (a terminal without clear). My use case is to use either the shell or the terminal that are built into emacs to run a program and have that program keep running even if emacs crashes. Tmux and screen are incompatible with shell because shell does not support clear. And although they work in the terminal the output is improperly formatted in part because of the bottom bar and also because of the quirks of term-mode. Thank you! | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163861/"
]
} |
273,769 | So basically, I'm trying to delete the files: /var/lib/mysql/db/nomNomina2.* and when I look for them with locate, I get the following output: /var/lib/mysql/db/nomNomina2.MYD/var/lib/mysql/db/nomNomina2.MYI/var/lib/mysql/db/nomNomina2.frm but then I try to $ rm -fv /var/lib/mysql/db/nomNomina2.frm I get no output, but the files still show when using locate. Notice that I can create and delete a file with the same filename in the same location, but it will still show when using locate, and I won't be able to create another table with the same name. Any ideas what could be causing this? filesystem mess? how to correct it? | locate is not dependable for live, current information about what files are present on your system. Information is cached in a database. Also consider the famous line, with link: It's not working! Should I blame caching? For actual current information on what files/directories exist on your box right now , use ls or find or stat or test -e filename && echo it is there or even printf %s\\n * . Pretty much anything except locate will give you up-to-date information about your filesystem. See also LESS=+/BUGS man locate which (on my system) reads in part: BUGS The locate program may fail to list some files that are present, or may list files that have been removed from the system. This is because locate only reports files that are present in the database... You can run updatedb , but honestly if you know exactly where the files are and you are using locate to find them...you are simply doing it wrong. locate tells you a path. It tells you nothing about the existence or nonexistence of files at that path. If you already know the path to the file, you don't need locate , do you? The purpose of locate is to "find filenames quickly", not necessarily accurately or dependably. Note: I'm not saying "don't use locate ." It does have a purpose, when you have no idea where on your system a certain file might be. But once you get the pathname from locate , it has served its purpose and you now need to use other tools to examine/verify/etc. the file you've found. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163914/"
]
} |
273,861 | In zsh , I want to have unlimited history. I set HISTSIZE= , which works in bash . Now I import an old history:
mv old_history .history
which is pretty big:
wc -l .history
43562 .history
If I now close and start zsh again, I see
wc -l .history
32234 .history
Can't I have unlimited history in zsh ? | The limit is set by the capabilities of your machine.
HISTFILE="$HOME/.zsh_history"
HISTSIZE=10000000
SAVEHIST=10000000
setopt BANG_HIST                 # Treat the '!' character specially during expansion.
setopt EXTENDED_HISTORY          # Write the history file in the ":start:elapsed;command" format.
setopt INC_APPEND_HISTORY        # Write to the history file immediately, not when the shell exits.
setopt SHARE_HISTORY             # Share history between all sessions.
setopt HIST_EXPIRE_DUPS_FIRST    # Expire duplicate entries first when trimming history.
setopt HIST_IGNORE_DUPS          # Don't record an entry that was just recorded again.
setopt HIST_IGNORE_ALL_DUPS      # Delete old recorded entry if new entry is a duplicate.
setopt HIST_FIND_NO_DUPS         # Do not display a line previously found.
setopt HIST_IGNORE_SPACE         # Don't record an entry starting with a space.
setopt HIST_SAVE_NO_DUPS         # Don't write duplicate entries in the history file.
setopt HIST_REDUCE_BLANKS        # Remove superfluous blanks before recording entry.
setopt HIST_VERIFY               # Don't execute immediately upon history expansion.
setopt HIST_BEEP                 # Beep when accessing nonexistent history.
From the ZSH Mailing list : You should determine how much memory you have, how much of it you can allow to be occupied by the history (AFAIK it is always fully loaded into memory) and act accordingly. Removing the limit is not wiser as it leaves you with the idea that there is no limit while it is always limited by available resources. Or if you do not think you will ever hit a problem with resource exhaustion you can just set HISTSIZE to LONG_MAX from limits.h: it is the maximum number HISTSIZE can have. Which explains the Gentoo solution:
export HISTSIZE=2000
export HISTFILE="$HOME/.history"
History won't be saved without the following command:
export SAVEHIST=$HISTSIZE
To prevent history from recording duplicated entries (such as ls -l entered many times during a single shell session), you can set the hist_ignore_all_dups option:
setopt hist_ignore_all_dups
A useful trick to prevent particular entries from being recorded into history is preceding them with at least one space:
setopt hist_ignore_space | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/273861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58056/"
]
} |
273,876 | The following message appears almost every time I shut down my computer:
A stop job is running for Session c2 of user ... (1min 30s)
It waits for 1min 30s then continues the shutdown process. I followed this systemd shutdown diagnosis guide and got the shutdown-log.txt (I can't paste the log directly here because it's very long). Unfortunately, I don't understand the log by myself. Could anyone help me to find out what keeps my system from shutting down properly? I run Arch Linux with kernel 4.4.5-1-ARCH , my systemd version is 229-3 . Addition 1: I observe that every time I log out, and then shut down my computer from the login screen, I don't get the message A stop job is running... . I tried to log out before shutdown many times, so I think it doesn't occur by chance. Hope that information could help. Addition 2: It is always session c2 that causes shutdown hanging. So as @n.st suggested, I looked at Diagnosing Shutdown Problems again and stored loginctl session-status c2 instead of dmesg , but then there is nothing in the shutdown-log.txt . I replaced loginctl session-status c2 by systemd-cgls and got the following log:
Control group /:
-.slice
└─init.scope
 ├─ 1 /usr/lib/systemd/systemd-shutdown reboot --log-level 6 --log-target ...
 ├─1069 /usr/lib/systemd/systemd-shutdown reboot --log-level 6 --log-target ...
 ├─1071 /bin/sh /usr/lib/systemd/system-shutdown/debug.sh reboot
 └─1074 systemd-cgls
Any ideas? Note: After I updated to kernel 4.6.4-1-ARCH and systemd 230-7 , the error no longer happened. | A workaround to this problem is to reduce this timeout in /etc/systemd/system.conf down from 90s to, for example, 10s:
DefaultTimeoutStopSec=10s
and run the following command in a terminal after making changes:
$ systemctl daemon-reload | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154739/"
]
} |
273,882 | In the canonical How can I replace a string in a file(s)? in 3. Replace only if the string is found in a certain context , I'm trying to implement the replacement of pipes with white spaces in a file with structure like this:
12/12/2000|23:16:03|Shell Sc|8332|START|TEXT|WITH|SPACES|-|[END]|[Something else]
I need it like this:
12/12/2000|23:16:03|Shell Sc|8332|START TEXT WITH SPACES -|[END]|[Something else]
The code:
echo "12/12/2000|23:16:03|Shell Sc|8332|START|TEXT|WITH|SPACES|-|[END]|[Something else]" | \
 sed 's/\(START.*\)\|/ \1/g'
Any ideas? | The problem with your command is that even with the g flag set, a particular portion of the text to be matched can only be included in a single match. Since .* is greedy, you will only end up removing the final pipe character. Not to mention your space in the replacement text is in the wrong place. You could do this with a repeated s command in a loop, running until it doesn't match anything. Like so:
sed -e ':looplabel' -e 's/\(START.*\)|\(.*|\[END\)/\1 \2/;t looplabel'
Or, using a shorter loop label:
sed -e ':t' -e 's/\(START.*\)|\(.*|\[END\)/\1 \2/;tt'
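Applied to the sample line from the question, this should produce exactly the requested output:
$ echo '12/12/2000|23:16:03|Shell Sc|8332|START|TEXT|WITH|SPACES|-|[END]|[Something else]' | sed -e ':t' -e 's/\(START.*\)|\(.*|\[END\)/\1 \2/;tt'
12/12/2000|23:16:03|Shell Sc|8332|START TEXT WITH SPACES -|[END]|[Something else] | {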
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66371/"
]
} |
273,898 | I know that I can remove substrings from the front of a variable:
X=foobarbaz
echo ${X#foo}      # barbaz
...and from the back of a variable:
echo ${X%baz}      # foobar
How do I combine the two? I've tried:
echo ${{X#foo}%baz}    # "bad substitution"
echo ${${X#foo}%baz}   # "bad substitution"
echo ${X#foo%baz}      # foobarbaz
I don't think I can use an intermediate variable, because this is being used in find -exec , in something like the following:
find ./Source -name '*.src' \
 -exec bash -c 'myconvert -i "{}" -o "./Dest/${0#./Source/%.src}.dst"' {} \; | I don't think that's possible (but would love to be proved wrong). However, you can use an intermediate variable. The bash -c run by -exec is just a bash instance, like any other:
$ find . -type f
./foobarbaz
$ find . -type f -exec bash -c 'v=${0#./foo}; echo ${v%baz}' {} \;
bar
So I see no reason why this wouldn't work (if I understood what you're trying to do correctly):
find ./Source -name '*.src' \
 -exec bash -c 'v=${0%.src}; \
 myconvert -i "{}" -o "./Dest/${v#./Source/}.dst"' {} \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46851/"
]
} |
273,901 | I was trying to do wifi throttling for my new app development (new to this) and was trying to do it using the iceFloor application. I'm not 100% sure if this is a problem, but I need to confirm if I really have messed it up or not. When I try to ping www.google.com, I get the below:
PING www.google.com (216.58.219.228): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
^C
--- www.google.com ping statistics ---
6 packets transmitted, 0 packets received, 100.0% packet loss
But I don't have any problem browsing whatsoever. Below are the things I did to mess it up: I did what was told here. Tried to limit the bandwidth with Network Limit Conditioner as stated here. I installed iceFloor and tried preset profiles and created a custom profile with no configuration. (Apparently, didn't save the default configuration.) Below are the things I tried before posting here: I tried to flush all the rules using the below command
pfctl -f /etc/pf.conf
I tried to turn off the Network Limit Conditioner as well. Will uninstalling iceFloor & Network Limit Conditioner help? Any insight & recommendations are very much appreciated. -Prabhu | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163997/"
]
} |
273,934 | I want to run a website from my computer. I use XAMPP on Kali Linux. I installed MySQL server with the command:
apt-get install mysql-server
After it was successfully installed, I entered the command
mysql_secure_installation
It prompted me to login to MySQL, but it repeatedly gave the error
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I tried my password several times, with no changes, just to check if I had typed it wrong the first time. Can anyone tell me how to fix this? I am following a tutorial from this site. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162995/"
]
} |
273,965 | From the find man page:
-exec command ;
 There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead.
-execdir command {} +
 Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find. This is a much more secure method for invoking commands, as it avoids race conditions during resolution of the paths to the matched files.
What does this mean? Why are there race conditions when running it from the starting directory? And how are these security risks? | Found the details here : The -exec action causes another program to be run. It passes to the program the name of the file which is being considered at the time. The invoked program will typically then perform some action on that file. Once again, there is a race condition which can be exploited here. We shall take as a specific example the command
find /tmp -path /tmp/umsp/passwd -exec /bin/rm
In this simple example, we are identifying just one file to be deleted and invoking /bin/rm to delete it. A problem exists because there is a time gap between the point where find decides that it needs to process the -exec action and the point where the /bin/rm command actually issues the unlink() system call to delete the file from the filesystem. Within this time period, an attacker can rename the /tmp/umsp directory, replacing it with a symbolic link to /etc . There is no way for /bin/rm to determine that it is working on the same file that find had in mind. Once the symbolic link is in place, the attacker has persuaded find to cause the deletion of the /etc/passwd file, which is not the effect intended by the command which was actually invoked. Not sure how likely anyone could ever exploit this; but I guess there's the answer!
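For comparison, the -execdir form of the same command would look roughly like this (a sketch; note that GNU find refuses to honor -execdir if "." appears in your $PATH):
find /tmp -path /tmp/umsp/passwd -execdir /bin/rm {} \;
Because the command is started from inside the file's own directory and handed a path relative to it, renaming a parent directory mid-run can no longer redirect the deletion somewhere else. | {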
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/273965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84496/"
]
} |
273,971 | We can get CPU information using the lscpu command; is there any command to get hard disk information in the Linux terminal, in a similar way? | If you are looking for partitioning information you can use fdisk or parted . If you are more interested in how the various partitions are associated with the mount points, try lsblk , which I often use as: lsblk -o "NAME,MAJ:MIN,RM,SIZE,RO,FSTYPE,MOUNTPOINT,UUID" to include UUID info. And finally smartctl -a /dev/yourdrive gives you detailed info like: === START OF INFORMATION SECTION === Device Model: WDC WD40EFRX-68WT0N0 Serial Number: WD-WCC4E4LA4965LU WWN Device Id: 5 0014ee 261ca5a3f Firmware Version: 82.00A82 User Capacity: 4,000,787,030,016 bytes [4.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 5400 rpm Device is: Not in smartctl database [for details use: -P showall] ATA Version is: ACS-2 (minor revision not indicated) SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Sun Apr 3 10:59:55 2016 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled and more. Some of these commands need to be run with sudo to get all the info. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164038/"
]
} |
273,982 | I have some nodejs projects and other files that I want to put on the CoreOS private server; what is the easiest method to take files from the workstation (Windows) and put them into the CoreOS system? Is there anything I could do other than making a Docker container with FTP? The goal is to be able to type with my favorite editor on my PC and then bring the files to the CoreOS server in order to build Docker files from there. What is the best solution for this? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/273982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112485/"
]
} |
273,989 | I've installed Raspbian Jessie Lite and added the minimum I can to get a browser running fullscreen. I started off with IceWeasel: sudo apt-get install -y x-window-system iceweasel And put this into my .xinitrc : iceweasel "http://localhost/" Now, when I run startx it loads IceWeasel. However, it only took up a small portion of the screen. I was able to fix that by loading IceWeasel, closing it, then modifying the file that stored the window size and making it 1920x1080. That was all fine, until I discovered IceWeasel didn't support all the nice new ECMAScript goodness Chrome did. So, I'm trying to swap for Chromium. I've managed to get it all installed, and I've changed my .xinitrc to this: chromium-browser --start-maximized --kiosk http://localhost/ However, when this launches it only uses about (possibly exactly) half of the screen! I've tried various options but can't get it working. --start-fullscreen is even weirder and renders correctly but gets chopped in half! :( Note: I'm trying to avoid installing any window manager/etc, as it seems like it shouldn't be required when IceWeasel is already all working correctly!? IceWeasel: Chromium ( --start-maximized and --kiosk ): Chromium ( --start-fullscreen ): | Ok, with help from this thread I got it working. Although that poster said it didn't work, I edited .config/chromium/Default/Preferences and explicitly set the window size: Before "window_placement":{ "bottom":1060, "docked":false, "left":10, "maximized":true, "right":950, "top":10 // ... After "window_placement":{ "bottom":1080, "docked":false, "left":0, "maximized":true, "right":1920, "top":0 // ... I wondered if maybe this had been set badly by the first load of the app not being fullscreen, but I tried deleting ~/.config and then loading it again, but it just recreated it with the left half of the screen. I guess I'll have to script loading Chromium, killing it, then rewriting that part of the file in my setup script! ;( | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/273989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84496/"
]
} |
274,024 | I would like to name my files and folders according to a specific date, e.g. 03.04.2016 . I see I can use periods, but are there any disadvantages to using them in file or folder names? | They are valid and you can use them, but yes, there are disadvantages. A period is often used in regular expressions to represent a single character. A period in filenames is often used as the standard separator between the filename and its extension. A period at the start of a filename is used to indicate configuration and/or hidden files. For these reasons, using periods in filenames for other purposes often leads to issues down the road, with other command line functions and other tools and frameworks not expecting them and not working correctly. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274024",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28643/"
]
} |
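To make the regular-expression point above concrete, a minimal sketch you can run in a shell — the filenames are hypothetical:

    # An unescaped period matches any single character in a regex,
    # so a date-styled name can collide with unrelated names:
    touch 03.04.2016 03a04b2016
    ls | grep '03.04.2016'      # matches both files
    ls | grep '03\.04\.2016'    # escaped periods: matches only the first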
274,026 | I have a huge file in which I want to look for a word, say pattern . I'm trying to illustrate my case with an example. Suppose I have somewhere in my file Sample-pattern="abc" I want to write a script that will echo Sample and abc , i.e. the word preceding the hyphen and the word inside the quotes. So if there was Sample2-pattern="xyz" it would echo Sample2 and xyz Note that there might be the word pattern elsewhere in that file. But if the word pattern has anything except a hyphen before it, the script should ignore it. For example, if there is Sample3pattern..... it should ignore it. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156333/"
]
} |
274,032 | Is it a good idea to create a cron job for apt-get update; apt-get upgrade for my webserver? So not apt-get dist-upgrade . | Yes, to a limited extent. But you don't have to. There's a package called unattended-upgrades that will do it for you. Description-en: automatic installation of security upgrades This package can download and install security upgrades automatically and unattended, taking care to only install packages from the configured APT source, and checking for dpkg prompts about configuration file changes. . This script is the backend for the APT::Periodic::Unattended-Upgrade option. This package is intended for use with security upgrades, as the description suggests. Automated installation of security upgrades is a reasonably safe option. Doing other kinds of unattended upgrades is riskier. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46235/"
]
} |
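As a point of comparison with the answer above, a sketch of how the unattended-upgrades package is typically enabled on Debian — verify the file names and keys against your release before relying on them:

    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades
    # The reconfigure step writes /etc/apt/apt.conf.d/20auto-upgrades with:
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";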
274,042 | In scripts, errors are usually sent to file descriptor 2 with &2 , i.e.: echo "error" >&2 Sometimes /dev/stderr is used instead: echo "error" > /dev/stderr Looking at /dev/stderr , I see that it is only a symlink to /proc/self/fd/2 , which in turn is a symlink to /dev/pts/5 (on my current terminal). Seems a little bit overcomplicated. Is there some logic behind that? Is using /dev/stderr and &2 equivalent? Is either of those preferred over the other? | The special device /dev/stderr is system-specific, while the file descriptor 2 (not the special device /proc/self/fd/2 ) is portable. If you want to write non-portable code, those special devices are a good place to start. There are a few systems with /dev/stderr : Linux, of course, and OSX . But OSX has no /proc filesystem, and its /dev/stderr is a link to /dev/fd/2 . Further reading: Portability of “> /dev/stdout” What method should I use to write error messages to 'stderr' using 'printf' in a bash script? patches/awk-dev-stderr (autoconf-patches) Redirect stderr and stdout in a Bash script Chapter 20. I/O Redirection (Advanced Bash-Scripting Guide) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
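A small sketch demonstrating that the two forms above land on the same file descriptor (assuming a Linux system where /dev/stderr exists):

    # Both messages vanish when stderr is discarded:
    bash -c 'echo one >&2; echo two > /dev/stderr' 2>/dev/null
    # Both survive when only stdout is discarded:
    bash -c 'echo one >&2; echo two > /dev/stderr' 1>/dev/null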
274,053 | I'm using Raspbian Jessie but there are a few packages I want that aren't available (but are in the Debian Stretch repos). I want to temporarily use the Stretch repo to install them (and any otherwise-unsatisfied dependencies) but without making anything else come from there in the future. I understand things might not work, etc.; I'm just trying something out on a throwaway install :) I tried rigging some files (based on this answer ) but I got the error below; I'm not sure a) how to fix it and b) whether I'm doing things the right way! W: GPG error: http://ftp.uk.debian.org stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 | As backports has letsencrypt , I recommend using jessie-backports as it brings fewer new packages/dependencies than drinking directly from stretch. To use Jessie backports and install letsencrypt from it: Add to /etc/apt/sources.list : deb http://httpredir.debian.org/debian jessie-backports main contrib non-free Then run: apt-get update As for installing the key, I confirm you can do: gpg --keyserver pgpkeys.mit.edu --recv-key 8B48AD6246925553 gpg -a --export 8B48AD6246925553 | sudo apt-key add - and also with the key 7638D0442B90D010 gpg --keyserver pgpkeys.mit.edu --recv-key 7638D0442B90D010 gpg -a --export 7638D0442B90D010 | sudo apt-key add - And finally, to install letsencrypt : apt-get install -t jessie-backports letsencrypt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84496/"
]
} |
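If you do end up adding a newer suite instead of backports, apt pinning keeps it from taking over — a sketch with a hypothetical package name; see apt_preferences(5) for the exact syntax on your release:

    # /etc/apt/preferences.d/limit-stretch
    #   Package: *
    #   Pin: release n=stretch
    #   Pin-Priority: 100
    # With priority 100 (below the default 500), stretch is used only on request:
    sudo apt-get update
    sudo apt-get install -t stretch some-package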
274,120 | An example should clarify my question. This behavior makes sense to me: $ echo hi | cat hi $ echo hi | tee >(cat) hi hi The first case is obvious. In the 2nd case, we pipe "hi" into tee using process substitution, and one "hi" is printed by the tee 'd cat , while another is printed by tee 's pass-through pipe. So far, so good... But what happens to the first "hi" in this case: $ echo hi | tee >(echo yo) yo The return code is 141, a pipe fail. What could be causing this? I'm running Mac OS X El Capitan, bash in the default Terminal app | I think I’ve figured out how to tweak your experience to turn it into something other people will be able to reproduce: $ (echo hello; sleep 1; echo world) | tee >(cat) hello hello … and, after a brief delay, world world $ echo "$?" 0 $ (echo hello; sleep 1; echo world) | tee >(echo yo) yo hello $ echo "$?" 141 As you hopefully understand, >( command ) creates a pipe to a process running command . The standard input of command is connected to a pathname that other commands on the command line (in this case, tee ) can open and write to. When command is cat , the process sits there and reads from stdin until it gets an EOF. In this case, tee has no problem writing all the data it reads from its stdin to the pipe. But, when command is echo yo , the process writes yo to the stdout and immediately exits. This causes a problem for tee ; when it writes to a pipe with no process at the other end, it gets a SIGPIPE signal. Apparently OS X’s version of tee writes to the file(s) on the command line first, and then its stdout. So, in your example ( echo hi | tee >(echo yo) ), tee gets the pipe failure on its very first write. Whereas, the version of tee on Linux and Cygwin writes to the stdout first, so it manages to write hi to the screen before it dies. In my enhanced example, tee dies when it writes hello to the pipe, so it doesn’t get a chance to read and write world . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38051/"
]
} |
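A sketch that decodes the exit status discussed above — 141 is 128 plus 13, the signal number of SIGPIPE; because this is a race, an occasional run may still exit 0:

    echo hi | tee >(echo yo) >/dev/null
    status=$?
    echo "exit status: $status"    # 141 when tee was killed by SIGPIPE
    kill -l $((status - 128))      # prints PIPE for status 141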
274,136 | I have a 24/7 always-on Debian Jessie-based headless home server that has a large 1TB SSD for the OS and all of my frequently accessed files. This same system has 4 larger hard disk drives in a SnapRAID array. These are mainly for archiving infrequently accessed Blu-rays, and I want those drives to remain spun down in standby unless I actually read or write to them. They are all formatted as ext4 and mounted with noatime and nodiratime enabled. So even though no process or program should be regularly accessing those drives in any direct way, the hard drives constantly get spun up from standby. It seems to be related to graphical programs that provide a GUI file browser, even something like Chromium. Even if I don't browse into those drives, I'm thinking that these processes, simply by getting a list of available drives, spin up the hard disks. Much like blkid does. The problem is, it's hard to determine the root cause of this since none of these processes are actually reading or writing the filesystem on those drives, so no files are actually changing or being touched. Is there some sort of cache that I can populate or a buffer to prevent these programs from spinning up the hard drives simply by getting a list of available disks? This is honestly driving me insane, since I can't find a reliable way to keep these disks spun down even though there is no direct access to the filesystem. UPDATE : Thanks to Stephen's answer, I was able to trace the disk activity to gvfs and udisks . It's a real shame that these processes insist on waking up disks in standby when they aren't actually being accessed to do any real I/O with the filesystem. So far I have just uninstalled them, knowing that it will remove some functionality from PCManFM and the like. | You can use blktrace (available in Debian) to trace all the activity to a given device; for example sudo blktrace -d /dev/sda -o - | blkparse -i - or just sudo btrace /dev/sda will show all the activity on /dev/sda . The output looks like 8,0 3 51 135.424002054 16857 D WM 167775248 + 8 [kworker/u16:0] 8,0 3 52 135.424011323 16857 I WM 209718336 + 8 [kworker/u16:0] 8,0 3 0 135.424011659 0 m N cfq496A / insert_request The fifth column is the process identifier, and the last one gives the process name when there is one. You can also store traces for later analysis; blktrace includes a number of analysis tools such as the aforementioned blkparse and btt . blktrace is a very low-level tool so it may not be all that easy to figure out what caused activity in the first place, but with the help of the included documentation (see /usr/share/doc/blktrace if you installed the Debian package) and the blktrace paper it should be possible to figure out what's causing the spin-ups. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162981/"
]
} |
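To rank which processes generate the I/O, the blkparse output can be summarized — a rough sketch; the bracketed process name is the last field in the sample output above, but field layout can vary, so treat the awk as a starting point:

    sudo btrace /dev/sda |
      awk '$NF ~ /^\[/ { count[$NF]++ } END { for (p in count) print count[p], p }' |
      sort -rn | head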
274,138 | Is there a way for $val to be set in a () but not be seen by b ()? set -u -e -o pipefail a () { local +x val="myval" echo "in a: VAL= $val" b } b () { echo "in b: VAL= $val" } a Produces: in a: VAL= myval in b: VAL= myval # This should not happen. I was hoping to use local / typeset options instead of the use of subshells to protect variables from being seen in other functions. I've checked the manual (Functions section, typeset section) and there doesn't seem to be a way. However, I could have easily missed something. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42107/"
]
} |
274,144 | I have a test.wav file. I need to use this file to process an application, with the following properties: mono channel 16 kHz sample rate 16-bit Now, I'm using the following commands to attain these properties: sox disturbence.wav -r 16000 disturbence_16000.wav sox disturbence_16000.wav -c 1 disturbence_1600_mono.wav sox disturbence_1600_mono.wav -s -b 16 disturbence_1600_mono_16bit.wav Here, to get a single file, three steps are involved and two temporary files are created. It is a time-consuming process. I thought of writing a script to do this process, but I'm keeping that as a last option. In a single command, can I convert a .wav file to the required format? | sox disturbence.wav -r 16000 -c 1 -b 16 disturbence_16000_mono_16bit.wav gives, within one command: a sample rate of 16 kHz ( -r 16000 ), one channel (mono) ( -c 1 ), and a bit depth of 16 bits ( -b 16 ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/274144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95366/"
]
} |
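To apply the same conversion to a whole directory of files, the flags above work unchanged in a loop — a minimal sketch with hypothetical filenames:

    for f in *.wav; do
        sox "$f" -r 16000 -c 1 -b 16 "${f%.wav}_16k_mono.wav"
    done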
274,166 | I'm trying to put the "sox" utility in a two-pipe command to resample a mono 44kHz audio file to a 16kHz audio file. It works fine with a single pipe: $ speexdec toto.oga - | sox -V -t raw -b 16 -e signed -c 1 -r 44.1k - -r 16k toto.wav But when I add another pipe, the sox utility complains: $ speexdec toto.oga - | sox -V -t raw -b 16 -e signed -c 1 -r 44.1k - -r 16k - | cat - > toto.wav sox FAIL formats: can't determine type of `-' Any idea? | You need to declare the type of the sox output by adding -t wav before the second - . When it's a file name, sox peeps at the name and deduces the type from there, but when it's stdout, the type needs to be declared. You might also want to declare all other settings as well ( -b 16 -e signed -c 1 ) rather than assuming they are transferred from the input; all before the last - that nominates the output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135038/"
]
} |
274,170 | Since I want to protect my ssh connections, I set some global cipher suite options to restrict the set of used algorithms. But recently I've encountered a server which doesn't support some of those algorithms. So, I need to selectively enable deprecated algorithms for a specific host record in the client (my system) configuration. I found out that the options override is not working as I expected. Let's take a minimal (not-)working example for GitHub: HostKeyAlgorithms [email protected],ssh-ed25519,[email protected],ecdsa-sha2-nistp521,ecdsa-sha2-nistp256 Host github HostKeyAlgorithms ssh-rsa Hostname github.com Port 22 User git PubkeyAuthentication yes IdentityFile ~/.ssh/some-filename-here Having that, I receive the following error ( HostKeyAlgorithms is not overridden at all): debug1: /home/username/.ssh/config line 14: Applying options for github <...> debug2: kex_parse_kexinit: [email protected],ssh-ed25519,[email protected],ecdsa-sha2-nistp521,ecdsa-sha2-nistp256 <...> Unable to negotiate with 192.30.252.130: no matching host key type found. Their offer: ssh-dss,ssh-rsa It is similarly not working for the global PubkeyAuthentication no option with an override in a host configuration. Also, match doesn't help either: match host github HostKeyAlgorithms ssh-rsa So, is there a way to selectively redefine those options? NOTE: I'm using openssh-7.1_p2-r1 on Gentoo. | OpenSSH options might behave somewhat strangely at first sight. But the manual page for ssh_config documents it well: For each parameter, the first obtained value will be used . The configuration files contain sections separated by “Host” specifications, and that section is only applied for hosts that match one of the patterns given in the specification. The matched host name is usually the one given on the command line (see the CanonicalizeHostname option for exceptions.) You might rewrite your config like this to achieve what you need: Host github HostKeyAlgorithms ssh-rsa Hostname github.com Port 22 User git PubkeyAuthentication yes IdentityFile ~/.ssh/some-filename-here Host * HostKeyAlgorithms [email protected],ssh-ed25519,[email protected],ecdsa-sha2-nistp521,ecdsa-sha2-nistp256 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103189/"
]
} |
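You can check which value wins for a given host without connecting — a sketch using ssh -G, which prints the effective client configuration (available since OpenSSH 6.8):

    ssh -G github | grep -i hostkeyalgorithms          # should report ssh-rsa
    ssh -G example.com | grep -i hostkeyalgorithms     # should report the Host * list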
274,175 | I have a function for quickly making a new SVN branch which looks like so function svcp() { svn copy "repoaddress/branch/$1.0.x" "repoaddress/branch/dev/$2" -m "dev branch for $2"; } Which I use to quickly make a new branch without having to look up and copy paste the addresses and some other stuff. However for the message (-m option), I'd like to have it so that if I provide a third parameter then that is used as the message, otherwise the 'default' message of "dev branch for $2" is used. Can someone explain how this is done? | function svcp() { msg=${3:-dev branch for $2} svn copy "repoaddress/branch/$1.0.x" "repoaddress/branch/dev/$2" -m "$msg";} the variable msg is set to $3 if $3 is non-empty, otherwise it is set to the default value of dev branch for $2 . $msg is then used as the argument for -m . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/274175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155535/"
]
} |
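For illustration, how the function above behaves with and without the optional third argument — the branch names are hypothetical:

    svcp 2 feature-login                       # -m "dev branch for feature-login"
    svcp 2 feature-login "fix for issue 42"    # -m "fix for issue 42"
    # Note: ${3:-...} also falls back when $3 is set but empty;
    # ${3-...} would keep an explicitly empty message instead.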
274,178 | I want to use diff only to check whether the same files and directories exist in two locations, but NOT compare the contents of the files themselves, because that's all I need and a regular diff just takes too long for the amount of data. How would I go about this? Is there some other standard Debian tool that can accomplish this? | You can't use diff for that. Why would your requirement be to use diff? Why do people always come to conclusions without having examined the possible solutions in detail? You could use diff -qr , but that wouldn't be wise from a performance point of view if the only goal is to compare the directory structure, as outlined here . One of the answers to that question was vimdiff <(cd dir1; find . | sort) <(cd dir2; find . | sort) which will give you a nice side-by-side display of the two directory hierarchies with any common sections folded. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164079/"
]
} |
274,229 | I know I can write to a file by simply doing :w <file> . I would like to know though how can I write to a file by appending to it instead of overwriting it. Example use case: I want to take some samples out of a log file into another file. To achieve that today I can do: Open the log file Select some lines with Shift+v Write to a file: :w /tmp/samples Select some more lines with Shift+v Append to /tmp/samples with :w !cat - >> /foo/samples Unfortunately step 5 is long, ugly and error prone (missing a > makes you lose data). I hope Vim has something better here. | From :h :w : :w_a :write_a E494:[range]w[rite][!] [++opt] >> Append the specified lines to the current file.:[range]w[rite][!] [++opt] >> {file} Append the specified lines to {file}. '!' forces the write even if file does not exist. So, if you have selected the text using visual mode, just do :w >> /foo/samples ( :'<,'> will be automatically prepended). If you miss out on a > , Vim will complain: E494: Use w or w>> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/274229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72086/"
]
} |
274,273 | I was reading the famous Unix Recovery Legend , and it occurred to me to wonder: If I had a BusyBox shell open, and the BusyBox binary were itself deleted, would I still be able to use all the commands included in the BusyBox binary? Clearly I wouldn't be able to use the BB version of those commands from another running shell such as bash , since the BusyBox file itself would be unavailable for bash to open and run. But from within the running instance of BusyBox, it appears to me there could be two methods by which BB would run a command: It could fork and exec a new instance of BusyBox, calling it using the appropriate name—and reading the BusyBox file from disk to do so. It could fork and perform some internal logic to run the specified command (for example, by running it as a function call). If (1) is the way BusyBox works, I would expect that certain BusyBox-provided commands would become unavailable from within a running instance of BB after the BB binary were deleted. If (2) is how it works, BusyBox could be used even for recovery of a system where BB itself had been deleted—provided there were still a running instance of BusyBox accessible. Is this documented anywhere? If not, is there a way to safely test it? | By default, BusyBox doesn't do anything special regarding the applets that it has built in (the commands listed with busybox --help ). However, if the FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS options are enabled at compile time, then when BusyBox sh¹ executes a command which is a known applet name, it doesn't do the normal PATH lookup, but instead runs its built-in applets through a shortcut: Applets that are declared as “noexec” in the source code are executed as function calls in a forked process. As of BusyBox 1.22, the following applets are noexec: chgrp , chmod , chown , cksum , cp , cut , dd , dos2unix , env , fold , hd , head , hexdump , ln , ls , md5sum , mkfifo , mknod , sha1sum , sha256sum , sha3sum , sha512sum , sort , tac , unix2dos . Applets that are declared as “nofork” in the source code are executed as function calls in the same process. As of BusyBox 1.22, the following applets are nofork: [[ , [ , basename , cat , dirname , echo , false , fsync , length , logname , mkdir , printenv , printf , pwd , rm , rmdir , seq , sync , test , true , usleep , whoami , yes . Other applets are really executed (with fork and execve ), but instead of doing a PATH lookup, BusyBox executes /proc/self/exe , if available (which is normally the case on Linux), and a path defined at compile time otherwise. This is documented in a bit more detail in docs/nofork_noexec.txt . The applet declarations are in include/applets.src.h in the source code. Most default configurations turn these features off, so that BusyBox executes external commands like any other shell. Debian turns these features on in both its busybox and busybox-static packages. So if you have a BusyBox executable compiled with FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS , then you can execute all BusyBox commands from a BusyBox shell even if the executable is deleted (except for the applets that are not listed above, if /proc/self/exe is not available). ¹ There are actually two implementations of "sh" in BusyBox — ash and hush — but they behave the same way in this respect. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/274273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
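A safe way to try the scenario above without touching the system copy — a sketch assuming a BusyBox built with FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS enabled (e.g. Debian's packages):

    cp /bin/busybox /tmp/bb-test
    /tmp/bb-test sh             # start a shell from the throwaway copy
    # inside that shell:
    rm /tmp/bb-test             # delete the binary out from under the shell
    ls /                        # noexec applet: runs as a function call in a fork
    echo still alive            # nofork applet: runs inside the same process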
274,330 | #!/bin/bashMAXCDN_ARRAY="108.161.176.0/20 94.46.144.0/20 146.88.128.0/20 198.232.124.0/22 23.111.8.0/22 217.22.28.0/22 64.125.76.64/27 64.125.76.96/27 64.125.78.96/27 64.125.78.192/27 64.125.78.224/27 64.125.102.32/27 64.125.102.64/27 64.125.102.96/27 94.31.27.64/27 94.31.33.128/27 94.31.33.160/27 94.31.33.192/27 94.31.56.160/27 177.54.148.0/24 185.18.207.65/26 50.31.249.224/27 50.31.251.32/28 119.81.42.192/27 119.81.104.96/28 119.81.67.8/29 119.81.0.104/30 119.81.1.144/30 27.50.77.226/32 27.50.79.130/32 119.81.131.130/32 119.81.131.131/32 216.12.211.59/32 216.12.211.60/32 37.58.110.67/32 37.58.110.68/32 158.85.206.228/32 158.85.206.231/32 174.36.204.195/32 174.36.204.196/32"$IP = 108.161.184.123if [ $IP in $MAXCDN_ARRAY ]; then: echo "$IP is in MAXCDN range" else: echo "$IP is not in MAXCDN range"fi I have a list of IPs in MAXCDN_ARRAY to be used as whitelist. I want to check if a specific IP address is in range in this array. How can I structure the code so that it can compare all IPs in the array and say the specific IP in in range of this list or not? | You can use grepcidr to check if an IP address is in a list of CIDR networks. #! /bin/bashNETWORKS="108.161.176.0/20 94.46.144.0/20 146.88.128.0/20 198.232.124.0/22 23.111.8.0/22 217.22.28.0/22 64.125.76.64/27 64.125.76.96/27 64.125.78.96/27 64.125.78.192/27 64.125.78.224/27 64.125.102.32/27 64.125.102.64/27 64.125.102.96/27 94.31.27.64/27 94.31.33.128/27 94.31.33.160/27 94.31.33.192/27 94.31.56.160/27 177.54.148.0/24 185.18.207.65/26 50.31.249.224/27 50.31.251.32/28 119.81.42.192/27 119.81.104.96/28 119.81.67.8/29 119.81.0.104/30 119.81.1.144/30 27.50.77.226/32 27.50.79.130/32 119.81.131.130/32 119.81.131.131/32 216.12.211.59/32 216.12.211.60/32 37.58.110.67/32 37.58.110.68/32 158.85.206.228/32 158.85.206.231/32 174.36.204.195/32 174.36.204.196/32"for IP in 108.161.184.123 108.161.176.123 192.168.0.1 172.16.21.99; do grepcidr "$NETWORKS" <(echo "$IP") >/dev/null && \ echo "$IP is in MAXCDN range" || \ echo "$IP is not in MAXCDN range"done NOTE: grepcidr expects the IP address(es) it is matching to be in a file, not just an argument on the command line. That's why I had to use <(echo "$IP") above. Output: 108.161.184.123 is in MAXCDN range108.161.176.123 is in MAXCDN range192.168.0.1 is not in MAXCDN range172.16.21.99 is not in MAXCDN range grepcidr is available pre-packaged for several distros, including Debian: Package: grepcidrVersion: 2.0-1Description-en: Filter IP addresses matching IPv4 CIDR/network specification grepcidr can be used to filter a list of IP addresses against one or more Classless Inter-Domain Routing (CIDR) specifications, or arbitrary networks specified by an address range. As with grep, there are options to invert matching and load patterns from a file. grepcidr is capable of comparing thousands or even millions of IPs to networks with little memory usage and in reasonable computation time. . grepcidr has endless uses in network software, including: mail filtering and processing, network security, log analysis, and many custom applications. Homepage: http://www.pc-tools.net/unix/grepcidr/ Otherwise, the source is available at the link above. Another alternative is to write a perl or python script using one of the many libraries/modules for manipulating and checking IPv4 addresses with those languages. 
For example, the perl module Data::Validate::IP has an is_innet_ipv4($ip, $network) function; Net::CIDR::Lite has a very similar $cidr->find($ip); method; and Net::IPv4Addr has an ipv4_in_network() function. python has comparable libraries, including ipy , ipaddr , and ipcalc , amongst others. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/274330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81390/"
]
} |
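If the networks already live in a file, grepcidr can read its patterns from there, avoiding the embedded variable — a sketch; option support varies between grepcidr versions, so check your build's man page:

    # networks.txt: one CIDR per line; ips.txt: one address per line
    grepcidr -f networks.txt ips.txt        # print the addresses that match
    grepcidr -v -f networks.txt ips.txt     # print the ones that do not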
274,360 | Whenever I try to execute these lines to configure SELinux to install xrdp from this tutorial: # chcon --type=bin_t /usr/sbin/xrdp # chcon --type=bin_t /usr/sbin/xrdp-sesman I get these errors: chcon: can't apply partial context to unlabeled file '/usr/sbin/xrdp' chcon: can't apply partial context to unlabeled file '/usr/sbin/xrdp-sesman' I'm on CentOS 7.2 64-bit. | I'm also on CentOS 7, and this works for me: chcon -h system_u:object_r:bin_t:s0 /usr/sbin/xrdp chcon -h system_u:object_r:bin_t:s0 /usr/sbin/xrdp-sesman | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164319/"
]
} |
274,428 | I have a PDF file that contains images and I want to reduce its size in order to upload it to a site with a size limit. So, how can I reduce the size of a PDF file from the command line? | You can use gs - Ghostscript (PostScript and PDF language interpreter and previewer) as follows: Set pdfwrite as the output device with -sDEVICE=pdfwrite Use the appropriate -dPDFSETTINGS . From the documentation : -dPDFSETTINGS= configuration Presets the "distiller parameters" to one of four predefined settings: /screen selects low-resolution output similar to the Acrobat Distiller "Screen Optimized" setting. /ebook selects medium-resolution output similar to the Acrobat Distiller "eBook" setting. /printer selects output similar to the Acrobat Distiller "Print Optimized" setting. /prepress selects output similar to the Acrobat Distiller "Prepress Optimized" setting. /default selects output intended to be useful across a wide variety of uses, possibly at the expense of a larger output file. Use the -o option to name the output file; it also sets -dNOPAUSE and -dBATCH (see Interaction-related parameters ) Example: $ du -h file.pdf 27M file.pdf $ gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -q -o output.pdf file.pdf $ du -h output.pdf 900K output.pdf Here -q suppresses normal startup messages and also does the equivalent of -dQUIET , which suppresses routine information comments. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/274428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
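Wrapped as a small shell function for repeated use — a sketch; the default preset here is an assumption, so pick whichever -dPDFSETTINGS value suits your output:

    shrinkpdf() {
        # usage: shrinkpdf input.pdf output.pdf [preset]
        gs -sDEVICE=pdfwrite -dPDFSETTINGS="${3:-/ebook}" -q -o "$2" "$1"
    }
    shrinkpdf big.pdf small.pdf           # defaults to /ebook
    shrinkpdf big.pdf tiny.pdf /screen    # lower quality, smaller file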
274,453 | In people's ' .*rc ' files online and in various code, I tend to see a lot of people manually using ANSI escape sequences instead of tput . My understanding was that tput is more universal/safe, so this makes me wonder: Is there any objective reason one should use escape sequences in place of tput ? (Portability, robustness on errors, unusual terminals...?) | tput can handle expressions (for instance in sgr and setaf ) which the typical shell-scripter would find less than usable. To get an idea of what is involved, see the output from infocmp with the -f (formatting) option applied. Here is one example using those strings from xterm's terminfo descriptions : xterm-16color|xterm with 16 colors, colors#16, pairs#256, setab=\E[ %? %p1%{8}%< %t%p1%{40}%+ %e %p1%{92}%+ %;%dm, setaf=\E[ %? %p1%{8}%< %t%p1%{30}%+ %e %p1%{82}%+ %;%dm, setb= %p1%{8}%/%{6}%*%{4}%+\E[%d%p1%{8}%m%Pa %?%ga%{1}%= %t4 %e%ga%{3}%= %t6 %e%ga%{4}%= %t1 %e%ga%{6}%= %t3 %e%ga%d %; m, setf= %p1%{8}%/%{6}%*%{3}%+\E[%d%p1%{8}%m%Pa %?%ga%{1}%= %t4 %e%ga%{3}%= %t6 %e%ga%{4}%= %t1 %e%ga%{6}%= %t3 %e%ga%d %; m, use=xterm+256color, use=xterm-new, The formatting splits things up - a script or program to do the same would have to follow those twists and turns. Most people give up and just use the easiest strings. The 16-color feature is borrowed from IBM aixterm, which maps 16 codes each for foreground and background onto two ranges; foreground onto 30-37 and 90-97, background onto 40-47 and 100-107. A simple script #!/bin/sh TERM=xterm-16color export TERM printf ' %12s %12s\n' Foreground Background for n in $(seq 0 15) do F=$(tput setaf $n | cat -v) B=$(tput setab $n | cat -v) printf '%2d %12s %12s\n' $n "$F" "$B" done and its output show how it works: Foreground Background 0 ^[[30m ^[[40m 1 ^[[31m ^[[41m 2 ^[[32m ^[[42m 3 ^[[33m ^[[43m 4 ^[[34m ^[[44m 5 ^[[35m ^[[45m 6 ^[[36m ^[[46m 7 ^[[37m ^[[47m 8 ^[[90m ^[[100m 9 ^[[91m ^[[101m 10 ^[[92m ^[[102m 11 ^[[93m ^[[103m 12 ^[[94m ^[[104m 13 ^[[95m ^[[105m 14 ^[[96m ^[[106m 15 ^[[97m ^[[107m The numbers are split up because aixterm uses the 30-37 and 40-47 ranges to match ECMA-48 (also known as "ANSI") colors, and uses the 90-107 range for codes not defined in the standard. Here is a screenshot with xterm using TERM=xterm-16color , where you can see the effect. Further reading: infocmp - compare or print out terminfo descriptions tput , reset - initialize a terminal or query terminfo database ECMA-48: Control Functions for Coded Character Sets aixterm Command Aren't bright colors the same as bold? (XTerm FAQ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121116/"
]
} |
274,482 | If you do something silly like cat /var/log/wtmp your terminal can get messed up, as shown in the screenshot. I know there are a number of ways of fixing this . One of the ways not mentioned in that post, which I was told about years ago, is to run the command highlighted in a red box in the screenshot: head /bin/ls This works. Why? | Terminals are controlled by escape sequences that are sent in-line with the character data to be displayed. That is how, for example, echo -e '\e[34m' will turn the text on many terminals blue. It's echoing some characters to the terminal—they happen to be an escape sequence which sets the foreground color. The terminal was messed up by being instructed to switch into some alternative character set (and possibly a bunch more things). It did that because /var/log/wtmp happened to contain the escape sequences used to switch character sets. Technically, it's not really messed up—it's operating exactly as it is designed to. (Try tput smacs to mess up your terminal on demand; tput rmacs to change that parameter back.) reset , etc. function by sending escape sequences resetting various parameters to their defaults. That "fixed" the terminal. That head /bin/ls trick is working because your /bin/ls (or at least the portion printed by head ) happens to contain an escape sequence changing the terminal parameters back. That's not at all portable—it doesn't work here for example—and likely does a much less thorough job resetting defaults than reset , etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274482",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2426/"
]
} |
274,498 | After going through the bash documentation , this question , and this one , it's still not clear to me how I can perform atomic write (append) operations to a file in bash. I have a script that is run in multiple instances and at some point must write data to a file: echo "$RESULT" >> `pwd`/$TEMP_DIR/$OUT_FILE How is it possible to make all write operations from all concurrently running scripts to that file atomic (so that data from one instance doesn't overlap data from another)? | It seems you need to use flock as in the example from the man page ( http://linux.die.net/man/1/flock ): ( flock -x 200 # Put here your commands that must do some writes atomically ) 200>/var/lock/mylockfile And put all your commands that must be atomic in (). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82521/"
]
} |
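Applied to the script from the question, a fuller sketch of the locked append — the lock file path is an arbitrary choice:

    #!/bin/bash
    outfile="$(pwd)/$TEMP_DIR/$OUT_FILE"
    (
        flock -x 200                  # blocks until the exclusive lock is held
        echo "$RESULT" >> "$outfile"
    ) 200>/var/lock/mylockfile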
274,506 | I am running Debian Jessie on VMware, and the default resolution is 800x600. I can set it to 1360x768 with xrandr , but the next time I log in it's still 800x600. Some posts on the internet said to edit my xorg.conf file. I ran Xorg -configure , and got an xorg.conf.new file. I tried testing it, but the screen remains black (running startx without the config works fine). Some other post said to edit the display.xml file in this directory ( .config/xfce4/xfconf/xfce-perchannel-xml ), but I don't have a display.xml file. Any idea how to do this? Log file: http://pastebin.com/YaFrfnum Conf file: http://pastebin.com/nYGg06TJ | One of many ways to change settings in a desktop environment is to use the tools that are provided with that environment. In this case XFCE is a desktop environment and it offers such tools, some with a graphical interface. The simplest way to change the resolution is to use "Display", which can be found under Application Menu > Settings > Display, or can be invoked from a terminal emulator using the xfce4-display-settings command. Of course, if you do not like a graphical solution, you can always manually modify the respective file (in this case .config/xfce4/xfconf/xfce-perchannel-xml/displays.xml ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164427/"
]
} |
274,534 | In Mac, if I have a few Terminal windows open and I restart the computer or quit Terminal, the windows that were open last are opened again with their working directories and command histories retained. Is there a way to similarly reopen Ubuntu Terminal windows? | Technically it depends upon the chosen desktop. The question has been asked here and there without any good answers. For gnome-terminal, a commonly proposed solution is adding the --save-config option, e.g., gnome-terminal --save-config=/home/whatever/foo There is also the dconf-editor which might be effective, depending on what applications you use. In either case, those would only restore a shell to a given working directory; restoring programs running within the shell seems to be glossed over generally (except for the rare case where the application also has session support). Further reading: How to Remember and Restore Running Applications on Next Logon How to Auto Save Sessions in Ubuntu 14.04 Using Dconf-Editor Some fast way to save and restore tabs of Terminal? How to remember multiple tabs' session in terminal? (Alike FF session manager) gnome-save-session won't save gnome-terminals gnome-terminal profiles are not being loaded | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163939/"
]
} |
274,548 | I'm trying to match and merge two sets of sorted data, one set per file. Each file contains two columns: the key field and the associated value. The resulting output should contain three columns: the key field, the value (if any) from the first file, and the value (if any) from the second file. I need to include lines of data that are not matched. First file "john" apple,green cherry,red orange,orange Second file "jane" apple,red banana,yellow cherry,yellow kiwi,green Desired result apple,green,red banana,,yellow cherry,red,yellow kiwi,,green orange,orange, I thought initially that this was a trivial job for join LC_ALL=C join -j1 -a1 -a2 -t',' john jane But the result of the -a1 -a2 puts the unmatched value always in the second column: apple,green,red banana,yellow cherry,red,yellow kiwi,green orange,orange I need to be able to see from which source file the unmatched value originates, ideally by having those values in the appropriate second or third column of the result file, but I cannot work out a simple way of achieving this without descending into awk ... getline() type constructs. Any suggestions, please? | You want -o auto : join -t, -j 1 -a 1 -a 2 -o auto john jane From man join : -o FORMAT obey FORMAT while constructing output line ︙ If FORMAT is the keyword ' auto ', then the first line of each file determines the number of fields output for each line. Or better explained from GNU Coreutils: join invocation (follow the link into General options in join ): ‘ -o auto ’ If the keyword ‘ auto ’ is specified, infer the output format from the first line in each file. This is the same as the default output format but also ensures the same number of fields are output for each line. Missing fields are replaced with the -e option and extra fields are discarded. % cat john apple,green cherry,red orange,orange % cat jane apple,red banana,yellow cherry,yellow kiwi,green % join -t, -j 1 -a 1 -a 2 -o auto john jane apple,green,red banana,,yellow cherry,red,yellow kiwi,,green orange,orange, | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100397/"
]
} |
274,549 | When I search a file in less or more, it will remove lines and replace them with "skipping". Does anyone know what causes this and/or how to avoid it? After I '/' search a log file using either more or less, I see this: crw-rw-rw- 1 root staff 40, 0 Oct 27 2013 fscsi0 crw-rw-rw- 1 root staff 40, 1 Oct 27 2013 fscsi1 brw-rw---- 1 root system 10, 9 Oct 27 2013 hd1 ...skipping... crw-rw-rwT 1 root system 38, 7 Oct 27 2013 vhost7 crw-rw-rwT 1 root system 38, 8 Oct 27 2013 vhost8 crw-rw-rwT 1 root system 38, 9 Oct 27 2013 vhost9 crw------- 1 root system 12, 0 Oct 27 2013 vio0 crw------- 1 root system 20, 0 Apr 5 00:53 vty0 drwxr-xr-x 2 root system 256 Oct 15 2008 xti crw-rw-rw- 1 root system 2, 3 Oct 27 2013 zero ...skipping... crw-rw-rwT 1 root system 38, 7 Oct 27 2013 vhost7 crw-rw-rwT 1 root system 38, 8 Oct 27 2013 vhost8 crw-rw-rwT 1 root system 38, 9 Oct 27 2013 vhost9 | Those "skipping" lines are perfectly normal. Searching for some string is much faster than displaying each and every line on screen. Therefore if you search for a word, less will scan the file for that word and once it finds a line it will display only that page of lines where it found the word. If you scroll back using your terminal you will see those "skipping" lines. If you want to go backwards in your text just use the proper keys like arrow keys. This will move you through the text as it is without any "skipping"s. PS: You can type h in less for a list of keys and what they do. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164385/"
]
} |
274,584 | I am looking for a way inside a bash script to return the owner of a file. I'm guessing that this is possible using "gawk" but I've honestly got no clue and there doesn't seem to be a comprehensible answer already posted online. | Use stat for that. In a GNU system: To get the username of the owner: stat -c '%U' file.txt To get the user ID (UID) of the owner: stat -c '%u' file.txt Assuming the file is file.txt . For FreeBSD and Mac OS X (thanks to @cas) : For username: stat -f '%Su' file.txt For UID: stat -f '%u' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164476/"
]
} |
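Combining the two variants above into one portable helper — a sketch; it assumes any stat lacking GNU's --version flag accepts the BSD-style -f format option, which is worth verifying on your platform:

    file_owner() {
        if stat --version >/dev/null 2>&1; then
            stat -c '%U' "$1"     # GNU coreutils stat
        else
            stat -f '%Su' "$1"    # BSD / macOS stat
        fi
    }
    file_owner file.txt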