source_id (int64, 1 to 74.7M) | question (string, lengths 0 to 40.2k) | response (string, lengths 0 to 111k) | metadata (dict) |
---|---|---|---|
174,129 | I have a list of files that are using %20 to indicate a space in their names. I'm currently trying to do the following to change them all from %20 to a space " ". for x in *.txt do mv $x $(echo $x | sed -e 's/%20/ /')done That doesn't play nice. It spits out this: usage: mv [-f | -i | -n] [-v] source target mv [-f | -i | -n] [-v] source ... directory But I do the same thing on some of my images when I'm importing them and it works fine: for x in *_MG*.CR2 do mv $x $(echo $x | sed -e 's/_MG/_IMG/')done Forgetting that one is using .TXT and the other is .CR2 , what am I totally overlooking here? Keeping in mind I'm using a work laptop so I can't add Homebrew to install any additional software. And it's Mac OS X for the OS. | There's no need to use sed, this can be handled by parameter expansion mv -- "$x" "${x//%20/ }" FWIW, I'd be replacing those %20 s with an underscore (or something); I hate file names that contain spaces. But I guess learning how to write bash scripts that can handle spaces and other special characters in file names is a Good Thing. :) As Izkata mentions in the comments, it's very important to quote parameters! Double quotes in bash (& related shells) aren't mere string delimiters, they also signify that you don't want word-splitting to occur. So if x contains a filename with a space in it $x will be treated as two arguments, but "$x" will be treated as a single argument. If we use single quotes we inhibit parameter expansion so '$x' just results in a literal string containing $x , which is generally not what we want. :) PS. Thanks to Monsieur Chazelas for supplying the missing -- in my command line. -- indicates the end of options in the argument list; without it it's possible for file names commencing with - to be interpreted as options. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48151/"
]
} |
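The accepted fix can be put together as a small sketch. The function name is mine, and it assumes bash or zsh for the ${x//pattern/replacement} expansion:

```shell
# Sketch of the corrected loop: quote the variable so names containing
# spaces stay one argument, use parameter expansion instead of sed, and
# pass "--" so names starting with "-" are not taken as mv options.
fix_names() {
    for x in *.txt; do
        [ -e "$x" ] || continue        # no matches: the glob stays literal
        mv -- "$x" "${x//%20/ }"
    done
}
# usage: cd /path/to/files && fix_names
```

The guard before mv also avoids the confusing usage message the question hit when an unquoted name with a space was split into two arguments.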
174,200 | I'm trying to view the contents of a zip archive using an extremely simple regular expression. This works: rmorton@Rockette:~$ unzip -l Downloads/WeiDU-Linux-236.zip "*/i386/tolower" "*/i386/weidu" "*/i386/weinstall"Archive: Downloads/WeiDU-Linux-236.zip Length Date Time Name--------- ---------- ----- ---- 135308 2013-11-17 21:48 WeiDU-Linux/bin/i386/tolower 774816 2013-11-17 21:47 WeiDU-Linux/bin/i386/weidu 130392 2013-11-17 21:48 WeiDU-Linux/bin/i386/weinstall--------- ------- 1040516 3 files But this does not: rmorton@Rockette:~$ unzip -l Downloads/WeiDU-Linux-236.zip "*/i386/(tolower|weidu|weinstall)"Archive: Downloads/WeiDU-Linux-236.zip Length Date Time Name--------- ---------- ----- ------------- ------- 0 0 files What gives? Do I have a misunderstanding of how regular expressions work on the command line, or am I missing something obvious? | Read the man page of unzip. It doesn't talk about regular expressions, just about the two special characters * and ?. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22172/"
]
} |
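Since unzip only understands the two shell-style wildcards, one workaround (a sketch of mine, assuming bash or zsh) is to let the shell's brace expansion generate the three member patterns; the quoted * is never expanded by the shell, so unzip still receives it literally:

```shell
# Brace expansion builds the three patterns before any command runs:
printf '%s\n' "*/i386/"{tolower,weidu,weinstall}
# which is why this listing is equivalent to the working command above:
# unzip -l Downloads/WeiDU-Linux-236.zip "*/i386/"{tolower,weidu,weinstall}
```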
174,206 | I'm reading myself for the release of Jessie on Debian, so I'm extra cautious (should be said paranoid) about any message that can cause problems, namely warnings. My system is a desktop with Debian testing/unstable installed, on ext4 partitions for both /boot and / , yet I'm seeing this message while upgrading the grub-pc package in Debian: Installing for i386-pc platform.Installation finished. No error reported.Installing for i386-pc platform.grub-install: warning: File system `ext2' doesn't support embedding.grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..Installation finished. No error reported.Generating grub configuration file ... Why is grub saying that my system is embedded? What is the cause of this? I tried to check the grub-install binary, but I couldn't make sense of it. | Most people coming to this from a search engine are probably wondering, "why do I get this error?": warning: File system `ext2' doesn't support embedding. warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged.. error: will not proceed with blocklists. Because you did, e.g.: grub-install /dev/sda1 instead of grub-install /dev/sda I.e. tried to install to a partition instead of the MBR. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/174206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41104/"
]
} |
174,210 | According to my browser (Firefox 34.0) the installed version of the Shockwave Flash plugin appears to be 11.2.202.424. This version is considered to be insecure: https://helpx.adobe.com/security/products/flash-player/apsb14-27.html The plugin is therefore blocked: https://blocklist.addons.mozilla.org/en-US/firefox/blocked/p796 In the attempt to update the plugin to the version currently considered safe (11.2.202.425), I found out that the recommended version apparantly is already installed: $ yum info flash-pluginLoaded plugins: langpacks, refresh-packagekitInstalled PackagesName : flash-pluginArch : x86_64Version : 11.2.202.425Release : releaseSize : 19 MRepo : installedFrom repo : adobe-linux-x86_64Summary : Adobe Flash Player 11.2URL : http://www.adobe.com/downloads/License : CommercialDescription : Adobe Flash Plugin 11.2.202.425 : Fully Supported: Mozilla SeaMonkey 1.0+, Firefox 1.5+, Mozilla : 1.7.13+ My operating system: $ cat /etc/redhat-release Fedora release 20 (Heisenbug) My questions: Do I have multiple versions of this plugin installed? How can I fix my installation? | I ran into this too, and found the answer in mozilla's bugzilla . In short, it happened because the plugin was updated while Firefox was running, and the pluginreg.dat got corrupted. So: exit firefox rm ~/.mozilla/firefox/*/pluginreg.dat start firefox again and you'll be all set. (The file will be regenerated.) Of course, you'll need to make sure that the .425 version is installed via yum update or other method. Presumably, this problem has been happening harmlessly for many updates — this is just the first where we all noticed it because of the blacklisting. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/174210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17609/"
]
} |
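The three steps from the answer can be rolled into one small helper; this is a sketch (the function name is mine), and it assumes the default per-user profile layout under ~/.mozilla/firefox:

```shell
# Remove the corrupted plugin registry from every profile; Firefox
# regenerates pluginreg.dat on its next start.
fix_pluginreg() {
    profile_root=${1:-"$HOME/.mozilla/firefox"}
    rm -f "$profile_root"/*/pluginreg.dat
}
# usage: quit firefox, run fix_pluginreg, then start firefox again
```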
174,218 | I am currently logged into a remote server using a particular user account.I kicked off a really long-running script, and have no idea when it will finish. It is blocking me from using the terminal that I connected to the server with. What will happen if I open a second terminal and connect with the same user account to the same server? Will it log off my connection from the other terminal, and interrupt / kill the other running script? I am using Ubuntu 12.04 LTS. | You can have multiple connections without a problem. Each connection will have its own shell. In the future you may want to start the script with nohup and background the script. This will allow the script to continue to run, even if you loose your shell. Also you can continue to use your existing shell without needing to open a new connection nohup ./foo.sh & | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91690/"
]
} |
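The nohup idea above can be wrapped in a tiny helper (name mine) that also keeps the job's output and hands back its PID for later checking:

```shell
# Start a command immune to hangups, capture its output in nohup.log,
# and print the PID of the detached job so it can be monitored later.
start_detached() {
    nohup "$@" > nohup.log 2>&1 &
    echo "$!"
}
# usage: pid=$(start_detached ./long_running_script.sh)
#        kill -0 "$pid" && echo "still running"
```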
174,230 | Which Linux tools help to backup and restore a IMAP mail account including all mail and subfolders? I expect disconnects for large IMAP accounts because of ressource limitiations on the server risk of an interruption increases with the duration. The software should be able to reconnect and continue the job after any interruption. For repeating backups it might be very handy to use incremental backups and to run the backup script in a cron job. | A 7 year old question, which I've searched for now, and there are a few answers, most of are spot on. But I feel at least one is missing, and there is probably room for more: Timeline of answers: Back in 2014 Mehmet mentioned imapsync This is probably still the most focused solution maintained out there, as this is an active stream of revenue for the author Gilles Lamiral. The source is available, currently the latest code is on GitHub Although not available as a distro package (like some of the other options), it does have an official docker-hub hosted image at gilleslamiral/imapsync . For more info see: https://imapsync.lamiral.info/INSTALL.d/Dockerfile It seems someone also created a docker-image for the WebUI. Back in 2017 Quarind mentioned imap-backup This is a ruby based solution, it looks like it's still being maintained. Back in 2021 Patrick Decat mentioned OfflineIMAP offlineimap is Python2 based and not really maintained. offlineimap3 is a Python3 based fork that is actively maintained Available in most distros My research led me to these additional options: isync (the package name for the mbsync command) Home page | Arch Wiki Page | Distro/Package availability The packages below are available on Debian 11 (bullseye), but I don't know much about them yet: imapcopy Unmainted since ~ 2009 interimap Still actively maintained at developer's website mailsync On SourceForge mswatch repo . Requires something to do the actual syncing. vdirsyncer site . 
Companion to other IMAP synchers, for syncing Calendar and Contacts. Update 2022-05 Specifically for Gmail / Google Workspace mailboxes * : * Not an IMAP solution, but might be related to somebody's search, so I feel it's worth mentioning Got Your Back Gmvault GitHub As I learn more I'll update this, as I'm actively looking for a solution myself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26440/"
]
} |
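To tie the survey back to the original requirements (reconnect after interruption, incremental runs from cron), a hypothetical imapsync invocation might look like the sketch below. Host names, user names and password files are placeholders, and repeated runs only transfer messages not already present on the destination, which is what makes the cron-driven incremental backup work:

```shell
# Example invocation (all values are placeholders, not real hosts):
backup_cmd="imapsync --host1 imap.example.com --user1 alice --passfile1 /etc/imap.pass \
  --host2 imap.backup.example.com --user2 alice --passfile2 /etc/backup.pass"
# from cron, e.g. nightly at 03:00:
# 0 3 * * * /usr/local/bin/imap-backup.sh >> /var/log/imapsync.log 2>&1
```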
174,326 | I have a text file, "foo.txt", that specifies a directory in each line: data/bar/foodata/bar/foo/chumdata/bar/chum/foo... There could be millions of directories and subdirectoriesWhat is the quickest way to create all the directories in bulk, using a terminal command ? By quickest, I mean quickest to create all the directories. Since there are millions of directories there are many write operations. I am using ubuntu 12.04. EDIT: Keep in mind, the list may not fit in memory, since there are MILLIONS of lines, each representing a directory. EDIT: My file has 4.5 million lines, each representing a directory, composed of alphanumeric characters, the path separator "/" , and possibly "../" When I ran xargs -d '\n' mkdir -p < foo.txt after a while it kept printing errors until i did ctrl + c: mkdir: cannot create directory `../myData/data/a/m/e/d': No space left on device But running df -h gives the following output: Filesystem Size Used Avail Use% Mounted on/dev/xvda 48G 20G 28G 42% /devtmpfs 2.0G 4.0K 2.0G 1% /devnone 401M 164K 401M 1% /runnone 5.0M 0 5.0M 0% /run/locknone 2.0G 0 2.0G 0% /run/shm free -m total used free shared buffers cachedMem: 4002 3743 258 0 2870 13-/+ buffers/cache: 859 3143Swap: 255 26 229 EDIT:df -i Filesystem Inodes IUsed IFree IUse% Mounted on/dev/xvda 2872640 1878464 994176 66% /devtmpfs 512053 1388 510665 1% /devnone 512347 775 511572 1% /runnone 512347 1 512346 1% /run/locknone 512347 1 512346 1% /run/shm df -T Filesystem Type 1K-blocks Used Available Use% Mounted on/dev/xvda ext4 49315312 11447636 37350680 24% /devtmpfs devtmpfs 2048212 4 2048208 1% /devnone tmpfs 409880 164 409716 1% /runnone tmpfs 5120 0 5120 0% /run/locknone tmpfs 2049388 0 2049388 0% /run/shm EDIT: I increased the number of inodes, and reduced the depth of my directories, and it seemed to work. It took 2m16seconds this time round. | With GNU xargs : xargs -d '\n' mkdir -p -- < foo.txt xargs will run as few mkdir commands as possible. 
With standard syntax: (export LC_ALL=C; sed 's/[[:blank:]"\'\'']/\\&/g' < foo.txt | xargs mkdir -p --) Where it's not efficient is that mkdir -p a/b/c will attempt some mkdir("a") and possibly stat("a") and chdir("a") and the same for "a/b" even if "a/b" existed beforehand. If your foo.txt has: a, a/b, a/b/c in that order, that is, if for each path there has been a line for each of its path components before it, then you can omit the -p and it will be significantly more efficient. Or alternatively: perl -lne 'mkdir $_ or warn "$_: $!\n"' < foo.txt Which avoids invoking a mkdir command (or many of them) altogether.
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91690/"
]
} |
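A quick way to convince yourself the xargs one-liner behaves as described is to dry-run it on the sample list from the question (helper name mine; xargs -d is a GNU extension):

```shell
# Create the three sample directories from a list file, exactly as the
# answer's one-liner would do at scale.
demo_bulk_mkdir() {
    dest=$1
    printf '%s\n' 'data/bar/foo' 'data/bar/foo/chum' 'data/bar/chum/foo' \
        > "$dest/foo.txt"
    ( cd "$dest" && xargs -d '\n' mkdir -p -- < foo.txt )
}
```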
174,334 | Below information is taken from man page,I would like to know the difference between bytes-per-inode and Inode-size? -i bytes-per-inode Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since then too many inodes will be made.Be warned that is not possible to expand the number of inodes on a filesystem after it is created, so be careful deciding the correct value for this parameter. -I inode-size Specify the size of each inode in bytes.mke2fs creates 256-byte inodes by default. In kernels after 2.6.10 and some earlier vendor kernels it is possible to utilize inodes larger than 128 bytes to store extended attributes for improved performance.The inode-size value must be a power of 2 larger or equal to 128.The larger the inode-size the more space the inode table will consume, and this reduces the usable space in the filesystem and can also negatively impact performance.Extended attributes stored in large inodes are not visible with older kernels, and such filesystems will not be mountable with 2.4 kernels at all.It is not possible to change this value after the filesystem is created. | Well, first, what is an inode? In the Unix world, an inode is a kind of file entry. A filename in a directory is just a label (a link!) to an inode. An inode can be referenced in multiple locations (hardlinks!). -i bytes-per-inode (aka inode_ratio) For some unknown reason this parameter is sometime documented as bytes-per-inode and sometime as inode_ratio . According to the documentation, this is the bytes/inode ratio . Most human will have a better understanding when stated as either (excuse my english): 1 inode for every X bytes of storage (where X is bytes-per-inode). lowest average-filesize you can fit. 
The formula (taken from the mke2fs source code ): inode_count = (blocks_count * blocksize) / inode_ratio Or even simplified (assuming "partition size" is roughly equivalent to blocks_count * blocksize , I haven't checked the allocation): inode_count = (partition_size_in_bytes) / inode_ratio Note 1: Even if you provide a fixed number of inodes at FS creation time ( mkfs -N ... ), the value is converted into a ratio, so you can fit more inodes as you extend the size of the filesystem. Note 2: If you tune this ratio, make sure to allocate significantly more inodes than you plan to use... you really don't want to reformat your filesystem. -I inode-size This is the number of bytes the filesystem will allocate/reserve for each inode the filesystem may have. The space is used to store the attributes of the inode (read Intro to Inodes ). In Ext3, the default size was 128. In Ext4, the default size is 256 (to store extra_isize and provide space for inline extended-attributes). Read Linux: Why change inode size? Note: X bytes of disk space is allocated for each allocated inode, whether it is free or used, where X=inode-size.
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77177/"
]
} |
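Plugging numbers into the simplified formula above makes the trade-off concrete; the figures here are illustrative (16384 is the stock mke2fs.conf inode_ratio for ext4):

```shell
# 48 GiB partition with the default bytes-per-inode ratio:
partition_size=$((48 * 1024 * 1024 * 1024))   # bytes
inode_ratio=16384                             # -i bytes-per-inode
inode_count=$((partition_size / inode_ratio))
echo "$inode_count inodes"                    # 3145728 inodes
# Halving the ratio to 8192 would double the inode count,
# at the cost of a larger inode table.
```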
174,349 | I was given the files for a mini linux , that boots directly into firefox . It works for all it should be doing, only that I do not get an internet connection. We have 3 DNS servers in the network, which all work. I can ping them, too. But when trying to ping google.de or wget google.de I get a bad address error. nslookup google.de works for some reason. I tracked the issue down to my resolv.conf on the booted system not having the same contents as the resolv.conf that I put into the .iso file. I tried understanding all the factors that go into creating and modifying resolv.conf . I'm not quite sure I got it all, but I definitely didn't find my solution there. So as a last ditch effort, I tried making the resolv.conf file immutable using :~# chattr +i /etc/resolv.conf When rebuilding and booting again to my surprise my file was renamed to resolv.conf~ and in its place was the same standard file that has been haunting me. The file contents make me believe it gets the information from the network itself. When starting the .iso in Virtualbox without internet access, my file is being kept as it is. I tried changing /etc/dhcp/dhclient.conf to not get the information from the net, by deleting domain-name-server and domain-name-search from the request part of the file. Didn't work unfortunately. I don't have the NetworkManager installed. The iso is based on Ubuntu 14.04. There is probably vital information missing. I'm happy to provide it. UPDATE: I think I found the file that clears resolv.conf . It seems to be /usr/share/udhcpc/default.script #!/bin/sh# udhcpc script edited by Tim Riker <[email protected]>[ -z "$1" ] && echo "Error: should be called from udhcpc" && exit 1RESOLV_CONF="/etc/resolv.conf"[ - n "$broadcast" ] && BROADCAST="broadcast $broadcast"[ -n "$subnet" ] && NETMASK="netmask $subnet"case "$1" in deconfig) /bin/ifconfig $interface 0.0.0.0 for i in /etc/ipdown.d/*; do [ -e $i ] && . 
$i $interface done ;; renew|bound) /bin/ifconfig $interface $ip $BROADCAST $NETMASK if [ -n "$router" ] ; then echo "deleting routers" while route del default gw 0.0.0.0 dev $interface ; do : done metric=0 for i in $router ; do route add default gw $i dev $interface metric $((metric++)) done fi echo -n > $RESOLV_CONF # Start ---------------- [ -n "$domain" ] && echo search $domain >> $RESOLV_CONF for i in $dns ; do echo adding dns $i echo nameserver $i >> $RESOLV_CONF done for i in /etc/ipup.d/*; do [ -e $i ] && . $i $interface $ip $dns done # End ------------------ ;;esacexit 0 It's part of the udhcpc program. A tiny dhcp client, that is part of busybox Will investigate further. UPDATE2 AND SOLUTION: I commented the part out (#Start to #End), that seemingly overwrites the /etc/resolv.conf file and sure enough. That was the culprit. So an obscure script caused all this trouble. I changed the question to reflect, what actually needed to be known to solve my problem, so it would be easier to find for people with the same problem and so I could accept an answer. Thanks for the help here in figuring things out. | You shouldn't manually update your resolv.conf , because all changes will be overwritten by data that your local DHCP server provides. If you want it to be static, run sudo dpkg-reconfigure resolvconf and answer "no" to dynamic updates. If you want to add new entries there, edit /etc/resolvconf/resolv.conf.d/base and run sudo resolvconf -u , it will append your entries and DHCP server's entries. Try to edit your /etc/network/interfaces and add your entries there, like auto eth0 iface eth0 inet dhcp dns-search google.com dns-nameservers dnsserverip and then restart /etc/init.d/networking restart or sudo ifdown -a and sudo ifup -a Your system uses udhcp which is a very small DHCP client program. The udhcp client negotiates a lease with the DHCP server and notifiesa set of scripts when a leases is obtained or lost. 
You can read about it's usage here or just edit this script (as you did). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/174349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
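For reference, here is the /etc/network/interfaces stanza from the answer reconstructed as a sketch; the interface name and server address are placeholders:

```shell
# Write an example static-DNS stanza (to a temp file here; on a real
# system this content would go into /etc/network/interfaces).
f=$(mktemp)
cat > "$f" <<'EOF'
auto eth0
iface eth0 inet dhcp
    dns-search example.com
    dns-nameservers 192.0.2.53
EOF
```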
174,350 | I'm getting the error: Argument list too long when trying to use cUrl to send a file in base64 inside the body of my JSON. I'm using something like this: DATA=$( base64 "$FILE" )curl -X POST -H "Content-Type: application/json" -d '{ "data": "'"$DATA"'"}' $HOST Is there any other way to get the DATA in the body of my JSON? Take into account that I need to read a file in my filesystem, transform it into base64 and then send it inside the body. | If the base64-encoded file is too big to fit in the argument list you are going to have to pass it via a file. One of the easier ways I can think of is to pass it via standard input. From the curl man page , you can use -d @- to read from stdin instead of the command line. curl -X POST -H "Content-Type: application/json" -d @- "$HOST" <<CURL_DATA{ "data": "$DATA" }CURL_DATA | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/174350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94997/"
]
} |
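A variant of the same idea: build the payload with a small helper (name mine) and stream it to curl. The tr step removes the line breaks that base64 inserts every 76 characters, which would otherwise be illegal inside a JSON string:

```shell
# Emit {"data": "<base64 of file>"} on stdout.
build_payload() {
    printf '{ "data": "%s" }' "$(base64 "$1" | tr -d '\n')"
}
# usage: build_payload report.bin |
#          curl -X POST -H "Content-Type: application/json" -d @- "$HOST"
```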
174,371 | Given the following data file... foo 10bar 20oof 50rab 20 ... how would I print column two as a percent of the total of column two? In other words, I want... foo 10 10%bar 20 20%oof 50 50%rab 20 20% ... with less obvious numbers of course. I can create a running total easily enough, but I'm not sure how I can calculate the total before printing the lines . I am doing this in an awk file totals.awk ... #!/usr/bin/awk -fBEGIN{ runningtotal=0}{ runningtotal=runningtotal+$2 print $1 "\t" $2 "\t" runningtotal "\t" $2/runningtotal} So, running ./totals.awk data yields... foo 10 10 1bar 20 30 0.666667oof 50 80 0.625rab 20 100 0.2 Is there a way to loop twice, once to calculate the total, and once to print the lines? Is this possible in AWK, or must I use other utilities? | To create the table with a single call to awk : $ awk 'FNR==NR{s+=$2;next;} {printf "%s\t%s\t%s%%\n",$1,$2,100*$2/s}' data datafoo 10 10%bar 20 20%oof 50 50%rab 20 20% How it works The file data is provided as an argument to awk twice. Consequently, it will be read twice, the first time to get the total, which is stored in the variable s , and the second to print the output. Looking at the commands in more detail: FNR==NR{s+=$2;next;} NR is the total number of records (lines) that awk has read and FNR is the number of records read so far from the current file. Consequently, when FNR==NR , we are reading the first file. When this happens, the variable s is incremented by the value in the second column. Then, next tells awk to skip the rest of the commands and start over with the next record. Note that it is not necessary to initialize s to zero. In awk , all numeric variables are, by default, initialized to zero. printf "%s\t%s\t%s%%\n",$1,$2,100*$2/s If we reach this command, then we are processing the second file. This means that s now holds the total of column 2. So, we print column 1, column 2, and the percentage, 100*$2/s . 
Output format options With printf , detailed control of the output format is possible. The command above uses the %s format specifier which works for strings, integers, and floats. Three other option that might be useful here are: %d formats numbers as integers. If the number is actually floating point, it will be truncated to an integer %f formats numbers as floating point. It is also possible to specify widths and decimals places as, for example, %5.2f . %e provides exponential notation. This would be useful if some numbers were exceptionally large or small. Make a shell function If you are going to use this more than once, it is an inconvenience to type a long command. Instead create either a function or a script to hole the command. To create a function called totals , run the command: $ totals() { awk 'FNR==NR{s+=$2;next;} {printf "%s\t%s\t%s%%\n",$1,$2,100*$2/s}' "$1" "$1"; } With this function defined, the percentages for a data file called data can be found by running: $ totals data To make the definition of totals permanent, place it in your ~/.bashrc file. Make a shell script If you prefer a script, create a file called totals.sh with the contents: #!/bin/shawk 'FNR==NR{s+=$2;next;} {printf "%s\t%s\t%s%%\n",$1,$2,100*$2/s}' "$1" "$1" To get the percentages for a data file called data , run: sh totals.sh data | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55374/"
]
} |
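If reading the input twice is not an option, for instance because the data arrives on a pipe, the same result can be had in one pass by buffering the lines in memory (a sketch; function name mine):

```shell
# Single-pass variant: store each line, total column 2, print in END.
totals1pass() {
    awk '{line[NR]=$0; s+=$2}
         END{for(i=1;i<=NR;i++){split(line[i],f)
             printf "%s\t%s\t%s%%\n",f[1],f[2],100*f[2]/s}}' "$@"
}
# usage: totals1pass data    or    some_command | totals1pass
```

The trade-off is memory: the two-file trick holds only the running total, while this version holds the whole input.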
174,379 | I need to create a virtual output in PulseAudio so as to be able to capture and stream audio from a specific source. I know that it's possible to re-route a specific application's output to a given output device like so, using pavucontrol : I'm looking to add another virtual output to the "Output Devices": Is this possible, and if so, how can I do it? | sudo modprobe snd_aloop Adds a loopback device to ALSA, which appears in the PulseAudio Volume Control. Redirect your stream there, and presto! Not sure how to add multiple loopback devices. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
174,387 | I'm having trouble figuring an mmv pattern. I have a bunch of video files with this pattern: my.program.name.season.NN.episode.NN.-.title.avi and I need to move them to: my.program.name.sNNeNN.title.avi I can't seem to get the right pattern... | TL;DR rename -n 's/^((\w+\.+){3})(.).*\.(.*)\.(.).*\.(.*)\.\-(\..*)$/$1$3$4$5$6$7/' * \w+ matches one or more word characters, i.e. [a-zA-Z0-9_]+ [1] \.+ matches one or more dot( . ) character [2] Note that \. matches the . character. We need to use \. to represent . as . has special meaning in regex. The \ is known as the escape code, which restore the original literal meaning of the following character. (\w+\.+){3} matches maximum 3 times of any accuracies of above [1],[2] group of characters starting from beginning( ^ matches the beginning of names) of the files name. This will match or return my.program.name. Note that extra parentheses around the regex is used for grouping of matching. Grouping match starts with ( and ends with ) and used to provide the so called back-references . A back-reference contains the matched sub-string stored in special variables $1 , $2 , … , $9 , where $1 contains the substring matched the first pair of parentheses, and so on. . The metacharacter dot (.) matches any single character. For example ... matches any 3 characters. so with this (.) we are matching the first character of season which it is s . .*\. matches everything after a single char in above until first . if seen. As you can see we didn't capture it as group of matches because we want to remove that from our name, where this matches eason. . (.*) matches everything after above match. This matches NN . Parentheses used here because we want to keep that in file name. \. matches a single dot after above match. A . after first NN . (.) again with this one we are matching the first single character after above match. this will return only e . .*\. will match everything after above match until first . . Will match pisode. . 
(.*) matches anything after last matched dot from above match. This will match second NN . \.\- matches a dot . followed by a dash - . Will match or return .- And finally (\..*)$ matches a single dot . and everything after that which ends to end of the file name. $ matches the end of the file name or input string. Note: remove -n option to perform actual renaming. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95028/"
]
} |
174,396 | I want create empty files with the same name, but in another location. Is it possible? find some files use only filenames touch an empty file in another place Something like this: find . -type f -name '*.jpg' -exec basename {}.tmp | touch ../emptys/{} \; | you can use the --attributes-only switch of cp for this purpose, eg. find . -iname "*.txt" -exec cp --attributes-only -t dummy/ {} + From the man page of cp : --attributes-onlydon't copy the file data, just the attributes This will create empty files with all attributes of the original file preserved but no contents. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174396",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95033/"
]
} |
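The question's original attempt fails partly because basename's output is piped into touch instead of being used as an argument. A workable sketch of that find/touch approach (helper name and directory layout are mine) looks like:

```shell
# Create an empty file in dest for every .jpg found under src.
make_empty_copies() {
    src=$1 dest=$2
    find "$src" -type f -name '*.jpg' -exec sh -c '
        dest=$1; shift
        for f do
            touch "$dest/$(basename "$f")"
        done' sh "$dest" {} +
}
# usage: make_empty_copies data ../emptys   # dest must already exist
```

Unlike cp --attributes-only, this keeps none of the original attributes, only the names.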
174,413 | I've noticed a few screenshots of terminal windows online which show thin highlighted edges around status bars or highlighted lines. In the following example, note the light grey edging around lines 1, 5, and 389: In this example, notice the yellow edging around the Emacs mode line (status bar): What is the name of this effect, and is it possible with iTerm 2 under OS X 10.10? Update After doing some research and digging into Emacs Customize interface theme code, I found some code that defined the edges. In Emacs parlance, it's called :box , and one of its attributes is line-width . Here's an example of a box line being defined in a theme: '(modeline ((t (:background "Gray10" :foreground "SteelBlue" :box (:line-width 1 :style none) :width condensed)))) The documentation for :box can be found in the Emacs manual's face attributes documentation , though it doesn't mention how it works, or which terminals are supported. I started to think that this might be a special feature of GUI versions of Emacs (such as Aquamacs ), but I am pretty sure that I have seen screenshots of what appear to be Ubuntu Unity terminal windows with similar box highlights. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19434/"
]
} |
174,426 | I am having a very strange permission issue when trying to access any of the files in a certain directory as a specific user ( adventho ). This has been working fine for several months and I just recently noticed that I have been getting these errors and I haven't changed anything in the system for a while. This is what happens when trying to access any of the files as the user: # su adventhoadventho@snail:/root$ stat /home/adventho/public_html/hotelimg/187-1-1403380618.jpgstat: cannot stat `/home/adventho/public_html/hotelimg/187-1-1403380618.jpg': Permission denied However I can access it fine as root: root@snail:~# stat /home/adventho/public_html/hotelimg/187-1-1403380618.jpg File: `/home/adventho/public_html/hotelimg/187-1-1403380618.jpg' Size: 528535 Blocks: 1040 IO Block: 4096 regular fileDevice: 906h/2310d Inode: 918000 Links: 1Access: (0644/-rw-r--r--) Uid: ( 1030/adventho) Gid: ( 1008/adventho)Access: 2014-12-15 17:23:44.318374774 -0500Modify: 2014-06-21 15:56:58.000000000 -0400Change: 2014-10-23 16:44:57.502377342 -0400 Birth: - In fact, doing an ls -la on the directory produces a bunch of "?" in the output, even for . and .. : d????????? ? ? ? ? ? .d????????? ? ? ? ? ? ..-????????? ? ? ? ? ? 106-1-1239840962_800_600_180_135.jpg-????????? ? ? ? ? ? 106-1-1239840962_800_600_240_180.jpg-????????? ? ? ? ? ? 106-1-1239840962_800_600.jpg-????????? ? ? ? ? ? 106-2-1239840963_800_600_180_135.jpg-????????? ? ? ? ? ? 106-2-1239840963_800_600_240_180.jpg-????????? ? ? ? ? ? 106-2-1239840963_800_600.jpg-????????? ? ? ? ? ? 106-3-1239840964_800_600_180_135.jpg-????????? ? ? ? ? ? 106-3-1239840964_800_600_240_180.jpg-????????? ? ? ? ? ? 
106-3-1239840964_800_600.jpg But if I do ls -ld hotelimg/ I get an output: drw-rw-r-- 2 adventho www-data 69632 Dec 15 17:23 hotelimg/ If I add anything after the slash, I get permission denied: $ ls -ld hotelimg/../index.phpls: cannot access hotelimg/../some_existent_file: Permission denied$ ls -ld hotelimg/.ls: cannot access hotelimg/.: Permission denied$ ls -ld hotelimg/../ls: cannot access hotelimg/../: Permission denied I tried doing an strace on the ls and this is the output: $ strace ls /home/adventho/public_html/hotelimg/187-1-1403380618.jpgexecve("/bin/ls", ["ls", "/home/adventho/public_html/hotel"...], [/* 13 vars */]) = 0brk(0) = 0x1db6000access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a148000access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)open("/etc/ld.so.cache", O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=26612, ...}) = 0mmap(NULL, 26612, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f931a141000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libselinux.so.1", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260f\0\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0644, st_size=126232, ...}) = 0mmap(NULL, 2226160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9319d0b000mprotect(0x7f9319d29000, 2093056, PROT_NONE) = 0mmap(0x7f9319f28000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1d000) = 0x7f9319f28000mmap(0x7f9319f2a000, 2032, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9319f2a000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/librt.so.1", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220!\0\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0644, st_size=31744, ...}) = 0mmap(NULL, 
2128856, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9319b03000mprotect(0x7f9319b0a000, 2093056, PROT_NONE) = 0mmap(0x7f9319d09000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f9319d09000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libacl.so.1", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\"\0\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0644, st_size=35320, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a140000mmap(NULL, 2130560, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f93198fa000mprotect(0x7f9319902000, 2093056, PROT_NONE) = 0mmap(0x7f9319b01000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f9319b01000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\357\1\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=1603600, ...}) = 0mmap(NULL, 3717176, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f931956e000mprotect(0x7f93196f0000, 2097152, PROT_NONE) = 0mmap(0x7f93198f0000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x182000) = 0x7f93198f0000mmap(0x7f93198f5000, 18488, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f93198f5000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\r\0\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0644, st_size=14768, ...}) = 0mmap(NULL, 2109696, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f931936a000mprotect(0x7f931936c000, 2097152, PROT_NONE) = 0mmap(0x7f931956c000, 8192, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f931956c000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@\\\0\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=131107, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a13f000mmap(NULL, 2208672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f931914e000mprotect(0x7f9319165000, 2093056, PROT_NONE) = 0mmap(0x7f9319364000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7f9319364000mmap(0x7f9319366000, 13216, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9319366000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libattr.so.1", O_RDONLY) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\25\0\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0644, st_size=18672, ...}) = 0mmap(NULL, 2113880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9318f49000mprotect(0x7f9318f4d000, 2093056, PROT_NONE) = 0mmap(0x7f931914c000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f931914c000close(3) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a13e000mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a13c000arch_prctl(ARCH_SET_FS, 0x7f931a13c7a0) = 0mprotect(0x7f931914c000, 4096, PROT_READ) = 0mprotect(0x7f9319364000, 4096, PROT_READ) = 0mprotect(0x7f931956c000, 4096, PROT_READ) = 0mprotect(0x7f93198f0000, 16384, PROT_READ) = 0mprotect(0x7f9319b01000, 4096, PROT_READ) = 0mprotect(0x7f9319d09000, 4096, PROT_READ) = 0mprotect(0x7f9319f28000, 4096, PROT_READ) = 0mprotect(0x61a000, 4096, PROT_READ) = 0mprotect(0x7f931a14a000, 4096, PROT_READ) = 0munmap(0x7f931a141000, 
26612) = 0set_tid_address(0x7f931a13ca70) = 22762set_robust_list(0x7f931a13ca80, 0x18) = 0futex(0x7fff8335414c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 7f931a13c7a0) = -1 EAGAIN (Resource temporarily unavailable)rt_sigaction(SIGRTMIN, {0x7f9319153ad0, [], SA_RESTORER|SA_SIGINFO, 0x7f931915d0a0}, NULL, 8) = 0rt_sigaction(SIGRT_1, {0x7f9319153b60, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7f931915d0a0}, NULL, 8) = 0rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0statfs("/sys/fs/selinux", 0x7fff833540a0) = -1 ENOENT (No such file or directory)statfs("/selinux", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=1440781, f_bfree=1145015, f_bavail=1071826, f_files=366480, f_ffree=337819, f_fsid={-205162666, 1274914527}, f_namelen=255, f_frsize=4096}) = 0brk(0) = 0x1db6000brk(0x1dd7000) = 0x1dd7000open("/proc/filesystems", O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a147000read(3, "nodev\tsysfs\nnodev\trootfs\nnodev\tb"..., 1024) = 385read(3, "", 1024) = 0close(3) = 0munmap(0x7f931a147000, 4096) = 0open("/usr/lib/locale/locale-archive", O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=110939968, ...}) = 0mmap(NULL, 110939968, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f931257c000close(3) = 0ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0ioctl(1, TIOCGWINSZ, {ws_row=39, ws_col=153, ws_xpixel=0, ws_ypixel=0}) = 0stat("/home/adventho/public_html/hotelimg/187-1-1403380618.jpg", 0x1db70d0) = -1 EACCES (Permission denied)open("/usr/share/locale/locale.alias", O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=2570, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a147000read(3, "# Locale name alias data base.\n#"..., 4096) = 2570read(3, "", 4096) = 0close(3) = 0munmap(0x7f931a147000, 4096) = 
0open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)write(2, "ls: ", 4ls: ) = 4write(2, "cannot access /home/adventho/pub"..., 70cannot access /home/adventho/public_html/hotelimg/187-1-1403380618.jpg) = 70open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)write(2, ": Permission denied", 19: Permission denied) = 19write(2, "\n", 1) = 1close(1) = 0close(2) = 0exit_group(2) = ? I notice that it mentions selinux, however it is not installed. Just to be double sure, I installed policycoreutils (which installed 55 other packages) and executed sestatus and the output was "disabled". Everything that has ever been installed on the server (with the only exception of lfd/csf) has been from the repositories. I am stumped as to what is causing these permission denied errors. 
| Read permissions on a directory only allow you to list its contents. To actually be able to access the contents, you need execute permissions. Conversely, having only execute permissions will allow you to access the contents, but not list them. See Execute vs Read bit. How do directory permissions in Linux work? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10457/"
]
} |
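The read-versus-execute distinction in the answer above can be demonstrated with a scratch directory. Note this should be run as a regular user: root bypasses these permission checks, so the "denied" step is a no-op for root.

```shell
# A directory with only the read bit set: names are listable,
# but the entries themselves are unreachable.
mkdir -p demo
printf 'data\n' > demo/file
chmod u=rw,go= demo

ls demo                                     # works: prints "file"
cat demo/file 2>/dev/null || echo blocked   # non-root users get Permission denied here

chmod u+rwx demo                            # restore the execute (search) bit
cat demo/file                               # now prints "data"
```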
174,440 | OK, I'm new to this. I installed tmux to run an experiment that takes several days. After typing tmux new -s name I got a new window with a green banner at the bottom. I compile and run a Java program. Now I do not know how to exit the window (while leaving it running). The bash (or whatever) cursor is not responding because the Java program is still running. My solution so far is to quit the Terminal program completely and reopen it again. Any ideas on how to quit the tmux window without exiting the whole Terminal program? | Detach from the currently attached session: Session Ctrl + b d or Ctrl + b :detach Screen Ctrl + a Ctrl + d or Ctrl + a :detach | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/174440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95066/"
]
} |

174,446 | My Kubuntu 12.04 system ran out of space on on the root partition and will not boot. The command df -h shows a lot of space available (with only 37% used): /dev/sda2 45G 17G 29G 37% The following page indicates that I should run the balance command: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_get_.22No_space_left_on_device.22_errors.2C_but_df_says_I.27ve_got_lots_of_space $ sudo btrfs fi balance start -dusage=5 /mount/point I'm not entirely confident that this is the best approach, but it is the only one I found. However, when I run that command, I get this error: ERROR: error during balancing '/blah/blah/blah' - No space left on device I get the same error with: $ sudo btrfs fi balance start -dusage=1 /mount/point What is the right solution? | There are ways to get balance to run in this situation. sudo btrfs fi showsudo btrfs fi df /mount/pointsudo btrfs fi balance start -dusage=10 /mount/point If the balance command ends with "Done, had to relocate 0 out of XX chunks", then you need to increase the "dusage" percentage parameter till at least one chunk is relocated. if the balance command fails with: ERROR: error during balancing '/blah/blah/blah' - No space left on device You might actually need to delete files from the device to make some room. Then run the balance command again. However, thanks to Marc's Blog: btrfs - Fixing Btrfs Filesystem Full Problems here is another option: One trick to get around this is to add a device (even a USB key will do) to your btrfs filesystem. This should allow balance to start, and then you can remove the device with btrfs device delete when the balance is finished. It's also been said on the list that kernel 3.14 can fix some balancing issues that older kernels can't, so give that a shot if your kernel is old. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
174,459 | My friend owns a cyber café and is facing lot of problem with various Windows operating systems.I thought about using Linux Mint or an Ubuntu flavor for the cyber café. I've seen Linux Mint's user interface and consider it more user friendly than any Windows OS, specifically Windows 8. Please tell me if I can suggest to my friend to use Linux Mint, considering security concerns and general usability. | There are ways to get balance to run in this situation. sudo btrfs fi showsudo btrfs fi df /mount/pointsudo btrfs fi balance start -dusage=10 /mount/point If the balance command ends with "Done, had to relocate 0 out of XX chunks", then you need to increase the "dusage" percentage parameter till at least one chunk is relocated. if the balance command fails with: ERROR: error during balancing '/blah/blah/blah' - No space left on device You might actually need to delete files from the device to make some room. Then run the balance command again. However, thanks to Marc's Blog: btrfs - Fixing Btrfs Filesystem Full Problems here is another option: One trick to get around this is to add a device (even a USB key will do) to your btrfs filesystem. This should allow balance to start, and then you can remove the device with btrfs device delete when the balance is finished. It's also been said on the list that kernel 3.14 can fix some balancing issues that older kernels can't, so give that a shot if your kernel is old. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95076/"
]
} |
174,461 | Summary of Answers : while literature promotes "-execdir" as safer than "-exec", do not ignore the operational difference. The former is not a simple reimplementation of the latter. Also, when using find specifically for interactive removals, executing "rm -f" via the "-okdir" option is also viable. Similarly for slightly more complicated constructs. I'm trying to learn the "find" command, and I'm having trouble understanding its output. I apologize if this is a duplicate, but the problem statement is a little convoluted and hard to Google. Let me set up a pet case: I start in some directory (".") with two subdirectories "A" and "B". In both of these, there is a file named "hello.c." So, for example, find . -name "hello.c" would print like so: ./A/hello.c./B/hello.c So far so good. What I get lost on is when I try to do something with the "-execdir" option; let's say I want to use an interactive removal. Then: find . -name "hello.c" -execdir rm -i {} \; or similar. What I expect is rm: remove regular file "./A/hello.c"? and then, answering that, a similar prompt for the "hello.c" in directory "B." What actually appears, unfortunately, is rm: remove regular file "./hello.c"? In this tiny example, I can reasonably infer that it's asking about "./A/hello.c", but if one scales this example up then you get dozens of files that all have their pathnames truncated to "./". And I cannot differentiate between dozens of "./hello.c"s, not without a bearing on which subdirectory they each live in. So, my question boils down to a desire to print fuller pathnames via "-execdir." Could I hear a hint as to what subtlety in the manpage escaped me? Little is said about the "{}" substitution. Or if there is some better way to manage this particular case (interactive removal), I should like to hear that too, because I'm not sure my approach is best practice. 
| The man page for GNU find describes -execdir in part thusly: Like -exec , but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find . So there is no real subtlety involved. -exec invokes the command from the directory that you run find from (and thus needs to provide the path either relative to that directory or absolute from the root), and -execdir invokes the command from the directory containing the file and thus doesn't need to provide the full path, so does not. To get the behavior you are after, you should use -exec rather than -execdir . You can demonstrate this behavior by replacing your rm invocation with for example echo to simply print the list of parameters (in this case, the file name). Or use -print which does that without needing to invoke an external command. (Note: -print is also the default action if none is given.) If you didn't need the confirmation, you could have used -delete instead. For large number of files that is also likely to be more efficient as it avoids having to invoke rm time and time again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87654/"
]
} |
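The behaviour described above can be reproduced harmlessly by substituting echo for rm -i (GNU find assumed; the pet directory name is invented for the demo):

```shell
# Recreate the pet case: two subdirectories, each with a hello.c.
mkdir -p pet/A pet/B
touch pet/A/hello.c pet/B/hello.c

# -exec runs from the starting directory, so {} carries the fuller path.
find pet -name hello.c -exec echo {} \;     # pet/A/hello.c and pet/B/hello.c (order may vary)

# -execdir runs from inside each matched file's directory.
find pet -name hello.c -execdir echo {} \;  # ./hello.c, twice
```

So to keep the fuller pathnames in the interactive prompts, use -exec rm -i {} \; as the answer suggests.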
174,562 | I am trying to work with gtk which is located at /usr/include/gtk-3.0/gtk/ .. , but all of the header files in the toolkit have #include <gtk/gtk.h> . Aside from adding /usr/local/gtk-3.0 to PATH or adding gtk-3.0 to all the include preprocessors, what other options does one have with this? | Adding the appropriate directory to your include path is exactly what you're supposed to do in this case, only you're supposed to do it by pkg-config . Accessing the files directly using full pathnames is unsupported. Add something like this to your Makefile : CFLAGS += `pkg-config --cflags gtk+-3.0`LIBS += `pkg-config --libs gtk+-3.0` This will automatically add the correct compiler and linker options for the current system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174562",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72608/"
]
} |
174,566 | I have come across this script: #! /bin/bash if (( $# < 3 )); then echo "$0 old_string new_string file [file...]" exit 0else ostr="$1"; shift nstr="$1"; shift fiecho "Replacing \"$ostr\" with \"$nstr\""for file in $@; do if [ -f $file ]; then echo "Working with: $file" eval "sed 's/"$ostr"/"$nstr"/g' $file" > $file.tmp mv $file.tmp $file fi done What is the meaning of the lines where they use shift ? I presume the script should be used with at least three arguments, so...? | shift is a bash built-in which removes arguments from the beginning of the argument list. Given that the 3 arguments provided to the script are available in $1 , $2 , $3 , a call to shift will make $2 the new $1 . A shift 2 will shift by two, making the new $1 the old $3 . For more information, see here: http://ss64.com/bash/shift.html http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_07.html | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/174566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15387/"
]
} |
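A minimal, self-contained demo of the two shifts (the demo function and its arguments are invented for illustration):

```shell
# Mirrors the script above: two shifts peel off the first two
# arguments, leaving only the file list in "$@".
demo() {
  ostr=$1; shift   # $1 was "old"; shift promotes "new" to $1
  nstr=$1; shift   # consume "new"; only the file names remain
  echo "ostr=$ostr nstr=$nstr files=$*"
}

demo old new f1.txt f2.txt   # -> ostr=old nstr=new files=f1.txt f2.txt
```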
174,599 | Will it be possible to use diff on a specific columns in a file? file1 Something 123 item1Something 456 item2Something 768 item3Something 353 item4 file2 Another 123 stuff1Another 193 stuff2Another 783 stuff3Another 353 stuff4 output(Expected) Something 456 item2Something 768 item3Another 193 stuff2Another 783 stuff3 I want to diff the 2nd column of each file, then, the result will contain the diff-ed column but along with the whole line. | awk is a better tool for comparing columns of files. See, for example, the answer to: compare two columns of different files and print if it matches -- there are similar answers out there for printing lines for matching columns. Since you want to print lines that don't match, we can create an awk command that prints the lines in file2 for which column 2 has not been seen in file1: $ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file1 file2Another 193 stuff2Another 783 stuff3 As explained similarly by terdon in the above-mentioned question , NR==FNR : NR is the current input line number and FNR the current file's line number. The two will be equal only while the 1st file is being read. c[$2]++; next : if this is the 1st file, save the 2nd field in the c array. Then, skip to the next line so that this is only applied on the 1st file. c[$2] == 0 : the else block will only be executed if this is the second file so we check whether field 2 of this file has already been seen ( c[$2]==0 ) and if it has been, we print the line. In awk , the default action is to print the line so if c[$2]==0 is true, the line will be printed. But you also want the lines from file1 for which column 2 doesn't match in file2. This you can get by simply exchanging their position in the same command: $ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file2 file1Something 456 item2Something 768 item3 So now you can generate the output you want, by using awk twice. Perhaps someone with more awk expertise can get it done in one pass. 
You tagged your question with /ksh , so I'll assume you are using korn shell. In ksh you can define a function for your diff, say diffcol2 , to make your job easier: diffcol2(){ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' $2 $1 awk 'NR==FNR{c[$2]++;next};c[$2] == 0' $1 $2 } This has the behavior you desire: $ diffcol2 file1 file2Something 456 item2Something 768 item3Another 193 stuff2Another 783 stuff3 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95190/"
]
} |
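The two awk passes above can be verified against the question's sample data:

```shell
# Recreate the sample files from the question.
printf '%s\n' 'Something 123 item1' 'Something 456 item2' \
              'Something 768 item3' 'Something 353 item4' > file1
printf '%s\n' 'Another 123 stuff1' 'Another 193 stuff2' \
              'Another 783 stuff3' 'Another 353 stuff4' > file2

# file1 lines whose column 2 is absent from file2:
awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file2 file1
# Something 456 item2
# Something 768 item3

# file2 lines whose column 2 is absent from file1:
awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file1 file2
# Another 193 stuff2
# Another 783 stuff3
```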
174,607 | Want to Gzip my all the .csv files into a gzip file. I am using this command. gzip *.csv; Problem File contains '//' in its name that's why command is giving error. Error : - '//File.csv' is not a directory or file. So please suggest me what ways i can do this successfully. one solution i thought. gzip '//File1.csv' '//File2.csv' '//File3.csv' Please suggest me if this is the right way to perform the same. | awk is a better tool for comparing columns of files. See, for example, the answer to: compare two columns of different files and print if it matches -- there are similar answers out there for printing lines for matching columns. Since you want to print lines that don't match, we can create an awk command that prints the lines in file2 for which column 2 has not been seen in file1: $ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file1 file2Another 193 stuff2Another 783 stuff3 As explained similarly by terdon in the above-mentioned question , NR==FNR : NR is the current input line number and FNR the current file's line number. The two will be equal only while the 1st file is being read. c[$2]++; next : if this is the 1st file, save the 2nd field in the c array. Then, skip to the next line so that this is only applied on the 1st file. c[$2] == 0 : the else block will only be executed if this is the second file so we check whether field 2 of this file has already been seen ( c[$2]==0 ) and if it has been, we print the line. In awk , the default action is to print the line so if c[$2]==0 is true, the line will be printed. But you also want the lines from file1 for which column 2 doesn't match in file2. This you can get by simply exchanging their position in the same command: $ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file2 file1Something 456 item2Something 768 item3 So now you can generate the output you want, by using awk twice. Perhaps someone with more awk expertise can get it done in one pass. 
You tagged your question with /ksh , so I'll assume you are using korn shell. In ksh you can define a function for your diff, say diffcol2 , to make your job easier: diffcol2(){ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' $2 $1 awk 'NR==FNR{c[$2]++;next};c[$2] == 0' $1 $2 } This has the behavior you desire: $ diffcol2 file1 file2Something 456 item2Something 768 item3Another 193 stuff2Another 783 stuff3 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40921/"
]
} |
174,609 | I am trying to replace multiple words in a file by using sed -i #expression1 #expression2 file Something 123 item1Something 456 item2Something 768 item3Something 353 item4 Output (Desired) anything 123 stuff1anything 456 stuff2anything 768 stuff3anything 353 stuff4 Try-outs I can get the following output by using sed -i two times. sed -i 's/Some/any/g' file sed -i 's/item/stuff/g' file Can I have any possible way of making this as a single in-place command like sed -i 's/Some/any/g' -i 's/item/stuff/g' file When I tried the above code it takes s/item/stuff/g as a file and tries working on it. | Depending on the version of sed on your system you may be able to do sed -i 's/Some/any/; s/item/stuff/' file You don't need the g after the final slash in the s command here, since you're only doing one replacement per line. Alternatively: sed -i -e 's/Some/any/' -e 's/item/stuff/' file Or: sed -i ' s/Some/any/ s/item/stuff/' file The -i option (a GNU extension now supported by a few other implementations though some need -i '' instead) tells sed to edit files in place; if there are characters immediately after the -i then sed makes a backup of the original file and uses those characters as the backup file's extension. Eg, sed -i.bak 's/Some/any/; s/item/stuff/' file or sed -i'.bak' 's/Some/any/; s/item/stuff/' file will modify file , saving the original to file.bak . Of course, on a Unix (or Unix-like) system, we normally use '~' rather than '.bak', so sed -i~ 's/Some/any/;s/item/stuff/' file | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/174609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
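A quick check of the combined command on the question's sample data (GNU sed assumed for -i; the sample.txt name is invented for the demo):

```shell
printf '%s\n' 'Something 123 item1' 'Something 456 item2' \
              'Something 768 item3' 'Something 353 item4' > sample.txt

# Both substitutions in a single in-place invocation.
sed -i 's/Some/any/; s/item/stuff/' sample.txt

cat sample.txt
# anything 123 stuff1
# anything 456 stuff2
# anything 768 stuff3
# anything 353 stuff4
```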
174,660 | I have a setup script for a Vagrant box where I used to measure single steps with time . Now I would like to conditionally enable or disable the time measurements. For example, previously a line would look like: time (apt-get update > /tmp/last.log 2>&1) Now I thought I could simply do something like this: MEASURE_TIME=true[[ $MEASURE_TIME = true ]] && TIME="time --format=%e" || TIME=""$TIME (apt-get update > /tmp/last.log 2>&1) But this won't work: syntax error near unexpected token `apt-get'`$TIME (apt-get update > /tmp/last.log 2>&1)' What's the problem here? | To be able to time a subshell, you need the time keyword , not command. The time keyword, part of the language, is only recognised as such when entered literally and as the first word of a command (and in the case of ksh93 , then the next token doesn't start with a - ). Even entering "time" won't work let alone $TIME (and would be taken as a call to the time command instead). You could use aliases here which are expanded before another round of parsing is performed (so would let the shell recognise that time keyword): shopt -s expand_aliasesalias time_or_not=TIMEFORMAT=%EMEASURE_TIME=true[[ $MEASURE_TIME = true ]] && alias time_or_not=timetime_or_not (apt-get update > /tmp/last.log 2>&1) The time keyword doesn't take options (except for -p in bash ), but the format can be set with the TIMEFORMAT variable in bash . (the shopt part is also bash -specific, other shells generally don't need that). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14860/"
]
} |
174,674 | I have a text file containing a list of directories with their absolute paths $ cat DirectoriesToCopy.txt/data/Dir1/data/Dir2 I want to use rsync to copy all these directories preserving their absolute paths to another location. I tried the following rsync command, but it doesn't work: rsync -avr --include-from=DirectoriesToCopy.txt --exclude='*/' --exclude='/*' / /media/MyDestination/ What is going wrong here? | Use the following command: rsync -av --include-from=DirectoriesToCopy.txt --include /data/ --exclude='/data/*' --exclude='/*/' / /media/MyDestination/ You need to include /data/ explicitly; you could also have added that to the list in the file. Then exclude all other directories (order is important with includes/excludes). Note that your usage of -r was redundant as that's included in -a . EDIT: You could also accomplish the same result with: rsync -av --relative /data/Dir1 /data/Dir2 /media/MyDestination/ It's not rsync that's forcing you to do difficult things just to copy a couple of directories, it just gives you multiple ways of doing the same thing; in some cases going the include/exclude way may be more suited; here I'd do the --relative thing above (without --relative you'd end up with /media/MyDestination/Dir1 and /media/MyDestination/Dir2 , with --relative the whole source path is copied to the destination). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23105/"
]
} |
174,688 | I want to start a process that does nothing but is still running. Say I start a process called sadhadxk , and when I run pgrep -x "sadhadxk" I will get the PID number back, like any normal process works. So is there any way to start a dummy process? | You could do: perl -MPOSIX -e '$0="sadhadxk"; pause' & It should set both the process name and argv[0] on systems where it's supported so should show sadhadxk in both ps and ps -f output, so should be matched by both pgrep -x and pgrep -fx . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
174,690 | I wanted to run two sudo commands, piping the output from one to the other. However, when I haven't entered my password for sudo recently, it prompts me for the password. The pipes and two sudos seem to screw it up, so I can't correctly enter my password. My solution to this is to run "sudo ls" or something else that prompts me for my password, so that the piped commands will work without asking me to enter it. This got me wondering if there was a "correct" way to run sudo such that you're not also running some other pointless command. The manpage for sudo doesn't seem to say anything about this. | What you are looking for is sudo -v . From the man page: -v, --validate Update the user's cached credentials, authenticating the user if necessary. (And the counterpart to explicitly remove the credentials: sudo -k ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34831/"
]
} |
174,715 | cat /etc/oratab#test1:/opt/oracle/app/oracle/product/11.2.0.4:N+ASM2:/grid/oracle/app/oracle/product/11.2.0.4:N # line added by Agenttest2:/opt/oracle/app/oracle/product/11.2.0.4:N # line added by Agenttest3:/opt/oracle/app/oracle/product/11.2.0.4:N # line added by Agentoracle@node1 [/home/oracle]cat /etc/oratab | grep -v "agent" | awk -F: '{print $2 }' | awk NF | uniq awk NF is to omit blank lines in the output. Only lines starting with # need to be ignored. Expected output: /grid/oracle/app/oracle/product/11.2.0.4/opt/oracle/app/oracle/product/11.2.0.4 | awk -F: '/^[^#]/ { print $2 }' /etc/oratab | uniq /^[^#]/ matches every line whose first character is not a # ; [^ at the start of a bracket expression means "none of the characters listed before the closing ]". As only the part between the first two colons is needed, -F: makes awk split the line at colons, and print $2 prints the second part. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92238/"
]
} |
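The accepted one-liner above can be sanity-checked offline against a scratch copy of the sample oratab data from the question (paths copied from it); only non-comment lines survive, and uniq collapses the adjacent duplicate Oracle homes:

```shell
# Build a throwaway copy of the sample data and run the answer's command.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
#test1:/opt/oracle/app/oracle/product/11.2.0.4:N
+ASM2:/grid/oracle/app/oracle/product/11.2.0.4:N # line added by Agent
test2:/opt/oracle/app/oracle/product/11.2.0.4:N # line added by Agent
test3:/opt/oracle/app/oracle/product/11.2.0.4:N # line added by Agent
EOF
awk -F: '/^[^#]/ { print $2 }' "$oratab" | uniq
# prints /grid/oracle/app/oracle/product/11.2.0.4
# then   /opt/oracle/app/oracle/product/11.2.0.4 (once)
rm -f "$oratab"
```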
174,780 | I am reading this tutorial , and trying to create a new user with root privileges and then block root access via ssh in a CentOS 7 server. The problem is that the new user is blocked from doing root actions like nano /etc/sudoers . Also, I seem unable to remove the block of root login. So my pre-existing open root session is the only access I have to root functionality until it terminates. How can I successfully add root permissions to the newuser? And how can I successfully turn on/off root login? Here is what I have so far: In /etc/sudoers , I have: ## Allow root to run any commands anywhereroot ALL=(ALL) ALLnewusername ALL=(ALL) ALL Note that I edited /etc/sudoers because /usr/sbin/visudo did not work. In /etc/ssh/sshd_config I have PermitRootLogin yes because I want to turn root login back on until I can get newusername to have root privileges. Also, the last line of the file is AllowUsers newusername . I then typed systemctl reload sshd.service because /etc/init.d/sshd reload threw an error on CentOS 7 . The problem is that currently newusername does not have root privileges and yet I am not able to login as root either. So my pre-existing connection as root is my only way of controlling the machine. EDIT #1 I was able to give the new user sudo privileges with gpasswd -a newusername wheel , but I still cannot log in as root even though I have PermitRootLogin yes in /etc/ssh/sshd_config . How can I get CentOS 7 to respect the settings in /etc/ssh/sshd_config ? I should be able to turn root login on and then off again at will, and have the settings actually work. | I am reading this tutorial, and trying to create a new user with root privileges and then block root access via ssh in a CentOS 7 server. The problem is that the new user is blocked from doing root actions like nano /etc/sudoers. Also, I seem unable to remove the block of root login. So my pre-existing open root session is the only access I have to root functionality until it terminates. 
How can I successfully add root permissions to the newuser? And how can I successfully turn on/off root login? Strictly speaking, the real use of sudo is to configure the execution of certain specific commands to certain specific users or groups. The way sudo is distributed and configured in some distributions can be somewhat misleading because to become the root user, we can just type su - without involving sudo . This requires the entry of the password for the user, root , and not the user's password. So you could have used this. Try to never use anything except visudo to directly edit /etc/sudoers . Otherwise you could break authentication altogether until you change its permissions back to 0400 (which you cannot do after you log out without utilizing a rescue system of some sort). (The editor used by visudo can be controlled by the VISUAL environment variable. To use it with nano , one option is VISUAL=nano visudo .) The new user already can become root (point 1), but to let this user become root though sudo, just add the user to the right group. On CentOS 7, the traditional group name of wheel was used to allow members of that group to become root via sudo: usermod -a -G wheel codemedic . Use man usermod for more details. You can determine this group name by reading the configuration file: cat /etc/sudoers . To deny access to root via SSH, edit /etc/ssh/sshd_config and make sure that only one uncommented instance of PermitRootLogin is available and set it to a value of no : PermitRootLogin no . Save the file and restart the Secure Shell daemon: systemctl restart sshd . Note that I edited /etc/sudoers because /usr/sbin/visudo did not work. How does visudo not work? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
174,788 | I have a script which compares the output of a command with the output of the same command as it was run previously, it works most of the time, but every now and then it doesn't work as expected. I've been able to reproduce the issue in one test line. I know I could easily break this out into comparing two separate files and the problem would go away, but I'd like to understand what's actually happening here and if there's a way to achieve what I'm trying to achieve in the way I'm trying to achieve it. Below is the output of my command ran several times, you can see that it echoes "test" in one of the cases, but most of the time it works as expected. root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)testroot@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test)root@dev:~# comm -13 /tmp/test <(echo '"test"' | cut -d'"' -f2 | sort -u | tee /tmp/test) I'm running Ubuntu 10.04 , bash 4.1-2ubuntu3.5 and coreutils 7.4-2ubuntu3 | Yes, that's a race condition. The problem is that the shell starts all processes in the pipeline at the same time and tee truncates the output file on startup. If tee is faster than comm, the file is empty for comm; otherwise it is not. The pipeline behaviour can be seen if you run this several times (maybe in a loop): date '+first: %T'|(cat>&2;sleep 5)|date '+second: %T' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95325/"
]
} |
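One way to sidestep the race, sketched here with invented sample data, is to materialize this run's output before comparing, so nothing truncates a file that the same pipeline is still reading:

```shell
old=$(mktemp)
new=$(mktemp)
printf 'apple\nbanana\n' > "$old"        # output saved by the previous run
printf '"banana"\n"cherry"\n' |
    cut -d'"' -f2 | sort -u > "$new"     # this run's output, written first
comm -13 "$old" "$new"                   # only now compare: prints "cherry"
mv "$new" "$old"                         # the new output becomes the baseline
rm -f "$old"
```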
174,817 | I have a script that needs to create temporary files for its work, and clean up after itself. My question is about finding the right base directory for the temporary files. The script needs to work on multiple platforms: Git Bash (Windows), Solaris, Linux, OSX.On each platform, the preferred temp directory is expressed differently: Windows: %TMP% (and possibly %TEMP% ) OSX: $TMPDIR Linux, UNIX: supposed to be $TMPDIR but appears to be unset on multiple systems I tried So in my script I added this boilerplate: if test -d "$TMPDIR"; then :elif test -d "$TMP"; then TMPDIR=$TMPelif test -d /var/tmp; then TMPDIR=/var/tmpelse TMPDIR=/tmpfi This seems too tedious. Is there a better way? | A slightly more portable way to handle temporary files is to use mktemp . It'll create temporary files and return their paths for you. For instance: $ mktemp/tmp/tmp.zVNygt4o7P$ ls /tmp/tmp.zVNygt4o7P/tmp/tmp.zVNygt4o7P You could use it in a script quite easily: tmpfile=$(mktemp)echo "Some temp. data..." > $tmpfilerm $tmpfile Reading the man page, you should be able to set options according to your needs. For instance: -d creates a directory instead of a file. -u generates a name, but does not create anything. Using -u you could retrieve the temporary directory quite easily with... $ tmpdir=$(dirname $(mktemp -u)) More information about mktemp is available here . Edit regarding Mac OS X: I have never used a Mac OSX system, but according to a comment by Tyilo below, it seems like Mac OSX's mktemp requires you to provide a template (which is an optional argument on Linux). Quoting: The template may be any file name with some number of "Xs" appended to it, for example /tmp/temp.XXXX . The trailing "Xs" are replaced with the current process number and/or a unique letter combination. The number of unique file names mktemp can return depends on the number of "Xs" provided; six "Xs" will result in mktemp selecting 1 of 56800235584 (62 ** 6) possible file names. 
The man page also says that this implementation is inspired by the OpenBSD man page for mktemp . A similar divergence might therefore be observed by OpenBSD and FreeBSD users as well (see the History section). Now, as you probably noticed, this requires you to specify a complete file path, including the temporary directory you are looking for in your question. This little problem can be handled using the -t switch. While this option seems to require an argument ( prefix ), it would appear that mktemp relies on $TMPDIR when necessary. All in all, you should be able to get the same result as above using... $ tmpdir=$(dirname $(mktemp -ut tmp.XXXXXXXXXX)) (note that the options go before the template here, since BSD-style option parsing stops at the first non-option argument) Any feedback from Mac OS X users would be greatly appreciated, as I am unable to test this solution myself. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/174817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17433/"
]
} |
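Since GNU mktemp honours $TMPDIR when no template is given, the "which directory should I use" boilerplate from the question can be replaced by simply asking mktemp; a minimal sketch:

```shell
# Create a real temp file, remember its directory, clean up.
tmpfile=$(mktemp) || exit 1
tmpdir=$(dirname "$tmpfile")
rm -f "$tmpfile"
echo "temp files go to: $tmpdir"   # /tmp unless TMPDIR points elsewhere
```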
174,873 | If you run mc -F you'll see there are [System data] config directory and [User data] config directory [System data] Config directory: /etc/mc/ [User data] Config directory: /home/<username>/.config/mc/ The first is system-wide, the second is user-specific. The second one seems to be dependent on the user's home location; in other words, it is bound to it. That means if you want to (temporarily) start mc with an alternate config as the same user you cannot do it w/o changing (and export ing) the HOME variable prior to it. This 'Changing-HOME-prior-to-start' workaround, though it does the trick, is hardly acceptable, as it well... does modify the user HOME. Do you think there is a way to either Change the user config dir dynamically before mc starts (a command line option would be the most appropriate thing, but it does not seem to be there) Restore the 'natural' HOME for the user just after mc has started, if changing HOME before is the only way to change the user dir location mc instances configured differently must not interfere with each other if running simultaneously. 
| That turned out to be simpler than one might think. The MC_HOME variable can be set to an alternative path prior to starting mc. The man pages are not somewhere you can find the answer right away =) here's how it works:- usual way [jsmith@wstation5 ~]$ mc -FRoot directory: /home/jsmith[System data]<skipped>[User data] Config directory: /home/jsmith/.config/mc/ Data directory: /home/jsmith/.local/share/mc/ skins: /home/jsmith/.local/share/mc/skins/ extfs.d: /home/jsmith/.local/share/mc/extfs.d/ fish: /home/jsmith/.local/share/mc/fish/ mcedit macros: /home/jsmith/.local/share/mc/mc.macros mcedit external macros: /home/jsmith/.local/share/mc/mcedit/macros.d/macro.* Cache directory: /home/jsmith/.cache/mc/ and the alternative way: [jsmith@wstation5 ~]$ MC_HOME=/tmp/MCHOME mc -FRoot directory: /tmp/MCHOME[System data]<skipped> [User data] Config directory: /tmp/MCHOME/.config/mc/ Data directory: /tmp/MCHOME/.local/share/mc/ skins: /tmp/MCHOME/.local/share/mc/skins/ extfs.d: /tmp/MCHOME/.local/share/mc/extfs.d/ fish: /tmp/MCHOME/.local/share/mc/fish/ mcedit macros: /tmp/MCHOME/.local/share/mc/mc.macros mcedit external macros: /tmp/MCHOME/.local/share/mc/mcedit/macros.d/macro.* Cache directory: /tmp/MCHOME/.cache/mc/ Use case of this feature: You have to share the same user name on a remote server (access can be distinguished by rsa keys) and want to use your favorite mc configuration w/o overwriting it. Concurrent sessions do not interfere with each other. This works well as a part of the sshrc approach described in https://github.com/Russell91/sshrc | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84561/"
]
} |
174,887 | I have been mixing the use of emacs and vi ( vim ) for a long time. Each of them has its advantage. I parse error output from a compilation-like process and get a line and column number, but I can only use emacs to directly go to a line and column: emacs +15:25 myfile.xml with vi I only have the line number (according to the man page) vi +15 myfile.xml There is an option to position the cursor on a pattern ( vi +/pattern myfile.xml ) which I never got to work. But that would not help me as the pattern is not always the first occurrence in the file. How can I start vi so it goes to column 25 on line 15 of my file? Can I do something with the -c option? | You can use: vi '+normal 15G25|' myfile.xml | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95349/"
]
} |
174,904 | How to find a specific file, and move it to the specific directory /var/tmp ? For example I want to find the file 0914_Jul-2014.gz . Note that the file 0914_Jul-2014.gz is nested some 300 subdirectories deep: /usr/../../../../../../../../../../../../../../../../../../0914_Jul-2014.gz An example: when I do find /usr -name '0914_Jul-2014.gz' -exec mv {} /var/tmp the result is an mv: cannot stat: File name too long error. | You can use find : find /usr -name '0914_Jul-2014.gz' -exec mv {} /var/tmp \; Or for extremely nested directory hierarchies find /usr -name '0914_Jul-2014.gz' -execdir mv {} /var/tmp \; Although as the documentation states you must ensure that your $PATH environment variable does not reference the current directory (namely . ) if you use -execdir | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
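The -exec form above can be rehearsed safely in a scratch tree before pointing it at /usr (all paths below are invented for the demo):

```shell
top=$(mktemp -d)
mkdir -p "$top/usr/a/b/c" "$top/var/tmp"
touch "$top/usr/a/b/c/0914_Jul-2014.gz"
find "$top/usr" -name '0914_Jul-2014.gz' -exec mv {} "$top/var/tmp" \;
ls "$top/var/tmp"        # the file has arrived
rm -rf "$top"
```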
174,933 | I recently needed an updated version of Python3 for a project, so I built it from source; and I believe I made a bit of a mess. All apt-based operations now end in an error here: (Reading database ... 320897 files and directories currently installed.)Removing nvidia-prime (0.6.2) ...Traceback (most recent call last): File "/usr/bin/lsb_release", line 28, in <module> import lsb_releaseImportError: No module named 'lsb_release'dpkg: error processing package nvidia-prime (--remove): subprocess installed post-removal script returned error exit status 1Errors were encountered while processing: nvidia-primeE: Sub-process /usr/bin/dpkg returned an error code (1) I believe the nvidia error is just more of a symptom than a problem. This was discovered when I was trying to add a source and was met with: sudo: add-apt-repository: command not found I'd gladly upgrade this box to 14.10, but all upgrade based commands return the same lsb_release message. Any advice on restoring my package management abilities? Edit: Updating with python path info lars@whorus:~/Downloads/Python-3.4.2$ ls -l /usr/bin/python*lrwxrwxrwx 1 root root 9 Dec 18 10:36 /usr/bin/python -> python2.7lrwxrwxrwx 1 root root 9 Apr 18 2014 /usr/bin/python2 -> python2.7-rwxr-xr-x 1 root root 3349512 Mar 22 2014 /usr/bin/python2.7lrwxrwxrwx 1 root root 9 Mar 23 2014 /usr/bin/python3 -> python3.4-rwxr-xr-x 2 root root 4061272 Apr 11 2014 /usr/bin/python3.4-rwxr-xr-x 2 root root 4061272 Apr 11 2014 /usr/bin/python3.4mlrwxrwxrwx 1 root root 10 Mar 23 2014 /usr/bin/python3m -> python3.4m | Ubuntu 14.04 has the lsb_release.py file installed for Python 2.7 as well and lsb_release seems to work under python2.7 as well. You can try this by doing: python2.7 /usr/bin/lsb_release If that works, make a backup of the file /usr/bin/lsb_release and change the first line to read: #! /usr/bin/python2.7 (you can experiment with the -Es options, I would leave them out initially). 
Once you can run apt-get again, reinstall python3 and its dependencies. You can determine the direct dependencies by using apt-cache depends python3 and use apt-rdepends or reverse-depends (both have to be installed) to get dependencies recursively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95414/"
]
} |
174,984 | I was reading about "Glob" and "Globbing Pathnames", and I found this strange (to me) part in man pages : "[--0]" matches the three characters '-', '.', '0', since '/' cannot be matched. I am confused! How do two dashes and a 0 match . ? What is the role of the / character here? Is this a bug in the man page? | As explained in the beginning of that paragraph in that man page , '-' character, when put between two characters , represents a range of characters, and also, '-' character, when put as first or last character between brackets , has its literal meaning. So, the first dash really means a '-' character, and the second dash is a range specifier. So the whole pattern consists of all the characters between '-' and '0', which, in the C/POSIX locale (but generally not in others) are: -./0 and since '/' cannot be matched , the pattern matches three characters '-', '.', '0'. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18650/"
]
} |
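The range logic is easy to verify with a case statement, which uses the same bracket-expression rules; only pathname expansion adds the "never matches /" restriction, so a plain pattern match does accept the slash:

```shell
export LC_ALL=C    # ranges are only well-defined in the C locale
for c in - . / 0 1 a; do
    case $c in
        [--0]) echo "matches: $c" ;;     # prints -, ., / and 0
        *)     echo "no match: $c" ;;    # prints 1 and a
    esac
done
```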
174,989 | Why is the cron job below not working? The manually executed script works, but when I put it in a cron job it doesn't push through. #--------Reports-------------25 11 * * * /logs/scripts/chim/currbalance_dump.sh >> /logs/currbal.log script is below #!/bin/bashsftp [email protected] << SFTPcd /home/sftpadm/BanKo/CurrBalance/mget banko_current_balance_`date +%Y%m%d`.csv /logs/Reports/BanKo/CurrBalanceSFTPexit Folder is /logs/scripts/chim/currbalance_dump.sh-rwxrwxrwx 1 kycadm kycadm 174 Sep 29 09:43 currbalance_dump.sh | When a script works from an interactive shell but not from cron, the usual culprit is cron's minimal environment: jobs run with a stripped-down PATH (typically /usr/bin:/bin) and none of your login shell's settings, so sftp may not be found, and anything that depends on your interactive SSH setup (such as a running ssh-agent) is unavailable. Use the absolute path to sftp inside the script, make sure key-based authentication works for the account the job runs as without an agent, and redirect stderr as well so failures actually land in the log: 25 11 * * * /logs/scripts/chim/currbalance_dump.sh >> /logs/currbal.log 2>&1 The existing entry only captures stdout, so any error message from sftp was being mailed to the crontab owner or discarded instead of appearing in /logs/currbal.log. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95457/"
]
} |
174,990 | So Wikipedia ( link ) tells me that the command pwd is short for "print working directory", and that makes sense. But for the environment variable, the "P" has to be an acronym for something else than print. I hear people talking about "current working directory", which sounds better and is more intuitive, but still the environment variable seems to be called $PWD, and not $CWD. Nobody ever says "Did you check the print working directory variable?". I am currently playing around with the web application server uWSGI, and when running it tells me (on the uWSGI stats page): "cwd":"/home/velle/greendrinks", so they obviously like the (more intuitive acronym) cwd over pwd . I guess I am trying to figure out if I misunderstood something, or if it is just a matter of having given the environment variable an unintuitive name? | That depends on what you're doing. First of all, $PWD is an environment variable and pwd is a shell builtin or an actual binary: $ type -a pwdpwd is a shell builtinpwd is /bin/pwd Now, the bash builtin will simply print the current value of $PWD unless you use the -P flag. As explained in help pwd : pwd: pwd [-LP] Print the name of the current working directory. Options: -L print the value of $PWD if it names the current working directory -P print the physical directory, without any symbolic links By default, ‘pwd’ behaves as if ‘-L’ were specified. The pwd binary, on the other hand, gets the current directory through the getcwd(3) system call which returns the same value as readlink -f /proc/self/cwd . 
To illustrate, try moving into a directory that is a link to another one: $ ls -ltotal 4drwxr-xr-x 2 terdon terdon 4096 Jun 4 11:22 foolrwxrwxrwx 1 terdon terdon 4 Jun 4 11:22 linktofoo -> foo/$ cd linktofoo$ echo $PWD/home/terdon/foo/linktofoo$ pwd/home/terdon/foo/linktofoo$ /bin/pwd/home/terdon/foo/foo So, in conclusion, on GNU systems (such as Ubuntu), pwd and echo $PWD are equivalent unless you use the -P option, but /bin/pwd is different and behaves like pwd -P . Source https://askubuntu.com/a/476633/291937 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/174990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89717/"
]
} |
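The logical/physical split described above reproduces anywhere, not just in an interactive shell; a throwaway sketch:

```shell
base=$(mktemp -d)
mkdir "$base/foo"
ln -s foo "$base/linktofoo"
cd "$base/linktofoo"
echo "\$PWD:   $PWD"         # ends in /linktofoo (logical, cd -L default)
echo "pwd -P: $(pwd -P)"     # ends in /foo       (physical)
cd / && rm -rf "$base"
```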
175,018 | I have this file: sometext1{string1}sometext2{string2string3}sometext3{string4string5string6} I want to search this file for a specific string and print everything before this string up to the opening { and everything after this string up to the closing } . I tried to achieve this with sed but if I try to print everything in the range /{/,/string2/ for example sed prints this: sometext1{string1}sometext2{string2sometext3{string4string5string6} If I search for the string "string2" I need the output to be: sometext2{string2string3} Thanks. | Here are two commands. If you want a command that trims up to the last .*{$ line in a sequence (as @don_crissti does with ed ) you can do: sed 'H;/{$/h;/^}/x;/{\n.*PATTERN/!d' ...which works by appending every line to H old space following a \n ewline character, overwriting h old space for every line that matches {$ , and swapping ing h old and pattern spaces for every line that matches ^} - and thereby flushing its buffer. It only prints lines which match a { then a \n ewline and then PATTERN at some point - and that only ever happens immediately following a buffer swap. It elides any lines in a series of {$ matches to the last in the sequence, but you can get all of those inclusive like: sed '/PATTERN.*\n/p;//g;/{$/,/^}/H;//x;D' What it does is swap pattern and h old spaces for every ...{$.*^}.* sequence, appends all lines within the sequence to H old space following a \n ewline character, and D eletes up to the first occurring \n ewline character in pattern space for every line cycle before starting again with what remains. Of course, the only time it ever gets \n ewline in pattern space is when an input line matches ^} - the end of your range - and so when it reruns the script on any other occasion it just pulls in the next input line per usual. 
When PATTERN is found in the same pattern space as a \n ewline, though, it prints the lot before overwriting it with ^} again (so it can end the range and flush the buffer) . Given this input file (thanks don) : sometext1{string1}sometext2{PATTERNstring3}sometext3{string4string5string6}Header{sometext4{some stringstring unknownhere's PATTERN and PATTERN againand PATTERN tooanother string here}} The first prints: sometext2{PATTERNstring3}sometext4{some stringstring unknownhere's PATTERN and PATTERN againand PATTERN tooanother string here} ...and the second... sometext2{PATTERNstring3}Header{sometext4{some stringstring unknownhere's PATTERN and PATTERN againand PATTERN tooanother string here} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95474/"
]
} |
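The first command from the answer can be tried on a scratch copy of the question's sample, searching for string2 (this uses the \n-in-address extension of GNU sed):

```shell
blocks=$(mktemp)
cat > "$blocks" <<'EOF'
sometext1{
string1
}
sometext2{
string2
string3
}
EOF
sed 'H;/{$/h;/^}/x;/{\n.*string2/!d' "$blocks"
# prints only the sometext2{ ... } block
rm -f "$blocks"
```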
175,071 | I try to find a script to decrypt (unhash) the ssh hostnames in the known_hosts file by passing a list of the hostnamses . So, to do exactly the reverse of : ssh-keygen -H -f known_hosts Or also, to do the same as this if the ssh config HashKnownHosts is set to No: ssh-keygen -R know-host.com -f known_hostsssh-keyscan -H know-host.com >> known_hosts But without re-downloading the host key (caused by ssh-keyscan). Something like: ssh-keygen --decrypt -f known_hosts --hostnames hostnames.txt Where hostnames.txt contains a list of hostnames. | Lines in the known_hosts file are not encrypted, they are hashed. You can't decrypt them, because they're not encrypted. You can't “unhash” them, because that what a hash is all about — given the hash, it's impossible¹ to discover the original string. The only way to “unhash” is to guess the original string and verify your guess. If you have a list of host names, you can pass them to ssh-keygen -F and replace them by the host name. while read host comment; do found=$(ssh-keygen -F "$host" | grep -v '^#' | sed "s/^[^ ]*/$host/") if [ -n "$found" ]; then ssh-keygen -R "$host" echo "$found" >>~/.ssh/known_hosts fidone <hostnames.txt ¹ In a practical sense, i.e. it would take all the computers existing today longer than the present age of the universe to do it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175071",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88343/"
]
} |
175,078 | Let's say I start in my local account: avindra@host:~> then I switch to root: host:~ # Then I switch to oracle: [ oracle@host:~] Is there a way for me to drop back into the root shell (the parent), without logging out of the oracle shell? This would be convenient, because the oracle account does not have sudo privileges. A typical scenario with oracle is that I end up in /some/really/deeply/nested/directory, and all kinds of special environment variables are set in particular ways. Here comes the problem: I need to get back into root to touch some system files. Yes, I can log out of oracle to get back to root, but at the cost of losing my current working directory and environment. Is there a way to "switch" to the parent shell using known conventions? | You can simulate a CTRL-Z (which you normally use to temporarily suspend a process) using the kill command: [tsa20@xxx01:/home/tsa20/software]$ kill -19 $$[1]+ Stopped sudo -iu tsa20[root@xxx01 ~]# fgsudo -iu tsa20[tsa20@xxx01:/home/tsa20/software]$ bash just traps the CTRL-Z key combination. kill -19 sends SIGSTOP to the process, which is effectively the same thing (CTRL-Z itself generates SIGTSTP, signal 20, which can be caught; signal 19 is SIGSTOP, which cannot, but both stop the process). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63602/"
]
} |
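Numeric signals are architecture-dependent (19 is SIGSTOP on x86 Linux, but not everywhere), so a script would use the names; the stop/continue cycle can be watched non-interactively with a stand-in process:

```shell
sleep 60 &                  # stand-in for the suspended inner shell
pid=$!
kill -STOP "$pid"           # what kill -19 does on x86 Linux, by name
sleep 1
ps -o stat= -p "$pid"       # shows 'T': the process is stopped
kill -CONT "$pid"           # the programmatic counterpart of fg
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
```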
175,123 | I'm trying to make a .desktop file for Minecraft. Nothing appears to happen upon executing the file. I've tried assigning the Exec key as follows: Exec= java -jar "~/.minecraft/Minecraft.jar" Exec= java -jar "$HOME/.minecraft/Minecraft.jar" But I'm not sure how to put in the reserved characters (~ and $) correctly. According to Freedesktop's Desktop Entry Specification : If an argument contains a reserved character the argument must be quoted. and Quoting must be done by enclosing the argument between double quotes and escaping the double quote character, backtick character ("`"), dollar sign ("$") and backslash character ("\") by preceding it with an additional backslash character. Implementations must undo quoting before expanding field codes and before passing the argument to the executable program. But that's very confusing to me. | It seems a common workaround to execute sh , which will resolve the special symbols and variables correctly: Exec=sh -c "java -jar ~/.minecraft/Minecraft.jar" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95534/"
]
} |
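Put together, a minimal launcher might look like the sketch below; the file name, Icon value and jar path are assumptions carried over from the question, not verified values:

```ini
[Desktop Entry]
Type=Application
Name=Minecraft
Comment=Launch the Minecraft jar from the user's home directory
Exec=sh -c "java -jar ~/.minecraft/Minecraft.jar"
Icon=minecraft
Terminal=false
Categories=Game;
```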
175,135 | https://serverfault.com/questions/70939/how-to-replace-a-text-string-in-multiple-files-in-linux https://serverfault.com/questions/228733/how-to-rename-multiple-files-by-replacing-word-in-file-name https://serverfault.com/questions/212153/replace-string-in-files-with-certain-file-extension https://serverfault.com/questions/33158/searching-a-number-of-files-for-a-string-in-linux These mentioned articles have all answered my question. However none of them work for me. I suspect it is because the string I am trying to replace has a # in it. Is there a special way to address this? I have image file that had an é replaced by #U00a9 during a site migration. These look like this: Lucky-#U00a9NBC-80x60.jpgLucky-#U00a9NBC-125x125.jpgLucky-#U00a9NBC-150x150.jpgLucky-#U00a9NBC-250x250.jpgLucky-#U00a9NBC-282x232.jpgLucky-#U00a9NBC-300x150.jpgLucky-#U00a9NBC-300x200.jpgLucky-#U00a9NBC-300x250.jpgLucky-#U00a9NBC-360x240.jpgLucky-#U00a9NBC-400x250.jpgLucky-#U00a9NBC-430x270.jpgLucky-#U00a9NBC-480x240.jpgLucky-#U00a9NBC-600x240.jpgLucky-#U00a9NBC-600x250.jpgLucky-#U00a9NBC.jpg and I want to change it to something like this: Lucky-safeNBC-80x60.jpgLucky-safeNBC-125x125.jpgLucky-safeNBC-150x150.jpgLucky-safeNBC-250x250.jpgLucky-safeNBC-282x232.jpgLucky-safeNBC-300x150.jpgLucky-safeNBC-300x200.jpgLucky-safeNBC-300x250.jpgLucky-safeNBC-360x240.jpgLucky-safeNBC-400x250.jpgLucky-safeNBC-430x270.jpgLucky-safeNBC-480x240.jpgLucky-safeNBC-600x240.jpgLucky-safeNBC-600x250.jpgLucky-safeNBC.jpg UPDATE: These examples all start with "LU00a9ucky but here are many images with different names. I am simply targeting the "#U00a9" portion of the string to replace with "safe". | This is not hard, but not for the reason you might expect: the octothorpe (#) is not special to sed or find, and the single quotes already keep the shell from treating it specially (an unquoted # only starts a comment at the beginning of a word anyway). What does need fixing is the doubled backslash in the sed expression, 's/\\#U00a9/safe/', which looks for a literal backslash before the # that the file names don't contain, and the find pattern, which should match every affected file rather than only the Lucky-* ones: find . -type f -name '*#U00a9*' | while IFS= read -r FILE ; do newfile="$(printf '%s\n' "$FILE" | sed 's/#U00a9/safe/g')" ; mv "$FILE" "$newfile" ; done | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/175135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
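The rename loop can be rehearsed end to end in a scratch directory first; the first file name below is copied from the question, the second invented to show that non-Lucky files are handled too:

```shell
dir=$(mktemp -d)
cd "$dir" || exit 1
touch 'Lucky-#U00a9NBC-80x60.jpg' 'Other-#U00a9ABC.png'
find . -type f -name '*#U00a9*' | while IFS= read -r f; do
    mv "$f" "$(printf '%s\n' "$f" | sed 's/#U00a9/safe/g')"
done
ls        # Lucky-safeNBC-80x60.jpg  Other-safeABC.png
cd / && rm -rf "$dir"
```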
175,146 | How to check the status of apt-get update ? $ apt-get update ; echo "status is: $?"Err http://security.debian.org stable/updates Release.gpgCould not resolve 'security.debian.org'Hit http://192.168.1.100 stable Release.gpgHit http://192.168.1.100 stable ReleaseHit http://192.168.1.100 stable/main i386 PackagesHit http://192.168.1.100 stable/contrib i386 PackagesHit http://192.168.1.100 stable/non-free i386 PackagesIgn http://192.168.1.100 stable/contrib Translation-enIgn http://192.168.1.100 stable/main Translation-enIgn http://192.168.1.100 stable/non-free Translation-enReading package lists... DoneW: Failed to fetch http://security.debian.org/dists/stable/updates/Release.gpg Could not resolve 'security.debian.org'W: Some index files failed to download. They have been ignored, or old ones used instead.status is: 0 Here there's an error with the fetch of security updates but the exit status is 0. My goal is a script to check if apt-get update runs correctly. | In your example apt-get update didn't exit with an error, because it considered the problems as warnings, not as fatal. If there's a really fatal error, then it would exit with non-zero status. One way to recognize anomalies is by checking for these patterns in stderr : Lines starting with W: are warnings Lines starting with E: are errors You could use something like this to emulate a failure in case the above patterns match, or the exit code of apt-get update itself is non-zero: if ! { sudo apt-get update 2>&1 || echo E: update failed; } | grep -q '^[WE]:'; then echo successelse echo failurefi Note the ! in the if . It's because the grep exits with success if the pattern was matched, that is, if there were errors. When there are no errors the grep itself will fail. So the if condition is to negate the exit code of the grep . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40628/"
]
} |
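The warning/error classification above can be exercised without touching apt at all, on canned output lines copied from the question:

```shell
check() {
    # succeed only if no W:/E: lines appear in the supplied output
    if printf '%s\n' "$1" | grep -q '^[WE]:'; then
        echo failure
    else
        echo success
    fi
}
check 'Hit http://192.168.1.100 stable Release'                    # success
check 'W: Failed to fetch http://security.debian.org/Release.gpg'  # failure
```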
175,162 | I've heard that FUSE-based filesystems are notoriously slow because they are implemented in a userspace program. What is it about userspace that is slower than the kernel? | Code executes at the same speed whether it's in the kernel or in user land, but there are things that the kernel code can do directly while user land code has to jump through hoops. In particular, kernel code can map application memory directly, so it can directly copy the file contents between the application memory and the internal buffers from or to which the hardware copies. User code has to either make an extra copy via a pipe or socket, or make a more complex memory sharing operation. Furthermore each file operation has to go through the kernel — the only way for a process to interact with anything is via a system call. If the file operation is performed entirely inside the kernel, there's only one user/kernel transition and one kernel/user transition to perform, which is pretty fast. If the file operation is performed by another process, there has to be a context switch between processes, which requires a far more expensive operation in the MMU . The speed performance is still negligible against most hardware access times, but it can be observed when the hardware isn't the bottleneck, especially as many hardware operations can be performed asynchronously while the main processor is doing something else, whereas context switches and data copies between processes keep the CPU busy. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87091/"
]
} |
175,185 | I am looking for advice on how to search a part of a string within a file and replace the complete line OR append that string to that file if not found. I "played" with sed for a while now, but couldn't get it to work as expected. I need to add: /swapfile none swap sw 0 0 to /etc/fstab (on Ubuntu 14.04 - Trusty Tahr ). Conditions: If any line starting with /swapfile is present in /etc/fstab , remove that line and replace with the string provided above If more than one line starting with /swapfile is found, remove them all and append the string above to the end of the file If no /swapfile is present in /etc/fstab , append the string to /etc/fstab The command must not show console output and must be a "one-liner" (due to automation purposes with puppet) I am confident that's possible, but I simply didn't find a related tutorial about using sed in the way I need it. I used sudo sed -i '$a/swapfile none swap sw 0 0' /etc/fstab but this only appends the string :( | You can do this with sed — it's Turing-complete. But it isn't the best tool for the job. Sed doesn't have a convenient way of remembering that it's already made a replacement. What you can relatively easily do with sed is to blank all the lines starting /swapfile , and add a new one at the end: sed -i '$! s/^\/swapfile[\t ]//; $s/\(^\/swapfile.*\)\?$/\n\/swapfile none swap sw/' /etc/fstab but beyond that we're quickly getting into territory where I wouldn't leave such sed code for another sysadmin to maintain, especially when a simple, readable combination of shell commands would do a better job: { </etc/fstab grep -v '/swapfile[\t ]'; echo '/swapfile none swap sw'; } >/etc/fstab.new && mv /etc/fstab.new /etc/fstab If you want to preserve the existing position of the /swapfile line if it's there and only modify the file if it needs modifying, a combination of shell logic and awk is a better tool. I've used multiple lines here for clarity but you can put all the code on the same line if you like. 
As a bonus, if the file already contained the intended line (with exact spacing), it won't be modified. awk '
    /\/swapfile[\t ]/ {if (replaced) next; replaced=1; print "/swapfile none swap sw"; next}
    {print}
    END {if (!replaced) print "/swapfile none swap sw"}
' /etc/fstab >/etc/fstab.new &&
if cmp -s /etc/fstab.new /etc/fstab; then rm /etc/fstab.new;
else mv /etc/fstab.new /etc/fstab;
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95506/"
]
} |
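The grep-and-append variant from the answer above can be tried safely on a scratch copy; the file contents and paths below are made up for illustration:

```shell
# Build a throwaway stand-in for /etc/fstab.
tmp=$(mktemp)
printf '%s\n' '/dev/sda1 / ext4 defaults 0 1' \
              '/swapfile none swap sw 9 9' > "$tmp"

# Drop any existing /swapfile line, then append the canonical one.
{ grep -v '^/swapfile[[:blank:]]' "$tmp"; echo '/swapfile none swap sw 0 0'; } > "$tmp.new" &&
mv "$tmp.new" "$tmp"

grep -c '^/swapfile' "$tmp"   # prints 1: the old entry is replaced, not duplicated
```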
175,255 | I am trying to open some ports in CentOS 7. I am able to open a port with the following command: firewall-cmd --direct --add-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 7199 -j ACCEPT By inspecting via iptables -L -n , I get the confirmation that the setting was successful: Chain IN_public_allow (1 references)target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7199 Unfortunately, I cannot make the changes permanent. Even by using the --permanent option like this: firewall-cmd --direct --permanent --add-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 7199 -j ACCEPT Any idea on how to fix this? Why is the --permanent option not working correctly? | --direct commands cannot be made permanent. Use equivalent zone command: sudo firewall-cmd --zone=public --add-port=7199/tcp --permanent sudo firewall-cmd --reload and to check the result: sudo firewall-cmd --zone=public --list-all | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57266/"
]
} |
175,316 | I've got this wonderful conundrum with a WAV file, whereas I cannot detect it's actual sample size (i.e. how many bits are in a sample) and the number of channels. geek@liv-inspiron:~$ soxi file.wavInput File : 'file.wav'Channels : 2Sample Rate : 44100Precision : 16-bitDuration : 00:03:19.56 = 8800596 samples = 14967 CDDA sectorsFile Size : 35.2MBit Rate : 1.41MSample Encoding: 16-bit Signed Integer PCM MPlayer2 reports the following (but I can only hear noise): geek@liv-inspiron:~$ mplayer file.wav MPlayer2 2.0-701-gd4c5b7f-2ubuntu2 (C) 2000-2012 MPlayer TeamPlaying file.wav.Detected file format: WAV / WAVE (Waveform Audio) (libavformat)[wav @ 0x7f21516c9600]max_analyze_duration reached[lavf] stream 0: audio (pcm_s16le), -aid 0Load subtitles in .Selected audio codec: Uncompressed PCM [pcm]AUDIO: 44100 Hz, 2 ch, s16le, 1411.2 kbit/100.00% (ratio: 176400->176400)AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample)Video: no videoStarting playback... While MPlayer outputs actual sound, and seems to detect a DTS format: geek@liv-inspiron:~$ mplayer file.wav MPlayer 1.1-4.8 (C) 2000-2012 MPlayer TeamPlaying file.wav.libavformat version 54.20.3 (external)Audio only file format detected.Load subtitles in ./==========================================================================Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoderslibavcodec version 54.35.0 (external)AUDIO: 44100 Hz, 2 ch, floatle, 1411.2 kbit/50.00% (ratio: 176400->352800)Selected audio codec: [ffdca] afm: ffmpeg (FFmpeg DTS)==========================================================================AO: [pulse] 44100Hz 2ch floatle (4 bytes per sample)Video: no videoStarting playback... And if I play it with VLC which also outputs actual sound, it reports: Type: AudioCodec: DTS Audio (dts )Channels: 3F2R/LFESample rate: 44100 HzBitrate: 1411 kb/s Some quick math yields 1411 ∕ 44.1 ≈ 31.995465, which implies a 32-bit sample size. So which one is it: 16-bit or 32-bit? 
Or is it 16-bit per channel? And how many channels does it have? 2 as in Stereo or 5 as in DTS? The info is again conflicting... In other words, is there a tool that can accurately report the technical data for a WAV file, without getting confused by erroneous headers? | As pointed out in this question , an excellent utility for this task is MediaInfo . MediaInfo is a convenient unified display of the most relevant technical and tag data for video and audio files. geek@liv-inspiron:~$ mediainfo file.wav GeneralComplete name : file.wavFormat : WaveFile size : 33.6 MiBDuration : 3mn 19sOverall bit rate mode : ConstantOverall bit rate : 1 411 KbpsAudioFormat : DTSFormat/Info : Digital Theater SystemsMode : 14Format settings, Endianness : LittleCodec ID : 1Duration : 3mn 19sBit rate mode : ConstantBit rate : 1 411.2 KbpsChannel(s) : 6 channelsChannel positions : Front: L C R, Side: L R, LFESampling rate : 44.1 KHzBit depth : 24 bitsCompression mode : LossyStream size : 33.6 MiB (100%) This would confirm that the specific file is DTS with 6 channels, but interestingly that the sample size is actually 24 bits and strangely that the compression mode is lossy. One can also use this utility via a GUI: mediainfo-gui . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55183/"
]
} |
175,325 | I want to use the stat command to get information on a file. I did this: Josephs-MacBook-Pro:Desktop Joseph$ echo 'hello' > info.txtJosephs-MacBook-Pro:Desktop Joseph$ stat info.txt16777220 21195549 -rw-r--r-- 1 Joseph staff 0 6 "Dec 21 20:45:31 2014" "Dec 21 20:45:30 2014" "Dec 21 20:45:30 2014" "Dec 21 20:45:30 2014" 4096 8 0 info.txt The 3rd and 4th lines are the output I got. This happens whenever I use the stat command. Meanwhile everyone on the internet gets stuff like: File: `index.htm'Size: 17137 Blocks: 40 IO Block: 8192 regular fileDevice: 8h/8d Inode: 23161443 Links: 1Access: (0644/-rw-r--r--) Uid: (17433/comphope) Gid: ( 32/ www)Access: 2007-04-03 09:20:18.000000000 -0600Modify: 2007-04-01 23:13:05.000000000 -0600Change: 2007-04-02 16:36:21.000000000 -0600 I tried this on Terminal and iTerm 2 and in a fresh session.On the same laptop, I connected to my CentOS server and put in the same commands. It worked perfectly. This leads me to believe that the terminal application isn't the problem. I'm on a MacBook Pro (Retina, 15-inch, Late 2013) with OS X Yosemite version 10.10.1 What is going on and how can I fix this? | Using the -x option for stat should give you similar output: $ stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: ( 501/ Tyilo) Gid: ( 0/ wheel)Device: 1,4 Inode: 8626874 Links: 1Access: Mon Dec 22 06:17:54 2014Modify: Mon Dec 22 06:17:54 2014Change: Mon Dec 22 06:17:54 2014 To make this the default, you can create an alias and save it to ~/.bashrc : alias stat="stat -x" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95676/"
]
} |
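Worth noting for anyone comparing against the Linux examples found online: the -x flag belongs to the BSD stat shipped with OS X. GNU coreutils stat (the one producing the multi-line output quoted in the question) takes -c format strings instead. A minimal sketch, with a throwaway file name:

```shell
# GNU stat equivalent of picking single fields out of the long display.
printf 'hello' > demo.txt
stat -c 'Size: %s bytes' demo.txt   # prints "Size: 5 bytes" with GNU coreutils
rm -f demo.txt
```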
175,345 | Is there any difference between /run directory and var/run directory. It seems the latter is a link to the former. If the contents are one and the same what is the need for two directories? | From the Wikipedia page on the Filesystem Hierarchy Standard : Modern Linux distributions include a /run directory as a temporary filesystem (tmpfs) which stores volatile runtime data, following the FHS version 3.0. According to the FHS version 2.3, this data should be stored in /var/run but this was a problem in some cases because this directory isn't always available at early boot. As a result, these programs have had to resort to trickery, such as using /dev/.udev, /dev/.mdadm, /dev/.systemd or /dev/.mount directories, even though the device directory isn't intended for such data. Among other advantages, this makes the system easier to use normally with the root filesystem mounted read-only. So if you have already made a temporary filesystem for /run , linking /var/run to it would be the next logical step (as opposed to keeping the files on disk or creating a separate tmpfs ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94449/"
]
} |
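On most modern distributions /var/run is literally a symbolic link into /run. The effect (two names resolving to one directory) can be reproduced in a scratch directory; the paths below are made up for illustration:

```shell
# Recreate the /var/run -> /run layout under a scratch directory.
mkdir -p demo/run
ln -s run demo/var_run          # stand-in for the /var/run symlink
echo 1234 > demo/run/foo.pid
cat demo/var_run/foo.pid        # prints "1234": both paths reach the same file
```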
175,352 | I am using gnome 3.14 + debian 8 jessie + nvidia optimus graphic driver. Those borders on animations are driving me insane and I would love some suggestions on how to resolve it :( ? PS, can someone please tell me what is the name of this bug ? | From the Wikipedia page on the Filesystem Hierarchy Standard : Modern Linux distributions include a /run directory as a temporary filesystem (tmpfs) which stores volatile runtime data, following the FHS version 3.0. According to the FHS version 2.3, this data should be stored in /var/run but this was a problem in some cases because this directory isn't always available at early boot. As a result, these programs have had to resort to trickery, such as using /dev/.udev, /dev/.mdadm, /dev/.systemd or /dev/.mount directories, even though the device directory isn't intended for such data. Among other advantages, this makes the system easier to use normally with the root filesystem mounted read-only. So if you have already made a temporary filesystem for /run , linking /var/run to it would be the next logical step (as opposed to keeping the files on disk or creating a separate tmpfs ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94725/"
]
} |
175,368 | I have a server, named gamma , constantly up and running at work. Sometimes I connect to it from home, in which case I use the public IP address 55.22.33.99 . Sometimes, I connect to it when I'm at work, and rather than bounce my packets around unnecessarily, I connect via the local IP address, 192.168.1.100 . At the moment, I have them split up into two different entries in ~/.ssh/config :
Host gamma-local
    HostName 192.168.1.100
    Port 22
    User andreas
Host gamma-remote
    HostName 55.22.33.99
    Port 12345
    User andreas
So, if I'm at work, all I have to type is ssh gamma-local and I'm in; if I'm at home (or anywhere else in the world), I run ssh gamma-remote . When connecting to the server, I would rather not have to type in a different name depending on where I am; I would rather that part be done automatically. For instance, in some cases I have automated scripts that connect and don't know where I am. There is a question that solves this problem by using a Bash script to "try" to connect to the local one first, and if it doesn't connect, try to connect to the remote IP address. This is nice, but (1) it seems inefficient (especially since sometimes you have to "wait" for connections to time out as they don't always send an error back immediately) and (2) it requires Bash and lugging around the script. Is there an alternate way of achieving this that doesn't rely on the use of Bash scripts, nor "testing" to see if the connection works first? | If you have a way to recognize which network you are on then you can use the Match keyword in ~/.ssh/config to do what you want. This requires OpenSSH ≥6.5.
I use something similar to
Match originalhost gamma exec "[ x$(/sbin/iwgetid --scheme) != xMyHomeESSID ]"
    HostName 192.168.1.100
    Port 22
Host gamma
    User andreas
    Port 12345
    HostName 55.22.33.99
So I'm using the identifier of the used wifi network to decide whether I'm at home for the purposes of the SSH connection, but checking the IP address assigned to your computer or anything else that differentiates the two networks can be used as well. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
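For networks without a distinctive wifi ESSID, the same Match exec hook can call any script that exits 0 on the work LAN. A sketch of such a helper, where the subnet and the format of the `ip` output are illustrative assumptions:

```shell
# Succeeds (exit 0) when the address listing contains a work-LAN address.
# The listing is passed as an argument so the logic is easy to test;
# a real helper would feed it "$(ip -4 -o addr show)".
on_work_lan() {
  printf '%s\n' "$1" | grep -q 'inet 192\.168\.1\.'
}

if on_work_lan "2: eth0  inet 192.168.1.50/24 brd 192.168.1.255"; then
  echo "use the LAN address"
fi
# prints "use the LAN address"
```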
175,380 | From my question Can Process id and session id of a daemon differ? , it was clear that I cannot easily decide the features of a daemon. I have read in different articles and from different forums that service --status-all command can be used to list all the daemons in my system. But I do not think that the command is listing all daemons because NetworkManager , a daemon which is currently running in my Ubuntu 14.04 system, is not listed by the command. Is there some command to list the running daemons or else is there some way to find the daemons from the filesystem itself? | The notion of daemon is attached to processes , not files . For this reason, there is no sense in "finding daemons on the filesystem". Just to make the notion a little clearer : a program is an executable file (visible in the output of ls ) ; a process is an instance of that program (visible in the output of ps ). Now, if we use the information that I gave in my answer , we could find running daemons by searching for processes which run without a controlling terminal attached to them . This can be done quite easily with ps : $ ps -eo 'tty,pid,comm' | grep ^? The tty output field contains "?" when the process has no controlling terminal. The big problem here comes when your system runs a graphical environment. Since GUI programs (i.e. Chromium) are not attached to a terminal, they also appear in the output. On a standard system, where root does not run graphical programs, you could simply restrict the previous list to root's processes. This can be achieved using ps ' -U switch. $ ps -U0 -o 'tty,pid,comm' | grep ^? Yet, two problems arise here: If root is running graphical programs, they will show up. Daemons running without root privileges won't. Note that daemons which start at boot time are usually running as root. Basically, we would like to display all programs without a controlling terminal, but not GUI programs . 
Luckily for us, there is a program to list GUI processes : xlsclients ! This answer from slm tells us how to use it to list all GUI programs, but we'll have to reverse it, since we want to exclude them. This can be done using the --deselect switch. First, we'll build a list of all GUI programs for which we have running processes. From the answer I just linked, this is done using... $ xlsclients | cut -d' ' -f3 | paste - -s -d ',' Now, ps has a -C switch which allows us to select by command name. We just got our command list, so let's inject it into the ps command line. Note that I'm using --deselect afterwards to reverse my selection. $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --deselect Now, we have a list of all non-GUI processes. Let's not forget our "no TTY attached" rule. For this, I'll add -o tty,args to the previous line in order to output the tty of each process (and its full command line) : $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --deselect -o tty,args | grep ^? The final grep captures all lines which begin with "?", that is, all processes without a controlling tty. And there you go! This final line gives you all non-GUI processes running without a controlling terminal. Note that you could still improve it, for instance, by excluding kernel threads (which aren't processes)... $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --ppid 2 --pid 2 --deselect -o tty,args | grep ^? ... or by adding a few columns of information for you to read: $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --ppid 2 --pid 2 --deselect -o tty,uid,pid,ppid,args | grep ^? | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/175380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94449/"
]
} |
175,385 | I upgraded to Fedora 21, which spotlights GNOME 3.14 (plus the relevant GTK+ material). Unfortunately it seems that this particular update mangles a lot of my older themes, written for now-aging versions of GNOME 3. Where previously they may have squeaked by, they now look a little out-of-place. I don't presume to re-invent the wheel: I would be very happy to take a pre-existing CSS template (e.g. the default Adwaita 3.14 spec) and tweak it here and there to my liking; there will be no fancy flying. Imagine the hair I tore out when I peeked at /usr/share/themes/Adwaita/gtk-3.0/gtk.css: /* Adwaita is the default theme of GTK+ 3, this file is not used */ That puts me in a pickle. I lack the Google-fu to dig the documentation up about where this might be (worse, I have a gut feeling this is something implicitly obvious to GNOME people that I have been missing out on), and for some reason the GNOME developer website resists my attempts at researching their theming specification. In short, I'd like to find a virgin theme specification for GNOME 3.14, assuming one is extant. How may I do this, or how may I modify my approach? | There's only a single line in that .css file because the default theme Adwaita comes as a binary: Adwaita is a complex theme, so to keep it maintainable it's writtenand processed in SASS, the generated CSS is then transformed into agresource file during gtk build and used at runtime in a non-legibleor editable form. For gnome 4.* per the README : How to tweak the theme Default is a complex theme, so to keep it maintainable it's writtenand processed in SASS. The generated CSS is then transformed into agresource file during gtk build and used at runtime in a non-legibleor editable form. It is very likely your change will happen in the _common.scss file.That's where all the widget selectors are defined. 
Here's a rundown of the "supporting" stylesheets, that are unlikely to be the right place for a drive-by stylesheet fix:
_colors.scss - global color definitions. We keep the number of defined colors to a necessary minimum, most colors are derived from a handful of basics. It covers both the light variant and the dark variant.
_colors-public.scss - SCSS colors exported through gtk to allow for 3rd party apps color mixing.
_drawing.scss - drawing helper mixings/functions to allow easier definition of widget drawing under specific context. This is why Default isn't 15000 LOC.
_common.scss - actual definitions of style for each widget. This is where you are likely to add/remove your changes.
You can read about SASS at http://sass-lang.com/documentation/ . Once you make your changes to the _common.scss file, GTK will rebuild the CSS files. Also, check the guidelines present in Default-light.scss and Default-dark.scss :
// General guidelines:
// - very unlikely you want to edit something else than _common.scss
// - keep the number of defined colors to a minimum, use the color blending functions
//   if you need a subtle shade
// - if you need to inverse a color function, use the @if directive to match for dark $variant
In the same git directory ( Default ) you'll find the files _common.scss , _colors.scss , _colors-public.scss and _drawing.scss
For gnome 3.* : Since the code has been included in gtk+ , you can view the source files HERE . As per their readme :
_colors.scss - global color definitions. We keep the number of defined colors to a necessary minimum, most colors are derived from a handful of basics. It covers both the light variant and the dark variant.
_colors-public.scss - SCSS colors exported through gtk to allow for 3rd party apps color mixing.
_drawing.scss - drawing helper mixings/functions to allow easier definition of widget drawing under specific context. This is why Adwaita isn't 15000 LOC.
_common.scss - actual definitions of style for each widget.
This is where you are likely to add/remove your changes. In the same git directory ( Adwaita ) you can find the _*.scss files mentioned above and the reference schemes: gtk-contained.css gtk-contained-dark.css | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87654/"
]
} |
175,404 | Is this correct? Basically, programs are simply code on hard disk and they are called process(es) when they are in the RAM, right? | While this might be a little clumsy, you could say that. Creating a process takes two steps: Allocate a u area (basically, information about the process that is accessible to the kernel), fill an entry in the process table, initialise all related components... basically, just create another process for the kernel to manage. This is done through the fork system call. Load an executable file into memory. This is done through the exec (now execve ) system call. During this call, 3 main memory areas, called regions are filled: The text region, which consists of a set of instructions your process is to follow : basically, your program. This is contained within your executable file (the compiler writes it based on your source code). The data region, which contains your initialised data (variables with values, e.g. int myvar = 1 ) and enough space to hold unitialised data (called bss ), such as an array (e.g. char buffer[256] ). The stack region. This part is a little trickier to explain, and as I said in a comment, Maurice J. Bach does it better than I ever would (chapter 2, section 2.2.2). Basically, the stack is a dynamic area in memory, which grows as functions are called, and shrinks as they return. When executing a program, frames corresponding to the main function are pushed onto the stack. These frames will be popped when the program terminates. Now, while this might seem enough to run a program, it isn't. Now that your process is running, the kernel still needs to maintain it. Quoting: As outlined in Chapter 2, the lifetime of a process can be conceptually divided into a set of states that describe the process. (The Design of the UNIX Operating System, Maurice J. Bach, chapter 6 : the structure of processes) . 
This means that your process will not always be "running", nor will it always be in primary storage (what you call "RAM"). For instance: If your process ever goes to sleep (because it is told to by its text , or because it is waiting for something), the kernel may decide to swap it out to secondary storage (usually a swap area). When this happens, your process is no longer in primary storage ("in memory/RAM") : the kernel has saved it, and will be able to reschedule it once it's been loaded back into primary storage. If your process ran enough time, and the kernel decides to preempt it (and let another process run instead for some time), it may swap it out again if it cannot hold it into memory. A typical life for a process is... Created: the fork system call has been used. Ready to run (in memory) : instructions and data have been loaded. Running (switching between user and kernel mode, probably several times..) Sleeping, waking up, sleeping, waking up, ... Exiting (final switch to kernel mode, zombie state, disappearance). Steps 3 and 4 may actually interweave. Note that processes are one of two main concepts on UNIX systems (along with files ). For this reason, it is impossible to cover everything about them in a Q&A format. The book I kept quoting in my answer is an excellent reference for UNIX systems, and while *NIXes/UNIX-like system may differ in some areas, they still rely on the same concepts. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175404",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63649/"
]
} |
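The fork/exec pair described in the answer above is what happens for every external command a shell runs, which makes it easy to observe: the child is a new process (a new PID) created by fork, which then execs a fresh program image.

```shell
# The outer shell reports its own PID; the child, created by fork() and
# running a fresh sh image via exec(), reports a different PID and names
# its parent process.
echo "parent shell: $$"
sh -c 'echo "child: $$ (parent: $PPID)"'
```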
175,415 | I have declared functions and variables in bash/ksh and I need to forward them into sudo su - {user} << EOF :
#!/bin/bash
log_f() {
echo "LOG line: $@"
}
extVAR="yourName"
sudo su - <user> << EOF
intVAR=$(date)
log_f ${intVAR} ${extVAR}
EOF | sudo su - , which is a complicated way of writing sudo -i , constructs a pristine environment. That's the point of a login shell. Even a plain sudo removes most variables from the environment. Furthermore sudo is an external command; there's no way to elevate privileges in the shell script itself, only to run an external program ( sudo ) with extra privileges, and that means any shell variables (i.e. non-exported variables) and functions defined in the parent shell won't be available in the child shell. You can pass environment variables through by not invoking a login shell ( sudo bash instead of sudo su - or sudo -i ) and configuring sudo to let these variables through (with Defaults !env_reset or Defaults env_keep=… in the sudoers file). This won't help you for functions (although bash has a function export facility, sudo blocks it). The normal way to get your functions in the child shell would be to define them there. Take care of quoting: if you use <<EOF for the here document, the content of the here document is first expanded by the parent shell, and the result of that expansion becomes the script that the child shell sees. That is, if you write
sudo -u "$target_user" -i <<EOF
echo "$(whoami)"
EOF
this displays the name of the original user, not the target user.
To avoid this first phase of expansion, quote the here document marker after the << operator:
sudo -u "$target_user" -i <<'EOF'
echo "$(whoami)"
EOF
So if you don't need to pass data from the parent shell to the child shell, you can use a quoted here document:
#!/bin/bash
sudo -u "$target_user" -i <<'EOF'
log_f() {
echo "LOG line: $@"
}
intVAR=$(date)
log_f "${intVAR}"
EOF
While you can make use of an unquoted here document marker to pass data from the parent shell to the child shell, this only works if the data doesn't contain any special character. That's because in a script like
sudo -u "$target_user" -i <<EOF
echo "$(whoami)"
EOF
the output of whoami becomes a bit of shell code, not a string. For example, if the whoami command returned "; rm -rf /; "true then the child shell would execute the command echo ""; rm -rf /; "true" . If you need to pass data from the parent shell, a simple way is to pass it as arguments. Invoke the child shell explicitly (with -s , so that it reads its script from the here document rather than treating the argument as a script file) and pass it positional parameters:
#!/bin/bash
extVAR="yourName"
sudo -u "$target_user" -i sh -s "$extVAR" <<'EOF'
log_f() {
  echo "LOG line: $@"
}
intVAR=$(date)
log_f "${intVAR}" "${1}"
EOF
If you have multiple variables to pass, it will be more readable to pass them by name. Call env explicitly to set environment variables for the child shell.
#!/bin/bash
extVAR="yourName"
sudo -u "$target_user" -i env extVAR="$extVAR" sh <<'EOF'
log_f() {
  echo "LOG line: $@"
}
intVAR=$(date)
log_f "${intVAR}" "${extVAR}"
EOF
Note that if you expected /etc/profile and the target user's ~/.profile to be read, you'll have to read them explicitly, or call bash --login instead of sh . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95722/"
]
} |
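The quoting rule described above is easy to verify without sudo by using a plain sh as the child shell:

```shell
msg=parent

# Unquoted delimiter: the parent expands $msg before the child runs.
sh <<EOF
msg=child
echo "unquoted: $msg"
EOF
# prints "unquoted: parent"

# Quoted delimiter: the text reaches the child untouched, which expands it.
sh <<'EOF'
msg=child
echo "quoted: $msg"
EOF
# prints "quoted: child"
```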
175,429 | I used to write my scripts like this: some_commandwhile [ $? -ne 0 ] ; do sleep 1 some_commanddone It always irked me that I have to write some_command twice. Can I put it inside the loop test section somehow? | You could use some_command as your test condition. while ! some_command; do sleep 1; done The condition will remain true as long as some_command exits with an error. Or a more verbose way if you want to do additional checks: while ! some_commanddo # add more if desired sleep 1done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79271/"
]
} |
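A common refinement of the loop above is to cap the number of retries. The sketch below exercises it against a stand-in command that fails twice before succeeding; the counter file count.tmp is an illustrative stand-in for a flaky service:

```shell
# Fails on its first two calls, succeeds from the third on.
flaky() {
  n=$(cat count.tmp 2>/dev/null || echo 0)
  echo $((n + 1)) > count.tmp
  [ "$n" -ge 2 ]
}

tries=0
while ! flaky; do
  tries=$((tries + 1))
  if [ "$tries" -ge 10 ]; then
    echo "giving up"
    break
  fi
  sleep 1
done
echo "done after $tries retries"   # prints "done after 2 retries" here
```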
175,444 | I'm trying to execute a command, stored within a variable:
cmd="grep -i \"word1\" filename | grep -i \"word2\""
eval $cmd
But when I execute the script I get the errors:
grep: |: No such file or directory
grep: grep: No such file or directory
How can I execute commands like the one in my example without getting these errors? | You need to quote "$cmd" - and maybe avoid the " double-quotes. Anyway, to run this you do need to eval it - and this is due to the | pipe . A shell variable does not expand beyond the limits of a single simple command - and you're trying to run a compound command. So:
cmd='grep -i "word1" filename | grep -i "word2"'
eval "$cmd"
Probably when you were expanding $cmd without quotes you ran into filename generation and/or $IFS issues. Just make sure it's reliably quoted and you'll be fine. Also verify that whatever is in "word[12]" doesn't contain double-quotes or backslashes or whatever - else you'll need to reevaluate your quoting style there. Looking closer now at your error I can tell you exactly what it was:
grep: |: No such file or directory
grep: grep: No such file or directory
So if I do:
echo | echo
The first echo prints a newline to the second echo 's stdin . If I do:
set \| echo
echo "$@"
the echo prints each of the positional parameters, which are | pipe and echo respectively. The difference is that the | pipe in the first case is interpreted by the shell's parser to be a token and is interpreted as such. In the second case, at the same time the shell is scanning for the | pipe token it is also scanning for the $ expand token - and so the | pipe is not yet there to be found because "$@" has not yet been replaced with its value. So if you do:
grep -i "word" filename \| grep -i "word2"
...you're likely to get the same results because the | there does not delimit the commands, and is instead an argument to grep in its infiles position.
Here's a look at how many fields you get when you split $cmd with a default $IFS :
printf '<%s> ' $cmd
<grep> <-i> <"word1"> <filename> <|> <grep> <-i> <"word2">
And here's what filename generation might do:
touch 'echo 1simple &&' 'echo 2simple'
eval echo * | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95740/"
]
} |
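When the command doesn't have to live in a string at all, a function sidesteps the whole problem: the pipe is parsed as a pipe and the patterns stay ordinary arguments. A sketch, using a made-up file name demo_file.txt:

```shell
# Same double-grep as the question, wrapped in a function instead of eval.
search_both() {
  grep -i "$1" "$3" | grep -i "$2"
}

printf '%s\n' 'Word1 and word2 here' 'only word1 here' > demo_file.txt
search_both word1 word2 demo_file.txt   # prints "Word1 and word2 here"
```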
175,451 | I have a very long text file in French that I need to clean up. The non-ASCII characters have been replaced by combinations of odd characters. As an example, the following content :
passer de très bonnes fêtes de fin d'année.
should become : (as Unicode text)
passer de très bonnes fêtes de fin d'année.
I have tried sed, but no success. A friend recommended to try Perl. I can easily build a table with the odd sequences of characters and the correct replacing ones. Ideally I would prefer this table to be an independent file for future use. What is the recommended approach for such conversions? | It looks like you had the text encoded in utf-8 (that is good, as it is the standard for Unix), but then something read it as ISO 8859-1 / Microsoft's windows Latin-1 and then output its interpretation. You need to reverse this. e.g.
echo "passer de très bonnes fêtes de fin d'année" | iconv --to-code="ISO 8859-1"
This will take the broken encoding, and convert it to valid utf-8. If your system is configured to utf-8, then it will read correctly. Explanation: If we do echo è | od -t x1 and echo ê | od -t x1 , then we see that the hex codes are c3 a8 0a and c3 aa 0a . We then look here http://www.ascii-code.com/ ( these are iso 8859-1 codes, not ascii ) and see that these codes give è and ê, both followed by an invisible character. So now we know what went wrong: something read utf-8, but interpreted it as iso 8859-1. So we now need to reverse it: we read in whatever format it is that we are reading in, and convert to iso 8859-1 (the reverse of what got us here). The result is valid utf-8. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44902/"
]
} |
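The iconv round trip from that answer can be checked end to end - a sketch of mine that first simulates the damage (UTF-8 bytes read as Latin-1) and then reverses it:

```shell
good='très'
# Simulate the damage: treat the UTF-8 bytes as if they were Latin-1 text
broken=$(printf '%s' "$good" | iconv -f ISO-8859-1 -t UTF-8)
printf 'broken: %s\n' "$broken"     # trÃ¨s
# Reverse it, as the answer suggests:
printf '%s' "$broken" | iconv -f UTF-8 -t ISO-8859-1
echo
```

The same pair of conversions works for a whole file: the mojibake is lossless, so converting back recovers the original bytes exactly.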
175,461 | I'm using this in a service declaration: ExecStartPre=/usr/bin/docker pull "$DOCKER_USERNAME/redis-replication:latest" In the log of systemd, I can see this when I try to start the service:

Usage: docker pull [OPTIONS] NAME[:TAG]

Pull an image or a repository from the registry

  -a, --all-tags=false    Download all tagged images in the repository

It looks like systemd didn't execute the proper command but some weird one. What could it be and how can I correct it? Edited : Here's my entire unit file:

[Unit]
Description=Run redis replication
After=docker.service
Requires=docker.service

[Service]
Restart=always
RestartSec=10s
EnvironmentFile=/etc/vax/credentials
EnvironmentFile=/etc/vax/centos-ip
EnvironmentFile=/etc/vax/docker-auth
EnvironmentFile=/etc/vax/cluster-prefix
ExecStartPre=-/usr/bin/docker kill redisrep
ExecStartPre=-/usr/bin/docker rm redisrep
ExecStartPre=/usr/bin/docker pull "$DOCKER_USERNAME/redis-replication:latest"
ExecStart=/usr/bin/docker run --rm --name redisrep -v /var/data/myproject/redis:/data -e S3_ACCESS_KEY=$S3_ACCESS_KEY -e S3_SECRET_KEY=$S3_SECRET_KEY -e S3_BUCKET=$S3_BUCKET -e BACKUP_PREFIX=$BACKUP_PREFIX -e REPLICATE_FROM_IP=$CENTOS_IP -e REPLICATE_FROM_PORT=6379 $DOCKER_USERNAME/redis-replication:latest
ExecStop=/usr/bin/docker kill redisrep

[X-Fleet]
MachineMetadata="machineIndex=1"

| After some research, I found that it's OK to use quotes in the ExecStart definitions of a systemd service file. As for using shell variables, it's necessary to use curly braces to clarify where the variable name ends when non-space characters are attached to the variable itself. In the above case, systemd must have treated $DOCKER_USERNAME/redis as the variable name instead of $DOCKER_USERNAME . Add curly braces and it is interpreted correctly.
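The variable-name boundary the answer describes can be seen in plain shell too (a side sketch of mine; systemd's ${VAR} syntax serves the same delimiting purpose):

```shell
name=alice
# Braces mark exactly where the variable name ends:
printf '%s\n' "${name}_suffix"      # alice_suffix
# "$name_suffix" (no braces) would instead look up a variable called name_suffix
```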
Edit : More information about what syntax is OK with systemd can be found here: http://www.freedesktop.org/software/systemd/man/systemd.service.html#Command%20lines Basically most shell notation is OK, with the exception of pipe operators. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95803/"
]
} |
175,473 | Following derobert's trick to resize the underlying filesystem when resizing an LVM volume:

lvm> lvextend -r -l +100%FREE /dev/VolGroup00/lvolhome
fsck from util-linux 2.25.2
/sbin/fsck.btrfs: BTRFS file system.
  Size of logical volume VolGroup00/lvolhome changed from 3.04 GiB (777 extents) to 14.94 GiB (3824 extents).
  Logical volume lvolhome successfully resized
fsadm: Filesystem "btrfs" on device "/dev/mapper/VolGroup00-lvolhome" is not supported by this tool
  fsadm failed: 1

The "problem" is that the fsadm tool doesn't support btrfs resizing. Dispirited, I decided to do it the hard way (aka manually):

sudo btrfs filesystem resize max /dev/mapper/VolGroup00-lvolhome
ERROR: can't access '/dev/mapper/VolGroup00-lvolhome'

Well, btrfs can't "access" the device, but it can detect it:

> sudo btrfs filesystem show
Label: none  uuid: 53330630-9670-4110-8f04-5a39bfa86478
        Total devices 1 FS bytes used 2.75GiB
        devid 1 size 3.04GiB used 3.03GiB path /dev/mapper/VolGroup00-lvolhome

So, what gives? How do I resize my btrfs partition inside the logical volume? | Well, that was embarrassing. BTRFS needs to be mounted to be able to resize the partition.

  How do I resize a partition? (shrink/grow) In order to demonstrate and test the back references, the Btrfs devel team has added an online resizer, which can both grow and shrink the filesystem via the btrfs commands. First, ensure that your filesystem is mounted.

So, it doesn't matter that I was using an LVM volume, as long as it was mounted. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41104/"
]
} |
175,496 | I'm trying to create an if-else statement to verify that the user has entered something. If they have, it should run through the commands, and if not I want to echo a help statement. | An example (fairly easy) is as follows. A file named userinput is created which contains the following code.

#!/bin/bash

# create a variable to hold the input
read -p "Please enter something: " userInput

# Check if string is empty using -z. For more, see 'help test'
if [[ -z "$userInput" ]]; then
    printf '%s\n' "No input entered"
    exit 1
else
    # If userInput is not empty show what the user typed in and run ls -l
    printf "You entered %s " "$userInput"
    ls -l
fi

To start learning bash, I recommend you check the following link: http://mywiki.wooledge.org/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95665/"
]
} |
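A related pattern, my own sketch rather than part of the answer above: validating a positional argument instead of interactive input, which is easier to script and test:

```shell
check_arg() {
    if [ -z "${1-}" ]; then
        echo "usage: myscript <name>" >&2
        return 1
    fi
    printf 'hello, %s\n' "$1"
}

check_arg || true    # no argument: prints the usage message and returns 1
check_arg world      # hello, world
```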
175,503 | I have an annoying process that won't die:

ps -A | grep "nzbget"

gives me:

11394 pts/3 00:00:00 nzbget

If I try: pkill nzbget (or 11394) ...I get no response and the process is still alive; top gives me:

PID   USER      PR NI VIRT   RES  SHR  S %CPU %MEM TIME+   COMMAND
11394 asystem+  20 0  125448 6244 4984 T 0.0  0.1  0:00.00 nzbget

I run nzbget with 'nzbget -s'; if I do this after it is running (and I've tried to stop it), I get a [binding socket failed] error and all I can do is reboot. How can I kill this off without rebooting? | You can use kill -9 11394 to kill the process completely ungracefully; this will invoke the following: From The 3 most important "kill" signals on the Linux/UNIX command line : The kernel will let go of the process without informing the process of it. An unclean kill like this could result in data loss. This is the "hardest", "roughest" and most unsafe kill signal available, and should only be used to stop something that seems unstoppable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87673/"
]
} |
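A safe way to watch the signal in action - my sketch using a throwaway sleep rather than nzbget: SIGKILL cannot be caught or ignored, and the shell reports exit status 128 plus the signal number.

```shell
sleep 100 &
pid=$!
kill -9 "$pid"              # SIGKILL: the process gets no chance to object
wait "$pid" || status=$?
echo "exit status: $status" # typically 137 = 128 + 9 (SIGKILL)
```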
175,530 | Suppose a folder with many files with bad extensions, for example:

Song1.avi.mp3
Song2.avi.mp3
Song32.web.mp3
Song23.mp4.mp3
Song2a.mp9.mp3

I want to remove only the second field ( web , avi , mp4 ). I know how to do it with sed, but I have to spell out each extension:

mv -v $i "$(echo $i|sed 's:.flv::g;s:.avi::g;s:.mp4::g')"

Does someone know an immediate method with sed, awk or perl to remove only the second bad extension? | You don't need any external utility; you can do it with the shell's own string manipulation functionality . This makes it easier to avoid breaking on file names with special characters. And remember to always use double quotes around variable substitutions .

mv -v -- "$i" "${i%.*.*}.${i##*.}"

(Obviously this snippet assumes that the file name does contain two extensions. If it doesn't, ${i%.*.*} would be the whole file name.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
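The two parameter expansions can be checked directly (my sketch): ${i%.*.*} strips the last two dot-suffixes, while ${i##*.} keeps only the final one.

```shell
i='Song1.avi.mp3'
echo "${i%.*.*}"            # Song1
echo "${i##*.}"             # mp3
echo "${i%.*.*}.${i##*.}"   # Song1.mp3
```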
175,549 | I have a file I want to send to multiple separate servers. Is it possible to name multiple destinations? rsync foo.png server1:foo.png server2:foo.png This gives me an "Unexpected remote arg: server1:foo.png" error. | No, there is no way in rsync to specify multiple destinations. You'll need to invoke the command multiple times. If you have many targets, you might want to consider using a network filesystem instead--it might be a better fit for your problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89198/"
]
} |
175,557 | I was looking at Google's style guides for their bash scripts, and saw that they quote the exit status variable $? here .

if [[ "$?" -ne 0 ]]; then
  error_message
fi

I thought return values are always numeric, so is there any reason to ever quote them? Is it just a good habit to get into (because you want to quote other special shell variables like "$@")? | I highly recommend you read this wonderful answer for more details. Setting IFS to something containing a digit can break your code:

$ IFS=0
$ echo test
$ [ $? -eq 0 ] && echo done
bash: [: : integer expression expected

Some shells may inherit IFS from the environment ( dash , ash ), some don't ( bash , zsh , ksh ). But someone can control the environment, so your script will break anyway ( $# , $! are also affected). A note: in your example, you used the new test [[...]] , so field splitting is turned off and you don't need to quote in this case. It will matter if you use the old test [...] .

$ IFS=0
$ echo test
$ [[ $? -eq 0 ]] && echo done
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83664/"
]
} |
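The breakage is easy to reproduce in a throwaway shell (my restaging of the answer's demo):

```shell
# With IFS containing '0', an unquoted $? word-splits to nothing and [ fails:
bash -c 'IFS=0; true; [ $? -eq 0 ] && echo done' 2>/dev/null || echo "old test broke"
# Quoted (or with [[ ... ]]), it survives:
bash -c 'IFS=0; true; [ "$?" -eq 0 ] && echo done'
```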
175,558 | In a CentOS 7 server, I get the following error when I type sudo apachectl restart after I add an include file at the bottom of httpd.conf : Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details. When I then type sudo systemctl status httpd.service -l , the result is: httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled) Active: failed (Result: exit-code) since Tue 2014-12-23 20:10:37 EST; 2min 15s ago Process: 2101 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS) Process: 2099 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE) Main PID: 2099 (code=exited, status=1/FAILURE) Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec"Dec 23 20:10:37 ip-address httpd[2099]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using fe80::e23f:49ff:feb7:2a21. Set the 'ServerName' directive globally to suppress this messageDec 23 20:10:37 ip-address systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILUREDec 23 20:10:37 ip-address systemd[1]: Failed to start The Apache HTTP Server.Dec 23 20:10:37 ip-address systemd[1]: Unit httpd.service entered failed state. I can get apache to restart if I comment out the include directive, and I can recreate the error again by un-commenting the include directive. How can I get apache to start properly using the contents of the include file? The line at the bottom of httpd.conf that triggers the error is: IncludeOptional sites-enabled/*.conf . 
The only .conf file in the sites-enabled folder is mydomain.com.conf , which has the following contents: <VirtualHost *:80> ServerName www.mydomain.com ServerAlias mydomain.com DocumentRoot /var/www/mydomain.com/public_html ErrorLog /var/www/mydomain.com/error.log CustomLog /var/www/mydomain.com/requests.log combined</VirtualHost> The httpd.conf is the same as what comes pre-installed with httpd , except for the one line include directive above. I know because I did sudo yum remove httpd mod_ssl and sudo yum install httpd mod_ssl right before triggering this error. The entire httpd.conf can be read at a file sharing site by clicking on this link . I encountered this problem when explicitly following the steps in this tutorial . When I comment out the include file, http/mydomain.com successfully serves up the static content located in /var/www/html , which is the DocumentRoot defined in httpd.conf . The problem seems to be coming from the VirtualHost directive in the include file shown above. To aid in diagnosis, I have included in EDIT#3 below links to all of the .conf files that are contained in the three include directives in httpd.conf . 
EDIT #1 When I try m32's advice to change the /etc/hostname to define mydomain.com , apache still will not restart, and the systemctl status httpd.service results in the following: [sudo_user_account@server-ip-address ~]$ sudo systemctl status httpd.service -lhttpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled) Active: failed (Result: exit-code) since Tue 2014-12-23 14:25:35 EST; 20s ago Process: 31993 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS) Process: 31991 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE) Main PID: 31991 (code=exited, status=1/FAILURE) Status: "Total requests: 1; Current requests/sec: 0; Current traffic: 0 B/sec"Dec 23 14:25:35 hostname systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILUREDec 23 14:25:35 hostname systemd[1]: Failed to start The Apache HTTP Server.Dec 23 14:25:35 hostname systemd[1]: Unit httpd.service entered failed state. EDIT #2 I also tried eyoung100's advice to change the contents of my /etc/hosts file, as defined in the following image, but I still get the same error defined in EDIT#1 above. EDIT#3 Per DerekC's request, I ran sudo apachectl configtest and got: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using fe80::e23f:49ff:feb7:2a21. Set the 'ServerName' directive globally to suppress this messageSyntax OK In addition, per GarethTheRed's suggestion, I examined the include directives in httpd.conf . There are three include directives in httpd.conf . I have listed the three below, along with all the files located in each directive's folder. These are all the standard .conf files installed with httpd . I have not modified any of them yet. 
You can view each of the .conf files at a file sharing site by clicking on the links below: Include conf.modules.d/*.conf references the following files in the conf.modules.d directory: 00-base.conf 00-dav.conf 00-lua.conf 00-mpm.conf 00-proxy.conf 00-ssl.conf 00-systemd.conf 01-cgi.conf IncludeOptional conf.d/*.conf references the following files in the conf.d directory: autoindex.conf ssl.conf userdir.conf welcome.conf There is also a README file that I am omitting here. In addition, the IncludeOptional sites-enabled/*.conf directive and it's contents were outlined thoroughly in the OP above. Are any of these include files conflicting with the VirtualHost settings in mydomain.com.conf ? EDIT#4 Per garethTheRed's suggestion, I moved mydomain.com.conf to the conf.d directory and then started commenting out lines in mydomain.com.conf one by one until httpd was able to restart. I then started un-commenting lines to see how many lines could remain and have httpd still restart. I was able to get httpd to restart, but systemctl status httpd.service -l continues to produce the same warning: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using fe80::e23f:49ff:feb7:2a21. 
Set the 'ServerName' directive globally to suppress this message The VirtualHost syntax that allows httpd to start (though continuing to generate the above warning) is as follows: <VirtualHost *:80> ServerName www.mydomain.com ServerAlias mydomain.com DocumentRoot /var/www/mydomain.com/public_html</VirtualHost> Note that I had to omit the following lines, whose presence escalates the warning into a complete inability to start http: # ErrorLog /var/www/mydomain.com/error.log# CustomLog /var/www/mydomain.com/requests.log combined Also, I ran sudo journalctl -xelu httpd and the terminal replied by repeating the following many times: -- -- Unit httpd.service has finished shutting down.Dec 24 17:48:43 server-ip-address systemd[1]: Stopped The Apache HTTP Server.-- Subject: Unit httpd.service has finished shutting down-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit httpd.service has finished shutting down.Dec 24 17:48:48 server-ip-address systemd[1]: Starting The Apache HTTP Server...-- Subject: Unit httpd.service has begun with start-up-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit httpd.service has begun starting up.Dec 24 17:48:48 server-ip-address httpd[10364]: AH00558: httpd: Could not reliably dDec 24 17:48:48 server-ip-address systemd[1]: Started The Apache HTTP Server.-- Subject: Unit httpd.service has finished start-up-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit httpd.service has finished starting up.-- -- The start-up result is done.lines 887-909/909 (END) Note: The above results remain the same regardless of whether I use eyoung100's hosts file or m32's host file. For this question to be answered, I think I should be able to create log files and also avoid the servername warning. Otherwise, I fear that subsequent steps of configuring httpd will be prone to lingering errors. 
| The logs are causing the errors because apache cannot write to the root of your website. Even if you were to fix the file permissions, you'd still be blocked by SELinux, which only allows apache to write logs to /var/log/httpd . The easiest solution would be to change your website to log to this directory - maybe with a filename that contains the website name in order to differentiate it from other logs.

ErrorLog /var/log/httpd/mydomain_com_error.log
CustomLog /var/log/httpd/mydomain_com_requests.log combined

To set the hostname of the server and get rid of the AH00558 warning, simply use:

hostnamectl set-hostname --static <FQDN of your machine>

e.g.

hostnamectl set-hostname --static mydomain.com | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
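Putting the answer's pieces together, the vhost from the question would end up roughly like this (my reconstruction; the domain and paths are the question's own placeholders):

```apacheconf
<VirtualHost *:80>
    ServerName www.mydomain.com
    ServerAlias mydomain.com
    DocumentRoot /var/www/mydomain.com/public_html
    # Logs moved under /var/log/httpd so SELinux permits the writes:
    ErrorLog /var/log/httpd/mydomain_com_error.log
    CustomLog /var/log/httpd/mydomain_com_requests.log combined
</VirtualHost>
```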
175,624 | I'm using Transmission on EOs and I've downloaded a bunch of torrent files which ended up in my Download folder in my /home. I'd now like to move the files to my external hard drive without breaking the links, so I can keep contributing upload for these files. What should I do? | The easiest way I found was to select all torrents in Transmission, then go to the menu Torrent > Set Location and then choose the desired location for the torrents. After that, Transmission takes care of moving the torrents. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7892/"
]
} |
175,638 | I have a problem that (I think) require a bit of script-magic - not sure what would be best though... I have one directory with lots pictures in different formats - jpg, gif, png, tiff and svg. Some (not all!) of the png-and svg-files are in pairs - ie. one png and one svg version of the same image, both with the same filename, except that the suffix differs (eg. figleaf.png and figleaf.svg). I need a script that will take the file-list (made by ls ), and remove the svg-version of all twins, leaving just the png-version. All other files (non twins) - including svg-files without a png-twin - should remain in the list (together with the png-version of the twins). Alternatively, a script that creates a list of all svg-files with a corresponding png-twin. I think some of twins may have slight difference in between the versions - eg. figleaf.png and FigLeaf.svg - so it would be great if the script could be optionally changed to ignore the case of the letters. | The easiest way I found was to select all torrents in Transmission, then go to the menu Torrent > Set Location and then choose the desired location for torrents. After which, Transmission takes care of moving torrents. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28975/"
]
} |
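A sketch of my own for the task described here, since listing .svg files that have a .png twin only needs a glob and a test (a case-insensitive variant could lowercase both names before comparing):

```shell
mkdir -p pics && cd pics
touch figleaf.png figleaf.svg lone.svg photo.jpg
for f in *.svg; do
    if [ -e "${f%.svg}.png" ]; then
        printf '%s\n' "$f"    # has a .png twin, so a candidate for removal
    fi
done
```

This prints figleaf.svg only; lone.svg stays because it has no .png counterpart.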
175,639 | How can I search for string --version= using grep in OSX bash? When I try something like this: grep -rI --exclude-dir=out "--version=" . I get grep: option `--version' doesn't allow an argument | The easiest way I found was to select all torrents in Transmission, then go to the menu Torrent > Set Location and then choose the desired location for torrents. After which, Transmission takes care of moving torrents. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95858/"
]
} |
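A sketch of mine, not taken from the answer shown: grep accepts patterns that begin with a dash when they are passed with -e or after -- :

```shell
printf 'foo --version=1.2\nbar\n' > sample.txt
grep -e '--version=' sample.txt    # -e marks the next argument as the pattern
grep -- '--version=' sample.txt    # -- ends option parsing
```

Both commands print the matching line foo --version=1.2 .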
175,648 | I need to create a config file for my own script. Here is an example script:

#!/bin/bash
source /home/myuser/test/config
echo "Name=$nam" >&2
echo "Surname=$sur" >&2

Content of /home/myuser/test/config :

nam="Mark"
sur="Brown"

That works! My question: is this the correct way to do this, or are there other ways? | source is not secure as it will execute arbitrary code. This may not be a concern for you, but if file permissions are incorrect, it may be possible for an attacker with filesystem access to execute code as a privileged user by injecting code into a config file loaded by an otherwise-secured script such as an init script. So far, the best solution I've been able to identify is the clumsy reinventing-the-wheel solution:

myscript.conf

password=bar
echo rm -rf /
PROMPT_COMMAND='echo "Sending your last command $(history 1) to my email"'
hostname=localhost; echo rm -rf /

Using source , this would run echo rm -rf / twice, as well as change the running user's $PROMPT_COMMAND . Instead, do this:

myscript.sh (Bash 4)

#!/bin/bash
typeset -A config # init array
config=( # set default values in config array
    [username]="root"
    [password]=""
    [hostname]="localhost"
)

while read line
do
    if echo $line | grep -F = &>/dev/null
    then
        varname=$(echo "$line" | cut -d '=' -f 1)
        config[$varname]=$(echo "$line" | cut -d '=' -f 2-)
    fi
done < myscript.conf

echo ${config[username]}       # should be loaded from defaults
echo ${config[password]}       # should be loaded from config file
echo ${config[hostname]}       # includes the "injected" code, but it's fine here
echo ${config[PROMPT_COMMAND]} # also respects variables that you may not have
                               # been looking for, but they're sandboxed inside the $config array

myscript.sh (Mac/Bash 3-compatible)

#!/bin/bash
config() {
    val=$(grep -E "^$1=" myscript.conf 2>/dev/null || echo "$1=__DEFAULT__" | head -n 1 | cut -d '=' -f 2-)
    if [[ $val == __DEFAULT__ ]]
    then
        case $1 in
            username) echo -n "root" ;;
            password) echo -n "" ;;
            hostname) echo -n "localhost" ;;
        esac
    else
        echo -n $val
    fi
}

echo $(config username)       # should be loaded from defaults
echo $(config password)       # should be loaded from config file
echo $(config hostname)       # includes the "injected" code, but it's fine here
echo $(config PROMPT_COMMAND) # also respects variables that you may not have
                              # been looking for, but they're sandboxed inside the $config array

Please reply if you find a security exploit in my code. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40628/"
]
} |
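A minimal runnable distillation of the parse-don't-source idea (my sketch, not the answer's exact code): the config file is only ever read, never executed.

```shell
cat > demo.conf <<'EOF'
password=bar
echo rm -rf /
EOF

# Look up one key's value; nothing in the file is ever run:
get_conf() {
    grep -E "^$1=" demo.conf 2>/dev/null | head -n 1 | cut -d '=' -f 2-
}

get_conf password    # bar
get_conf echo        # nothing: "echo rm -rf /" has no '=' after the key
```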
175,656 | I have a bunch of log files that get overwritten ( file.log.1 , file.log.2 etc). When I copy them from the device making them onto my local machine, I lose the original time stamps. So I'd like to put them in chronological order. The problem is that I don't necessarily know which is the newest and which is the oldest. What I'd like to be able to do is, if all the logs are in a directory, print something like this:

file: file.log.1
first line: [first line that isn't whitespace]
last line: [last line that isn't whitespace]

I can just write a python script to do this, but I'd much rather do it with linux built-ins if possible. Is this a job for awk/sed? Or would this really be better off in a scripting language? If yes to awk/sed, how would you go about doing it? I found this awk command by searching, but it only accepts one file name and will print whatever the last line is (and there can be a variable number of empty lines at the end):

awk 'NR == 1 { print }END{ print }' filename |
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73386/"
]
} |
175,660 | I have a directory structure that looks like: dirA fileA1 fileA2 ...dirB fileB1 fileB2 ... I would like to create a torrent using CLI utilities that contains: dirA/fileA1dirB/fileB1 (Note: this is a simplified example. In reality, there are four directories and thousands of files in each, and I would like to select ~100 files out of each directory. So solutions that involve simply excluding specific files won't work.) So far I have tried: ctorrent only lets you specify a single file or directory mktorrent only lets you specify a single file or directory transmission-create only lets you specify a single file or directory py3torrentcreator only lets you specify a single file or directory. It does allow you to specify a pattern of files to exclude, but there are way too many other files to exclude them individually. I also tried using the Python bindings for libtorrent , but their add_files method strips out the directory names: >>> import libtorrent as lt>>> fs = lt.file_storage()>>> lt.add_files(fs, 'dirA/fileA1')>>> lt.add_files(fs, 'dirB/fileB1')>>> print fs.at(0).pathfileA1>>> t = lt.create_torrent(fs)>>> lt.set_piece_hashes(t, '.')Traceback (most recent call last): File "<stdin>", line 1, in <module>RuntimeError: No such file or directory Is there any way to accomplish this? | The easiest way to do this, that I know of, is to create a single directory containing symlinks to the different files or directories you would like to add to the torrent. Add symlinks to a parent directory cd ~/Shared/parent-dir/ ln -s /path/to/file ln -s /path/to/dir Create your torrent Testing with transmission-create, you can create a new torrent using this source folder and each symlink will be traversed. transmission-create ~/Shared/parent-dir/ There is no way to store the full filepath in a torrent's meta info for files that are not descendants of parent-dir. 
When a peer downloads the multi-file torrent, a directory is created using name of the torrent that is found in its meta info. This directory is used as the top most parent directory for all files included in the meta info. Here is the output of the meta info for a torrent I have called bt-symlinks.torrent . Notice how only paths to files are stored in the meta info and they always begin with the name(infile) 1 used as their top most 2 directory 3 . transmission-show bt-symlinks.torrent Name: bt-symlinksFile: .torrentGENERAL Name: bt-symlinks Hash: 35af9b734284f9259763defe6095424fe3b79b42 Created by: Transmission/2.82 (14160) Created on: Sat Dec 27 12:04:59 2014 Piece Count: 2357 Piece Size: 64.00 KiB Total Size: 154.4 MB Privacy: Public torrentTRACKERSFILES bt-symlinks/bt-symlinks.torrent (57.40 kB) bt-symlinks/gifs/Bill-Cosby-Jell-o-GIF.gif (810.3 kB) bt-symlinks/gifs/Firefly_Lantern_Animation_by_ProdigyBombay.gif (485.2 kB) bt-symlinks/gifs/L-cake.gif (455.2 kB) bt-symlinks/gifs/L-sweets.gif (871.0 kB) bt-symlinks/gifs/Metroid (NES) Music - Kraids Hideout.mp4 (4.16 MB) bt-symlinks/gifs/Phantasy Star II_Mother Brain.gif (530.5 kB) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95861/"
]
} |
175,744 | In Ubuntu 14.04.1 LTS 64-bit bash I am declearing floating point variables by multiplying floating point bash variables in bc with scale set to 3; however, I cannot get the number of digits after the decimal point to be zero and get rid of the zero to the left of the decimal point. How can I transform, say 0.005000000 into .005 ? This is necessary due to my file naming convention. Thanks for your recommendations. UPDATE: Can I use it for already defined shell variables and redefining them? The following code gives me an error. ~/Desktop/MEEP$ printf "%.3f\n" $wbash: printf: 0.005000: invalid number0,000 The output of locale @vesnog:~$ localeLANG=en_US.UTF-8LANGUAGE=en_USLC_CTYPE="en_US.UTF-8"LC_NUMERIC=tr_TR.UTF-8LC_TIME=tr_TR.UTF-8LC_COLLATE="en_US.UTF-8"LC_MONETARY=tr_TR.UTF-8LC_MESSAGES="en_US.UTF-8"LC_PAPER=tr_TR.UTF-8LC_NAME=tr_TR.UTF-8LC_ADDRESS=tr_TR.UTF-8LC_TELEPHONE=tr_TR.UTF-8LC_MEASUREMENT=tr_TR.UTF-8LC_IDENTIFICATION=tr_TR.UTF-8LC_ALL= The output of echo $w @vesnog:~$ echo $w0.005000 | A simple way is to use printf : $ printf "%.3f\n" 0.0050000000000.005 To remove the leading 0 , just parse it out with sed : $ printf "%.3f\n" 0.005000000000 | sed 's/^0//'.005 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49442/"
]
} |
175,764 | I am using Debian wheezy and I would like to install the package apt-transport-https , which allows to access apt repositories through the https protocol. What really puzzles me is that the apt-get gives me the following message: $ sudo apt-get install apt-transport-https...The following NEW packages will be installed: apt-transport-https0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.Need to get 109 kB of archives.After this operation, 166 kB of additional disk space will be used.WARNING: The following packages cannot be authenticated! apt-transport-httpsInstall these packages without verification [y/N]? I pressed N because I would like to clarify this before installing the package. Why is no authentication information for this package provided? I would expect this to be the default, especially for a package that provides a secure transfer protocol. | When running apt-get update for a https mirror without apt-transport-https installed, you probably invalidated your cached (sources) data, as a side effect invalidating the signatures - this should fix itself after running "apt-get update" again (you might have to revert to a non-https mirror temporarily). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10767/"
]
} |
175,784 | How do I diff the output of multiple commands? vimdiff can support up to four files, but diff itself seems to support exactly two files. Is it directly possible with some variant of diff , or do I have to save the output of all commands to temporary files, pick one and diff the remainder with it? Context: I have to check the output of a certain command on multiple servers and see if they all agree. For the moment, just reporting if any differences are found seems good, but if possible, I'd like to be able to say: X% servers agree with each other, Y% with each other; or that server Z is the odd one. I have a four-way multi-master LDAP setup, and I want to verify that the ContextCSN values for all four agree with each other. So now I do: #! /bin/bashfor i in {1..4}.ldap do ldapsearch -x -LLL -H ldap://$i -s base -b dc=example,dc=com contextCSN > $i.csn; doneset -e for i in {2..4}do diff -q 1.csn $i.csndone And check the error code of the script. Are there better tools for this? Any tools that can be used on Ubuntu 14.04 welcome. | The tool to do this is Diffuse . It is also generally available from repos (at least in Debian and Arch, where I checked). It works as you would expect it to: diffuse file1 file2 file3 file4 and so on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70524/"
]
} |
175,801 | Can someone tell me a command that will find my external ip for a freebsd 10 system. | Personally, I use wtfismyip.com which returns pure text and does not need parsing: $ wget -qO - http://wtfismyip.com/text123.456.78.9 User Steve Wills pointed out in a comment that the command fetch is installed by default on FreeBSD so it might be a better choice instead of wget . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87538/"
]
} |
175,810 | I'm trying to install the wireless drivers because my macbook pro does not have a ethernet port, then I mount the 3 debian isos like local repositories to install the almost all of dependeces. So, what I tryed to do to install the broadcam 4360 https://wiki.debian.org/bcm43xx https://wiki.debian.org/wl When I try to install this: http://www.broadcom.com/support/802.11/linux_sta.php I get the following problems: KBUILD_NOPEDANTIC=1 make -C /lib/modules/`uname -r`/build M=`pwd`make[1]: warning: jobserver unavailable: using -j1. Add '+' to parent make rule.make[1]: Entering directory '/usr/src/linux-headers-3.16-2-amd64'make[1]: Entering directory `/usr/src/linux-headers-3.16-2-amd64'CFG80211 API is prefered for this kernel versionUsing CFG80211 API CC [M] /home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.o/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c: In function ‘wl_cfg80211_get_key’:/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:1390:2: warning: passing argument 1 of ‘memcpy’ discards ‘const’ qualifier from pointer target type [enabled by default] memcpy(params.key, key.data, params.key_len); ^In file included from /usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/string.h:4:0, from /usr/src/linux-headers-3.16-2-common/include/linux/string.h:17, from /usr/src/linux-headers-3.16-2-common/include/linux/bitmap.h:8, from /usr/src/linux-headers-3.16-2-common/include/linux/cpumask.h:11, from /usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/cpumask.h:4, from /usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/msr.h:10, from /usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/processor.h:20, from /usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/thread_info.h:23, from /usr/src/linux-headers-3.16-2-common/include/linux/thread_info.h:54, from /usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/preempt.h:6, from 
/usr/src/linux-headers-3.16-2-common/include/linux/preempt.h:18, from /usr/src/linux-headers-3.16-2-common/include/linux/spinlock.h:50, from /usr/src/linux-headers-3.16-2-common/include/linux/seqlock.h:35, from /usr/src/linux-headers-3.16-2-common/include/linux/time.h:5, from /usr/src/linux-headers-3.16-2-common/include/linux/stat.h:18, from /usr/src/linux-headers-3.16-2-common/include/linux/module.h:10, from /home/cristian/Downloads/broadcom/src/include/linuxver.h:40, from /home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:26:/usr/src/linux-headers-3.16-2-common/arch/x86/include/asm/string_64.h:32:14: note: expected ‘void *’ but argument is of type ‘const u8 *’ extern void *memcpy(void *to, const void *from, size_t len); ^/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c: At top level:/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:1778:2: warning: initialization from incompatible pointer type [enabled by default] .get_station = wl_cfg80211_get_station, ^/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:1778:2: warning: (near initialization for ‘wl_cfg80211_ops.get_station’) [enabled by default]/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c: In function ‘wl_notify_connect_status’:/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:2074:4: warning: passing argument 3 of ‘cfg80211_ibss_joined’ makes pointer from integer without a cast [enabled by default] cfg80211_ibss_joined(ndev, (u8 *)&wl->bssid, GFP_KERNEL); ^In file included from /home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:33:0:/usr/src/linux-headers-3.16-2-common/include/net/cfg80211.h:4002:6: note: expected ‘struct ieee80211_channel *’ but argument is of type ‘unsigned int’ void cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid, ^/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:2074:4: error: too few arguments to function ‘cfg80211_ibss_joined’ 
cfg80211_ibss_joined(ndev, (u8 *)&wl->bssid, GFP_KERNEL); ^In file included from /home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.c:33:0:/usr/src/linux-headers-3.16-2-common/include/net/cfg80211.h:4002:6: note: declared here void cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid, ^/usr/src/linux-headers-3.16-2-common/scripts/Makefile.build:262: recipe for target '/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.o' failedmake[4]: *** [/home/cristian/Downloads/broadcom/src/wl/sys/wl_cfg80211_hybrid.o] Error 1/usr/src/linux-headers-3.16-2-common/Makefile:1350: recipe for target '_module_/home/cristian/Downloads/broadcom' failedmake[3]: *** [_module_/home/cristian/Downloads/broadcom] Error 2Makefile:181: recipe for target 'sub-make' failedmake[2]: *** [sub-make] Error 2Makefile:8: recipe for target 'all' failedmake[1]: *** [all] Error 2make[1]: Leaving directory '/usr/src/linux-headers-3.16-2-amd64' Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 So if someone have installed the drivers to Broadcom 4360 on mac with debian ,please tell me how, and what others suggestions recommend me | EDIT Broadcom 4360 actually comes with either of two distinct chips, 14E4:4360 and 14E4:43A0. There is no driver in Linux for the first one, while wl is an appropriate driver for the second one. You can determine which one you have by means of the following command: lspci -vnn | grep -i net If instead you wish to do this from within Mac OS, hit the Apple -> About this Mac -> More Info-> System Info, and then click on Wi-fi. You will find a line like Card Type: AirPort Extreme (0x14E4, 0x117) which displays Vendor (14E4) and Product (117, in my case) code of the Wi-fi card. There is no support for Broadcom 4360 14E4:4360 on Linux. The definitive guide in these matters is Linux Wireless , which gives in this table the list of all Broadcomm wireless chips, and the available Linux drivers. 
As you can see, no driver is listed under BCM4360 14E4:4360. Two lines below in the same table, it is shown that the other chip with which 4360 is produced, 14E4:43A0, is instead supported by the proprietary driver wl . The correct procedure to install this driver is described here, in the Debian Wiki . For Wheezy, you should add this line deb http://http.debian.net/debian/ wheezy main contrib non-free to the file /etc/apt/sources.list, then run apt-get update apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') broadcom-sta-dkms and lastly you will need to remove some conflicting drivers which come pre-installed in Debian: modprobe -r b44 b43 b43legacy ssb brcmsmac Now you are good to go: modprobe wl You should also keep the following in mind: about the wl driver, this is what the ever informative Arch Linux wiki has to say: Warning : Even though this driver has matured a lot throughout the years and works quite well now, its usage is recommended only when neither of the two open-source drivers support your device. Please refer to project b43's page for a list of supported devices. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
175,814 | What's the FreeBSD variant of Linux's lsblk and blkid ? I want something that provides the same sort of information as lsblk does in the example below: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT/dev/sda 8:0 0 465.8G 0 disk ├─/dev/sda1 8:1 0 1007K 0 part ├─/dev/sda2 8:2 0 256M 0 part /boot├─/dev/sda3 8:3 0 9.8G 0 part [SWAP]├─/dev/sda4 8:4 0 29.3G 0 part /├─/dev/sda5 8:5 0 29.3G 0 part /var├─/dev/sda6 8:6 0 297.6G 0 part /home└─/dev/sda9 8:9 0 16.3G 0 part /dev/sr0 11:0 1 1024M 0 rom I've tried running commands like man -k blk and apropos dev . There's devinfo , but I'm not sure if that's what I'm really looking for since it doesn't seem to give me the /dev/<DEVICE> path for the devices listed. I even tried devstat , but that seems equally unhelpful. EDIT: All I really need to know is the /dev/<DEVICE> path for each block device connected, and maybe the label of said device (if any), regardless of whether or not they have been mounted yet. | Use geom disk list . This will show all disk-like devices (technically, every instance of the GEOM "DISK" class). For more information: geom | FreeBSD Manual Pages | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43029/"
]
} |
175,844 | If I just use basename {} .txt, it will work: find . -iname "*.txt" -exec basename {} .txt \; It will just print xxx instead of ./xxx.txt If I want to use $(basename {} .txt) in the -exec option, it will fail: find . -iname "*.txt" -exec echo "$(basename {} .txt)" \; It will just print ./xxx.txt How can I solve this problem? I hope I can use $(basename {} .txt) as a parameter for other commands. Do I have to use sh -c, or pipe -exec basename {} \; into xargs? | Try: find -iname "*.txt" -exec sh -c 'for f do basename -- "$f" .txt;done' sh {} + Your second command failed because the $(...) command substitution is expanded by the shell before find ever runs, at which point {} is just a literal string. So basename {} .txt returns {} , and your command became: find . -iname "*.txt" -exec echo {} \; which simply prints each matched file name. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77353/"
]
} |
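The sh -c pattern in the answer above can be tried on throwaway files; the scratch directory and file names below are invented for the demonstration.

```shell
# Scratch directory with two sample .txt files (names are made up).
dir=$(mktemp -d)
touch "$dir/report.txt" "$dir/notes.txt"

# sh -c runs only after find has substituted real path names, so each
# path arrives as a positional parameter instead of a literal {}.
find "$dir" -iname "*.txt" -exec sh -c '
    for f do basename -- "$f" .txt; done
' sh {} + | sort
# prints "notes" then "report", one per line
```

The trailing sh becomes $0 inside the inline script, so the matched file names start at $1.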
175,851 | On CentOS 6.4: I installed a newer version of devtoolset (1.1) and was wondering how I would go about permanently setting these to be default. Right now, when I ssh into my server running CentOS 6, I have to run this command scl enable devtoolset-1.1 bash I tried adding it to ~/.bashrc and simply pasting it on the very last line, without success. | In your ~/.bashrc or ~/.bash_profile Simply source the "enable" script provided with the devtoolset. For example, with the Devtoolset 2, the command is: source /opt/rh/devtoolset-2/enable or source scl_source enable devtoolset-2 Lot more efficient: no forkbomb, no tricky shell | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/175851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95779/"
]
} |
175,852 | I've got multiple-line text files of (sometimes) tab-delimited data. I'd like to output the file so I can glance over it - so I'd like to only see the first 80 characters of each line (I designed the text file to put the important stuff first on each line). I'd thought I could use cat to read each line of the file, and send each line to the next command in a pipe: cat tabfile | cut -c -80 But that seemed broken. I tried monkeying around, and grep appeared to work - but then I found out that, no it didn't (not every line in the file had 80+ characters) - it appears tabs are counted as single characters by cut. I tried: cat tabfile | tr \t \040 | cut -c -80 Even though that would mangle my data a bit, by eliminating the white-space readability. But that didn't work. Neither did: cat tabfile | tr \011 \040 | cut -c -80 Maybe I'm using tr wrong? I've had trouble with tr before, wanting to remove multiple spaces (appears the version of tr that I have access to on this machine has an -s option for squeezing down multiple characters - I may need to play with it more) I'm sure if I messed around I could use perl, awk or sed, or something to do this. However, I'd like a solution that uses (POSIX?) regular commands, so that it's as portable as possible. If I do end up using tr, I'd probably eventually try turning tabs into characters, maybe do a calculation, cut on the calculation, and then turn those characters back into tabs for the output. It doesn't need to be a single line / entered directly on the command line - a script is fine. More info on tab-files: I use tab to break fields, because someday I may want to import data into some other program. So I tend to have only one tab between pieces of content. But I also use tabs to align things with vertical columns, to aid in readability when looking at the plain text file. 
Which means for some pieces of text I pad the end of the content with spaces until I get to where the tab will work in lining up the next field with the ones above and below it. DarkTurquoise #00CED1 Seas, Skies, Rowboats NatureMediumSpringGreen #00FA9A Useful for trees Magic Lime #00FF00 Only for use on spring chickens and fru$ | I think you're looking for expand and/or unexpand . It seems you're trying to ensure a \t ab width counts as 8 chars rather than the single one. fold will do that as well, but it will wrap its input to the next line rather than truncating it. I think you want: expand < input | cut -c -80 expand and unexpand are both POSIX specified : The expand utility shall write files or the standard input to the standard output with \t ab characters replaced with one or more space characters needed to pad to the next tab stop. Any backspace characters shall be copied to the output and cause the column position count for tab stop calculations to be decremented; the column position count shall not be decremented below zero. Pretty simple. So, here's a look at what this does: unset c i; set --; until [ "$((i+=1))" -gt 10 ]; do set -- "$@" "$i" "$i"; done for c in 'tr \\t \ ' expand; do eval ' { printf "%*s\t" "$@"; echo; } | tee /dev/fd/2 |'"$c"'| { tee /dev/fd/3 | wc -c >&2; } 3>&1 | tee /dev/fd/2 | cut -c -80'done The until loop at top gets a set of data like... 1 1 2 2 3 3 ... It printf s this with the %*s arg padding flag so for each of those in the set printf will pad with as many spaces as are in the number of the argument. To each one it appends a \t ab character. All of the tee s are used to show the effects of each filter as it is applied. And the effects are these: 1 2 3 4 5 6 7 8 9 101 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 661 2 3 4 5 6 7 8 9 101 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 105 Those rows are lined up in two sets like... output of printf ...; echo output of tr ... 
or expand output of cut output of wc The top four rows are the results of the tr filter - in which each \t ab is converted to a single space . And the bottom four the results of the expand chain. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/175852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147281/"
]
} |
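The difference expand makes can be seen on a tiny invented sample: a tab counts as one character to cut, but expand rewrites it as the spaces it occupies on screen (tab stops every 8 columns by default).

```shell
# "a" sits in column 1; the tab expands to spaces through column 8, so
# "b" lands in column 9 and cut now counts the columns you actually see.
printf 'a\tbcd\n' | expand | cut -c -9
# prints "a", seven spaces, then "b"
```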
175,901 | I'd like to add an alias to a kernel module, e.g. make nvidia-343 available as nvidia on Ubuntu 14.10 with Linux 3.18.1, so that it can be loaded under the alias name and so that the alias appears in the list of aliases in modinfo . The current level of explanation of what a kernel module alias is in the manpages of modprobe , modinfo , etc. is rather ridiculous because it is zero (see https://bugs.launchpad.net/ubuntu/+source/kmod/+bug/1405669 as well). Adding a line in the form of alias <name> <alias> to /etc/modprobe.conf as described at http://www.tldp.org/LDP/lkmpg/2.6/html/x44.html doesn't work (the alias is not listed in modinfo output) (I guess(!) these are the docs for 2.6.x anyway). | I think you know all you need to know about module aliases. Adding that line in /etc/modprobe.conf does define an alias: doesn't it work when you run modprobe <name> ? It doesn't work with modinfo because that program doesn't support aliases: they're a concept of the modprobe program, not of the lower-level tools like insmod and modinfo . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63502/"
]
} |
175,930 | When I boot, PulseAudio defaults to sending output to Headphones. I'd like it to default to sending output to Line Out. How do I do that? I can manually change where the output is current sent as follows: launch the Pulseaudio Volume Control application, go to the Output Devices tab, and next to Port, select the Line Out option instead of Headphones. However, I have to do this after each time I boot the machine -- after a reboot, Pulseaudio resets itself back to Headphones. That's a bit annoying. How do I make my selection stick and persist across reboots? Here's a screenshot of how the Volume Control application looks after a reboot, with Headphones selected: If I click on the chooser next to Port, I get the following two options: Selecting Line Out makes sound work. (Notice that both Headphones and Line Out are marked as "unplugged", but actually I do have something plugged into the Line Out port.) Comments: I'm not looking for a way to change the default output device . I have only one sound card. pacmd list-sinks shows only one sink. Therefore, pacmd set-default-sink is not helpful. ( This doesn't help either.) Here what I need to set is the "Port", not the output device. If it's relevant, I'm using Fedora 20 and pulseaudio-5.0-25.fc21.x86_64. | I had the same problem (for at least a year now), and the following seemed to work: Taken from: https://bbs.archlinux.org/viewtopic.php?id=164868 Use pavucontrol to change the port to your desired one. Then find the internal name of the port with this command: $ pacmd list | grep "active port" active port: <hdmi-output-0> active port: <analog-output-lineout> active port: <analog-input-linein> Using this information about the internal name of the port, we can change it with the command: pacmd set-sink-port 0 analog-output-lineout If you (or someone else with the problem) has multiple cards, try changing the 0 to a 1. 
If this works, you can put: set-sink-port 0 analog-output-lineout in your /etc/pulse/default.pa file to have it across reboots. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/175930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
175,967 | When I want to run Google Chrome as root, an error message with this title appears: "Google chrome can not be run as root", and the body of the message is: "to run as root, you must specify an alternate --user-data-dir for storage of profile information." Can anyone help me? | To run Google Chrome as root, follow these steps: Open google-chrome in your favorite editor (replacing $EDITOR with your favorite): $EDITOR $(which google-chrome) Add --user-data-dir at the very end of the file. My file looks like this: exec -a "$0" "$HERE/chrome" "$PROFILE_DIRECTORY_FLAG" \ "$@" --user-data-dir Save and close the editor. You're done. Enjoy it :) If you want to see a video tutorial, you can check my blog post: How to run google chrome as root in Linux - MoeinFatehi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96074/"
]
} |
175,977 | I am trying to install Cloudera Manager 5.x on Linux Mint 17.1 (based on Ubuntu 14.04 LTS) Install Commands Used chmod u+x cloudera-manager-installer.binsudo ./cloudera-manager-installer.bin Error How can I make Linux Mint appear as Ubuntu to let it install on my system | Most likely the installer is checking /etc/lsb-release . In Linux Mint, the same file for the Ubuntu version it was derived is under /etc/upstream-release/lsb-release . To fool the installer, just replace the former with the latter (although you probably want to back up the file first). In a command terminal you can do: sudo mv /etc/lsb-release /etc/lsb-release.originalsudo cp /etc/upstream-release/lsb-release /etc/lsb-release At some point after your install is done, you can restore the original with: sudo mv /etc/lsb-release.original /etc/lsb-release | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/175977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96078/"
]
} |
176,001 | Reading about this question: In zsh how can I list all the environment variables? , I wondered, how can I list all the shell variables ? Also, does the distinction between shell variables and environment variables apply to shells other than zsh? I am primarily interested in Bash and Zsh, but it would be great to know how to do this in other mainstream shells. | List all shell variables bash : use set -o posix ; set . The POSIX option is there to avoid outputting too much information, like function definitions. declare -p also works. zsh : use typeset Shell variables and environment variables An environment variable is available to exec() -ed child processes (as a copy; if the parent process changes the variable, the child's environment is not updated). A non-environment variable is only available to the currently running shell and fork() -ed subshells. This distinction is present in all shells. (completed thanks to comments) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37128/"
]
} |
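The copy-on-exec behaviour described in that answer is easy to demonstrate in any POSIX shell; the variable names below are invented.

```shell
# FOO stays a plain shell variable; BAR is exported into the environment.
unset -v FOO BAR
FOO=local_only
BAR=exported
export BAR

# A freshly exec()-ed child process sees only the environment variable.
sh -c 'echo "FOO=${FOO:-unset} BAR=${BAR:-unset}"'
# prints: FOO=unset BAR=exported
```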
176,015 | I have a user with limited access on the system (that is, he is not a sudoer); let's call him Bob . I have a script or a binary which I, the system administrator, trust, and would have no problems running it as root; let's call the script get-todays-passphrase.sh . The job of this script is to read data from a "private" (owned by a user/group other than Bob, or even root) file located in /srv/daily-passphrases , and only output a specific line from the file: the line that corresponds with today's date. Users like Bob are not allowed to know tomorrow's passphrase, even though it is listed in the file. For this reason, the file /srv/daily-passphrases is protected by Unix permissions, so non-root users like Bob are not allowed to access the file directly. They are, however, allowed to run the get-todays-passphrase.sh script at any time, which returns the "filtered" data. To summarize (the TL;DR version): Bob can't read the protected file The script can read the protected file At any time, Bob can run the script which can read the file Is it possible to do this within Unix file permissions? Or if Bob starts a script, will the script always be doomed to run with the same permissions as Bob? | This is actually common and quite straightforward. sudo allows you to limit specific applications that a user can invoke. In other words, you don't have to give them all root or nothing; you can give them sudo permissions to run a specific command . This is exactly what you want, and is very common practice for things like allowing users to push Git repositories via SSH and the like. To do this, all you have to do is add a line to /etc/sudoers that looks something like bob ALL=(root) NOPASSWD: /path/to/command/you/trust (The NOPASSWD: part is not required, but is very common in this situation.) At that point, bob can invoke /path/to/command/you/trust via sudo, but nothing else. That said, giving someone root—what we're doing here—may not be quite what you want. 
Notably, if there were any flaw in your script whatsoever, you risk letting your box get rooted. For that reason, you may instead prefer to create a user specifically to own the special file—say, specialuser —then chown the file to them, and have the /etc/sudoers make bob be that special user. In that case, the line you add to sudoers would simply be bob ALL=(specialuser) NOPASSWD: /path/to/command/you/trust | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
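As a concrete sketch of the second variant described above — the specialuser name and the /usr/local/bin prefix are assumptions for illustration; only the script name comes from the question:

```
# /etc/sudoers.d/passphrase -- edit with: visudo -f /etc/sudoers.d/passphrase
bob ALL=(specialuser) NOPASSWD: /usr/local/bin/get-todays-passphrase.sh

# Bob then invokes the script as the file's owner rather than as root:
#   sudo -u specialuser /usr/local/bin/get-todays-passphrase.sh
```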
176,027 | I am looking for a way to customize Ash sessions with my own sets of alias es and whatnots. What is the Ash equivalent of Bash's bashrc files? | Ash first reads the following files (if they exist): System: /etc/profile User: ~/.profile | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/686/"
]
} |
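A minimal ~/.profile along the lines of that answer might look like this (the alias, prompt, and PATH entry are only examples):

```
# ~/.profile -- read by ash at the start of a login session
alias ll='ls -l'
PS1='$ '
export PATH="$HOME/bin:$PATH"
```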
176,059 | I have two identical USB sticks ( TrekStor 16GB ) prepared as follows with the c't bankix image . Preparation using GParted : Deleted the existing partition Created a new DOS partition table Formatted the stick with FAT32, primary partition Loaded the image to the stick via usb-creator-kde . So I did the identical procedure for both sticks, but one boots and the other crashes with an error message: (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: No such device Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs Then I tried to compare them via sudo cmp /dev/sdb /dev/sdc . This resulted in: /dev/sdb /dev/sdc differ: byte 441, line 5 What's wrong here, and how do I fix it? | While I don't know why one crashes (bad stick? corrupt image?), the usual suspect for differences in "identically" created file systems, be they ISO9660 or otherwise, is time stamps , e.g. for creation time. Or a random default file system label . If you want identical data on both, dd the good image onto the other stick and verify their checksums (md5sum or other; any will do). Oh, and the assumption from the title of your question does not hold. It's not only one byte that differs. cmp only tells you the first that's different and then exits. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
176,074 | I want to get the variables $color and $number from a string that in general is like this: "something, numColor (number)". The color might be W, U, B, R, G. If there is no color the variable color should be C if the string before the comma doesn't have the word land or L otherwise. If there is more than one color the variable $color should be M. Here are some examples of what the string may look like and what the variables should be: Sorcery, R (1) $color=R, $number=1 Creature — Beast 5/3, 4G (5) $color=G $number=5 Sorcery, 1WWU (4) $color=M $number=4 Legendary Land $color=L $number=0 Artifact, 0 $color=C $number=0 Legendary Creature — Eldrazi 15/15, 15 (15) $color=C $number=15 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
176,084 | I'm using a Google Drive command-line script that can return a list of files such as: Id Title Size Created0Bxxxxxxxxxxxxxxxxxxxxxxxxxx backup-2014-12-26.tar.bz2 569 MB 2014-12-26 18:23:32 I want to purge files older than 15 days. How can I execute the following command: drive delete --id 0Bxxxxxxxxxxxxxxxxxxxxxxxxxx with the Id of all the lines that have a Created date older than 15 days? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30018/"
]
} |
176,091 | According to the accepted answer for this SO question: , Python can make a great bash replacement . My question then, is this: how do I go about making a seamless switch? I think the main thing to sort out to make such a switch would be: when starting a virtual terminal, call some Python shell (what though?), rather than something like Bourne shell. Does that make sense? If yes, how could I go about doing that? This Wikipedia comparison of common shells doesn't list a single Python shell: Comparison of command shells | That thread and its accepted answer in particular are about using Python for shell scripting , not as an interactive shell. To write scripts in a different language, put e.g. #!/usr/bin/env python instead of #!/bin/bash at the top of your script. If you want to try out a different interactive shell, just run it, e.g. type ipython at your existing shell prompt. If you've decided to adopt that shell, set the SHELL environment variable at the start of your session (in ~/.profile in most environments, or in ~/.pam_environment ), e.g. export SHELL=/usr/bin/ipython ( .profile syntax) or SHELL="/usr/bin/ipython" ( .pam_environment syntax). None of the shells that I've seen based on advanced languages such as Perl or Python are good enough for interactive use in my opinion. They're too verbose for common tasks, especially the common job of a shell which is to launch an application. I wrote about a similar topic 4 years ago ; I don't think the situation has fundamentally improved since then. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/176091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95597/"
]
} |
176,111 | I have a binary file I would like to include in my C source code (temporarily, for testing purposes) so I would like to obtain the file contents as a C string, something like this: \x01\x02\x03\x04 Is this possible, perhaps by using the od or hexdump utilities? While not necessary, if the string can wrap to the next line every 16 input bytes, and include double-quotes at the start and end of each line, that would be even nicer! I am aware that the string will have embedded nulls ( \x00 ) so I will need to specify the length of the string in the code, to prevent these bytes from terminating the string early. | xxd has a mode for this. The -i / --include option will: output in C include file style. A complete static array definition is written (named after the input file), unless xxd reads from stdin. You can dump that into a file to be #include d, and then just access foo like any other character array (or link it in). It also includes a declaration of the length of the array. The output is wrapped to 80 bytes and looks essentially like what you might write by hand: $ xxd --include foounsigned char foo[] = { 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x2c, 0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64, 0x21, 0x0a, 0x0a, 0x59, 0x6f, 0x75, 0x27, 0x72, 0x65, 0x20, 0x76, 0x65, 0x72, 0x79, 0x20, 0x63, 0x75, 0x72, 0x69, 0x6f, 0x75, 0x73, 0x21, 0x20, 0x57, 0x65, 0x6c, 0x6c, 0x20, 0x64, 0x6f, 0x6e, 0x65, 0x2e, 0x0a};unsigned int foo_len = 47; xxd is, somewhat oddly, part of the vim distribution, so you likely have it already. If not, that's where you get it — you can also build the tool on its own out of the vim source. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/176111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6662/"
]
} |
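If xxd is not at hand, od — which the question mentions — can produce the \xNN form directly; this is a rough sketch, not a replacement for xxd -i, and it assumes the od/sed behaviour of typical GNU userlands.

```shell
# Each input byte becomes a \xNN escape; od already wraps its output at
# 16 input bytes per line, matching the wish in the question. The
# three-byte demonstration input includes an embedded NUL.
printf 'AB\0' | od -v -An -tx1 | sed 's/ \{1,\}\(..\)/\\x\1/g'
# prints: \x41\x42\x00
```

Wrapping each output line in double quotes afterwards (e.g. with sed 's/.*/"&"/') yields string literals ready to paste into C source.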
176,115 | My Arch Linux's systemd starts rpcbind automatically. What do I have to do to stop systemd from doing this? There are no remote filesystems in /etc/fstab . The only thing I found about why rpcbind gets started is that it is supposedly wanted by the multi-user target, but there is no such service in the directory. How can I figure out why it is really started? | There is an open bug report on the Arch tracker . Your best bet would be to mask the service: systemctl mask rpcbind.service See Lennart Poettering's series of blog posts, systemd for Administrators, Part V for details on masking: 3. You can mask a service. This is like disabling a service, but on steroids. It not only makes sure that service is not started automatically anymore, but even ensures that a service cannot even be started manually anymore. This is a bit of a hidden feature in systemd, since it is not commonly useful and might be confusing the user. But here's how you do it: $ ln -s /dev/null /etc/systemd/system/ntpd.service $ systemctl daemon-reload By symlinking a service file to /dev/null you tell systemd to never start the service in question and completely block its execution. Unit files stored in /etc/systemd/system override those from /lib/systemd/system that carry the same name. The former directory is administrator territory, the latter territory of your package manager. By installing your symlink in /etc/systemd/system/ntpd.service you hence make sure that systemd will never read the upstream shipped service file /lib/systemd/system/ntpd.service . systemd will recognize units symlinked to /dev/null and show them as masked. If you try to start such a service manually (via systemctl start for example) this will fail with an error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96143/"
]
} |
176,125 | How can I compress a directory content without compress the full path folder structure too. I am using this command to zip a folder content under var/www/ directory, but when I unzip the application.zip I got a 2 level folder structure /var/www/my important files zip -r /var/appbackup/application.zip /var/www/ I would like to keep only files without "var" and "www" folders. How can I do that? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96149/"
]
} |
176,158 | I'm looking for an easy way to navigate to directories spanning multiple hard drives and want to set something equivalent to a shortcut within the terminal. In Windows I would accomplish this with mklink and create either symbolic links to jump to the directory, or junctions to append the location to the end of the current file path. Since this is really just about navigation, it also needn't be a link or anything, maybe there is some environmental variable I could set so that I can cd $myDir (preferable). Is such a thing possible, or do I just really need to learn my directories better? | Becoming familiar with your file system layout is all part of becoming a competent user - any time you spend with that aim in mind is not time wasted. However, with that said, you can indeed make it easier to move around the file system. Note that in Linux/UNIX, the file system is presented as a single tree, no matter how many devices make up your storage, unlike in Windows where each "drive" (physical or logical, depending on your configuration) is represented, in the default configuration, by an independent tree. There are numerous ways you can approach this problem. It is certainly possible to set up a bunch of shell variables that each point to a different directory. Issuing cd $SomeDir will cause the shell to expand the variable $SomeDir and substitute it in the command line, so that when it finally runs, cd receives the name of the directory stored in the variable. This is probably the simplest approach, and if you populate your shell variables with absolute paths, it should work from anywhere in the file system. You could also use symbolic links to target directories (hard links to directories are not supported in most UNIXs). However, for this to be effective, you'd still need to give sufficient information in the path argument to allow the kernel to resolve the symlink. 
That is, you'd need to provide the absolute path to the symlink, or enough of a relative path to allow the kernel to find the link, so it could then follow it. A further approach, which may or may not be available, depending on your shell, is to use the shell's cdpath feature. This is supported in bash , zsh , tcsh and undoubtedly others. With this technique, you set the environment variable CDPATH to a colon-separated list of directory names, which is searched when you run cd . If one of the directories on $CDPATH contains a subdirectory whose name matches that passed to cd , the shell changes its current working directory. For example, if CDPATH contains /usr/local , and if your system has a directory /usr/local/www , issuing cd www will look up the contents of $CDPATH , try to find a subdirectory of /usr/local called www , and if it exists, will change its current working directory to /usr/local/www . Note that the shell searches the directories in $CDPATH in the order they are specified, so if $CDPATH contains multiple directories that contain the subdirectory you pass as argument to cd , the first match wins. This has caught me out often enough that I no longer use cdpath . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96172/"
]
} |
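The CDPATH behaviour described in the answer above is easy to try out with throwaway directories (the paths here are scratch examples created under a temporary directory, not a recommendation to set CDPATH globally):

```shell
# Build a fake /usr/local/www hierarchy under a temporary directory.
base=$(mktemp -d)
mkdir -p "$base/usr/local/www"

# With CDPATH set, a bare "cd www" is resolved against each CDPATH entry
# in order; when an entry matches, the shell changes directory there and
# also prints the resulting path to stdout (that's standard cd behaviour
# whenever a non-empty CDPATH entry was used).
CDPATH="$base/usr/local"
cd www
pwd   # -> .../usr/local/www
```

This also demonstrates the "first match wins" caveat from the answer: putting a second directory that also contains `www` later in `CDPATH` would never be reached.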
176,175 | Is there any way that we can edit the terminal preferences like background, colors, etc., from within the command line itself? The terminal is Gnome-terminal. I'm using Ubuntu 14.04 trusty tahr. | Here's what I did: Install gconf-editor sudo apt-get install gconf-editor Fired it up from the terminal gconf-editor Went to apps>gnome-terminal>profiles>Default inside gconf-editor This will open up the key-value pairs for preferences. Edit the value corresponding to the required key. Thanks for pointing me in the direction rather than giving the exact answer @jasonwryan . I learnt some other things along the way. Now, I'm going to try to use gconftool-2 to do the exact same thing. I'm trying to eliminate the need for a GUI :) Useful Links: What is Gconf Addition: Using gconftool-2 The program gconftool-2 allows the user to interact with Gconf from the command line. For example, suppose you wish to set the background darkness level of the terminal. So, we have to set the key /apps/gnome-terminal/profiles/Default/background_darkness with a value (let's say 0.50) gconftool-2 --set /apps/gnome-terminal/profiles/Default/background_darkness --type=float 0.50 Similarly, we can change other values corresponding to different keys. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/176175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96181/"
]
} |