source_id
int64 1
74.7M
| question
stringlengths 0
40.2k
| response
stringlengths 0
111k
| metadata
dict |
---|---|---|---|
308,735 | In Linux , is it possible to use a local serial port? Something similar to this: ssh user@localhost I tried this on Raspbian but it doesn't work (it should place in my shell but it doesn't): microcom -d /dev/ttyAMA0 I also tried /dev/ttyS0 but to no avail. I can of course access Raspberry Pi through serial console from another machine. There is no specific use-case for this question - I just cannot understand how really serial works. If it's possible to connect to the localhost with ssh shouldn't it be also possible with serial port? | If you're on linux you could use lsblk (which is part of util-linux ): lsblk -no pkname /dev/sda1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20604/"
]
} |
308,741 | I'm looking for something that could copy a paragraph, change the user and insert it in same file. file before: user1 this is only a test of a lovely ideauser2 this user shhould be copieduser3 who has an idea for my problem file after ( user2 was searched,copied and inserted as user4 ): user1 this is only a test of a lovely ideauser2 this user shhould be copieduser3 who has an idea for my problemuser4 this user shhould be copied | If you're on linux you could use lsblk (which is part of util-linux ): lsblk -no pkname /dev/sda1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189003/"
]
} |
308,810 | I want to copying multiple files from remote machine using rsync . So I use the following command. rsync -Pav -e 'ssh -i sshkey' user@remotemachine:/home/user/file1.zip file2.zip file3.zip . It shows following error Unexpected local arg:file2.zip If arg is a remote file/dir, prefix it with a colon (:). rsync error: syntax or usage error (code 1) at main.c(1362) [Receiver=3.1.0] | All remote files should be one argument for rsync. So, just put all remote files in single quotes: rsync -Pav -e 'ssh -i sshkey' 'user@remotemachine:/home/user/file1.zip file2.zip file3.zip' . BTW, you can also do this with a Asterisk (the Asterisk will be resolved by the remote shell then): rsync -Pav -e 'ssh -i sshkey' 'user@remotemachine:/home/user/*.zip' . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172333/"
]
} |
308,829 | I've got a log file that I process as follows: grep pattern /var/log/whatever.log | \ cut ... | sort | uniq -c | sort -rn | \ etc.... However, the log file is quite big, and records events from the beginning of the day. It's also appended to constantly. I'd like to only process the last next 10 minutes of the file. So I'm looking for something like the following: killafter 600 tail -f /var/log/whatever.log | stuff Or, better yet (wait however long it takes to capture 1000 matching lines): tail -f /var/log/whatever.log | grep pattern | stopafter -lines 1000 | stuff Are there any tools that'll let me do this? | roaima rightly points you at the timeout command, and head actually terminates after it has read the desired number of lines, so I would hope that with timeout 600s tail -f ‹logfile› | ‹filter› | head -n 1000 you'd get there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46851/"
]
} |
308,831 | I have configured centralized server for my all Linux servers. I can able to forward all system logs and Oracle database audit logs to centralized server. but my problem is all system and database logs are writing in one single file. My requirement is to write database logs to different file and system logs to different file in centralized location. Please find the scripts below. 192.168.1.150 : centralized server 192.168.1.44 : remote server remote server configuration (192.168.1.44) $ cat /etc/rsyslog.conf# Provides UDP syslog reception$ModLoad imudp.so$UDPServerRun 514# Provides TCP syslog reception$ModLoad imtcp.so$InputTCPServerRun 514*.info;mail.none;authpriv.none;cron.none /var/log/messages#Save oracle rdbms audit trail to oracle_audit.loglocal0.info /u01/app/oracle/admin/prod/adump/oracle_audit.log*.* @192.168.1.150:514 Centralized server configuration (192.168.1.150): $ cat /etc/rsyslog.conf # Provides UDP syslog reception$ModLoad imudp.so$UDPServerRun 514# Provides TCP syslog reception$ModLoad imtcp.so$InputTCPServerRun 514$template RemoteHost,"/backup/CentralizeLogLocation/Linuxlogs/%HOSTNAME%/%HOSTNAME%-%$YEAR%%$MONTH%%$DAY%.log" if ($hostname != '') then ?RemoteHost & ~ The files are creating in this format for each host in centralised server thsc-vmmanager-20160614.log Looks like everything is fine -- I got what I want, but both Oracle database audit logs and system logs are writing in one log file. I'm attaching the screenshot as well. Now the requirement is to separate both the files. Please guide me as to how can I achieve this. | roaima rightly points you at the timeout command, and head actually terminates after it has read the desired number of lines, so I would hope that with timeout 600s tail -f ‹logfile› | ‹filter› | head -n 1000 you'd get there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189070/"
]
} |
308,838 | I want to get exit code 1 if the 4th column did not match the regular expression, but it seems that awk will return 0, even though the regular expression did not match. Any idea how to make awk return 1 if the regexp did not match? root@server:~# netstat -nap|grep "LISTEN\b"tcp 0 0 0.0.0.0:873 0.0.0.0:* LISTEN 1144/rsync tcp 0 0 1.2.3.4.5:53 0.0.0.0:* LISTEN 25213/named tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 25213/named tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 28888/sshd tcp 0 0 0.0.0.0:9686 0.0.0.0:* LISTEN 1150/stunnel tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 25213/named root@server:~# netstat -nap|grep "LISTEN\b"|awk '$4 ~ /:80$/ {print $NF}'root@server:~# echo $?0 | You can set a variable to hold the return code, then negate the variable before quitting: netstat -nap |grep "LISTEN\b" |awk '$4 ~ /:80$/ {rc = 1; print $NF}; END { exit !rc }' If you don't need \b , then you can remove grep part: netstat -nap | awk '/LISTEN/ && $4 ~ /:80$/ {rc = 1; print $NF}; END { exit !rc }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30055/"
]
} |
308,846 | I work on a cluster shared with other colleagues. The hard disk is limited (and has been full on some occasions), so I clean up my part occasionally. I want to do this quickly, so until now I do this by making a list of files larger than 100 MB older than 3 months, and I see if I still need them. But now I am thinking that there could be a folder with >1000 smaller files that I miss, so I want to get an easy way to see if this is the case. From the way I generate data, it would help to get a list of total size per extension. In the context of this question, 'extension' as everything behind the last dot in the filename. Suppose I have multiple folders with multiple files: folder1/file1.bmp 40 kiBfolder1/file2.jpg 20 kiBfolder2/file3.bmp 30 kiBfolder2/file4.jpg 8 kiB Is it possible to make a list of total filesize per file extension, so like this: bmp 70 kiBjpg 28 kiB I don't care about files without extension, so they can be ignored or put in one category. I already went through man pages of ls , du and find , but I don't know what is the right tool for this job... | On a GNU system: LC_ALL=C find . -name '?*.*' -type f -printf '%b.%f\0' | LC_ALL=C gawk -F . -v RS='\0' ' {s[$NF] += $1; n[$NF]++} END { PROCINFO["sorted_in"] = "@val_num_asc" for (e in s) printf "%15d %4d %s\n", s[e]*512, n[e], e }' Or the same with perl , avoiding the -printf extension of GNU find (still using a GNU extension, -print0 , but this one is more widely supported nowadays): LC_ALL=C find . -name '?*.*' -type f -print0 | perl -0ne ' if (@s = lstat$_){ ($ext = $_) =~ s/.*\.//s; $s{$ext} += $s[12]; $n{$ext}++; } END { for (sort{$s{$a} <=> $s{$b}} keys %s) { printf "%15d %4d %s\n", $s{$_}<<9, $n{$_}, $_; } }' It gives an output like: 12288 1 pnm 16384 4 gif 204800 2 ico 1040384 17 jpg 2752512 83 png If you want KiB , MiB ... suffixes, pipe to numfmt --to=iec-i --suffix=B . %b*512 gives the disk usage¹, but note that if files are hard linked several times, they will be counted several times so you may see a discrepancy with what du reports. ¹ As an exception, on HP/UX, the block size reported by lstat() / stat() is 1024 instead of 512. GNU find adjusts for that so it's %b still represents the number of 512 byte units, but with perl , you'd need to multiply by 1024 instead there. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
308,870 | I would like to use LZMA-compressed kernel modules on my system. Unfortunately Canoncial leaves that feature disabled both in kernel and user-space tools. Here's what I did so far: Compile and install the current 14.04.05-LTS kernel (v4.4.19) with: CONFIG_MODULE_COMPRESS=yCONFIG_MODULE_COMPRESS_XZ=y After installation I can now see a bunch of .ko.xz files in /lib/modules/4.4.19-37.56+/kernel/ . Backport the kmod_22 package from Xenial (16.04) to Trusty (14.04) configured with the --with-xz option. This seems to work too. Run update-initrams -u -k 4.4.19-37.56+ . What works so far: arbitrary operations on uncompressed modules (like those built by DKMS): $ modinfo nvidia_370filename: /lib/modules/4.4.19-37.56+/updates/dkms/nvidia_370.ko[…] showing compressed modules by their full path: modinfo /lib/modules/4.4.19-37.56+/kernel/fs/jfs/jfs.ko.xz loading compressed modules without (missing) dependencies by their full path: insmod /lib/modules/4.4.19-37.56+/kernel/fs/jfs/jfs.ko.xz unloading such modules: rmmod jfs What doesn't work: Unloading with modprobe -r . Any other operation with just a package name but no path, e. g.: # insmod jfsinsmod: ERROR: could not load module jfs: No such file or directory# modprobe jfsmodprobe: FATAL: Module jfs not found in directory /lib/modules/4.4.19-37.56+ So, for modules without dependencies like jfs there's a work-around where I can just specify the full module file path to insmod , but this is both annoying and doesn't perform dependency resolution like modprobe . I suppose that the kernel module directory somehow doesn't pick up compressed module files. How can I load compressed kernel modules by their name with modprobe ? | You need to run depmod . depmod (by default) reads the modules under /lib/modules/$(uname -r) , finds which symbols they export and also what they need themselves, then using these info creates the symbol (module) dependencies between modules, and saves it in the file /lib/modules/$(uname -r)/modules.dep and also creates a binary hash /lib/modules/$(uname -r)/modules.dep.bin . It also creates two other files: /lib/modules/$(uname -r)/modules.symbols (and it's binary hash /lib/modules/$(uname -r)/modules.dep.bin ): contains the symbols each module exports /lib/modules/$(uname -r)/modules.devname : contains the /dev entry that needs to be created for necessary modules, contains the module name, name of the /dev entry and the major, minor numbers Just to note, you can also run depmod for a specific kernel version or on a specific module, check man depmod . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47852/"
]
} |
308,896 | How do I (recursively) detect all symlinks in a directory that identify their target in an absolute as opposed to in a relative way? Since these links are very likely to break when an entire directory tree is moved I would like to have a way of identifying them. Even relative links may break if the directory tree is moved (if they happen to point outside the root of the directory tree), but I think this is addressed in this question . | To find absolute links, you can use find 's -lname option if your find supports that (it's available at least in GNU find , on FreeBSD and macOS): find . -type l -lname '/*' This asks find to print the names of files which are symbolic links and whose content (target) matches /* using shell globbing. Strictly speaking, POSIX specifies that absolute pathnames start with one / or three or more / ; to match that, you can use find . -lname '/*' ! -lname '//*' -o -lname '///*' On what systems is //foo/bar different from /foo/bar? has more details on that. (Thanks to Sato Katsura for pointing out that -lname is GNU-specific, to fd0 for mentioning that it's actually also available on at least FreeBSD and macOS, and to Stéphane Chazelas for bringing up the POSIX absolute pathname definition.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24044/"
]
} |
308,904 | I tried to start wpa_supplicant.service , but I got the following error: Failed to start wpa_supplicant.service: Unit wpa_supplicant.service is masked. I tried unmasking it using systemctl unmask wpa_supplicant.service , but it doesn't seem to change anything. systemctl status wpa_supplicant.service returns Loaded: masked (/usr/lib/systemd/system/wpa_supplicaant.service; masked; vendor preset: disabled) Active: inactive (dead) What seems really strange is that when I check the wpa_supplicant.service file it's an empty document. How can I unmask the service? | A service unit that is empty (0 bytes) will be parsed by systemd as masked. While systemctl mask <unit> works by symlinking the service to /dev/null , systemd appears to just check if a file is 0 bytes when read to determine if a unit is masked. This results in the misleading message about a masked service. You need to figure out why the service unit is empty. As to how to unmask a service whose unit file is empty... You "unmask" the service by making the unit non-empty, which is going to be dependent on why the unit is empty. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189141/"
]
} |
308,984 | I'm not sure how this happened, but I have a number of files that have becomed symlinked to themselves. It seems likely that there won't be any way to restore the files, but hopefully there is. Here is what ls -l says lrwxrwxrwx 1 bob users 50 Sep 9 21:45 background.png -> /path/to/background.png I tried unlinking one of the files, but unfortunately the file disappeared. I've also tried readlink. Readlink says that the path to the file is /path/to/background.png Like I said, I really don't know how this happened. I am inheriting all these files from a previous admin. Is there any recourse? | If a file is symlinked to itself then there's no data present and any attempt to access it will result in a loop, and ultimately an error eg $ ls -l myfile lrwxrwxrwx 1 sweh sweh 19 Sep 9 22:38 myfile -> /path/to/here/myfile$ cat myfile cat: myfile: Too many levels of symbolic links Since there's no data, deleting these symlinks won't lose any data, because there is no data to preserve. If you don't get the Too many levels of symbolic links error when you try to cat the file then your file is not a link to itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189204/"
]
} |
309,039 | I have a text document that has a load of text which has an extra space added after every letter! Example: T h e b o o k a l s o h a s a n a n a l y t i c a l p u r p o s e w h i c h i s m o r e i m p o r t a n t… Visually: T␣h␣e␣␣b␣o␣o␣k␣␣a␣l␣s␣o␣␣h␣a␣s␣␣a␣n␣␣a␣n␣a␣l␣y␣t␣i␣c␣a␣l␣␣p␣u␣r␣p␣o␣s␣e␣␣w␣h␣i␣c␣h␣␣i␣s␣␣m␣o␣r␣e␣␣i␣m␣p␣o␣r␣t␣a␣n␣t… Note that there is an extra space after every letter,so there are two spaces between consecutive words. Is there a way that I can get awk or sed to delete the extra spaces? (Unfortunately this text document is massive andwould take a very long time to go through manually.) I appreciate that this is probably a much more complex problem to solve with just a simple bash script as there needs to be some sort of text recognition also. How can I approach this problem? | The following regex will remove the first space in any string of spaces.That should do the job. s/ ( *)/\1/g So something like: perl -i -pe 's/ ( *)/\1/g' infile.txt ...will replace infile.txt with a "fixed" version. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/309039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8575/"
]
} |
309,049 | I have a makeself archive, and I want to see what it has inside, ie. which files would get extracted, rather than actually run its script part. How do I do that? I would rather not actually extract any of it, but if that's the only way, then I'm willing to do the extraction - as long as none of the (ba)sh code in it actually runs. | The generated archive has a --list option, which you can use to list its contents. For reference, I'm talking about this version in Debian: ii makeself 2.2.0-1 all utility to generate self-extractables which generates this chunk in the script: MS_Help(){ cat << EOH >&2 Makeself version 2.2.0 1) Getting help or info about $0 : $0 --help Print this message $0 --info Print embedded info : title, default target directory, embedded script ... $0 --lsm Print embedded lsm entry (or no LSM) $0 --list Print the list of files in the archive $0 --check Checks integrity of the archive 2) Running $0 : $0 [options] [--] [additional arguments to embedded script] with following options (in that order) --confirm Ask before running embedded script --quiet Do not print anything except error messages --noexec Do not run embedded script --keep Do not erase target directory after running the embedded script --noprogress Do not show the progress during the decompression --nox11 Do not spawn an xterm --nochown Do not give the extracted files to the current user --target dir Extract directly to a target directory directory path can be either absolute or relative --tar arg1 [arg2 ...] Access the contents of the archive through the tar command -- Following arguments will be passed to the embedded script EOH} Its manual page needs some work, but the script is easy enough to read - see git repository Further reading: makeself - Make self-extractable archives on Unix | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
309,092 | When I press * on a variable name such as hello , vim will highlight names like this->hello or this.hello , but not _hello . It is a very strange behavior because I can highlight all hello 's by /hello .But for some reason, * behaves differently from / . Is there any way to make * to highlight all hello 's? | When you use the * key it effectively searches for \<theword\> The surrounding \<...\> means that it only looks for whole words. So bhello would not be found, in your example. You can modify the characters that are counted as non-keyword values by set iskeyword The default (in my version) is iskeyword=@,48-57,_,192-255 So we can ensure _ is not part of this: set iskeyword=@,48-57,192-255,^_ This can be put in your .vimrc file or run with a :set inside vim | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100236/"
]
} |
309,096 | For some cloud machines I'm launching, I'm trying to log to a specific file, syslog, and the terminal/console. At the top of my machine setup/cloud-init scripts, I have the following: #!/bin/bashexec &> >(tee "/tmp/box-setup.log" | logger -t box-setup)apt-get install -y some-package This works great at sending output to a file and syslog, but it doesn't pipe to the output to the terminal. Generally speaking not having terminal output isn't a huge problem except when I'm debugging from a remote console. When that happens, I'm completely blind because the console is blank as the bash script executes. Is there a simple way using bash redirection or whatever to pipe all output (standard output along with standard error) to a file, syslog, and the terminal simultaneously? I'm running Ubuntu 16.04. | Add a nested process substitution and another tee in there like: exec &> >(tee >(tee "/tmp/box-setup.log" | logger -t box-setup)) The first tee within the main process substitution sends STDOUT/STDERR to terminal and also to the nested process substitution, the tee inside that saves the content in file /tmp/box-setup.log and the pipe is used to send the output to logger 's STDIN too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184794/"
]
} |
309,209 | My pc is dual boot. I have Red Hat Enterprise Linux 5 along with Windows 7 Ultimate installed. There are some common files which are required by me in both the os. Right now I access and manipulate these files via a secondary storage device(USB or DVD RW) attached to my system. Is it possible to create a common folder/directory which is accessible to both the Linux as well as Windows os. Can the files, within such kind of folders/directories, be manipulated via both the os. How? | Of course, and it's very easy. The simplest way is to have a shared partition that uses a filesystem both OSs can understand. I usually have an NTFS-formatted partition which I mount at /data on Linux. This will be recognized as a regular partition on Windows and be assigned a letter ( D: for example) just like any other. You can then use it from both systems and the files will be available to both your OSs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/309209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189391/"
]
} |
309,219 | Some recent Bluetooth chipsets from Intel and Broadcom need to run the btattach command in user-space for Bluetooth to be enabled properly (it "attaches" the BT chipset and triggers the loading of the required firmware if needed). Such an example is the Broadcom BCM43241 rev B5 chipset found on Lenovo ThinkPad 8 tablets which needs the following command # btattach --bredr /dev/ttyS1 -P bcm but this is applicable to many other Bluetooth chipsets connected to an UART controller. Q: What is the best recommended way to trigger the required btattach command during boot to have Bluetooth enabled automatically ? P.S. The idea would be to contribute such a modification to Linux distributions starting to package the btattach command (like Debian), since right now many recent devices simply don't have Bluetooth working out-of-the-box. This would be especially useful for tablets that have no or few full-size USB ports. | Of course, and it's very easy. The simplest way is to have a shared partition that uses a filesystem both OSs can understand. I usually have an NTFS-formatted partition which I mount at /data on Linux. This will be recognized as a regular partition on Windows and be assigned a letter ( D: for example) just like any other. You can then use it from both systems and the files will be available to both your OSs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/309219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189400/"
]
} |
309,244 | I think there must be simple answer to this, but I can't figure out why this isn't working! I have a folder in my home directory (well, a few levels down) called installed-plugins. I want to transfer all the contents of that folder (about 15 .jar files) to a different folders, also called installed-plugins. This is what I am trying: $ sudo mv /home/jira-plugins/installed-plugins/* /var/atlassian/application-data/jira/plugins/installed-plugins/ mv: cannot stat `/home/jira-plugins/installed-plugins/*': No such file or directory What is my error? The folder is definitely not empty. Here is the ls output: $ sudo ls /home/jira-plugins/installed-pluginsanalytics-client-3.15.jar plugin.2223138796603023855.jira-importers-plugin-6.0.30.jaratlassian-chaperone-2.0.3.jar plugin.330169947367430109.jira-fisheye-plugin-6.2.8.jaratlassian-client-resource-1.0.jar plugin.4363048306537053933.jeditor-2.1.7.2.jaratlassian-pocketknife-api-commons-plugin-0.19.jar plugin.4438307615842123002.jira-ical-feed-1.0.4.jaratlassian-pretty-urls-plugin-1.8.jar plugin.461510159947098121.jira-issue-collector-plugin-1.2.5.jarbase-hipchat-integration-plugin-7.8.24.jar plugin.5630909028354276764.atlassian-universal-plugin-manager-plugin-2.7.8.jarbase-hipchat-integration-plugin-api-7.8.24.jar plugin.6920509095052318016.atlassian-bonfire-plugin-2.9.13.jarhipchat-core-plugin-0.8.3.jar plugin.6952408596192442765.atlassian-bonfire-plugin-2.8.2.jarhipchat-for-jira-plugin-1.2.11.jar plugin.7079751365359230322.jira-importers-bitbucket-plugin-1.0.8.jarjira-email-processor-plugin-1.0.29.jar plugin.7451827330686083284.atlassian-universal-plugin-manager-plugin-2.21.4.jarjira-fisheye-plugin-7.1.1.jar plugin.7498175247667964103.jira-importers-redmine-plugin-2.0.7.jarjira-ical-feed-1.1.jar plugin.7803627457720701011.jira-importers-plugin-3.5.3.jarjira-issue-nav-components-6.2.23.jar plugin.7977988994984147602.jira-bamboo-plugin-5.1.6.jarjira-servicedesk-2.3.6.jar plugin.8372419067824134899.jira-importers-plugin-5.0.2.jarjira-workinghours-plugin-1.5.5.jar plugin.9081077311844509190.jira-fisheye-plugin-5.0.13.jarplugin.1260160651631713368.stp-3.0.11.jar plugin.9128973321151732551.jira-fisheye-plugin-6.3.10.jarplugin.2076016305412409108.jira-fisheye-plugin-3.4.10.jar plugin-license-storage-plugin-2.8.jarplugin.218965759549051904.jira-importers-plugin-6.1.5.jar querydsl-4.0.7-provider-plugin-1.1.jarplugin.2211202876682184330.jira-ical-feed-1.0.12.jar stp-3.5.10.jar | It's almost certainly due to the fact that your ordinary user account cannot access the directory, so the shell cannot enumerate the files that would match the wildcard. You can confirm this easily enough with a command like this ls /home/jira-plugins/installed-plugins If you get a permission denied then there is no way the shell is going to be able to expand a * wildcard in that directory. Why? Consider your command sudo mv /home/jira-plugins/installed-plugins/* /var/atlassian/application-data/jira/plugins/installed-plugins/ The order of processing is (1) expand the wildcards, (2) execute the command, which in this case is sudo with some arguments that happen to correspond to a mv statement. You can solve the problem in one of two ways Become root and then move the files sudo -s mv /home/jira-plugins/installed-plugins/* /var/atlassian/application-data/jira/plugins/installed-plugins/ Expand the wildcard after running sudo sudo bash -c "mv /home/jira-plugins/installed-plugins/* /var/atlassian/application-data/jira/plugins/installed-plugins/" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/309244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16596/"
]
} |
309,247 | Today I finally decided to switch from Windows to Ubuntu.I fully install Ubuntu, but everytime my notebook goes in sleepmode and wakes up my mouse will not work anymore. When I do a reboot it works fine until I let it sleep (even if it's for 10sec). So I tried switching to Mint, however even with Mint I had the same problem. I have been trying to fix it all day without success. Notebook:Asus laptop On a similar forum post this was the fix: sudo apt-get install --reinstall xserver-xorg-input-all However this did not work for me. | This issue goes back at least a couple of years. It's a problem for a lot of Asus owners, but it's not entirely specific to Asus devices. Also, it's not Debian-specific — though in your case both Ubuntu and Mint have Debian roots. It's a tricky one because it seems everyone's mileage varies with each proposed solution (and they all have slight hardware variances). There are countless threads trying to solve it. Notable bug threads on Launchpad (depending on your Asus model): Elantech touchpad stops working after suspend FocalTech touchpad stops working after suspend Applicable bug thread on Kernel.org: ETPS/2 Elantech Touchpad dies after resume from suspend Assuming you don't want to patch your kernel, there are a couple potential runtime solutions you can try. (Note: You can open a terminal with Ctrl+Alt+T ) Disable/Enable the Mouse Driver: You can remove and then re-insert your mouse driver module with the following commands: sudo modprobe -r psmousesudo modprobe psmouse Disable/Enable via Xinput: Find your trackpad ID with the following command: xinput --list Once you have your device ID—I'll call it 12 for this example—try disabling and enabling the device with: xinput --disable 12xinput --enable 12 (Replace 12 with your trackpad's device ID) Alternative: As an aside, you may want to try hibernating instead of suspending. Supposedly the problems aren't as prevalent with hibernation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189425/"
]
} |
309,327 | I am reading a Linux shell scripting book and have found the following warning: "Command substitution creates what's called a subshell to run the enclosed command. A subshell is separate child shell generated from the shell that's running the script. Because of that, any variables you create in the script aren't available to the subshell command". I have tried to create a variable in my current bash shell's CLI and then enter the subshell to check whether I can print it on the screen. So yes, I can't do it, seems to be according the citation above. However, I have run the following script with command substitution: #!/bin/bashvar=5.5555ans=$(echo $var)echo $ans And the result is: 5.5555 As I understood, it shouldn't print the value of var since the subshell shouldn't be able to "see it". Why does it happen? | The statement: Because of that, any variables you create in the script aren't available to the subshell command. is false. The scope of a variable defined in the parent shell is the entire script (including subshells created with command substitution). Running: #!/bin/bashvar=5.5555ans1=$(echo $var)ans2=$(var=6; echo $var)echo $ans1echo $ans2 will give the result: 5.55556 $var is resolved by the subshell: if no local variable is specified, the value of three global variable is used if a local variable is specified, it uses its value See also the example 21-2 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/309327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189482/"
]
} |
309,339 | How can I write a shell script that exits, if one part of it fails?For example, if the following code snippet fails, then the script should exit. n=0until [ $n -ge 5 ]do gksu *command* && break n=$[$n+1] sleep 3 | One approach would be to add set -e to the beginning of your script. That means (from help set ): -e Exit immediately if a command exits with a non-zero status. So if any of your commands fail, the script will exit. Alternatively, you can add explicit exit statements at the possible failure points: command || exit 1 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/309339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189493/"
]
} |
309,453 | In Bash, piping tail -f to a read loop blocks indefinitely. while read LINE0 do echo "${LINE0}"; done < <( tail -n 3 -f /tmp/file0.txt | grep '.*' ) # hangs Remove the -f or | grep '.*' , then the loop will iterate. The following does not hang. tail -n 3 -f /tmp/file0.txt | grep '.*' What causes this behavior? Is there anyway in Bash to follow a file and read in a pipe expression? | In the pipeline, grep 's output is buffered. With the GNU implementation of grep , you can force flush the output after each line with --line-buffered ( documentation here ); for example: tail -n 3 -f /tmp/file0.txt | grep --line-buffered '.*' | while IFS= read -r LINE0 do printf '%s\n' "${LINE0}" done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/309453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164612/"
]
} |
309,466 | For a school assignment our team has been provided a VM from the school. We were handed the VM with our user accounts added to the sudo group, and I also have access to the "root" and "sysadm" accounts that already exist on the machine, presumably from the sysadmin that did the setup. I noticed an issue: I am unable to create any directories or files in the home directory. 411blackf16:/> ls -lashrtotal 93K 0 lrwxrwxrwx 1 root root 29 Sep 8 07:43 vmlinuz.old -> boot/vmlinuz-4.4.0-21-generic 0 lrwxrwxrwx 1 root root 29 Sep 8 18:35 vmlinuz -> boot/vmlinuz-4.4.0-36-generic4.0K drwxr-xr-x 14 root root 4.0K Sep 12 18:16 var4.0K drwxr-xr-x 10 root root 4.0K Sep 8 07:42 usr4.0K drwxrwxrwt 10 root root 4.0K Sep 12 18:44 tmp4.0K drwx------ 2 sysadm sysadm 4.0K Sep 12 18:44 sysadm 0 dr-xr-xr-x 13 root root 0 Sep 12 18:30 sys4.0K drwxr-xr-x 2 root root 4.0K Apr 20 18:08 srv4.0K drwxr-xr-x 2 root root 4.0K Apr 19 10:31 snap 12K drwxr-xr-x 2 root root 12K Sep 12 13:18 sbin 0 drwxr-xr-x 36 root root 1.4K Sep 12 18:32 run4.0K drwx------ 3 root root 4.0K Sep 12 18:34 root 0 dr-xr-xr-x 219 root root 0 Sep 8 23:42 proc4.0K drwxr-xr-x 2 root root 4.0K Sep 12 12:20 opt4.0K drwxr-xr-x 2 root root 4.0K Apr 20 18:08 mnt4.0K drwxr-xr-x 4 root root 4.0K Sep 8 07:42 media 16K drwx------ 2 root root 16K Sep 8 07:42 lost+found4.0K drwxr-xr-x 2 root root 4.0K Sep 8 07:42 lib644.0K drwxr-xr-x 22 root root 4.0K Sep 12 18:22 lib 0 lrwxrwxrwx 1 root root 32 Sep 8 07:43 initrd.img.old -> boot/initrd.img-4.4.0-21-generic 0 lrwxrwxrwx 1 root root 32 Sep 8 18:35 initrd.img -> boot/initrd.img-4.4.0-36-generic 0 drwxrwxrwx 2 root root 0 Sep 12 19:03 home4.0K drwxr-xr-x 106 root root 4.0K Sep 12 18:56 etc 0 drwxr-xr-x 19 root root 4.2K Sep 8 23:43 dev1.0K drwxr-xr-x 4 root root 1.0K Sep 12 13:19 boot4.0K drwxr-xr-x 2 root root 4.0K Sep 12 13:18 bin4.0K drwxr-xr-x 24 root root 4.0K Sep 12 18:51 ..4.0K drwxr-xr-x 24 root root 4.0K Sep 12 18:51 . The owner is root:root. 411blackf16:/> sudo mkdir /home/testmkdir: cannot create directory ‘/home/test’: Permission denied411blackf16:/> sudo su rootroot@411blackf16:/# sudo mkdir /home/testmkdir: cannot create directory ‘/home/test’: Permission denied Using my sudo user account or the root account still doesn't allow creation of directory or files. root@411blackf16:/# chmod -R 777 /home/ && touch /home/testtouch: cannot touch '/home/test': Permission denied Even opening up the permissions doesn't help. Does anyone have some any idea on what is going on here? Thanks. | A couple of possibilities: /home could be a filesystem which is mounted readonly (the mount command would show you this) as an exercise, your instructor could have set some interesting ACL (but then ls should have shown a . or other punctuation character after the permissions) the VM (underlying file) permissions are readonly, and the machine cannot write-through its changes (so for instance, journalling might have died). In a followup, OP showed the results from mount : 411blackf16:/> mount | grep homeldap:CN=auto.home,OU=Unix Autofs,DC=cs,DC=odu,DC=edu on /home type autofs (rw,relatime,fd=6,pgrp=1415,timeout=300,minproto=5,maxproto=5,indirect) and MikeA pointed out that the type is "autofs" , which shows that the filesystem is mounted, and the string "ldap:CN=auto.home,OU=Unix Autofs,DC=cs,DC=odu,DC=edu" indicates that it is mounted using LDAP credentials. all of this implies that the actual /home is on another machine that OP cannot modify (aside from files in his/her home-directory). The root user on the VM would not have any permissions on this filesystem (it would be treated as nobody ). If you want to create local user accounts in the VM, with a local home directory, you can put their home directory in a different location. /home is a very common convention, but not an absolute rule. Further reading: 13.2.7. Configuring Services: autofs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41095/"
]
} |
309,491 | My input cat file ABCABCDEFDEFABCGHIGHIDEF The output that I want ABC_1 ABC_2DEF_1DEF_2ABC_3GHI_1GHI_2DEF_3 anybody can help,I google a lot ,but still cannot get how to get the output in unix . | $ awk '{print $1 "_" ++c[$1]}' fileABC_1ABC_2DEF_1DEF_2ABC_3GHI_1GHI_2DEF_3 The above uses a single awk command: print $1 "_" ++c[$1] . This prints the first field, followed by _ , followed by a count of the number of times that the first field has been seen so far. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189638/"
]
} |
309,513 | How can I find out when a device is connected to my FreeBSD machine? Lets say I plug in a USB device, HDMI device, Bluetooth or something like that. Can I have a console output to say [device] and gives some output about the device? | All other answers are good, if you want only to check if a device is connected (checking kernel messages with dmesg , check in /var/log files and use some tools like usbconfig , pciconf or camcontrol ). But, if you want more (handle a message and execute a program or something like that when you plug your device), you can use devd . When you connect a device, FreeBSD kernel will generate messages: when you plug your device, an attach message is generated when you unplug your device, a detach message is generated and more (see devd.conf man page if you want more information). FreeBSD uses devd by default, and its configuration is stored in /etc/devd/ and /etc/devd.conf . If you use linux, the same features exist with devfs and udev . You can find some examples in /usr/share/examples/etc/devd.conf . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140921/"
]
} |
309,514 | I have hundreds of files which contain " </foo:bar> " ccbbaa</foo:bar>ddxxvv I want to change them all at once to ccbbaa</foo:bar> sed works well when i give it the exact file name sed -i "/</foo:bar>/q" 99999.txt but when I try to change all of them at once I get no result. sed -i "/<\/foo:bar>/q" *.txt | Try: sed -s -n -i '0,/<\/foo:bar>/p' *.txt -s tells sed to treat each file as separate. Because we don't want sed to quit until all the files are done, we change to just print from the beginning to <\/foo:bar> and not print the rest of the lines. -n tells sed not print unless we explicitly ask it to. The command 0,/<\/foo:bar>/p tells sed to print any line in the range from the beginning of the file to the first line that matches <\/foo:bar> . The -s option is not available for BSD/OSX sed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/309514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104749/"
]
} |
309,522 | I installed Antergos on my laptop, then installed Ubuntu after that. Ubuntu detected that I had another Linux installed, so it added Antergos to its grub menu. Every time I boot up, grub is loaded from the Ubuntu partition. How can I make the other grub file the default one? My partitions are: Antergos boot partition (want to load grub.cfg from here) - /dev/sda1 Antergos root partition - /dev/sda2 Ubuntu partition (grub loads from here) - /dev/sda6 | In legacy bios systems, the bios looks up the Master Boot Record (MBR) of the disk it is set to boot. This is the first 512 bytes of the disk and contains the first stage of the bootloader process, this will be grub in your case. The sole job of this stage is to locate and load the second stage normally on the drive that contains /boot. The MBR has these paths hardcoded into it and in order to change them you must reinstall the MBR from the system (or chroot of the system) you want it to point to using grub-install . If you can boot the system then this is trivial, but if you cannot then you must use a livecd and chroot into your system; see the instructions here on how to do that. However, in your case the antergos grub config will not have the ubuntu distro in it so you will lose the ability to boot that until you add it. You can also configure the ubuntu grub config to boot antergos by default if this is your intended goal. Either approach is acceptable and depends on what you want to achieve. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188285/"
]
} |
309,533 | I am confused about memory-mapped files, so I have a couple of questions which I would be very glad if you could help me out. Let's say I browse to a directory in my file system and there is a file in this directory. Is it possible that this file points to a region in the main memory, instead of pointing to a region in the disk? If this is possible, is this what we call 'memory-mapped file'? What would be the meaning of moving such file around the file system (that is, mv ing such file from a directory into another)? What I understand is, since the file is memory mapped, the process(es) interacting with the file always writes to a predefined region of the main memory, and when we open that file (for example using vim ), we read that region of main memory (so, no disk is involved). Hence, no matter where we move the file, it will always work correctly right? If yes, does moving the file around the file system has any significance? Is there a command which would tell if a file is memory-mapped? Finally, if I open a memory-mapped file with vim , make some changes on it and save and close vim , what will happen? Will my changes simply be written to main memory? If that's the case, will other processes which use this file will see the changes I have just made? In my experience, the other processes did not see the changes I have made to the file when I made some changes on the file with vim . What is the reason for this? | Memory-mapped files work the other way round. Memory-mapping isn't a property of the file, but a way to access the file: a process can map a file's contents (or a subset thereof) into its address space. This makes it easier to read from and write to the file; doing so simply involves reading and writing in memory. The file itself, on disk, is just the same as any other file. To set this up, processes use the mmap function. This can also be used for other purposes, such as sharing memory between processes. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/309533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106637/"
]
} |
309,547 | Some shells, like bash , support Process Substitution which is a way to present process output as a file, like this: $ diff <(sort file1) <(sort file2) However, this construct isn't POSIX and, therefore, not portable. How can process substitution be achieved in a POSIX -friendly manner (i.e. one which works in /bin/sh ) ? note: the question isn't asking how to diff two sorted files - that is only a contrived example to demonstrate process substitution ! | That feature was introduced by ksh (first documented in ksh86) and was making use of the /dev/fd/ n feature(added independently in some BSDs and AT&T systems earlier). In ksh and up to ksh93u, it wouldn't work unless your system had support for /dev/fd/ n . zsh, bash and ksh93u+ and above can make use of temporary named pipes (named pipes added in System III)where /dev/fd/ n are not available. On systems where /dev/fd/ n is available (POSIX doesn't specify those),you can do process substitution (e.g., diff <(cmd1) <(cmd2) ) yourself with: { cmd1 4<&- | { # in here fd 3 points to the reading end of the pipe # from cmd1, while fd 0 has been restored from the original # stdin (saved on fd 4, now closed as no longer needed) cmd2 3<&- | diff /dev/fd/3 - } 3<&0 <&4 4<&- # restore the original stdin for cmd2} 4<&0 # save a copy of stdin for cmd2 However that doesn't work with ksh93 on Linux as there, shell pipes are implemented with socketpairs instead of pipes and opening /dev/fd/3 where fd 3 points to a socket doesn't work on Linux. Though POSIX doesn't specify /dev/fd/ n , it does specify named pipes. Named pipes work like normal pipes except that you can access them from the file system. The issue here is that you have to create temporary ones and clean up afterwards, which is hard to do reliably, especially considering that POSIX has no standard mechanism (like a mktemp -d as found on some systems) to create temporary files or directories, and signal handling (to clean-up upon hang-up or kill) is also hard to do portably. You could do something like: tmpfifo() ( n=0 until fifo=$1.$$.$n mkfifo -m 600 -- "$fifo" 2> /dev/null do n=$((n + 1)) # give up after 20 attempts as it could be a permanent condition # that prevents us from creating fifos. You'd need to raise that # limit if you intend to create (and use at the same time) # more than 20 fifos in your script [ "$n" -lt 20 ] || exit 1 done printf '%s\n' "$fifo")cleanup() { rm -f -- "$fifo"; }fifo=$(tmpfifo /tmp/fifo) || exitcmd2 > "$fifo" & cmd1 | diff - "$fifo"cleanup (not taking care of signal handling here). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/309547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9259/"
]
} |
309,660 | I use tmux at work as my IDE. I also run vim in a variety of tmux panes and will fairly often background the process (or alternatively I just close the window - I have vim configured not to remove open buffers when the window is closed). Now I've a problem, because a file that I want to edit is open in one of my other vim sessions but I don't know which one. Is it possible to find out which one, without manually going through all my windows and panes? In my particular case, I know that I didn't edit it with vim ~/myfile.txt because ps aux | grep myfile.txt doesn't return anything. | It doesn't tell me everything , but I used fuser ~/.myfile.txt.swp which gave me the PID of the vim session. Running ps aux | grep <PID> I was able to find out which vim session I was using, which gave me a hint as to which window I had it open in. Thanks to Giles's inspiration and a bit of persistence and luck, I came up with the following command: ⚘ (FNAME="/tmp/.fnord.txt.swp"; tmux switch -t $(tmux list-panes -a -F '#{session_name}:#{window_index}.#{pane_index} #{pane_tty}' | grep $(ps -o tty= -p $(lsof -t $FNAME))$ | awk '{ print $1 }')) To explain what this does: (FNAME="/tmp/.fnord.txt.swp"; This creates a subshell and sets FNAME as an environment variable. It's not, strictly speaking, necessary - you could just replace $FNAME with the filename yourself, but it does make editing things easier. Now, working from the inside out: lsof -t $FNAME This produces only the PID of the process that has open the file. ps -o tty= -p $(...) This produces the pts of the PID that we found using lsof . tmux list-panes -a -F '#{session_name}:#{window_index}.#{pane_index} #{pane_tty}' This produces a pane list of entries like session:0.1 /dev/pts/1 . The first part is the format that tmux likes for targets, and the second part is the pts | grep $(...)$ This filters our pane list - the trailing $ is so it will only match the one we care about. I discovered that quite by accident as I had pts/2 and pts/22 , so there were two matches, whoops! | awk '{ print $1 }' This produces the session:0.1 part of the pane output, which is suitable for passing to tmux switch -t . This should work across sessions as well as panes, bringing to focus the pane that contains your swap file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5788/"
]
} |
309,674 | I have a text file "a.txt", it is a 3X4 matrix, the structure is as follows: 1 apple 50 Mary2 banana 40 Lily5 orange 34 Jack I want to extract the value "40" (Row 2, Col 3) and assign it to a new variable called "price". I tried this: awk 'NR == 2 {print $3}' a.txt > priceecho "$price" But why the result is: 0 How can I solve this problem? Thank you. | To store the output of a command in a variable, use : variable=$( commandFooBar ) Check HERE | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189161/"
]
} |
309,768 | I recently learned, that . ./.a.a and ./.a.a is the same. However trying source source .a.a gives an error. IMO, . being Bash alias for source shouldn't behave differently, so what am I missing? Bonus, why is . . OK while source source is not? | You can't just replace . with source everywhere; if . ./.a.a works, you can replace the first . (at least in Bash): source ./.a.a The second . represents the current directory, you can't replace that with source (especially not ./ with source as you've done). source source would be OK if you had a file called source in the current directory, containing something meaningful for your current shell. I can't see how . . would be OK... Also, . ./.a.a and ./.a.a aren't the same, the second form runs .a.a in a separate shell. See What is the difference between sourcing ('.' or 'source') and executing a file in bash? for details. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/309768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65176/"
]
} |
309,777 | I a bash script I have the following CMD="{ head -n1 $DATE.$FROMSTRAT.new && egrep -i \"$SYMS\" $DATE.$FROMSTRAT.new; } > $DATE.$TOSTRAT.new" echo "Running $CMD" `$CMD` When I call the script Running { head -n1 inputFile.new && egrep -i "X|Y" inputFile.new; } > outputFile.newscript.sh: line 17: {: command not found But when I run { head -n1 inputFile.new && egrep -i "X|Y" inputFile.new; } > outputFile.new on the command line it works fine. I try to escape the { with no success, how can I do this ? | Well, if you use a variable on the command line like that, it will be split to words, but that happens after syntactical items like { (or if ) are parsed. So, if you want that, you'll have to use eval CMD="{ echo blah ; echo bleh; } > output"eval "$CMD"# output contains "blah" and "bleh" Though note eval will run everything in the variable in the current shell, including assignments to variables (changing IFS may have funny effects for the rest of the script, say). You could run it in a separate shell with bash -c "$CMD" to mitigate at least the variable assignment issue. Also note that the backticks are used to capture the output of a command, and use it on the command line, so this would run the output of $CMD also as a command: $ CMD="echo foo"$ `$CMD`-bash: foo: command not found If you're redirecting the output to a file, it won't matter, but you most likely also don't need it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30993/"
]
} |
309,786 | When I paste into my terminal session the shell immediately executes the command without me pressing the enter key. I really don't know how to disable that behaviour. I'm using the preinstalled terminal on MacOS Yosemite. | With bash 4.4 or newer and in terminals that support bracketed paste a la xterm , you can do: bind 'set enable-bracketed-paste' (or add set enable-bracketed-paste to your ~/.inputrc ) That will cause the copy-paste buffer to be inserted at the prompt instead of the characters in it to be interpreted as if typed (you could still have problems if that buffer contains characters like ^C , ^Z and your terminal emulator doesn't filter them out). zsh does that by default since version 5.1. For other shells or terminals, see also: How can I protect myself from this kind of clipboard abuse? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160946/"
]
} |
309,788 | It looks like that you cannot create a brand new VM with virsh unless you already have a working XML file. I have just installed all the needed bits for QEMU-KVM to work, and need now to create my very first VM. How to? Hint: I don't have graphics! | There is quite a good walkthrough here . Essentially the tool you're wanting to use is virt-install, which you should already have if you have installed everything needed for QEMU-KVM. Here's the most relevant section. 6. Creating a new Guest VM using virt-install virt-install tool is used to create the VM. This tool can be used in both interactive or non-interactive mode. In the following example, I passed all the required values to create an VM as command line parameters to the virt-install command. # virt-install \-n myRHELVM1 \--description "Test VM with RHEL 6" \--os-type=Linux \--os-variant=rhel6 \--ram=2048 \--vcpus=2 \--disk path=/var/lib/libvirt/images/myRHELVM1.img,bus=virtio,size=10 \--graphics none \--cdrom /var/rhel-server-6.5-x86_64-dvd.iso \--network bridge:br0 In the above virt-install command the parameters have the following meaning: n : Name of your virtual machine description : Some valid description about your VM. For example: Application server, database server, web server, etc. os-type : OS type can be Linux, Solaris, Unix or Windows. os-variant : Distribution type for the above os-type. For example, for linux, it can be rhel6, centos6, ubuntu14, suse11, fedora6 , etc. For windows, this can be win2k, win2k8, win8, win7 ram : Memory for the VM in MB vcpu : Total number of virtual CPUs for the VM. disk path=/var/lib/libvirt/images/myRHELVM1.img,bus=virtio,size=10 : Path where the VM image files is stored. Size in GB. In this example, this VM image file is 10GB. graphics none : This instructs virt-install to use a text console on VM serial port instead of graphical VNC window. If you have the xmanager set up, then you can ignore this parameter. cdrom : Indicates the location of installation image. You can specify the NFS or http installation location (instead of –-cdrom). For example: --location=http://.com/pub/rhel6/x86_64/* network bridge:br0 : This example uses bridged adapter br0. It is also possible to create your own network on any specific port instead of bridged adapter. If you want to use the NAT then use something like below for the network parameter with the virtual network name known as VMnetwork1. All the network configuration files are located under /etc/libvirt/qemu/networks/ for the virtual machines. For example: –-network network=VMnetwork1 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/309788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135585/"
]
} |
309,837 | I am trying to set-up an automatic way, for a given folder, to remove all its subfolders but the most recent created ones. For instance, I would like to keep only the 3 most recent subfolders, and remove all the other ones. Imagine the given folder: /some/specific/folder /subfolder1 /subfolder2 /subfolder3 /subfolder4 /subfolder5 /subfolder6 /subfolder7 /subfolder8 /subfolder9 /subfolder10 I would like to delete all subfolders but keep subfolder8 , subfolder9 and subfolder10 ... For now, I managed to list the files I would like to keep, but how to "reverse" it within a shell command? cd /some/specific/folder/ls -tr | head -3# Gives the following resultsubfolder8subfolder9subfolder10# And I am looking for the following :subfolder1subfolder2subfolder3subfolder4subfolder5subfolder6subfolder7 Any idea? | If ls -tr | head -3 gives you the correct 3 folders to keep,then ls -tr | tail -n +4 will give you all the other folders (it skips the 3 first lines). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79152/"
]
} |
309,845 | I'm trying to get two different vim setups to load based on who is using it. Problem two users use the 'root' user regularly each of us prefers different setup in vim. He uses a simple .exrc file I have a much more complicated .vimrc that uses pathogen#infect and loads a bunch of plugins. Goal I'd like to set something up so that if we type vi somefile it loads his config and vim somefile loads my config. Solution ??? I've experimented with an alias for vim that loads the .vimrc from my user's home directory but then it throws an error trying to load the plugins, etc. It's like it still expects the plugins to be in the root user's home directory instead of where the .vimrc was loaded from? Looking for guidance on the best way to set this up. | If ls -tr | head -3 gives you the correct 3 folders to keep,then ls -tr | tail -n +4 will give you all the other folders (it skips the 3 first lines). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189917/"
]
} |
309,901 | I have a file that looks like this: AA 110 B 2 .. BB 3 ... BBB 3 D F 3 D DAA 111 B 2 .. BB 3 ... BBB 0 F F 0 F FAA 112 C 2 .. BB 3 ... BBB 0 D F 0 D FAA 120 D 2 .. FF 3 ... FFF 3 D F 3 D D I would like to remove any line that contain the specific numerical value of 0. If I do: sed '/0/d' infile > newfile then lines 1 and 4 are deleted because they contain "0s" in 110 and 120. I tried other options with grep grep -v '0' infile > newfile or awk but no luck. I'm sure there is a straightforward way for doing this but cannot find it. Any thoughts? Thanks! | Add -w to grep to do whole-word matching: $ grep -vw 0 infile AA 110 B 2 .. BB 3 ... BBB 3 D F 3 D DAA 120 D 2 .. FF 3 ... FFF 3 D F 3 D D | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189964/"
]
} |
309,938 | When you accidentally attempt to connect to the wrong server with password credentials is it possible for the administrator to read and log the password you used? | Simple put: yes More detail... If you connect to my machine then you don't know if I'm running a normal ssh server, or one that has been modified to write out the password being passed. Further, I wouldn't necessarily need to modify sshd , but could write a PAM module (eg using pam_script ), which will be passed your password. So, yes. NEVER send your password to an untrusted server. The owner of the machine could easily have configured it to log all attempted passwords. (In fact this isn't uncommon in the infosec world; set up a honeypot server to log the passwords attempted) | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/309938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
309,975 | So something you often see on blogs and such are articles about how to install different DEs on your favorite Linux distributions. It kinda bothers me that people talk so casually about doing this, especially when they're addressing novice users, because in my experience all this does is lead to things breaking and theming configurations to be ruined. I remember installing KDE once on an Ubuntu Mate installation and it made all my windows unreadable and ugly. I just installed Cinnamon on my Unity partition the other day, and when I went back to Unity, the top panel no longer said "Unity Desktop" when no windows were active and the icons on the launcher no longer bounced, even though they were set to do so in the configuration application. I can only assume this happened because I installed Cinnamon. So clearly every DE is going to assume that it's the only one installed, and it's going to change settings and configurations so its own liking, regardless of whatever other DE you have installed. My question is: are all these DEs writing to and reading from the same "core" configuration files, and if so, where are they? To me, it seems like something like that is going on considering how they conflict with each other. It would be really nice if I was able to install multiple desktop environments that didn't conflict or cause each other to break in some way. | Generally it shouldn't matter. Different desktop environments should have their own config and not interfere with each other. There are however some corner cases: Some desktop environments are forks of each other or based off the same origin. This is the case for gnome2/3, unity and cinnamon*. There are several competing gui toolkits, the main two are gtk and kde/qt. Both style their applications differently but there has been allot of effort to make kde applications look like gtk ones under gtk window managers as well as to make gtk applications look like kde applications under kde. Installing both can mess with these stylings. But most of the time it should be fine and is mostly down to the distro you use/the configurations you have done. For example, I have had no problems running several different desktop environments/window manager in archlinux or years ago when I tried ubuntu with both kde, gnome and a bunch of others installed. My guess is you were unlucky with mint and kde - I believe mint do some heavy stylings of their applications and messing with different desktop environments could be problematic (I do not run mint so I cannot say for sure). As for unity and cinnamon; they are both shells of gnome 3 and so both rely on the configs of gnome 3 so can interact with each other. I cannot really comment on how these are meant to interact with each other or how much isolation different gnome shell's should have as I do not run either. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190012/"
]
} |
309,980 | I have my VMs on a dedicated computer, over SSH I use vboxheadless to start them, and then I use remote desktop to use them. Now, while a VM is running, it is trivial to insert the "GuestAdditions" image into the guest's optical drive and install them. To do that with an attached GUI, it's at Devices > Insert Guest Additions CD Image . However, I'm not using the GUI because I'm using the guest OS via remote desktop, so I obviously don't have the menus, either. I'd like to know how to perform this function from command line. I'd imagine it's using vboxmanage to insert and remove that CD image from the virtual guest machine's drive. Also, is there a way how to insert any other CD images and/or floppy images into the virtual drives of a guest system - and remove them - while the guest OS is running? | The way I do this is: Get the VboxAdditions UUID [fredmj@Lagrange ~]$ vboxmanage list dvds[...]UUID: 3cc8e4fb-e56e-blabla...State: createdType: readonlyLocation: /usr/share/virtualbox/VBoxGuestAdditions.isoStorage format: RAWCapacity: 55 MBytesEncryption: disabled Use vboxmanage storageattach with the correct UUID to grab the UUID and put it in the vboxmanage command: [fredmj@Lagrange ~]$ vboxmanage storageattach CENTOS7.GUESTADD --storagectl SATA --port 1 --type dvddrive --medium 3cc8e4fb-e56e-blabla.. Reading the User Manual , I thought it was possible to use something like --medium additions , but I didn't figure how. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1290/"
]
} |
309,992 | I saw this date command: date -d @1332468005 '+%Y-%m-%d %H:%M:%S' Resulting in : 2012-03-22 22:00:05 How can I convert it back from 2012-03-22 22:00:05 to 1332468005 with bash shell? | man date will give you the details of how to use the date command. To convert long date in standard format into Unix epoc time (%s): date -d '2012-03-22 22:00:05 EDT' +%s | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/309992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151643/"
]
} |
310,019 | I have an onboard sound card, and also a connected bluetooth headset. I have configured the bluetooth device in /etc/asound.conf : # cat /etc/asound.confpcm.bluetooth { type bluetooth device 12:34:56:78:9a:bc profile "auto"}ctl.bluetooth { type bluetooth} By default, the onboard card is used for all sound (apparently, the default onboard card does not even need to be listed in asound.conf) When I want an application to use my bluetooth alsa device, I have to specify it, such as: mplayer -ao alsa:device=bluetooth file.mp3 That's fine for me. But I need a way to tell my browsers to use bluetooth alsa device as well. I have found a way how to start chromium using the --alsa-output-device commandline option: chromium --alsa-output-device=bluetooth I need a similar way to start firefox, but I could not find any. How can I tell firefox to use my bluetooth alsa device, without having to modify /etc/asound.conf or ~/.asoundrc every time ? UPDATE: I have followed @lgeorget's advice and my /etc/asound.conf now looks like this: pcm.!default {type plugslave.pcm { @func getenv vars [ ALSAPCM ] default "hw:0,0" }}pcm.bluetooth { type bluetooth device 12:34:56:78:9a:bc profile "auto"}ctl.bluetooth { type bluetooth} When I start firefox using ALSAPCM=bluetooth firefox , I do get sound in my bluetooth headset, but firefox runs at 100% CPU (on my 4 cores) and the youtube video plays at 10x speed (and the sound is correspondingly (garbled). I don't understand what's happening. When I start firefox without ALSAPCM=bluetooth , everything is OK, and sound plays on default alsa device. | Apparently there is no option for firefox, but you can manipulate the ALSA output through environment variables. Try for example: ALSA_PCM_CARD=bluetooth firefox Alternatively, if this does not work, try scripting a little your .asoundrc pcm.!default {type plugslave.pcm { @func getenv vars [ ALSAPCM ] default "hw:hdmi" }} (replace "hw:hdmi" with your normal pcm). Then if you want a program to use a specific PCM, use: ALSAPCM=bluetooth firefox Sources: Archlinux-wiki Stackoverflow.com | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
310,085 | I tried this: find . -type f -size 128 -exec mv {} new_dir/ \; Which didn't work, and it didn't give me an error. Here is an example of what the file looks like, when using stat -x $ stat -x ./testfile | grep Size Size: 128 FileType: Regular File | From my manpage (on a MacOS 10.11 machine) -size n[ckMGTP] True if the file's size, rounded up, in 512-byte blocks is n. If n is followed by a c, then the primary is true if the file's size is n bytes (characters). Similarly if n is followed by a scale indicator then the file's size is compared to n scaled as: k kilobytes (1024 bytes) M megabytes (1024 kilobytes) G gigabytes (1024 megabytes) T terabytes (1024 gigabytes) P petabytes (1024 terabytes) (suffixes other than c being non-standard extensions). So, since you didn't specify a suffix, your -size 128 meant 128 blocks , or 64Kbytes that is only matched for files whose size was comprised in between 127*512+1 (65025) and 128*512 (65536) bytes. You should use -size 128c if you want files of exactly 128 bytes, -size -128c for files of size strictly less than 128 bytes (0 to 127), and -size +128c for files of size strictly greater than 128 bytes (129 bytes and above). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
310,095 | Ok, since this is a complex question, I will explain it clearly.I got a file content shown as below: $ Cat File1 ABC Cool Lol POP {MNB}ABC Cool Lol POP {MNB}ABC Cool Lol POP {MNB}ABC Cool Lol POP {TBMKF}ABC Cool Lol POP {YUKER}ABC Cool Lol POP {EFEFVD} The output that I want -Cool MNB + POP ;-Cool MNB + POP ;-Cool MNB + POP ;-Cool TBMKF + POP ;-Cool YUKER + POP ;-Cool EFEFVD +POP ; Firstly I try to take out the last column from the File1 and print it out by sed 's/[{}//g' File1 > File3 After that I copy the whole content of File1 to a new File4 cp File1 File4 After that I replace the data inside the File4 with the File3 data (means the data without bracket one " File1 last column that one") awk 'FNR==NR{a[NR]=$1;next}{$5=a[FNR]}1' File3 File4 >>File5 Output should be like this ABC Cool Lol POP MNBABC Cool Lol POP MNBABC Cool Lol POP MNBABC Cool Lol POP TBMKFABC Cool Lol POP YUKERABC Cool Lol POP EFEFVD Finally, I try awk -F“ " '{print - $2,$5 +,$4 ";"}‘ File5 But the outcome did not come out as shown as I want, only the similar data MNB is all listed down, others did not shown up (File one last column data), | I don't know why you are copying things left and right. The simple thing is awk '{print "-" $2, substr($5,2,length($5)-2), "+", $4, ";"}' File1 I put the - in the beginning and the ; at then end. In between we print $2 because we want it as it is. a substring of $5 , which is thestring without the first and the last character. We skip the firstcharacter by starting at position 2 (awk has always been strangeabout that) and leave out the last character by only selecting asubstring which is two characters shorter, than the original $5 the + because we want it and then $4 However, I'm not sure if all these string functions are specific to GNU awk. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189638/"
]
} |
310,105 | I access my machine via ssh with the -Y parameter. I have a local X server installed (XQuartz for Mac) The remote server is a barebones command line only box. What is the bare minimum I would need to install on the remote linux box to be able to run a GUI application? As an example of the GUI apps I want to run, I would like to run Oracle SQLDeveloper and Eclipse. Potentially Firefox too. I don't need a desktop, or window manager, or any associated tools, if I can help it. | I don't know why you are copying things left and right. The simple thing is awk '{print "-" $2, substr($5,2,length($5)-2), "+", $4, ";"}' File1 I put the - in the beginning and the ; at then end. In between we print $2 because we want it as it is. a substring of $5 , which is thestring without the first and the last character. We skip the firstcharacter by starting at position 2 (awk has always been strangeabout that) and leave out the last character by only selecting asubstring which is two characters shorter, than the original $5 the + because we want it and then $4 However, I'm not sure if all these string functions are specific to GNU awk. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93387/"
]
} |
310,119 | I have a file name like below, and I want to print the file name before .tar . How I can do this? Note: the part after .tar is fixed but the part before .tar is variable. Example: abcd_ef_1.2.3.12+all.tar.gz.md5sum | I don't know why you are copying things left and right. The simple thing is awk '{print "-" $2, substr($5,2,length($5)-2), "+", $4, ";"}' File1 I put the - in the beginning and the ; at then end. In between we print $2 because we want it as it is. a substring of $5 , which is thestring without the first and the last character. We skip the firstcharacter by starting at position 2 (awk has always been strangeabout that) and leave out the last character by only selecting asubstring which is two characters shorter, than the original $5 the + because we want it and then $4 However, I'm not sure if all these string functions are specific to GNU awk. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190129/"
]
} |
310,146 | Is it possible to interactively skip the 90s timeout in systemd? For example, when it is waiting for a disk to become available or user to log out? I know it will fail eventually, so can I just make it fail now? I hate just staring at the screen helplessly. | You have two options: You can set TimeoutStopSpec= on a specific UNIT to a specific value (in seconds*) to wait. You can also set it to infinity in which case SIGKILL will never be sent (not recommended as you may end up with runaway services that are hard to debug). Set DefaultTimeoutStopSec= inside /etc/systemd/system.conf (or user.conf , or in one of the *.d directories) to a default value that all UNITs that do not have TimeoutStopSpec= specified will use. The deafult for this setting is the 90s you normally see. Man page references: man systemd.service for TimeoutStopSpec= man systemd-system.conf for DefaultTimeoutStopSec= * systemd also accepts time specs, e.g. "2min 3s". That's extensively described in the man. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23890/"
]
} |
310,275 | I have headless debian/raspbian linux machine and I would like to backup all my emails via IMAP including all mail and subfolders once daily (connection is secured with SSL/TLS. it should run automatically from cronjob every day). This backup should store the same emails as I have on my default mailserver - so it means when I am working from another computer whole day, it should be able to sync my work (that's why I want to use IMAP). Ideally I would like to have all my emails in readable format on backup machine, if main mailserver fails. Any idea how this can be done? | Use getmail . It's a nice python program which can be used to download mails from servers. The website is a bit dated, but the software is recent and well maintained. Here is an example config file: [options]delete = False[retriever]type = SimpleIMAPSSLRetrieverserver = my-servernameusername = my-usernamepassword = my-password[destination]type = Maildirpath = ~/Maildir/ As you can see, one can define where the mail is to be safed. Multiple mailbox formats are supported. You could also hand mail over to a local IMAP server, e.g. dovecot. If you don't want to use SSL, use SimpleIMAPRetriever instead of SimpleIMAPSSLRetriever . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79705/"
]
} |
310,300 | Could it be automated via ksh/bash, so via a schellscript to check all users ~/.ssh/authorized_keys file for bad = or == ending? One of my friend deleted the = and the == from the end of the SSH keys, so users got locked out, because that was the part of their key :) pattern it went from this (it could be ssh-rsa and with different key length): from="1.2.3.4" ssh-dss AAAAB....0bOJKs= COMMENTHERE COMMENTHERE to this: from="1.2.3.4" ssh-dss AAAAB....0bOJKs COMMENTHERE COMMENTHERE example solution: is there a fix length for the keys? how to filter out the bad keys? | The = mark is just padding, to fill out a base64 conversion. You can read more about that in What is the meaning of an equal sign = or == at the end of a SSH public key? , which gets the information from RFC 4716" SSH Public Key File Format Why does a base64 encoded string have an = sign at the end , based on RFC 2045: Multipurpose Internet Mail Extensions (MIME) You could automate a fix/check for this because the total number of characters in a base64 value (disregarding those outside the encoding such as whitespace) would be a multiple of 4. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178377/"
]
} |
310,384 | I am making a package for RHEL7. When I try to install the package, I get # yum localinstall mypackage-0.0-1.el7.rpm (...)--> Running transaction check---> Package webmin-GPI-init.noarch 0:0.0-1.el7 will be installed--> Processing Dependency: perl(.::guardian-lib.pl) for package: webmin-GPI-init-0.0-1.el7.noarch--> Processing Dependency: perl(.::hostconfig-lib.pl) for package: webmin-GPI-init-0.0-1.el7.noarch--> Processing Dependency: perl(.::init-lib.pl) for package: webmin-GPI-init-0.0-1.el7.noarch I have in mind that willing to get a perl module named .::init-lib.pl is not desirable. In the code, we can find something like #! /usr/bin/perlrequire './init-lib.pl';require './guardian-lib.pl'; require './hostconfig-lib.pl'; I have managed to remove the win32 with the following option in my .spec file : %{?perl_default_filter}%global __requires_exclude perl\\(VMS|perl\\(Win32|perl\\(\\. How can I get rid of the dependencies regarding the perl packages that start by a dot? I have browsed the Internet and found https://fedoraproject.org/wiki/Packaging:AutoProvidesAndRequiresFiltering and other mailing lists I have not understood. | rpmbuild analyses the content of your rpm package to automatically determine what is required for your program to work. If you use certain perl modules; those need to be installed for your program to work. However, if you don't want rpm to do all that work for you; you can add AutoReqProv : no to your spec file. For more information; read this | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50325/"
]
} |
310,442 | I've been reading this question about why mount must be run as root (with some exceptions), and I was wondering, if mounting a drive requires root (generally), how does a graphical file manager (Nautilus, Thunar, etc) do it? Does it have anything to do with FUSE ? | Users operating at the console of a graphical workstation have noted that several programs can be executed without apparently needing root authentication nor a password such as reboot. This process involves the clever use of the SUID program /usr/sbin/userhelper applied in a broader context than originally designed. The graphical user executes an intermediary aliased program /usr/bin/consolehelper which authorizes actions based on a specific PAM (Programmable Authentication Modules) configuration and then sends the command off to a SUID program to execute the user program with privileges. If the user does not have appropriate authorization, then the requested program is executed under the users’ Linux environment. As currently deployed, the needed PAM configuration file for reboot contains checks for the user to be logged in at the console or be currently running under the root environment to inhibit password requests. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131382/"
]
} |
310,446 | On BSD sed, -E is the "extended regex" flag. On GNU sed, the documentation states that -r is the extended regex flag, but the -E switch works as well (though undocumented in my research). I recall reading somewhere that -E will be specified in the next edition of POSIX specifications, but I can't find where I read that. (Is that true? Is there an authoritative reference for that, or a user here who is an authority?) Just how portable is the -E switch for sed ? Are there standard (i.e. POSIX compliant) versions of sed on which -E is unsupported? (Which ones?) Why is the -E flag not documented for GNU sed? | GNU first added undocumented support for -E just to be compatible with BSD syntax, and the source included the comment /* Undocumented, for compatibility with BSD sed. */ But in 2013 that was removed in this commit with the log message Modify documentation to note sed "-E" option, now in POSIX, for EREs. and the commit references a defect tracker for POSIX at this page that marks as accepted adding the -E flag to the sed arguments It doesn't seem to have made it into the latest POSIX spec ( sed specific part ) though, but I guess it's coming. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
310,458 | I need vmhgfs to be accessible to both root user and the www-data user. As root, I run the vmhgfs-fuse .host:/ /mnt/hgfs/ command in rc.local However, the webserver is unable to read the shared folder. So I checked its permission, www-data@ubuntu16:~$ ls /mnt/ -lhls: cannot access '/mnt/hgfs': Permission deniedtotal 0d????????? ? ? ? ? ? hgfswww-data@ubuntu16:~$ (The permission if /mnt is 777) I don't know what's happening. Looks like a kernel issue. It never happened in Ubuntu 14.04, now in 16.04 and kernel 4.4.0-21-generic , it became so. P.S If I mount the hgfs with www-data account, it's then accessible by www-data , but not by root user. | Resolved. Use allow_other option to grant access vmhgfs-fuse -o allow_other .host:/ /mnt/hgfs | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
310,500 | In bash , I can use Process Substitution and treat output of a process as if it was a file saved on disk: $ echo <(ls)/dev/fd/63$ ls -lAhF <(ls)lr-x------ 1 root root 64 Sep 17 12:55 /dev/fd/63 -> pipe:[1652825] unfortunately, Process Substitution is not supported in dash . What would be the best way to emulate Process Substitution in dash? I don't want to save the output as a temporary file somewhere ( /tmp/ ) and then have to delete it. Is there an alternative way? | You can reproduce what the shell does under the hood by doing the plumbing manually. If your system has /dev/fd/ NNN entries, you can use file descriptor shuffling: you can translate main_command <(produce_arg1) <(produce_arg2) >(consume_arg3) >(consume_arg4) to { produce_arg1 | { produce_arg2 | { main_command /dev/fd5 /dev/fd6 /dev/fd3 /dev/fd4 </dev/fd/8 >/dev/fd/9; } 5<&0 3>&1 | consume_arg3; } 6<&0 4>&1; | consume_arg4; } 8<&0 9>&1 I've shown a more complex example to illustrate multiple inputs and outputs. If you don't need to read from standard input, and the only reason you're using process substitution is that the command requires an explicit file name, you can simply use /dev/stdin : main_command <(produce_arg1)produce_arg1 | main_command /dev/stdin Without /dev/fd/ NNN , you need to use a named pipe . A named pipe is a directory entry, so you need to create a temporary file somewhere, but that file is just a name, it doesn't contain any data. tmp=$(mktemp -d)mkfifo "$tmp/f1" "$tmp/f2" "$tmp/f3" "$tmp/f4"produce_arg1 >"$tmp/f1" &produce_arg2 >"$tmp/f2" &consume_arg3 <"$tmp/f3" &consume_arg4 <"$tmp/f4" &main_command "$tmp/f1" "$tmp/f2" "$tmp/f3" "$tmp/f4"rm -r "$tmp" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
310,540 | Using zsh , I get a "No match found" message when choosing a pattern that does not fit with rm and that even when redirecting the output. # rm * > /dev/zero 2>&1 zsh: no matches found: * How can I get rid of this message? | This behaviour is controlled by several of Zsh's globbing options . By default, if a command line contains a globbing expression which doesn't match anything, Zsh will print the error message you're seeing, and not run the command at all. You can disable this in three different ways: setopt +o nomatch will leave globbing expressions which don't match anything as-is, and you'll get an error message from rm (which you can disable using -f , although that's a bad idea since it will force removals in other situations where you might not want to); setopt +o nullglob will delete patterns which don’t match anything (so they will be effectively ignored); setopt +o cshnullglob will delete patterns which don’t match anything, and if all patterns in a command are removed, report an error. The last two override nomatch . All these options can be unset with setopt -o … . nullglob can be enabled for a single pattern using the N glob qualifier , e.g. rm *(N) . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/310540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
310,550 | I didn't find any print key in the mupdf manual ( http://mupdf.com/docs/manual ). Is there an undocumented printing function or any other good way to print the document when opened with mupdf? | MuPDF is a Viewer Application. For version 1.1 (this may change in the future), there is no printing function out of the box. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
310,555 | Is there a tool that compresses STDIN, and outputs to STDOUT? This tool (or its counterpart) should be able to decompress as well. Something simple is fine, but it must be lossless. | gzip Most utilities support outputting to STDOUT. Take for example gzip : $ echo "asdgasdfasdfasdfasdfasdfasdf" | gzip | xxd00000000: 1f8b 0800 219b dd57 0003 4b2c 4e49 4f2c ....!..W..K,NIO,00000010: 4e49 c386 b900 45ce f97c 1d00 0000 NI....E..|.... I've used xxd as some unprintable characters exist. Run it through gunzip to decompress xz xz works pretty similarially: $ echo "asdfasdfasdf" | xz | xxd00000000: fd37 7a58 5a00 0004 e6d6 b446 0200 2101 .7zXZ......F..!.00000010: 1600 0000 742f e5a3 e000 0c00 0b5d 0030 ....t/.......].000000020: 9cc8 abf9 a8be f900 0000 0000 9525 d79a .............%..00000030: 089a c592 0001 270d f37b f284 1fb6 f37d ......'..{.....}00000040: 0100 0000 0004 595a ......YZ and for decompress: $ echo "asdfasdfasdf" | xz | xz -dasdfasdfasdf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184237/"
]
} |
310,664 | When I open Chrome or launch Google Hangouts, the hangouts window opens on all workspaces. I always have to right-click on the title and un-check "Always on Visible Workspace" Is there any way to keep this from being default behavior every time I launch Hangouts? | In hangouts, click the "hamburger" menu (or on your name) in the top left to go to the main options. In there (under "Hangouts App Settings") is a checkbox for "Always on Visible Workspace", uncheck it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34796/"
]
} |
310,666 | When I install a GUI application using nix, I see desktop files end inside ~/.nix-profile directory, e.g: ~/.nix-profile/share/applications/firefox.desktop However, my desktop expect that files to be in /user/share/applications in order to be able to create desktop icons for them. Is there any way to tell nix to symlink desktop files to /user/share/applications so I don't have to do it manually? Thanks | Supposing that you are using a distribution other than NixOS, then yes you can expect your desktop environment to be looking for your applications in /usr/share/applications while those installed with Nix are actually in ~/.nix-profile/share/applications . Instead of creating a symlink from /usr/share/applications you should rather tell you desktop where to look. You should be able to do so by adding the following to your ~/.profile : export XDG_DATA_DIRS=$HOME/.nix-profile/share:$HOME/.share:"${XDG_DATA_DIRS:-/usr/local/share/:/usr/share/}" So your desktop will be looking for applications both in /usr/share/applications and ~/.nix-profile/share/applications , with a priority given to the applications installed with Nix. For more info, https://nixos.org/wiki/KDE#Using_KDE_outside_NixOS | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/310666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10357/"
]
} |
310,679 | The Travis CI documentation says to run sleep 3 after starting xvfb to "give [it] some time to start". I couldn't find any reference to this delay in the man page . Is this cargo cult programming? If not, how do I poll rather than sleep to guarantee it's available? | By default Xvfb will create a Unix Domain socket for clients to connect. On my system this file socket file is created in /tmp/.X11-unix/ . You could use inotifywait to listen for events in this directory. For example, $ inotifywait -e create /tmp/.X11-unix/ and then run Xvfb :9 (display 9, for example). When it is ready you should see /tmp/.X11-unix/ CREATE X9 from the inotifywait which will terminate. You should now be able to connect to DISPLAY=:9 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
310,726 | There are some manuals in network, but all think that debian still maintain mirrors for 6.0 However at current moment there are no squeeze on debian repos. So how to update current server to something maintainlible if mirrors not availible? apt-get update Hit http://security.debian.org squeeze/updates Release.gpgIgn http://security.debian.org/ squeeze/updates/contrib Translation-enIgn http://security.debian.org/ squeeze/updates/contrib Translation-en_GBIgn http://security.debian.org/ squeeze/updates/main Translation-enIgn http://security.debian.org/ squeeze/updates/main Translation-en_GBIgn http://security.debian.org/ squeeze/updates/non-free Translation-enIgn http://security.debian.org/ squeeze/updates/non-free Translation-en_GBHit http://security.debian.org squeeze/updates ReleaseHit http://security.debian.org squeeze/updates/main Sources Hit http://security.debian.org squeeze/updates/contrib SourcesHit http://security.debian.org squeeze/updates/non-free SourcesHit http://security.debian.org squeeze/updates/main amd64 PackagesHit http://security.debian.org squeeze/updates/contrib amd64 PackagesHit http://security.debian.org squeeze/updates/non-free amd64 PackagesIgn http://ftp.us.debian.org squeeze Release.gpg Ign http://ftp.us.debian.org/debian/ squeeze/contrib Translation-enIgn http://ftp.us.debian.org/debian/ squeeze/contrib Translation-en_GBIgn http://ftp.us.debian.org/debian/ squeeze/main Translation-enIgn http://ftp.us.debian.org/debian/ squeeze/main Translation-en_GBIgn http://ftp.us.debian.org/debian/ squeeze/non-free Translation-enIgn http://ftp.us.debian.org/debian/ squeeze/non-free Translation-en_GBIgn http://ftp.us.debian.org squeeze ReleaseIgn http://ftp.us.debian.org squeeze/main SourcesIgn http://ftp.us.debian.org squeeze/contrib SourcesIgn http://ftp.us.debian.org squeeze/non-free SourcesIgn http://ftp.us.debian.org squeeze/main amd64 PackagesIgn http://ftp.us.debian.org squeeze/contrib amd64 PackagesIgn http://ftp.us.debian.org squeeze/non-free amd64 PackagesErr http://ftp.us.debian.org squeeze/main Sources 404 Not Found [IP: 128.30.2.26 80]Err http://ftp.us.debian.org squeeze/contrib Sources 404 Not Found [IP: 128.30.2.26 80]Err http://ftp.us.debian.org squeeze/non-free Sources 404 Not Found [IP: 128.30.2.26 80]Err http://ftp.us.debian.org squeeze/main amd64 Packages 404 Not Found [IP: 128.30.2.26 80]Err http://ftp.us.debian.org squeeze/contrib amd64 Packages 404 Not Found [IP: 128.30.2.26 80]Err http://ftp.us.debian.org squeeze/non-free amd64 Packages 404 Not Found [IP: 128.30.2.26 80]W: Failed to fetch http://ftp.us.debian.org/debian/dists/squeeze/main/source/Sources.gz 404 Not Found [IP: 128.30.2.26 80]W: Failed to fetch http://ftp.us.debian.org/debian/dists/squeeze/contrib/source/Sources.gz 404 Not Found [IP: 128.30.2.26 80]W: Failed to fetch http://ftp.us.debian.org/debian/dists/squeeze/non-free/source/Sources.gz 404 Not Found [IP: 128.30.2.26 80]W: Failed to fetch http://ftp.us.debian.org/debian/dists/squeeze/main/binary-amd64/Packages.gz 404 Not Found [IP: 128.30.2.26 80]W: Failed to fetch http://ftp.us.debian.org/debian/dists/squeeze/contrib/binary-amd64/Packages.gz 404 Not Found [IP: 128.30.2.26 80]W: Failed to fetch http://ftp.us.debian.org/debian/dists/squeeze/non-free/binary-amd64/Packages.gz 404 Not Found [IP: 128.30.2.26 80]E: Some index files failed to download, they have been ignored, or old ones used instead. | Debian Squeeze has reached EOL, it doesn't receive any security updates , but if you need to update your database and install packages, its repositories can be found on Debian Archive. You should edit your sources.list as bellow: deb http://archive.debian.org/debian/ squeeze main non-free contrib Also you need to comment out all other repositories. To update run: apt-get install debian-archive-keyringapt-get update debian-archive As time goes on we will expire the binary packages for old releases. Currently we have binaries for squeeze, lenny, etch, sarge, woody, potato, slink, hamm and bo available, and only source code for the other releases. If you are using APT the relevant sources.list entries are like: deb http://archive.debian.org/debian/ hamm contrib main non-free or deb http://archive.debian.org/debian/ bo bo-unstable contrib main non-free | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9332/"
]
} |
310,737 | After a shutdown command is issued, sometimes one gets a status message like this: A stop job is running for Session 1 of user xy and then the system hangs for awhile, or forever depending on ??? So what exactly is "a stop job"? Also, why does it sometimes estimate the time it will take, quite accurately, and other times it can run forever? | systemd operates internally in terms of a queue of "jobs". Each job (simplifying a little bit) is an action to take: stop, check, start, or restart a particular unit . When (for example) you instruct systemd to start a service unit , it works out a list of stop and start jobs for whatever units (service units, mount units, device units, and so forth) are necessary for achieving that goal, according to unit requirements and dependencies, orders them, according to unit ordering relationships, works out and (if possible) fixes up any self-contradictions, and (if that final step is successful) places them in the queue. Then it tries to perform the enqueued "jobs". A stop job is running for Session 1 of user xy The unit display name here is Session 1 of user xy . This will be (from the display name) a session unit, not a service unit. This is the user-space login session abstraction that is maintained by systemd's logind program and its PAM plugins. It is (in essence and in theory) a grouping of all of the processes that that user is running as a "login session" somewhere. The job that has been enqueued against it is stop . And it's probably taking a long time because the systemd people have erroneously conflated session hangup with session shutdown . They break the former to get the latter to work, and in response some people alter systemd to break the latter to get the former to work. The systemd people really should recognize that they are two different things. In your login session, you have something that ignores SIGTERM or that takes a long time to terminate once it has seen SIGTERM . Ironically, the former is the long-standing behaviour of some job-control shells. The correct way to terminate login session leaders when they are these particular job-control shells is to tell them that the session has been hung up , whereupon they terminate all of their jobs (a different kind of job to the internal systemd job) and then terminate themselves. What's actually happening is that systemd is waiting the unit's stop timeout until it resorts to SIGKILL . This timeout is configurable per unit, of course, and can be set to never time out. Hence why one can potentially see different behaviours. Further reading Lennart Poettering (2015). systemd . systemd manual pages. Freedesktop.org. Jonathan de Boyne Pollard (2016-06-01). systemd kills background processes after user logs out . 825394. Debian bug tracker. Lennart Poettering (2015). systemd.kill . systemd manual pages. Freedesktop.org. Lennart Poettering (2015). systemd.service . systemd manual pages. Freedesktop.org. Why does bash ignore SIGTERM? https://superuser.com/questions/1102242/ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/310737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181039/"
]
} |
310,752 | I'm building an IOT device, powered by headless Debain on a CHIP ( https://getchip.com/ ), and will have connectivity to a customer's wifi. I'm trying to build in functionality for wifi connectivity to the customer's router in a way that wouldn't require the customer to ever need to input a password and username. Basically, I'd like to have WPS push-button functionality in Unix. I've installed wpa_cli , and have been tinkering around with wpa_supplicant.conf. However I'm very confused. The example .conf document located here , states that we'd need to input all the parameters of the router ahead of time. Why would that ever need to be the case? Doesn't that defeat the purpose of WPS (i.e. WPS should be blind to any access points and should handshake with the nearest router that has its WPS window open)? | systemd operates internally in terms of a queue of "jobs". Each job (simplifying a little bit) is an action to take: stop, check, start, or restart a particular unit . When (for example) you instruct systemd to start a service unit , it works out a list of stop and start jobs for whatever units (service units, mount units, device units, and so forth) are necessary for achieving that goal, according to unit requirements and dependencies, orders them, according to unit ordering relationships, works out and (if possible) fixes up any self-contradictions, and (if that final step is successful) places them in the queue. Then it tries to perform the enqueued "jobs". A stop job is running for Session 1 of user xy The unit display name here is Session 1 of user xy . This will be (from the display name) a session unit, not a service unit. This is the user-space login session abstraction that is maintained by systemd's logind program and its PAM plugins. It is (in essence and in theory) a grouping of all of the processes that that user is running as a "login session" somewhere. The job that has been enqueued against it is stop . And it's probably taking a long time because the systemd people have erroneously conflated session hangup with session shutdown . They break the former to get the latter to work, and in response some people alter systemd to break the latter to get the former to work. The systemd people really should recognize that they are two different things. In your login session, you have something that ignores SIGTERM or that takes a long time to terminate once it has seen SIGTERM . Ironically, the former is the long-standing behaviour of some job-control shells. The correct way to terminate login session leaders when they are these particular job-control shells is to tell them that the session has been hung up , whereupon they terminate all of their jobs (a different kind of job to the internal systemd job) and then terminate themselves. What's actually happening is that systemd is waiting the unit's stop timeout until it resorts to SIGKILL . This timeout is configurable per unit, of course, and can be set to never time out. Hence why one can potentially see different behaviours. Further reading Lennart Poettering (2015). systemd . systemd manual pages. Freedesktop.org. Jonathan de Boyne Pollard (2016-06-01). systemd kills background processes after user logs out . 825394. Debian bug tracker. Lennart Poettering (2015). systemd.kill . systemd manual pages. Freedesktop.org. Lennart Poettering (2015). systemd.service . systemd manual pages. Freedesktop.org. Why does bash ignore SIGTERM? https://superuser.com/questions/1102242/ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/310752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190639/"
]
} |
310,754 | I use Ubuntu 14.04 and in a terminal I became root with sudo su and I wanted to delete root's trash manually. It deleted everything except for a few files that start with a dot. Like .htaccess etc. So I went to that directory (which is "files") and I ran this command: rm -rf .* It did delete those files, BUT I also got an error message that the system couldn't delete "." and ".." What does it mean? Like if I tried to delete the whole directory tree?Like I said, when I was running that command I was in the lowest directory. This one to be exact: /root/.local/share/Trash/files/I shot down my PC and then turned it on. Everything seems to be normal at first glance. So now I want to ask is what went wrong and if what I did could really cause any serious damage to the system in general? In other words, should I be worried now or everything is OK? | .* matches all files whose name starts with . . Every directory contains a file called . which refers to the directory itself, and a file called .. which refers to the parent directory. .* includes those files. Fortunately for you, attempting to remove . or .. fails, so you get a harmless error. In zsh, .* does not match . or .. . In bash, you can set GLOBIGNORE='.:..:*/.:*/..' and then * will match all files, including dot files, but excluding . and .. . Alternatively, you can use a wildcard pattern that explicitly excludes . and .. : rm -rf .[!.]* ..?* or rm -rf .[!.] .??* Alternatively, use find . find . -mindepth 1 -delete | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/310754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139969/"
]
} |
310,768 | I just noticed some zombie processes on CentOS 6.8(Final), tried to kill them but they are still there: [root@host user]# ps -ef | grep gittomcat 746 1 0 Jul18 ? 00:00:00 git clone https://github.com/angular/bower-angular.git -b v1.3.20 --progress . --depth 1tomcat 747 746 0 Jul18 ? 00:00:00 [git-remote-http] <defunct>root 20776 20669 0 09:03 pts/3 00:00:00 grep gittomcat 29970 1 0 Jul18 ? 00:00:00 git clone https://github.com/components/jqueryui.git -b 1.12.0 --progress . --depth 1tomcat 29971 29970 0 Jul18 ? 00:00:00 [git-remote-http] <defunct>[root@host user]# kill 746 747 29970 29971[root@host user]# ps -ef | grep gittomcat 746 1 0 Jul18 ? 00:00:00 git clone https://github.com/angular/bower-angular.git -b v1.3.20 --progress . --depth 1tomcat 747 746 0 Jul18 ? 00:00:00 [git-remote-http] <defunct>root 21525 20669 0 09:26 pts/3 00:00:00 grep gittomcat 29970 1 0 Jul18 ? 00:00:00 git clone https://github.com/components/jqueryui.git -b 1.12.0 --progress . --depth 1tomcat 29971 29970 0 Jul18 ? 00:00:00 [git-remote-http] <defunct> As you can see they are running for two months, and too if they are not harmful I would get rid of them, any alternative way to kill a Zombie? | You can't kill a Zombie (process), it is already dead. It is just waiting for its parent process to do wait(2) and collect its exit status. It won't take any resource on the system other than a process table entry. You can send SIGCHLD to its parent to let it know that one of its children has terminated (i.e. request it to collect child's exit status). This signal can be ignored (which is the default): kill -CHLD <PPID> (Replace <PPID> with the actual PID of the parent.) Or you can kill the parent process so that init (PID 1) will inherit the zombie process and reap it properly (it's one of init 's main tasks to inherit any orphan and do wait(2) regularly). But killing the parent is not recommended. Generally, creation of zombie processes indicates programming issue/issues, and you should try to fix or report that instead. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55995/"
]
} |
310,860 | A script I wrote does something and, at the end, appends some lines to its own logfile. I'd like to keep only the last n lines (say, 1000 lines) of the logfile. This can be done at the end of the script in this way: tail -n 1000 myscript.log > myscript.log.tmpmv -f myscript.log.tmp myscript.log but is there a more clean and elegant solution? Perhaps accomplished via a single command? | It is possible like this, but as others have said, the safest option is the generation of a new file and then a move of that file to overwrite the original. The below method loads the lines into BASH, so depending on the number of lines from tail , that's going to affect the memory usage of the local shell to store the content of the log lines. The below also removes empty lines should they exist at the end of the log file (due to the behaviour of BASH evaluating "$(tail -1000 test.log)" ) so does not give a truly 100% accurate truncation in all scenarios, but depending on your situation, may be sufficient. $ wc -l myscript.log475494 myscript.log$ echo "$(tail -1000 myscript.log)" > myscript.log$ wc -l myscript.log1000 myscript.log | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/310860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34039/"
]
} |
310,916 | I am thinking if there exists any name for such a simple function which returns the order of numbers in an array. I would really love to do this ranking by minimalist way and with basic Unix commands but I cannot get anything to my mind than basic find-and-loop which is not so elegant. Assume you have an array of numbers 17 94 3 52 4 4 9 Expected output where duplicates just receive the same ID; how to handle duplicates is not critical so feel to take shortcuts: 4 6 1 5 2 2 3 Motivation: I saw today many users using many different ways to solve this problem and doing much manual steps with Spreadsheet; so I started to think the minimalist way to do it. Comparing the ranking algorithm to Google's Average ranking In Google Spreadsheet, do =arrayformula(rank.AVG(A:A,A:A,true)) and you get as a benchmark as ascending order like the first expected output 17 594 73 152 64 2.54 2.59 4 where you see that my initial ranking algorithm is biased. I think to be able to set the dataset location would be helpful here. | If that list was in a file , one per line, I'd do something like: sort -nu file | awk 'NR == FNR {rank[$0] = NR; next} {print rank[$0]}' - file If it was in a zsh $array : sorted=(${(nou)array})for i ($array) echo $sorted[(i)$i] That's the same principle as for the awk version above, the rank is the index NR / (i) in the numerically ( -n / (n) ) ordered ( sort / (o) ), uniqued ( -u / (u) ) list of elements. For your average rank: sort -n file | awk 'NR == FNR {rank[$0] += NR; n[$0]++; next} {print rank[$0] / n[$0]}' - file Which gives: 57162.52.54 (use sort -rn to reverse the order like in your Google Spreadsheet version). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/310916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
310,957 | I want to set -x at the beginning of my script and "undo" it (go back to the state before I set it) afterward instead of blindly setting +x . Is this possible? P.S.: I've already checked here ; that didn't seem to answer my question as far as I could tell. | Abstract To reverse a set -x just execute a set +x . Most of the time, the reverse of an string set -str is the same string with a + : set +str . In general, to restore all (read below about bash errexit ) shell options (changed with set command) you could do (also read below about bash shopt options): oldstate="$(set +o)" # POSIXly store all set options...set -vx; eval "$oldstate" # restore all options stored. Should be enough, but bash has two groups of options accessed via set (or shopt -po ) and some others accessed via shopt -p . Also, bash doesn't preserve set -e (errexit) on entering subshells. Note that the list of options that results from expanding $- might not be valid to re-enter in a shell. To capture the whole present state (in bash) use: oldstate="$(shopt -po; shopt -p)"; [[ -o errexit ]] && oldstate="$oldstate; set -e" Or, if you don't mind setting the inherit_errexit flag (and your bash is ≥4.4): shopt -s inherit_errexit; oldstate="$(shopt -po; shopt -p)" Longer Description bash This command: shopt -po xtrace is used to generate an executable string that reflects the state of the option(s).The p flag means print, and the o flag specifies that we are asking about option(s) set by the set command (as opposed to option(s) set only by the shopt command).You can assign this string to a variable, and execute the variable at the end of your script to restore the initial state. # store state of xtrace option.tracestate="$(shopt -po xtrace)"# change xtrace as neededecho "some commands with xtrace as externally selected"set -xecho "some commands with xtrace set"# restore the value of xtrace to its original value.eval "$tracestate" This solution also works for multiple options simultaneously: oldstate="$(shopt -po xtrace noglob errexit)"# change options as neededset -xset +xset -fset -eset -x# restore to recorded state:set +vx; eval "$oldstate" Adding set +vx avoids the printing of a long list of options. If you don’t list any option names, oldstate="$(shopt -po)" it gives you the values of all (set) options.And, if you leave out the o flag,you can do the same things with shopt options: # store state of dotglob option.dglobstate="$(shopt -p dotglob)"# store state of all options.oldstate="$(shopt -p)" If you need to test whether a set option is set,the most idiomatic (Bash) way to do it is: [[ -o xtrace ]] which is better than the other two similar tests: [[ $- =~ x ]] [[ $- == *x* ]] With any of the tests, this works: # record the state of the xtrace option in ts (tracestate):[ -o xtrace ] && ts='set -x' || ts='set +x'# change xtrace as neededecho "some commands with xtrace as externally selected"set -xecho "some commands with xtrace set"# set the xtrace option back to what it was.eval "$ts" Here’s how to test the state of a shopt option: if shopt -q dotglobthen # dotglob is set, so “echo .* *” would list the dot files twice. echo *else # dotglob is not set. Warning: the below will list “.” and “..”. echo .* *fi POSIX A simple, POSIX-compliant solution to store all set options is: set +o which is described in the POSIX standard as: +o Write the current option settings to standard output in a formatthat is suitable for reinput to the shellas commands that achieve the same options settings. So, simply: oldstate=$(set +o) will preserve values for all options set using the set command (in some shells). Again, restoring the options to their original values is a matter of executing the variable: set +vx; eval "$oldstate" This is exactly equivalent to using Bash's shopt -po . Note that it will not cover all possible Bash options, as some of those are set (only) by shopt . bash special case There are many other shell options listed with shopt in bash: $ shoptautocd offcdable_vars offcdspell offcheckhash offcheckjobs offcheckwinsize oncmdhist oncompat31 offcompat32 offcompat40 offcompat41 offcompat42 offcompat43 offcomplete_fullquote ondirexpand offdirspell offdotglob offexecfail offexpand_aliases onextdebug offextglob offextquote onfailglob offforce_fignore onglobasciiranges offglobstar ongnu_errfmt offhistappend onhistreedit offhistverify onhostcomplete onhuponexit offinherit_errexit offinteractive_comments onlastpipe onlithist offlogin_shell offmailwarn offno_empty_cmd_completion offnocaseglob offnocasematch offnullglob offprogcomp onpromptvars onrestricted_shell offshift_verbose offsourcepath onxpg_echo off Those could be appended to the variable set above and restored in the same way: $ oldstate="$oldstate;$(shopt -p)".. # change options as needed..$ eval "$oldstate" bash's set -e special case In bash, the value of set -e ( errexit ) is reset inside sub-shells, that makes it difficult to capture its value with set +o inside a $(…) sub-shell. As a workaround, use: oldstate="$(set +o)"; [[ -o errexit ]] && oldstate="$oldstate; set -e" Or (if it doesn't contradict your goals and your bash supports it) you can use the inherit_errexit option. Note : each shell has a slightly different way to build the list of options that are set or unset (not to mention different options that are defined), so the strings are not portable between shells, but are valid for the same shell. zsh special case zsh also works correctly (following POSIX) since version 5.3. In previous versions it followed POSIX only partially with set +o in that it printed options in a format that was suitable for reinput to the shell as commands, but only for set options (it didn't print un-set options). mksh special case The mksh (and by consequence lksh) is not yet (MIRBSD KSH R54 2016/11/11) able to do this. The mksh manual contains this: In a future version, set +o will behave POSIX compliant and print commands to restore the current options instead. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/310957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54386/"
]
} |
311,090 | For some debugging purposes I enabled the command set -x . Now the output of my bash is like this: $ ls+ ls --color=autoCertificates Desktop Documents Downloads Dropbox ... How can I disable set -x so I won't see stuff like + ls --color=auto ? | You just need to run set +x From man bash : Using + rather than - causes these options to be turned off. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/311090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152402/"
]
} |
311,095 | I've recently learned about the /dev/udp and /dev/tcp pseudo-devices here . Are they specific to some GNU/Linux distributions or can I find them on other unix systems? Are they standardized in some way? So far, I've been able to use them successfuly on OS X, Arch Linux and CentOS. | This is a feature of the shell and not the operating system. So, for example,on Solaris 10 with ksh88 as the shell: % cat < /dev/tcp/localhost/22ksh: /dev/tcp/localhost/22: cannot open However if we switch to bash : % bashbash-3.2$ cat < /dev/tcp/localhost/22SSH-2.0-Sun_SSH_1.1.5 So bash interprets the /dev/tcp but ksh88 didn't. On Solaris 11 with ksh93 as the shell: % cat < /dev/tcp/localhost/22SSH-2.0-Sun_SSH_2.2 So we can see it's very dependent on the shell in use. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/311095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42478/"
]
} |
311,115 | I have alias rm='rm -i' in my ~/.bashrc file (I've just learned it is considered bad practice by some). It seems the alias is not taken into account when running it with nice : bli@naples:~$ touch testbli@naples:~$ rm testrm: remove regular empty file 'test'? nbli@naples:~$ nice rm testbli@naples:~$ Why is that so? | By default, nice is an external command: $ command -v nice/usr/bin/nice This means it has no knowledge of aliases, which are a shell feature: $ alias foo='echo hello'$ foohello$ nice foonice: foo: No such file or directory However there is a feature of the shell that allows aliases to also expand further aliases. You end the expansion with a space. $ alias nice='/usr/bin/nice ' Spot that space at the end; it's important. Now... $ nice foohello$ command -v nicealias nice='/usr/bin/nice ' Any external command can be wrapped with an alias like this if you want the shell to do alias expansion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55127/"
]
} |
311,119 | I need to view the members of a group related to an oracle installation. | You can use getent to display the group's information. getent uses library calls to fetch the group information, so it will honour settings in /etc/nsswitch.conf as to the sources of group data. Example: $ getent group simpsonssimpsons:x:742:homer,marge,bart,lisa,maggie The fields, separated by : , are— Group name Encrypted password (not normally used) Numerical group ID Comma-separated list of members | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/311119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167348/"
]
} |
311,176 | I'm getting an error while trying to install Kali Linux 2016.2 64 Bit in my VMware machine. An installation step failed. You can try to run the failing item again from the menu, or skip it and choose something else. The failing step is: Install the system I've downloaded kali-linux-2016.2-amd64.iso torrent from Kali Linux's official website www.kali.org/downloads/ I've created Virtual machine by selecting Linux > Debian 8.x 64-Bit I Gave Virtual Machine 30.00 GB Hard drive space and 2.00 GB Ram. Booted up the iso and selected Graphical Install After Completing few step then I came up to Partition Disks step. I've selected "Guided - use entire disk" then clicked continue. Then I select the Hard disk and clicked continue In Partitioning scheme I've selected All files in one partition (recommended for new users) Then the following message came: The following partitions are going to be formatted: partition #1 of SCSI3 (0,0,0) (sda) as ext4 partition #5 of SCSI3 (0,0,0) (sda) as swap write changes to disks? yes or no I selected yes and clicked continue and installation was going well. But after a few moments Kali Linux installation stuck and showed me an error as stated below: An installation step failed. You can try to run the failing item again from the menu, or skip it and choose something else.The failing step is: Install the system See the screenshot: http://i.imgur.com/GPklG37.png If I click continue then after a while the error reappears. I tried a lot of time to install but failed everytime. My System: Processor: Intel(R) Core(TM) i5 CPU M430 @2.27GHzRam: 8.00 GBGraphics: ATI Mobility Radeon HD 5470OS: Windows 7 Home Premium, 64-bit 6.1.7601, Service Pack 1 My VMware: Version: 12.1.0 build-3272444 (64Bit) | I had the same problem, and I have fixed it by increasing the space for the hard drive. By default you have 8 GB, increase it to 30 GB or more. And continue with the installation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190952/"
]
} |
311,194 | I know Linux is very permissive and allows customization at many levels but let's said that I have downloaded some .tar.gz files like for example phpStorm and Smargit (both have binaries and libraries used by software) and I want to use them as any other software installed through DNF. Where would you place the uncompressed files or how would you do this? I've found this topic but I am not sure if /opt is the right place for put this kind of standalone software. | You should prefer to put your application folders in /opt which is exactly what you are asking for. /usr (apart from /usr/local ) is the folder in which the files and folders are maintained by package managers like apt-get for Debian or yum for CentOS. Also, you may want to check Filesystem Hierarchy Standard for Linux. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13781/"
]
} |
311,275 | I want to know how to use grep in order to display all lines that begin and end with the same character. | POSIXly: pattern='\(.\).*\1.'grep -x -- "$pattern" file It won't work if line starts or ends with invalid byte character, if you want to cover that case, you can add LC_ALL=C , although LC_ALL=C works with single-byte character data only. perl6 seems to be the best tool, if you have it in your box: $ printf '\ue7\u301 blah \u107\u327\n121\n1\n123\n' | perl6 -ne '.say if m/^(.).*$0$/ || /^.$/'ḉ blah ḉ1211 Although it still chokes on invalid characters. Note that perl6 will alter your text by turning it to NFC form: $ printf '\u0044\u0323\u0307\n' | perl6 -pe '' | perl -CI -ne 'printf "U+%04x\n", ord for split //'U+1e0cU+0307U+000a$ printf '\u0044\u0323\u0307\n' | perl -pe '' | perl -CI -ne 'printf "U+%04x\n", ord for split //'U+0044U+0323U+0307U+000a Internally, perl6 stores string in NFG form (stand for Normalization Form Grapheme ), which is perl6 invented way to deal with un-precomposed graphemes properly: $ printf '\u0044\u0323\u0307\n' | perl6 -ne '.chars.say'1$ printf '\u0044\u0323\u0307\n' | perl6 -ne '.codes.say'2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191047/"
]
} |
311,282 | When I try and open the gnome terminal, by clicking the 'terminal' icon in apps, I get a loading and then nothing happens. Is there some way of seeing the background output of trying to open it to try an ddebug it? UPDATE 1: So I was able to open xterminal and tried starting the gnome terminal like this: gnome-terminal This resulted in this output, sorry if it is slightly wrong I had to manually copy it since I couldn't work out how to copy and paste in xterminal: Error constructing proxy for org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling StartServiceByName for org.gnome.Terminal: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.gnome.Terminal exited with status 8 UPDATE 2: So I got it working again by using google fu to find this thread which got me to enter: locale-gen and reboot which seemed to fix it. | As Ipor Sircer suggests , if you can open another terminal, you can run gnome-terminal from there. Alternatively, you can dump gnome-terminal 's output to a file: assuming you're running GNOME, press Alt F2 and enter sh -c "gnome-terminal > ~/gnome-terminal.log 2>&1" Then you'll find all the output in ~/gnome-terminal.log . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190584/"
]
} |
311,329 | I'm quite new at bash and I am trying to learn it by creating some small scripts. I created a small script to look up the DNS entry for multiple domains at the same time. The domains are given as attributes. COUNTER=0DOMAINS=()for domain in "$@"do WOUT_WWW=$(dig "$domain" +short) if (( $(grep -c . <<<"$WOUT_WWW") > 1 )); then WOUT_WWW="${WOUT_WWW##*$'\n'}" ; fi WITH_WWW=$(dig "www.${domain}" +short) if (( $(grep -c . <<<"$WITH_WWW") > 1 )); then WITH_WWW="${WITH_WWW##*$'\n'}" ; fi DOMAINS[$COUNTER]="$domain|$WOUT_WWW|$WITH_WWW" COUNTER=$(($COUNTER+1))done Now I just want to loop through the new "multidimensional" array and give the output like mysql table: +------------------------------+| Row 1 | Row 2 | Row 3 |+------------------------------+| Value | Value | Value |+------------------------------+ How can I do that? | Using perl 's Text::ASCIITable module (also supports multi-line cells): print_table() { perl -MText::ASCIITable -e ' $t = Text::ASCIITable->new({drawRowLine => 1}); while (defined($c = shift @ARGV) and $c ne "--") { push @header, $c; $cols++ } $t->setCols(@header); $rows = @ARGV / $cols; for ($i = 0; $i < $rows; $i++) { for ($j = 0; $j < $cols; $j++) { $cell[$i][$j] = $ARGV[$j * $rows + $i] } } $t->addRow(\@cell); print $t' -- "$@"}print_table Domain 'Without WWW' 'With WWW' -- \ "$@" "${WOUT_WWW[@]}" "${WITH_WWW[@]}" Where the WOUT_WWW and WITH_WWW arrays have been constructed as: for domain do WOUT_WWW+=("$(dig +short "$domain")") WITH_WWW+=("$(dig +short "www.$domain")")done Which gives: .---------------------------------------------------------------------.| Domain | Without WWW | With WWW |+-------------------+----------------+--------------------------------+| google.com | 216.58.208.142 | 74.125.206.147 || | | 74.125.206.104 || | | 74.125.206.106 || | | 74.125.206.105 || | | 74.125.206.103 || | | 74.125.206.99 |+-------------------+----------------+--------------------------------+| stackexchange.com | 151.101.65.69 | stackexchange.com. || | 151.101.1.69 | 151.101.1.69 || | 151.101.193.69 | 151.101.193.69 || | 151.101.129.69 | 151.101.129.69 || | | 151.101.65.69 |+-------------------+----------------+--------------------------------+| linux.com | 151.101.193.5 | n.ssl.fastly.net. || | 151.101.65.5 | prod.n.ssl.us-eu.fastlylb.net. || | 151.101.1.5 | 151.101.61.5 || | 151.101.129.5 | |'-------------------+----------------+--------------------------------' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104327/"
]
} |
311,383 | scp was working fine, but now when I do $ scp /path/to/local/file [email protected] it doesn't ask for my password, but returns nothing immediately, and a new file named "[email protected]" is created in the directory. Also, I have not problem ssh to server, it asks for my password and I could log in successfully. $ ssh [email protected] I did some configuration to set up VIM as IDE for C++ over the weekend, so I might have messed something? I've also created a new anaconda environment for running Python3, if these information help. | You need to tell scp you're copying to a remote, using : and (optionally) a path on the target: scp /path/to/local/file [email protected]:/path/to/remote If you just specify : it will use the default path, probably myusername 's home directory on servername.com : scp /path/to/local/file [email protected]: (Thanks to Stephen Harris for pointing out that the remote path is optional.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191139/"
]
} |
311,402 | I have the following shell script: error=$(mkdir test 2>&1) I know that the variable 'error' will get the error result of the mkdir command if there is an error, but I can't understand how 2>&1 works, could someone explain it? Thanks! | The syntax x="$(some_command)" will run some_command and the output of that is returned and stored in the variable "$x" . Now, normally, programs send "normal output" to the "standard out" stream ( stdout , file handle #1) and error messages to the "standard error" stream ( stderr , file handle #2). The redirection semantic 2>&1 means (roughly speaking; it's a little more complicated under the covers) "send stderr to stdout ". So error messages and output messages are mixed together. So we can combine the two: x="$(some_command 2>&1)" will return the output and the error messages and put them into $x . In your case error="$(mkdir test 2>&1)" means that $error will contain the output (which is empty) and the error (which may contain a string if an error occurs). The result is that $error will contain any error message from the mkdir command. We can see this in action. $ error="$(mkdir /)"mkdir: cannot create directory '/': File exists$ echo "$error"$ error="$(mkdir / 2>&1)"$ echo "$error"mkdir: cannot create directory '/': File exists In the first case the error message is printed immediately because it's sent to stderr , and the variable is empty. In the second case we redirect stderr to stdout and so it is captured and stored in the $error variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59679/"
]
} |
311,417 | While debugging an related issue, I noticed that pgrep was returning a PID for seemingly arbitrary command-line patterns, e.g.: $ sudo pgrep -f "asdf"13017$ sudo pgrep -f ";lkj"13023$ sudo pgrep -f "qwer"13035$ sudo pgrep -f "poiu"13046$ sudo pgrep -f "blahblahblah"14038$ sudo pgrep -f "$(pwgen 16 1)"14219 The same command without sudo returned nothing (as expected): $ pgrep -f blahblahblah I tried to pipe the PID to ps in order to see what the command was, but that didn't work: $ sudo pgrep -f blahblahblah | xargs ps -f -pUID PID PPID C STIME TTY TIME CMD It looks as though the process terminates too quickly. Then I tried using ps and grep, but that didn't work either (i.e. there were no results): $ sudo ps -e -f | grep [a]sdf$ sudo ps -e -o command | grep asdfgrep asdf I also noticed that if I reran the command quickly enough then it seemed as though the PID was steadily climbing: $ for i in $(seq 1 10); do sudo pgrep -f $(pwgen 4 1); done14072140751407814081140841408714090140931409614099$ for i in $(seq 1 10); do sudo pgrep -f blahblahblah; done13071130731307513077130791308113083130851308713089 As a sanity check I tried using find and grep to search the proc directory: $ sudo find /proc/ -regex '/proc/[0-9]+/cmdline' -exec grep adsfasdf {} \;Binary file /proc/14113/cmdline matchesBinary file /proc/14114/cmdline matches$ sudo find /proc/ -regex '/proc/[0-9]+/cmdline' -exec grep adsfasdf {} \;Binary file /proc/14735/cmdline matchesBinary file /proc/14736/cmdline matches Again it seems that the PID is climbing and that the cmdline matches arbitrary strings. I tried this out on both CentOS 6.7 and on Ubuntu 12.04 with the same results. When I tried similar experiments on my Mac the tests came back negative - no mystery processes. What's going on here? | It's showing the sudo process i.e. the PID is the PID of the sudo process that is the parent of the pgrep command you are running by putting after sudo . As you are searching in the whole command line (by -f ), the sudo process pops up in the output because the string (pattern) is also a part of the original sudo command. By using the -l and -a (if your pgrep supports), you would get a better idea. Test: % sudo pgrep -af "asdf"4560 sudo pgrep -af asdf% sudo pgrep -lf "asdf"4562 sudo% pgrep -af "asdf" % | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99163/"
]
} |
311,468 | I need to log into multiple servers at work to get my work done. I'm getting tired of typing the FQDN of a server to access. I'm logging in/out via ssh on our own private network. I'm 99% sure it's on our own private network b/c all the servers have ip addr 10.x.y.z. Is there a way to ssh into servers with just the hostname and not the domain name? We have servers in multiple countries. The way our servers are named is very long. It is named as follows: hostname.country.domainname.com I am getting carpal tunnel typing in ssh [email protected] ... every time I access one of our servers. If I'm in the US and I try to access another host that's in the US, I can just type ssh me@hostname2 and I connect fine. However, if I'm in the US and try to connect to a server in England, I can't type ssh [email protected] and connect to hostname3 . The workaround I did was setup an alias in my ~/.ssh/config file for some of the servers. However, I don't think it's feasible to add 1000+ servers into that file. I've already added 20+ servers and my co-workers think I'm crazy, although I think they are crazy for typing the FQDN when sshing around. Is there an easy way for us to set up something so that we don't have to type in our domainname.com each time? | You can wildcard and use %h in your config eg Host *.eng Hostname %h.domainname.com Now when you do ssh foo.eng it will try to connect to foo.eng.domainname.com . You can add other options to this config as well; eg forcing the username Host *.eng Hostname %h.domainname.com User me Now when you do ssh foo.eng it will try to connect to foo.eng.domainname.com as the user me . % ssh foo.engssh: Could not resolve hostname foo.eng.domainname.com: Name or service not known (well, obviously I get an error before it's not a valid hostname for me!) So now you only need one rule per country. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/311468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30038/"
]
} |
311,524 | I want to SSH into a remote machine but instead of typing the password, I want to redirect it from another file. So I have a file password.txt that stores the password. File.sh is my bash file. In File.sh #!/bin/bashssh -T [email protected] While executing the file, I did: ./File.sh < password.txt But I was asked to enter password anyway. How do I get the password to be input from the file? | SSH with 'password in a file' is commonly used as public key authentication. Create a key pair using ssh-keygen , upload the public key to the other host: scp ./.ssh/id_rsa.pub [email protected]:~/ and place it as ~/.ssh/authorized_keys : ssh [email protected] ~/.sshmv ~/id_rsa.pub ~/.ssh/authorized_keys or, if an authorized_keys file already exists: cat ~/id_rsa.pub >> ~/.ssh/authorized_keys set the appropriate permissions (600 for the file, 700 for the directory): chmod 600 ~/.ssh/authorized_keyschmod 700 ~/.ssh and start a new ssh session. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/311524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185275/"
]
} |
311,536 | cp --reflink=auto shows following output for MacOS: cp: illegal option -- - Is copy-on-write or deduplication supported for HFS? How can I COW huge files with HFS? | Apple's new APFS filesystem supports copy-on-write; CoW is automatically enabled in Finder copy operations where available, and when using cp -c on the command line. Unfortunately, cp -c is equivalent to cp --reflink=always (not auto ), and will fail when copy-on-write is not possible with cp: somefile: clonefile failed: Operation not supported I'm not aware of a way to get auto behavior. You could make a shell script or function with automatic fallback a la cpclone() { cp -c "$@" || cp "$@"; } but it'll be difficult to make it entirely reliable for all edge cases. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22655/"
]
} |
311,616 | My file looks like this: AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F FAA 112 C 2 .. BB 3 ... BBB 0 D F 0 D FAA 120 D 2 .. FF 3 ... FFF 3 D F 3 D D I would like to delete lines containing a value =< 10 in any of the columns. I am aware of the use of sed and awk '$3 !=< 10' but this would only delete lines at the third field. Is there a way to tell qwk to consider all columns? | perl to the rescue $ cat ip.txt AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F FAA 112 C 2 .. BB 3 ... BBB 0 D F 0 D FAA 120 D 2 .. FF 3 ... FFF 3 D F 3 D D$ perl -ae 'print if !(grep { $_ <= 10 && /^\d+$/ } @F)' ip.txt AA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F -a split input line on space and save to @F array grep { $_ <= 10 && /^\d+$/ } @F get all elements of @F array which are made of only digits and whose value is <= 10 then print lines if grep returns 0 . The () around grep means it will return count of matches rather than elements themselves Let's test another condition: $ perl -ae 'print if !(grep { $_ < 10 && /^\d+$/ } @F)' ip.txt AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F Certain conditions, like in this question, can be solved with grep too (which probably is faster than perl solution) $ grep -vw '[0-9]\|10' ip.txt AA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F$ grep -vw '[0-9]' ip.txt AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F -v lines other than matching pattern -w match whole words only | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189964/"
]
} |
311,621 | So, in theory I can log in as an admin user and have someone else log in as the same user or a different user over an SSH session and have both users attached to the same screen. In our case we have a rather complex environment setup and sometimes need remote assistance with it. How is this done? Do we both need to be the same user? | perl to the rescue $ cat ip.txt AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F FAA 112 C 2 .. BB 3 ... BBB 0 D F 0 D FAA 120 D 2 .. FF 3 ... FFF 3 D F 3 D D$ perl -ae 'print if !(grep { $_ <= 10 && /^\d+$/ } @F)' ip.txt AA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F -a split input line on space and save to @F array grep { $_ <= 10 && /^\d+$/ } @F get all elements of @F array which are made of only digits and whose value is <= 10 then print lines if grep returns 0 . The () around grep means it will return count of matches rather than elements themselves Let's test another condition: $ perl -ae 'print if !(grep { $_ < 10 && /^\d+$/ } @F)' ip.txt AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F Certain conditions, like in this question, can be solved with grep too (which probably is faster than perl solution) $ grep -vw '[0-9]\|10' ip.txt AA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F$ grep -vw '[0-9]' ip.txt AA 110 B 10 .. BB 15 ... BBB 20 D F 25 D DAA 111 B 50 .. BB 55 ... BBB 30 F F 45 F F -v lines other than matching pattern -w match whole words only | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110092/"
]
} |
311,733 | I have a list of strings with the following format. What commands could I use to extract the respective sections. I was thinking of using grep to extract the keywords ie: ADD, username(atra522) etc.. How should I approach this problem? cop1010 ADD atra522,Allison Track,CT,canada I know how to use cut or awk to get all the fields by looking for the commas, but I don't know how to make it work with the first field "cop1010 ADD atra522". | You said bash , so let's do it all with shell builtins: $ inp="cop1010 ADD atra522,Allison Track,CT,canada"$ IFS=, fields=($inp)$ echo ${fields[0]}cop1010 ADD atra522$ echo ${fields[1]}Allison Track$ echo ${fields[2]}CT$ echo ${fields[3]}canada$ IFS=\ cmd=(${fields[0]})$ echo ${cmd[0]}cop1010$ echo ${cmd[1]}ADD$ echo ${cmd[2]}atra522$ You can set them all as variables (instead of echoing them), and never need to spawn a subshell to run awk , cut , or any other tool. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189455/"
]
} |
311,750 | I downloaded the ISO for Linux Ubuntu onto my flash drive and used this tutorial with universal USB installer to make Linux bootable from the flash drive. I used windows disk management to allocate plenty of space for a my Linux environment next to my windows one. I rebooted and hit F12 to get into the one time boot menu and found the USB listed there so I started Linux and clicked the install Linux icon. It gave me the language options and then the option to connect to the internet and then I got a screen that said Unable to install Linux on this machine. It requires 7 gigs and the machine is only 4. (paraphrasing) I opened gparted in the terminal to see if I could find the host machine's SSD but the only thing that came up was the USB drive. How can I get around this and install Ubuntu alongside windows? Normally that's the screen that would show up after the option to connect to WiFi... but I didn't get that. It's like it can't see the my machine. | You said bash , so let's do it all with shell builtins: $ inp="cop1010 ADD atra522,Allison Track,CT,canada"$ IFS=, fields=($inp)$ echo ${fields[0]}cop1010 ADD atra522$ echo ${fields[1]}Allison Track$ echo ${fields[2]}CT$ echo ${fields[3]}canada$ IFS=\ cmd=(${fields[0]})$ echo ${cmd[0]}cop1010$ echo ${cmd[1]}ADD$ echo ${cmd[2]}atra522$ You can set them all as variables (instead of echoing them), and never need to spawn a subshell to run awk , cut , or any other tool. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191421/"
]
} |
311,752 | Let's say I want to create an internal network with 4 subnets. There is no central router or switch. I have a "management subnet" available to link the gateways on all four subnets (192.168.0.0/24). The general diagram would look like this: 10.0.1.0/24 <-> 10.0.2.0/24 <-> 10.0.3.0/24 <-> 10.0.4.0/24 In words, I configure a single linux box on each subnet with 2 interfaces, a 10.0.x.1 and 192.168.0.x. These function as the gateway devices for each subnet. There will be multiple hosts for each 10.x/24 subnet. Other hosts will only have 1 interface available as a 10.0.x.x. I want each host to be able to ping each other host on any other subnet. My question is first: is this possible. And second, if so, I need some help configuring iptables and/or routes. I've been experimenting with this, but can only come up with a solution that allow for pings in one direction (icmp packets are only an example, I'd ultimately like full network capabilities between hosts e.g. ssh, telnet, ftp, etc). | Ok, so you have five networks 10.0.1.0/24 , 10.0.2.0/24 , 10.0.3.0/24 , 10.0.4.0/24 and 192.168.0.0/24 , and four boxes routing between them. Let's say the routing boxes have addresses 10.0.1.1/192.168.0.1 , 10.0.2.1/192.168.0.2 , 10.0.3.1/192.168.0.3 , and 10.0.4.1/192.168.0.4 . You will need to add static routes to the other 10.0.x.0/24 networks on each router box, with commands something like this (EDITED!): # on the 10.0.1.1 boxip route add 10.0.2.0/24 via 192.168.0.2ip route add 10.0.3.0/24 via 192.168.0.3ip route add 10.0.4.0/24 via 192.168.0.4 and the corresponding routes on the other router boxes. On the non-routing boxes with only one interface, set the default route to point to 10.0.x.1 . Of course you will also have to add the static addresses and netmasks on all the interfaces. Also note that linux does not function as a router by default, you will need to enable packet forwarding with: echo 1 > /proc/sys/net/ipv4/ip_forward The ip commands above do not make the settings persistent, how to do that is dependent on the distribution. As I said, I haven't tested this and may have forgotten something. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191425/"
]
} |
311,755 | I have installed Arch Linux for the first time, I have attempted to setup my UEFI boot process but must have failed somewhere, on bootup I do see the boot menu with the Arch Linux option but when I select it, I get a message /vmlinuz-linux:Not Found i.e. it can't find the kernel to boot. I've followed the instructions on https://wiki.archlinux.org/index.php/Installation_guide but must have messed up somewhere. How can I fix this? partition layout: /dev/sda1 EFI System (512M)/dev/sda2 Linux fs (244M)/dev/sda3 Linux fs (1M)/dev/sda4 Linux fs (465G) /etc/fstab : #/dev/sda4 UUID=41d8483f-0d29-4234-bf1e-3c55346b5667 / ext4 rw,realtime,data=unordered 0 1 esp was setup in /boot/ edit 1 Oh yeah I can anytime boot from my USB thumb drive for troubleshooting..., edit2 I see, my /boot/loder/entries/arch.conf looks like: title Arch Linuxlinux /vmlinuz-linuxinitrd /initramfs-linux.imgoptions root=PARTUUID=41d8483f-0d29-4234-bf1e-3c55346b5667 rw but there's no files in my / at all only the directories. Might that be the problem? | Boot from your bootable USB Arch-linux , mount all your partitions and chroot into the system. As montioned jasonwryan : You need to mount your ESP to /boot First create the efi folder: mkdir /boot/efi mount the esp partition mount /dev/sda1 /boot/efi Verify your /etc/fstab , the esp mount point need to be added to fstab . Create a new sub-directory /boot/efi/EFI/arch/ mkdir -p /boot/efi/EFI/arch/ Move /boot/vmlinuz-linux , initramfs-linux.img and initramfs-linux-fallback.img : cp /boot/vmlinuz-linux /boot/efi/EFI/arch/vmlinuz-linux.eficp /boot/initramfs-linux.img /boot/initramfs-linux-fallback.img /boot/efi/EFI/arch Run mkinitcpio -p linux then update GRUB: grub-mkconfig -o /boot/grub/grub.cfg ` | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46433/"
]
} |
311,758 | In a bash script, how can I remove a word from a string, the word would be stored in a variable. FOO="CATS DOGS FISH MICE"WORDTOREMOVE="MICE" | Try: $ printf '%s\n' "${FOO//$WORDTOREMOVE/}"CATS DOGS FISH This also work in ksh93 , mksh , zsh . POSIXLY: FOO="CATS DOGS FISH MICE"WORDTOREMOVE="MICE"remove_word() ( set -f IFS=' ' s=$1 w=$2 set -- $1 for arg do shift [ "$arg" = "$w" ] && continue set -- "$@" "$arg" done printf '%s\n' "$*")remove_word "$FOO" "$WORDTOREMOVE" It assumes your words are space delimited and has side effect that remove spaces before and after "$WORDTOREMOVE" . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/311758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
311,763 | According to Gnome / Nautilus, these files reside at: mtp://[usb:001,007] But: $ cd mtp://[usb:001,007]bash: cd: mtp://[usb:001,007]: No such file or directory And df -h doesn't list it. While lsusb suggests it's there: Bus 001 Device 008: ID 04e8:6860 Samsung Electronics Co., Ltd Galaxy (MTP) | Short answer: You can’t cd to this directory. Media Transfer Protocol (MTP) Media Transfer Protocol (MTP) uses a special API that to provide limited access to files on a device. As I understand it, it was originally designed by Microsoft for use with proprietary software compatible with its digital restriction system . The protocol became an official USB device class in 2008 and it provides a standard means of transferring media and metadata between a computer and an external device. It is not tied to DRM and the ever-innovative FOSS community developed the libmtp library to support MTP devices. The mtp://[usb:001,007] URL is a GNOME Virtual file system which uses libmtp as its backend.Since MTP abstracts away the filesystem, it’s not mounted in the same way as a regular storage device so it will not be listed by the mount or df commands. The MTP Wikipedia article has a good description of the protocol and lists the advantages to using MTP for accessing files on an external device. The comprehensive MTP article on the Arch Linux Wiki has tonnes of useful information on using MTP with GNU/Linux (most of the information is not distribution-specific). Mass Storage Class (MSC) If you want to treat the files on the Samsung device as a regular filesystem that can be mounted like any other storage device (and use cd ), you would need to configure the device to present itself as a Mass Storage Class (MSC) (aka UMS) device. Some devices can be configured to use either method. I have a Sony Android phone that allows either method but I’ve always connected to its SD card using USB Mass Storage (even though it means the Android OS has to unmount the SD card, to allow the GNU/Linux OS to mount it). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/311763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118798/"
]
} |
311,904 | In the ASCII table the 'J' character exists which has code points in different numeral systems: Oct Dec Hex Char112 74 4A J It's possible to print this char by an octal code point by printing printf '\112' or echo $'\112' . How do I print the same character by decimal and hexadecimal code point presentations? | Hex: printf '\x4a' Dec: printf "\\$(printf %o 74)" Alternative for hex :-) xxd -r <<<'0 4a' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/311904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179072/"
]
} |
312,048 | I am starting a new embedded system project, and was trying to find an answer to my question: What is the most light weight linux system that is tailored for embedded devices, I stumbled upon Archlinux and Ubuntu core (snappy), but did not get to find a clear answer to difference between two, can anyone help with this ? | There are many differences between Ubuntu and Arch Linux. With Ubuntu core, you get a ready-made distribution (based on Debian) aimed towards embedded devices. Arch Linux on the other hand "is what you make it". After installing Arch Linux you are left with a minimal GNU/Linux system (not based on any other distribution). It is then up to you to configure the system as you want it. To summarize; Ubuntu core is indeed tailored towards embedded systems, whereas with Arch Linux you will have to do the tailoring yourself. Arch Linux link: https://wiki.archlinux.org/index.php/Arch_Linux Ubuntu Core link: http://www.ubuntu.com/internet-of-things | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191654/"
]
} |
312,059 | I have a file as below. ~PAR1~This is Par1 line 1This is Par1 line 2Par Finished~PAR2~This is Par2 line 1This is Par2 line 2Par Finished If I pass PAR1 , I should get all lines between PAR1 and Par Finished line. How can I get it? I was looking into awk and sed and couldn't find any options. | If you want the header and footer line then it's pretty simple with sed eg sed -n "/^~PAR1~$/,/Par Finished/p" This is simple to use with a variable START=PAR1sed -n "/^~$START~$/,/Par Finished/p" We can also make the last line to be a variable START=PAR1END="Par Finished"sed -n "/^~$START~$/,/$END/p" The result looks like: ~PAR1~This is Par1 line 1This is Par1 line 2Par Finished Now if you don't want the start/end lines and you don't want the blank line then it's a little more complicated. There may be better ways, but this works for me: sed -n "/^~$START~$/,/$END/ { /^~$START~$/d ; /$END/d ; /^$/d ; p }" The result of this is This is Par1 line 1This is Par1 line 2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191661/"
]
} |
312,113 | I'm currently installing Bash on Ubuntu on Windows. I installed Bash and set up the user on normally. Everything worked fine, but I didn't want to keep doing sudo with every command. I uninstalled then reinstalled 'Bash on Ubuntu on Wwindows' with lxrun /install /y It saved the username, but not the previous password. I'm trying to view the current password for the user that I am using. How do I view the password to my user in Bash? | You can't actually, your password is hashed and is only a 1-way decoded. To summarize it, just imagine each time you try to login, it will do something like if hash('password') == currentHash;do grantAccess(); and each time you save a password, will do hashedPass = hash('password');writeOnShadowFile('hashedPass') This is by security standards of hashing avoid storing a real password, but instead storing the result of a function, and evaluating it on that way. Hashing functions are intended to do lot of the original value conversion with data loss, and due the data loss it will make almost impossible to know your original password. You can easily change your password with usermod -p <password> <user> , or just passwd <user> . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191691/"
]
} |
312,146 | Executables are stored in /usr/libexec on Unix-like systems. The FHS says (section 4.7. /usr/libexec : Binaries run by other programs (optional)" : /usr/libexec includes internal binaries that are not intended to be executed directly by users or shell scripts. Applications may use a single subdirectory under /usr/libexec . On macOS, rootless-init a program called by launchd immediately after booting, is stored in /usr/libexec . Why would it be stored in /usr/libexec when it is a standalone executable that could be stored in /usr/bin or /usr/sbin ? init and other programs not called directly by shell scripts are also stored in folders like [/usr]/{bin,sbin} . | It's a question of supportability - platform providers have learned from years of experience that if you put binaries in PATH by default, people will come to depend on them being there, and will come to depend on the specific arguments and options they support. By contrast, if something is put in /usr/libexec/ it's a clear indication that it's considered an internal implementation detail, and calling it directly as an end user isn't officially supported. You may still decide to access those binaries directly anyway, you just won't get any support or sympathy from the platform provider if a future upgrade breaks the private interfaces you're using. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/312146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89807/"
]
} |
312,197 | Looking into hard disk encryption. the go to solution seems to be dm-crypt with LUKS using a password. I work with multiple independent hard disks mounted into a disk pool for reading. In this case, I have to type a password multiple times. Is there a way for me to encrypt the hard disks with a key file, maybe put it on a USB drive and just plug it in when necessary?? | One of the best ways to do this is to use a smart card with a crypto key on it to unlock the keys for your encrypted block devices. You will only need to enter the passphrase (called "PIN" by the tools but it's really a passphrase) once, after which it will be cached. This has the added advantage of protecting the encrypted data with something-you-have (the smart card itself, out of which the private key cannot be extracted) and something-you-know (the passphrase). Format your /etc/crypttab like this: mapper-name /dev/disk/raw-device /var/lib/filename-containing-encrypted-key \ luks,keyscript=/lib/cryptsetup/scripts/decrypt_opensc In Debian and derivatives, the initramfs-tools will notice the keyscript and copy all of the necessary tools and daemons for accessing the smart card to the initramfs automatically. Information on setting up the smart card and creating (and encrypting) the keys is found in /usr/share/doc/cryptsetup/README.opensc.gz . You can use a Yubikey 4 or Yubikey NEO among others for this purpose. Implementation notes : This feature has rough edges and apparently doesn't work out of the box so YMMV. The last time I successfully achieved it, I had to add the following hacks: Disable systemd because it disastrously tries to take over the whole process of setting up encrypted devices from /etc/crypttab but it knows nothing about keyscript which leads to a big FAIL. Luckily, in Debian, you can still opt out of systemd . Install this fixer-upper script as /etc/initramfs-tools/hooks/yubipin because the built-in feature didn't install quite enough support to get the Yubikey to be usable from the initramfs. You may need to adjust this. #!/bin/shPREREQ=cryptrootprereqs(){ echo "$PREREQ"}case $1 inprereqs) prereqs exit 0 ;;esac# /scripts/local-top/cryptopensc calls pcscd with the wrong pathln -s ../usr/sbin/pcscd ${DESTDIR}/sbin/pcscdmkdir -p "${DESTDIR}/usr/lib/x86_64-linux-gnu"# opensc-tool wants this dynamically, copy_exec doesn't know thatcp -pL /usr/lib/x86_64-linux-gnu/libpcsclite.so.1 "${DESTDIR}/usr/lib/x86_64-linux-gnu/libpcsclite.so.1"mkdir -p "${DESTDIR}/lib/x86_64-linux-gnu"# without this, pcscd aborts with a pthread_cancel errorcp -pL /lib/x86_64-linux-gnu/libgcc_s.so.1 "${DESTDIR}/lib/x86_64-linux-gnu/libgcc_s.so.1"# this gets copied as a dangling symlink, fix itrm "${DESTDIR}/usr/lib/pcsc/drivers/ifd-ccid.bundle/Contents/Info.plist"cp -pL /usr/lib/pcsc/drivers/ifd-ccid.bundle/Contents/Info.plist "${DESTDIR}/usr/lib/pcsc/drivers/ifd-ccid.bundle/Contents/Info.plist"# pcscd needs this to open the reader once it has found itcp -pL /lib/x86_64-linux-gnu/libusb-1.0.so.0 "${DESTDIR}/lib/x86_64-linux-gnu/libusb-1.0.so.0" Install another script as /etc/initramfs-tools/scripts/local-bottom/killpcscd to clean up: #!/bin/shset -ePREREQ=cryptopenscprereqs(){ echo "$PREREQ"}case $1 in prereqs) prereqs exit 0 ;;esac# because cryptopensc does not do it properlykillall pcscd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42502/"
]
} |
312,208 | I'd like to run multiple MySQL queries and save them to specified files. Queries are saved in a bash script, queries.sh , e.g.: cat queries.shmysql -u <user> -p <DBname> -e "select Col1 from Table1 where Condition;" > /home/<user>/queries/Col1Table1Condmysql -u <user> -p <DBname> -e "select Col5 from Table2 where Condition;" > /home/<user>/queries/Col5Table2Cond Executing the script is not enough, since <user> has to input his password at each query and this is compromising the script flow. | Try this ;) user="jonnDoe"dbname="customers"password="VerySecret"mysql -u $user -D $dbname -p $password -e "select Col1 from Table1 where Condition;" > /home/$user/queries/Col1Table1Condmysql -u $user -D $dbname -p $password -e "select Col5 from Table2 where Condition;" > /home/$user/queries/Col5Table2Cond A better practice is to configure mysql /etc/mysql/my.cnf file with the password inside, this way, no need to enter it each times : [client]password = your_password In this later case, remove all password related things from previous script | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
312,220 | I need to use an old backup, but feeling ashamed I cannot remember which utility I used to create it. Please is anybody able to identify this backup format? This is list of all files in my backup folder: 08.04.2014 17:44 4 Cisty-system-08-04-2014.backup08.04.2014 17:44 32 768 Cisty-system-08-04-2014.mbr08.04.2014 17:44 251 Cisty-system-08-04-2014.sfdisk08.04.2014 17:44 14 Cisty-system-08-04-2014.size08.04.2014 17:47 2 147 483 648 Cisty-system-08-04-2014_part4.00008.04.2014 17:49 2 147 483 648 Cisty-system-08-04-2014_part4.00108.04.2014 17:51 2 147 483 648 Cisty-system-08-04-2014_part4.00208.04.2014 17:54 2 147 483 648 Cisty-system-08-04-2014_part4.00308.04.2014 17:56 2 147 483 648 Cisty-system-08-04-2014_part4.00408.04.2014 17:59 2 147 483 648 Cisty-system-08-04-2014_part4.00508.04.2014 18:01 2 147 483 648 Cisty-system-08-04-2014_part4.00608.04.2014 18:03 2 147 483 648 Cisty-system-08-04-2014_part4.00708.04.2014 18:06 1 822 069 003 Cisty-system-08-04-2014_part4.008 The .backup file contains only sda4 which is probably partition location, the file .sfdisk contains (surprisingly) an output of sfdisk utility. Thank you for your suggestions :) Klasyc | Try this ;) user="jonnDoe"dbname="customers"password="VerySecret"mysql -u $user -D $dbname -p $password -e "select Col1 from Table1 where Condition;" > /home/$user/queries/Col1Table1Condmysql -u $user -D $dbname -p $password -e "select Col5 from Table2 where Condition;" > /home/$user/queries/Col5Table2Cond A better practice is to configure mysql /etc/mysql/my.cnf file with the password inside, this way, no need to enter it each times : [client]password = your_password In this later case, remove all password related things from previous script | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.