source_id | question | response | metadata
---|---|---|---|
203,846 | Having migrated to Linux from Windows, I would like to find an alternative software to Winmerge or rather learn command line tools to compare and sync two folders on Linux. I would be grateful if you could tell me how to do the following tasks on the command line... (I have studied diff and rsync, but I still need some help.) We have two folders: "/home/user/A" and "/home/user/B" Folder A is the place where regular files and folders are saved and folder B is a backup folder that serves as a complete mirror of folder A. (Nothing is directly saved or modified by the user in folder B.) My questions are: How to list files that exist only in folder B? (E.g. the ones deleted from folder A since the last synchronization.) How to copy files that exist only in folder B back into folder A? How to list files that exist in both folders but have different timestamps or sizes? (The ones that have been modified in folder A since the last synchronization. I would like to avoid using checksums, because there are tens of thousands of files and it'd make the process too slow.) How to make an exact copy of folder A into folder B? I mean, copy everything from folder A into folder B that exists only in folder A and delete everything from folder B that exists only in folder B, but without touching the files that are the same in both folders. | This puts folder A into folder B: rsync -avu --delete "/home/user/A" "/home/user/B" If you want the contents of folders A and B to be the same, put /home/user/A/ (with the slash) as the source. This copies not folder A itself but all of its contents, and puts them into folder B. Like this: rsync -avu --delete "/home/user/A/" "/home/user/B" -a Do the sync preserving all filesystem attributes -v run verbosely -u only copy files with a newer modification time (or size difference if the times are equal) --delete delete the files in the target folder that do not exist in the source Manpage: https://download.samba.org/pub/rsync/rsync.html | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/203846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
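Building on the answer above, the asker's remaining sub-questions map onto standard rsync flags (-n/--dry-run, --ignore-existing and --delete are all documented rsync options; the paths are the ones from the question). A sketch, untested on the asker's data:

```sh
# 1. List files that exist only in B: a dry run of the A->B sync with
#    --delete prints a "deleting ..." line for every such file.
rsync -avn --delete "/home/user/A/" "/home/user/B/"

# 2. Copy files that exist only in B back into A; files already present
#    in A are left untouched.
rsync -av --ignore-existing "/home/user/B/" "/home/user/A/"

# 3. List files present in both folders but differing in size or mtime:
#    this is rsync's default quick check, so no checksums are computed.
rsync -avn "/home/user/A/" "/home/user/B/"
```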
203,948 | The proc file system allows the kernel to communicate information about each running process on a Linux system. Why is proc called a file system? It’s not a real file system like ext4 . It’s just a collection of files containing information about the running processes. | /proc is a filesystem because user processes can navigate through it with familiar system calls and library calls, like opendir() , readdir() , chdir() and getcwd() . Even open() , read() and close() work on a lot of the "files" that appear in /proc . For most intents and almost all purposes, /proc is a filesystem, despite the fact that its files don’t occupy blocks on some disk. I suppose we should all clarify what definition of the term “file system” we are currently using. In the context of ext4, when we write “file system”, we’re probably talking about the combination of a layout of disk blocks, specification of metadata information about the disk blocks that also resides somewhere on disk, and the code that deals with that on-disk layout. In the context of /usr , /tmp , /var/run and so on, we’re writing about an understanding or a shared conceptualization of how to name some things. Those two uses of the term “file system” are indeed quite different. /proc is really the second kind of “file system”, as you’ve noted. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/203948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65878/"
]
} |
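A quick illustration of the point above: ordinary file tools work on /proc even though no disk blocks back it (output varies per system):

```sh
ls -l /proc/self/        # readdir() on the calling process's entry
cat /proc/self/status    # open()/read()/close() on a virtual file
df -k /proc              # reports a size of 0: nothing on disk
```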
203,980 | I have a 900GB ext4 partition on a (magnetic) hard drive that has no defects and no bad sectors. The partition is completely empty except for an empty lost+found directory. The partition was formatted using default parameters except that I set the number of reserved filesystem blocks to 1%. I downloaded the ~900MB file xubuntu-15.04-desktop-amd64.iso to the partition's mount point directory using wget . When the download was finished, I found that the file was split into four fragments:

filefrag -v /media/emma/red/xubuntu-15.04-desktop-amd64.iso
Filesystem type is: ef53
File size of /media/emma/red/xubuntu-15.04-desktop-amd64.iso is 1009778688 (246528 blocks of 4096 bytes)
 ext: logical_offset: physical_offset: length: expected: flags:
 0: 0.. 32767: 34816.. 67583: 32768:
 1: 32768.. 63487: 67584.. 98303: 30720:
 2: 63488.. 96255: 100352.. 133119: 32768: 98304:
 3: 96256.. 126975: 133120.. 163839: 30720:
 4: 126976.. 159743: 165888.. 198655: 32768: 163840:
 5: 159744.. 190463: 198656.. 229375: 30720:
 6: 190464.. 223231: 231424.. 264191: 32768: 229376:
 7: 223232.. 246527: 264192.. 287487: 23296: eof
/media/emma/red/xubuntu-15.04-desktop-amd64.iso: 4 extents found

Thinking this might be related to wget somehow, I removed the ISO file from the partition, making it empty again, then I copied the ~700MB file v1.mp4 to the partition using cp . This file was fragmented too. It was split into three fragments:

filefrag -v /media/emma/red/v1.mp4
Filesystem type is: ef53
File size of /media/emma/red/v1.mp4 is 737904458 (180153 blocks of 4096 bytes)
 ext: logical_offset: physical_offset: length: expected: flags:
 0: 0.. 32767: 34816.. 67583: 32768:
 1: 32768.. 63487: 67584.. 98303: 30720:
 2: 63488.. 96255: 100352.. 133119: 32768: 98304:
 3: 96256.. 126975: 133120.. 163839: 30720:
 4: 126976.. 159743: 165888.. 198655: 32768: 163840:
 5: 159744.. 180152: 198656.. 219064: 20409: eof
/media/emma/red/v1.mp4: 3 extents found

Why is this happening? And is there a way to prevent it from happening? I thought ext4 was meant to be resistant to fragmentation. Instead I find that it immediately fragments a solitary file when all the rest of the volume is unused. This seems to be worse than both FAT32 and NTFS . | 3 or 4 fragments in a 900 MB file is very good. Fragmentation becomes a problem when a file of that size has more like 100+ fragments. It isn't uncommon for FAT or NTFS to fragment such a file into several hundred pieces. You generally won't see better than that, at least on older ext4 filesystems, because the maximum size of a block group is 128 MB, and so every 128 MB the contiguous space is broken by a few blocks for the allocation bitmaps and inode tables of the next block group. A more recent ext4 feature called flex_bg allows packing a number of (typically 16) block groups' worth of these tables together, leaving longer runs of allocatable blocks, but depending on your distribution and what version of e2fsprogs was used to format it, this option may not have been used. You can use tune2fs -l to check the features enabled when your filesystem was formatted. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/203980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
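Following the tune2fs -l suggestion at the end of the answer, a concrete check could look like this (the device name is a placeholder; substitute your own partition):

```sh
# flex_bg shows up in the feature list if it was enabled at mkfs time:
sudo tune2fs -l /dev/sdb1 | grep -i 'filesystem features'
```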
204,037 | I have a python program which I need to run every minute from 11PM (EDT) to 06AM (EDT). How can I schedule a cron job to do this? * 23-6 * * 1-5 python my_program.py Will this work? Or do I have to write 2 separate cron jobs for this? | Ranges that wrap around like that are ambiguous. Specify the hours as 23,0-6 instead and avoid future problems. Cron checks the contents of the crontab files every minute, and if the time matches the conditions on a line, it runs the script indicated there. For this case, this is the set of conditions that must be met: from 11 PM to 11:59 PM and from 0:00 to 6:59 AM, and from Monday to Friday. So it will run every minute while that whole set of conditions is true. Don't expect it to run outside of the indicated range of hours and days, for example on Saturday . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111661/"
]
} |
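Putting the advice together, the corrected crontab entry would be the following (the script path is a placeholder; using a full path is usually safer in cron):

```
* 23,0-6 * * 1-5 python /path/to/my_program.py
```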
204,045 | Here's a subset of the file names in my file:

profile_10_1_1
profile_10_1_2
profile_1_1_1

I'm trying to sort them numerically in ascending order, that is starting from 1 onwards. I used the following command sort -n filename and also tried this: sort -nk filename But the ones with 10 will always be at the top of the list. How do I write a command to get this desired output:

profile_1_1_1
profile_1_1_2
....
profile_9_1_1
....
profile_10_1_1

| FreeBSD and GNU sort have a -V option for that. sort -V < filename GNU ls has a -v option. So if those files do exist, you could do: xargs -d '\n' < filename ls -dv -- zsh has parameter expansion flags to sort arrays numerically: printf '%s\n' ${(fno)"$(<filename)"} Otherwise, portably, you'd have to do it like: sort -t_ -k1,1 -k2,2n -k3,3n -k4,4n -k5,5n filename | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115801/"
]
} |
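For illustration, this is roughly what the -V (version sort) option produces on the sample names:

```sh
$ printf '%s\n' profile_10_1_1 profile_10_1_2 profile_1_1_1 | sort -V
profile_1_1_1
profile_10_1_1
profile_10_1_2
```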
204,052 | I need to do a RegExp pattern test on a certain bit of user input. This is the pattern I need to test the value against. ^([a-z]{2,3})\-([a-z][a-z]*[0-9]*)\-(\d+)$ An example match would be: na-examplename-01 The shell I have available is BusyBox a.k.a ash, so I don't have full bash functionality. What are my options for RegExp pattern tests when using BusyBox? Note: I cannot use expr, as it is not available in my install. I have the following functions available: arp, ash, awk, basename, bash, bunzip2, bzcat, bzip2, cat, chmod, chown, chvt, clear, cp, crond, crontab, cryptpw, cut, date, dd, deallocvt, df, dirname, dmesg, dnsdomainname, dos2unix, du, egrep, eject, env, fbset, fgconsole, fgrep, find, findfs, flock, free, fstrim, ftpget, ftpput, fuser, getopt, grep, groups, gunzip, gzip, head, hostname, httpd, hwclock, id, ifconfig, ifdown, ifplugd, ifup, install, ionice, iostat, ip, kill, killall, killall5, less, ln, loadkmap, logger, login, ls, lsof, md5sum, mkdir, mkdosfs, mkfifo, mkfs.vfat, mknod, mkpasswd, mkswap, mktemp, more, mount, mountpoint, mpstat, mv, nbd-client, nc, netstat, nice, nohup, nslookup, ntpd, od, pgrep, pidof, ping, ping6, pmap, printenv, ps, pstree, pwd, pwdx, rdate, readlink, realpath, renice, reset, rm, rmdir, route, sed, seq, setconsole, setserial, sh, sleep, smemcap, sort, stat, su, switch_root, sync, sysctl, tail, tar, tee, telnet, time, top, touch, tr, traceroute, traceroute6, true, ttysize, umount, uname, uniq, unix2dos, unxz, uptime, usleep, vconfig, vi, watch, wc, wget, which, whoami, whois, xargs, xz, xzcat, zcat | You have three tools that can do regular expressions. These all assume that $in contains na-examplename-01 . grep

$ printf "%s\n" "$in" | ./grep -E '^[a-z]{2,3}-[a-z]+[0-9]*-[0-9]+$'
na-examplename-01

sed

$ printf "%s\n" "$in" | ./sed -n '/^[a-z]\{2,3\}-[a-z]\+[0-9]*-[0-9]\+$/p'
na-examplename-01

awk

$ printf "%s\n" "$in" | ./awk '/^[a-z]{2,3}-[a-z]+[0-9]*-[0-9]+$/'
na-examplename-01

Note that those match on each line inside $in as opposed to the content of $in as a whole. For instance, they would match on the second and third line of a $in defined as

in='whatever
xx-a-1
yy-b-2'

As Stéphane pointed out in his answer, it's a good idea to prepend these commands with LC_ALL=C to ensure that your locale does not confuse the character ranges. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88576/"
]
} |
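For a pass/fail test rather than printed matches, the exit status of grep's -q flag (also available in BusyBox grep) can drive a condition. A sketch:

```sh
if printf '%s\n' "$in" | grep -Eq '^[a-z]{2,3}-[a-z]+[0-9]*-[0-9]+$'; then
    echo "valid"
else
    echo "invalid"
fi
```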
204,068 | What is the difference between directory structure and file system ? Unix/Linux directories and file systems look as follows. The following two we obviously know are directories:

/home/abc/xyzdir1 -- is a directory
/home/abc/xyzdir2 -- is a directory

The following three samples are said to be file systems:

/proc -- is a file system
/ -- is a file system
/bin -- is a file system

How can I identify which one is a file system and which is a directory from the above snippets? | People don't use the term file system too carefully. In your examples, I would say that / , /bin and /proc are file systems because an entire partition (like /dev/sdb1 ) is mounted on those directories. My Arch linux system doesn't have /bin as a file system so this example isn't perfect but...

% ls -lid /proc /home /boot /
2 drwxr-xr-x 17 root root 4096 Feb 24 12:12 //
2 drwxr-xr-x 4 root root 4096 May 16 14:29 /boot/
2 drwxr-xr-x 5 root root 4096 Mar 14 18:11 /home/
1 dr-xr-xr-x 116 root root 0 May 16 17:18 /proc/

Inode number 2 is traditionally the "root" inode of an entire on-disk file system (which is the other usage of the phrase). / , /boot and /home all have inode number 2, while /proc , which is presented entirely by the kernel and does not have an on-disk presence, has inode 1. Those inode numbers indicate that a whole on-disk file system, or a virtual file system, is mounted using that name. The sentence " /home/abc/xyzdir1 is a directory" basically means that no on-disk file system is mounted using that name. If you do the same ls -lid command on a directory you get something like this:

% ls -lid /home/bediger/src
3670039 drwxr-xr-x 29 bediger bediger 4096 May 17 19:57 /home/bediger/src/

Inode number 3670039 is just whatever inode got allocated in the on-disk file system mounted (on my machine) at /home . You could also find file systems by invoking the mount command. It lists all mounted file systems and where they are mounted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102396/"
]
} |
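Besides reading the output of mount , the mountpoint utility gives a direct per-path answer:

```sh
$ mountpoint /proc
/proc is a mountpoint
$ mountpoint /home/bediger/src
/home/bediger/src is not a mountpoint
```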
204,069 | So I want to generate all possible combinations of lower and upper case characters and numbers that can make up a 5 character string. Possibilities: a..z, A..Z and 0..9. Is there any elegant way of doing this in bash at all? | In bash , you could try:

printf "%s\n" {{a..z},{A..Z},{0..9}}{{a..z},{A..Z},{0..9}}{{a..z},{A..Z},{0..9}}{{a..z},{A..Z},{0..9}}{{a..z},{A..Z},{0..9}}

but that would take forever and use up all your memory. Best would be to use another tool like perl :

perl -le '@c = ("A".."Z","a".."z",0..9); for $a (@c){for $b(@c){for $c(@c){for $d(@c){for $e(@c){ print "$a$b$c$d$e"}}}}}'

Beware that's 6 x 62^5 bytes, so 5,496,796,992. You can do that same loop in bash , but bash being the slowest shell in the west, that's going to take hours:

export LC_ALL=C # seems to improve performance by about 10%
shopt -s xpg_echo # 2% gain (against my expectations)
set {a..z} {A..Z} {0..9}
for a do for b do for c do for d do for e do
echo "$a$b$c$d$e"
done; done; done; done; done

(on my system, that outputs at 700 kiB/s as opposed to 20 MiB/s with the perl equivalent). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115804/"
]
} |
204,139 | I'd like to search for text that may be split over several lines in a file. A grep that would ignore line breaks and return the matching span of lines. E.g. I would be searching for is an example file , and expect it to be found in the following file:

This is
an example
file.

Not to depend on leading or trailing spaces, entirely ignoring all forms of white space might be best (ideally, treating any sequence of white space as a single space). One non-ideal solution is tr '\n' ' ' | grep , which discriminates between matches and non-matches, but doesn't show the match, nor deal well with big files. | GNU grep can do it:

grep -z 'is\san\sexample\sfile.' file

To fulfill some points which arose in the comments, there are some modifications to the script:

grep -oz '^[^\n]*\bis\s*an\s*example\s*file\.[^\n]*' file

Regarding huge files, I can't say at what point memory becomes a limitation, but in case of problems you are free to use sed :

sed '/\bis\b/{
 :1
 N
 /file\.\|\(\n.*\)\{3\}/!b1
}
/\<is\s*an\s*example\s*file\./p
D' file

which keeps no more than 4 lines in memory at a time (because there are 4 words in the pattern; hence the \(\n.*\)\{3\} part). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24170/"
]
} |
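A small demonstration of the grep -z approach on the question's sample file. Since -z makes the whole file one NUL-terminated record, \s can match the newlines inside it; tr is only used to make the NUL-terminated output printable:

```sh
$ printf 'This is\nan example\nfile.\n' > file
$ grep -oz '^[^\n]*\bis\s*an\s*example\s*file\.[^\n]*' file | tr '\0' '\n'
This is
an example
file.
```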
204,159 | I have an input stream containing strings representing file types. I want to print all file types that are not text or are PostScript (PostScript is a text file type). I tried the following sed expression: sed -n '/PostScript/pb; /text/!p' However, this returns an error: sed: -e expression #1, char 14: extra characters after command This is confusing to me because I thought it was acceptable to specify multiple commands (e.g. bp ) after a pattern. I can get the behavior I want using the following expression: sed -n '/PostScript/p; /PostScript/b; /text/!p' How can I get the behavior I want without duplicating the /PostScript/ pattern in my expression? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34505/"
]
} |
204,161 | I have a list of my customers emails and want to remove some that end in .br for example. I would normally do the following command: sed -i '/.br/d' customers.csv But that would also delete a customers email that was something like [email protected] . Example of a customer detail is: "Phone Number","[email protected]","NAME" How would I delete only customers emails that end in .br ? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102085/"
]
} |
204,166 | Below is a part of the /etc/init.d script that controls a daemon. The full init script is available at http://pastebin.com/02G5tpgH

case "$1" in
start)
 printf "%-50s" "Starting $DAEMON_NAME..."
 cd $DIR
 [ -d $LOGPATH ] || mkdir $LOGPATH
 [ -f $LOGFILE ] || su $DAEMON_USER -c 'touch $LOGFILE'
 PID=`$PYTHON $DAEMON $DAEMON_OPTS > $LOGFILE 2>&1 & echo $!`
 #echo "Saving PID" $PID " to " $PIDFILE
 if [ -z $PID ]; then
 printf "%s\n" "Fail"
 else
 echo $PID > $PIDFILE
 printf "%s\n" "Ok"
 fi
 ;;
status)
 printf "%-50s" "Checking $DAEMON_NAME..."
 if [ -f $PIDFILE ]; then
 PID=`cat $PIDFILE`
 if [ -z "`ps axf | grep ${PID} | grep -v grep`" ]; then
 printf "%s\n" "Process dead but pidfile exists"
 else
 echo "Running"
 fi
 else
 printf "%s\n" "Service not running"
 fi
 ;;
stop)
 printf "%-50s" "Stopping $DAEMONNAME"
 PID=`cat $PIDFILE`
 cd $DIR
 if [ -f $PIDFILE ]; then
 kill -HUP $PID
 printf "%s\n" "Ok"
 rm -f $PIDFILE
 else
 printf "%s\n" "pidfile not found"
 fi
 ;;
restart)
 $0 stop
 $0 start
 ;;
*)
 echo "Usage: $0 {status|start|stop|restart}"
 exit 1
esac

I use capistrano2 to deploy/update this application. So prior to the deploy, I have a task to stop the application/service and then another task to start the service after the deploy task. The service is never started successfully in this process via the capistrano task. It throws the error Process dead but pidfile exists . Manually stopping and starting cannot replicate this issue. So it looks like some kind of daemon issue, where the service will not start when called via script. EDIT: As per the evidence so far, it looks like it's failing at this part of the script:

case "$1" in
start)
 printf "%-50s" "Starting $DAEMON_NAME..."
 cd $DIR
 [ -d $LOGPATH ] || mkdir $LOGPATH
 [ -f $LOGFILE ] || su $DAEMON_USER -c 'touch $LOGFILE'
 PID=`$PYTHON $DAEMON $DAEMON_OPTS > $LOGFILE 2>&1 & echo $!`
 if [ -z $PID ]; then
 printf "%s\n" "Fail"
 else
 echo $PID > $PIDFILE
 printf "%s\n" "Ok"
 fi
 ;;

| | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108305/"
]
} |
204,257 | iOS Simulator for Linux Mint OS I need to have an iOS simulator for my Linux Mint OS to check my website's responsiveness on various iOS devices. Is this possible? As far as I know this feature is available on almost all Mac systems, but I cannot afford a Mac. | Two solutions: OSX Virtual Machine : Run a VMware OSX image inside Linux and install XCode . You will have the most accurate iOS emulator where you can choose iPad 2, iPad, iPhone 5, iPhone 6, 6+, rotation of screen, etc. Google Chrome : Load your webpage, right click and click on Inspect Element . In the new window called Developer Tools, click on the mobile phone icon and you get a very good emulator of iOS / Android and Windows Phone devices (mobile and tablet). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115914/"
]
} |
204,269 | I have written the following code to search for all the file names in the current working directory that contain the letter 'f'.

for i in *
do
 echo $i
 if [ $i = "*f*" ]
 then
 echo "no"
 else
 echo "yes"
 fi
done

This prints "yes" for every file present in that directory irrespective of whether it contains 'f' or not. Please help. | [ $i = "*f*" ] splits the file name stored in the variable i into separate words at spaces, interprets each word as a wildcard pattern and expands it if it matches, and then parses the resulting list of words as a condition for [ … ] . To avoid this rigmarole and instead use the file name, put double quotes around the variable expansion . [ "$i" = "*f*" ] tests whether the file name in the variable i is the three-character name *f* . If you don't have a file with this name, then all files will trigger the else branch, so yes is printed. In ksh, bash or zsh, you can test whether a string matches a wildcard pattern by using the double bracket construct. Leave the wildcards in the pattern unquoted. if [[ "$i" = *f* ]]; then … In plain sh, there is no double bracket syntax. To test whether a string matches a pattern, you can use a case construct.

case "$i" in
 *f*) echo "no";;
 *) echo "yes";;
esac

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115920/"
]
} |
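Applying the double-bracket fix inside the asker's original loop (bash syntax):

```sh
for i in *
do
    echo "$i"
    if [[ $i = *f* ]]; then
        echo "no"
    else
        echo "yes"
    fi
done
```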
204,387 | My problem is that my Debian installation shows a grey screen on start up and boots into a console instead of into gnome . When I start X manually with startx everything starts fine, so the DE seems to be functioning. | The program where you type your user name and password in a graphical environment, and that logs you into a graphical session, is called a display manager . You need to install a display manager. On Debian, if you install any of the display manager packages then one of them will be started at boot time. Any of the packages that provide the x-display-manager virtual package will do. As of Debian jessie, that's gdm3 (Gnome), kdm (KDE), lightdm (lightweight but themable), slim (lightweight but themable), wdm (lightweight but themable, oldish), xdm (old-style, bare-bones). You don't have to use a display manager that matches your desktop environment. If in doubt, pick lightdm. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115996/"
]
} |
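On Debian, acting on this answer is a single install (lightdm as the suggested pick); if several display managers end up installed, the default can be re-chosen through debconf:

```sh
sudo apt-get install lightdm
sudo dpkg-reconfigure lightdm   # pick the default display manager
```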
204,392 | From the output of pactl list sink-inputs , I need to grab the sink input number for VLC. Before that, I'm trying to extract the piece that contains the output for only VLC. All the methods that I thought would work have shortcomings. This is a sample output: $ pactl list sink-inputsSink Input #1373 Driver: protocol-native.c Owner Module: 9 Client: 10350 Sink: 0 Sample Specification: float32le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"float32le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 100% 1: 100% 0: 0,00 dB 1: 0,00 dB balance 0,00 Buffer Latency: 453287 usec Sink Latency: 19697 usec Resample method: copy Properties: media.role = "video" media.name = "audio stream" application.name = "VLC media player (LibVLC 2.1.5)" native-protocol.peer = "UNIX socket client" native-protocol.version = "28" application.id = "org.VideoLAN.VLC" application.version = "2.1.5" application.icon_name = "vlc" application.language = "pt_BR.UTF-8" application.process.id = "19965" application.process.machine_id = "948146522454ae6aa2bb8ed153f4bce4" application.process.session_id = "948146522454ae6aa2bb8ed153f4bce4-1431635199.85146-1790309877" application.process.user = "teresaejunior" application.process.host = "localhost" application.process.binary = "vlc" window.x11.display = ":0.0" module-stream-restore.id = "sink-input-by-media-role:video"Sink Input #1378 Driver: protocol-native.c Owner Module: 9 Client: 10378 Sink: 0 Sample Specification: s16le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"s16le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 87% 1: 87% 0: -3,63 dB 1: -3,63 dB balance 0,00 Buffer Latency: 989841 usec Sink Latency: 19572 usec Resample method: n/a Properties: media.name = "audio stream" application.name = "mplayer2" native-protocol.peer = "UNIX socket client" native-protocol.version = "28" application.process.id = "20093" application.process.user = "teresaejunior" application.process.host = "localhost" application.process.binary = "mplayer2" application.language = "C" window.x11.display = ":0.0" application.process.machine_id = "948146522454ae6aa2bb8ed153f4bce4" module-stream-restore.id = "sink-input-by-application-name:mplayer2" Both awk '/^Sink/,/VLC/' and sed -n '/^Sink/,/VLC/p' grab the VLC part, but then grab the mplayer2 part too and go until the end of the output: $ pactl list sink-inputs | awk '/^Sink/,/VLC/'Sink Input #1373 Driver: protocol-native.c Owner Module: 9 Client: 10350 Sink: 0 Sample Specification: float32le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"float32le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 100% 1: 100% 0: 0,00 dB 1: 0,00 dB balance 0,00 Buffer Latency: 437414 usec Sink Latency: 19666 usec Resample method: copy Properties: media.role = "video" media.name = "audio stream" application.name = "VLC media player (LibVLC 2.1.5)"Sink Input #1379 Driver: protocol-native.c Owner Module: 9 Client: 10381 Sink: 0 Sample Specification: s16le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"s16le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 87% 1: 87% 0: -3,63 dB 1: -3,63 dB 
balance 0,00 Buffer Latency: 980045 usec Sink Latency: 19563 usec Resample method: n/a Properties: media.name = "audio stream" application.name = "mplayer2" native-protocol.peer = "UNIX socket client" native-protocol.version = "28" application.process.id = "20093" application.process.user = "teresaejunior" application.process.host = "localhost" application.process.binary = "mplayer2" application.language = "C" window.x11.display = ":0.0" application.process.machine_id = "948146522454ae6aa2bb8ed153f4bce4" module-stream-restore.id = "sink-input-by-application-name:mplayer2" grep -Poz '^Sink(?s).*?VLC' works, but if the VLC output should come after mplayer2, it would fail (a test with mplayer2 instead of VLC): $ pactl list sink-inputs | grep -Poz '^Sink(?s).*?mplayer'Sink Input #1373 Driver: protocol-native.c Owner Module: 9 Client: 10350 Sink: 0 Sample Specification: float32le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"float32le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 100% 1: 100% 0: 0,00 dB 1: 0,00 dB balance 0,00 Buffer Latency: 441088 usec Sink Latency: 18159 usec Resample method: copy Properties: media.role = "video" media.name = "audio stream" application.name = "VLC media player (LibVLC 2.1.5)" native-protocol.peer = "UNIX socket client" native-protocol.version = "28" application.id = "org.VideoLAN.VLC" application.version = "2.1.5" application.icon_name = "vlc" application.language = "pt_BR.UTF-8" application.process.id = "19965" application.process.machine_id = "948146522454ae6aa2bb8ed153f4bce4" application.process.session_id = "948146522454ae6aa2bb8ed153f4bce4-1431635199.85146-1790309877" application.process.user = "teresaejunior" application.process.host = "localhost" application.process.binary = "vlc" window.x11.display = ":0.0" module-stream-restore.id = "sink-input-by-media-role:video"Sink Input #1380 Driver: protocol-native.c Owner Module: 9 Client: 10396 Sink: 0 Sample Specification: s16le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"s16le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 87% 1: 87% 0: -3,63 dB 1: -3,63 dB balance 0,00 Buffer Latency: 989841 usec Sink Latency: 18084 usec Resample method: n/a Properties: media.name = "audio stream" application.name = "mplayer The desired output: Sink Input #1373 Driver: protocol-native.c Owner Module: 9 Client: 10350 Sink: 0 Sample Specification: float32le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"float32le\"" format.rate = "44100" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: 0: 100% 1: 100% 0: 0,00 dB 1: 0,00 dB balance 0,00 Buffer Latency: 441088 usec Sink Latency: 18159 usec Resample method: copy Properties: media.role = "video" media.name = "audio stream" application.name = "VLC media player (LibVLC 2.1.5)" | With ed : ed -s <<'IN'r !pactl list sink-inputs/VLC/+,$d?Sink Input?,.pqIN It r eads the command output into the text buffer, d eletes everything after the first line matching VLC and then p rints from the previous line matching Sink Input up to current line. 
With sed : pactl list sink-inputs | sed -n 'H;/Sink Input/h;/VLC/{x;p;q}' It appends each line to the H old buffer; if a line matches Sink Input it overwrites the h old buffer, and when a line matches VLC it e x changes the hold space with the pattern space, p rints and q uits. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9491/"
]
} |
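Since the stated goal was the sink input number itself, the sed answer can be trimmed further. A sketch that keeps only the number (assuming only the Sink Input header line contains a # character, as in the sample output):

```sh
pactl list sink-inputs \
  | sed -n 'H;/Sink Input/h;/VLC/{x;p;q}' \
  | grep -o '#[0-9]*' | tr -d '#'
```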
204,441 | I just installed nginx 1.9 on a Debian 8 server.nginx is working fine, when I tell it to run, but it won't seem to load nginx automatically on boot. I have tried numerous init scripts recommended on the internet, but nothing has worked yet. So now I am trying to figure it out with systemctl. ~$ systemctl status nginx● nginx.service Loaded: masked (/dev/null) Active: inactive (dead)~$ sudo systemctl try-restart nginxFailed to try-restart nginx.service: Unit nginx.service is masked.~$ sudo systemctl reload nginxFailed to reload nginx.service: Unit nginx.service is masked.~$ sudo systemctl reload nginxFailed to reload nginx.service: Unit nginx.service is masked. Unfortunately, I do not know what "service is masked" means, and I don't know why it is masked. when I run sudo nginx the server runs just fine. So then, I looked into unmasking the nginx service. ~$ sudo systemctl unmask nginx.serviceRemoved symlink /etc/systemd/system/nginx.service. ok cool, now I can start nginx using systemctl. So I checked to see if rebooting would load nginx automatically. But it fails to do so, and I have no idea where to go from here. Can someone help me get nginx running automatically on boot? | You seem to confuse enable, start and mask operations. systemctl start , systemctl stop : starts (stops) the unit in question immediately ; systemctl enable , systemctl disable : marks (unmarks) the unit for autostart at boot time (in a unit-specific manner, described in its [Install] section); systemctl mask , systemctl unmask : disallows (allows) all and any attempts to start the unit in question (either manually or as a dependency of any other unit, including the dependencies of the default boot target). Note that marking for autostart in systemd is implemented by adding an artificial dependency from the default boot target to the unit in question, so "mask" also disallows autostarting. So, these all are distinct operations. Of these, you want systemctl enable . Ref.: systemctl(1) . More: Lennart Poettering (2011-03-02). "The Three Levels of Off" . systemd for Administrators . 0pointer.de. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
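Applied to the asker's nginx case, the full sequence would be:

```sh
sudo systemctl unmask nginx.service   # already done above
sudo systemctl enable nginx.service   # mark for autostart at boot
sudo systemctl start nginx.service    # start it right now
sudo systemctl status nginx.service   # verify
```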
204,462 | I run Ubuntu 14.04 on a Toshiba chromebook using crouton. The drive my OS is installed on is small, with only 3.6 GB of free space. I'd like to install sage on my system but sage requires 6 GB of free space on the system. However, I always keep an SD card inserted into the unit. The card has 175 GB of free space. Is it possible to install sage on the SD card? The way I'm attempting to download sage is with the commands

apt-add-repository -y ppa:aims/sagemath
apt-get update
apt-get install sagemath-upstream-binary

as found here . | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
204,480 | I want to run multiple commands (processes) on a single shell. All of them have their own continuous output and don't stop. Running them in the background breaks Ctrl - C . I would like to run them as a single process (subshell, maybe?) to be able to stop all of them with Ctrl - C . To be specific, I want to run unit tests with mocha (watch mode), run a server and run some file preprocessing (watch mode), and see the output of each in one terminal window. Basically I want to avoid using a task runner. I can realize it by running processes in the background ( & ), but then I have to put them into the foreground to stop them. I would like to have a process to wrap them, so that when I stop the process it stops its 'children'. | To run commands concurrently you can use the & command separator. ~$ command1 & command2 & command3 This will start command1 and run it in the background. The same with command2 . Then it starts command3 normally. The output of all commands will be garbled together, but if that is not a problem for you, that would be the solution. If you want to have a separate look at the output later, you can pipe the output of each command into tee , which lets you specify a file to mirror the output to. ~$ command1 | tee 1.log & command2 | tee 2.log & command3 | tee 3.log The output will probably be very messy. To counter that, you could give the output of every command a prefix using sed .

~$ echo 'Output of command 1' | sed -e 's/^/[Command1] /'
[Command1] Output of command 1

So if we put all of that together we get:

~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /'
[Command1] Starting command1
[Command2] Starting command2
[Command1] Finished
[Command3] Starting command3

This is a highly idealized version of what you are probably going to see. But it's the best I can think of right now. If you want to stop all of them at once, you can use the built-in trap .

~$ trap 'kill %1; kill %2' SIGINT
~$ command1 & command2 & command3

This will execute command1 and command2 in the background and command3 in the foreground, which lets you kill it with Ctrl + C . When you kill the last process with Ctrl + C the kill %1; kill %2 commands are executed, because we connected their execution with the reception of an INTerrupt SIGnal, the thing sent by pressing Ctrl + C . They respectively kill the 1st and 2nd background process (your command1 and command2 ). Don't forget to remove the trap after you're finished with your commands, using trap - SIGINT . Complete monster of a command:

~$ trap 'kill %1; kill %2' SIGINT
~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /'

You could, of course, have a look at screen . It lets you split your console into as many separate consoles as you want. So you can monitor all commands separately, but at the same time. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/204480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112420/"
]
} |
204,522 | PulseAudio is always running on my system, and it always instantly restarts if it crashes or I kill it. However, I never actually start PulseAudio. I have checked /etc/init.d/ and /etc/X11/Xsession.d/ , and I have checked systemctl list-units -a , and PulseAudio is nowhere to be found. How come PulseAudio seemingly magically starts by itself without me ever running it, and how does it instantly restart when it dies? I'm using Debian 8 (jessie) with xinit and the i3 window manager, and PulseAudio 5. | It seems any process linking to the libpulse* family of shared objects--either before or after running X and the i3 window manager--may implicitly autospawn the PulseAudio server, under your user process, as a byproduct of attempts to interface with the audio subsystem. PulseAudio creator Lennart Poettering seems to confirm this, in a 2015-05-29 email to the systemd-devel mailing list : "pulseaudio is generally not a system service but a user service. Unless your user session is fully converted to be managed by systemd too (which is unlikely) systemd is hence not involved at all with starting it. "PA is usually started from the session setup script or service. In Gnome that's gnome-session, for example. It's also auto-spawned on-demand if the libraries are used and note that it is missing." For example, on Debian Stretch (Testing), web browser IceWeasel links to two libpulse* shared objects: 1) libpulsecommon-7.1.so; and 2) libpulse.so.0.18.2:

k@bucket:~$ ps -ef | grep iceweasel
k 17318 1 5 18:58 tty2 00:00:15 iceweasel
k 17498 1879 0 19:03 pts/0 00:00:00 grep iceweasel
k@bucket:~$ sudo pmap 17318 | grep -i pulse
00007fee08377000 65540K rw-s- pulse-shm-2442253193
00007fee0c378000 65540K rw-s- pulse-shm-3156287926
00007fee11d24000 500K r-x-- libpulsecommon-7.1.so
00007fee11da1000 2048K ----- libpulsecommon-7.1.so
00007fee11fa1000 4K r---- libpulsecommon-7.1.so
00007fee11fa2000 8K rw--- libpulsecommon-7.1.so
00007fee121af000 316K r-x-- libpulse.so.0.18.2
00007fee121fe000 2044K ----- libpulse.so.0.18.2
00007fee123fd000 4K r---- libpulse.so.0.18.2
00007fee123fe000 4K rw--- libpulse.so.0.18.2

You may see which running processes link to libpulse*. For example, first get a list of libpulse* shared objects, then run lsof on each (note: this comes from Debian Stretch (Testing), so your output may differ):

sudo find / -type f -name "*libpulse*"
*snip*
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsedsp.so
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
/usr/lib/x86_64-linux-gnu/libpulse.so.0.18.2
/usr/lib/x86_64-linux-gnu/libpulse-simple.so.0.1.0
/usr/lib/x86_64-linux-gnu/libpulse-mainloop-glib.so.0.0.5
/usr/lib/libpulsecore-7.1.so
/usr/lib/ao/plugins-4/libpulse.so

sudo lsof /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
gnome-she 864 Debian-gdm mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-set 965 Debian-gdm mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-set 1232 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-she 1286 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
chrome 2730 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
pulseaudi 18356 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so

To tell these processes not to autospawn PulseAudio, edit ~/.config/pulse/client.conf and add the line autospawn = no PulseAudio and its libraries respect that setting, generally. The libpulse* linking by running processes may also indicate why PulseAudio respawns so quickly. The FreeDesktop.org page, " Running PulseAudio ", seems to confirm this: "...typically some background application will immediately reconnect, causing the server to get immediately restarted." You seem to indicate you start the i3 window manager via the console (by running xinit) and do not use a display manager or desktop environment. The rest of this answer details info for those that do use GNOME, KDE, and so forth. ADDITIONAL INFO, FOR GNOME/KDE AUTOSTART Package PulseAudio (5.0-13), in Debian Jessie (Stable) amd64, installs the following four system files :

/etc/xdg/autostart/pulseaudio-kde.desktop
/etc/xdg/autostart/pulseaudio.desktop
/usr/bin/start-pulseaudio-x11
/usr/bin/start-pulseaudio-kde

Some graphical session managers automatically run FreeDesktop.org autostart scripts on user login. The PulseAudio autostart script, in turn, tells graphical session managers to run the appropriate PulseAudio startup script:

/usr/bin/start-pulseaudio-x11
/usr/bin/start-pulseaudio-kde

These scripts call the PulseAudio client /usr/bin/pactl to load PulseAudio modules, which spawns the PulseAudio server as a byproduct (note: if you have autospawn set to "no", pactl respects that and will not autospawn the PulseAudio server). More detail, at the FreeDesktop.org page " Running PulseAudio ". Some display managers, in addition and in other distributions, may start PulseAudio (for example, SDDM, on ArchLinux . Though maintainers may have resolved this, by now). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/204522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17214/"
]
} |
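The client.conf change described in the answer, as commands (pulseaudio --kill is a standard flag; with autospawn disabled, nothing should respawn the daemon afterwards):

```sh
mkdir -p ~/.config/pulse
echo 'autospawn = no' >> ~/.config/pulse/client.conf
pulseaudio --kill
```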
204,523 | I know that there are many questions about adding extensions to multiple files. But none of them could get my job done. I have a huge list of images without extension, most of them are png, but there are jpg files also and maybe tiff. How could I rename them correctly? | Perhaps like this:

for f in /some/dir/*; do
    type="$( file -bi -- "$f" )"
    case "${type%%;*}" in
        image/jpeg) ext=jpg ;;
        image/png) ext=png ;;
        image/tiff) ext=tiff ;;
        *) printf '%s: %s: unknown file type\n' "${0##*/}" "$f" >&2; ext='' ;;
    esac
    if [ -n "$ext" ]; then mv -n -- "$f" "${f}.${ext}"; fi
done

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116089/"
]
} |
204,530 | I need to recursively copy a folder from a remote Ubuntu server where I have ssh access. I don't want to follow symbolic links, nor to copy permissions/owner/group, because my client system (Ubuntu too) doesn't have the same users as the server. This rsync solution could be the best one. But the server does not have rsync and I can't install it there, so that command gives me an error. Is there another way to copy the remote folder? | You can use scp -r to copy files recursively between different hosts. Your syntax could be like scp -r user@Ubuntu-Server:/home/myuser ./from_Ubuntu_server Besides, you might be able to upload your local rsync binary using scp to the Ubuntu server and add the --rsync-path=/home/myuser/rsync option to your original rsync command to let your client rsync know which rsync it should invoke on the Ubuntu server. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48707/"
]
} |
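The --rsync-path trick from the answer spelled out as commands (this assumes the uploaded binary is compatible with the server's architecture and libraries):

```sh
# Upload a local rsync binary, then point the client at it:
scp /usr/bin/rsync user@Ubuntu-Server:/home/myuser/rsync
rsync -av --rsync-path=/home/myuser/rsync \
    user@Ubuntu-Server:/home/myuser/ ./from_Ubuntu_server/
```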
204,597 | Can someone explain why this script doesn't produce the output I was expecting?

#!/bin/bash
#
var=0
ls -1 /tmp| while read file
do
 echo $file
 var=1
done
echo "var is $var"

I get a list of files followed by var is 0 Why isn't var equal to 1? Is it because the while loop spawns a sub-shell? | Piping does. You can check for yourself, for example by printing $BASHPID from inside and outside of the while loop or by doing something like: ls | while read file; do sleep 100; done , stopping it with C-Z and checking ps or ps --forest afterwards to see the process tree in your terminal session. You can avoid the subshell by "piping" a little differently:

var=0
while read file
do
 echo $file; var=1
done < <(ls -1 /tmp/)
echo $var #=> 1

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90367/"
]
} |
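As the answer suggests, printing $BASHPID inside and outside the loop makes the subshell visible:

```sh
echo "outer shell: $BASHPID"
ls -1 /tmp | while read file; do
    echo "loop shell: $BASHPID"   # a different PID: this is a subshell
    break
done
```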
204,599 | I've been struggling to come up with a solution for this, the goal is to organize the results into comma separated columns as we have to recable our entire datacenter from cat6 to cat6a. $DBFILE output is access1a access1b ... goal:

switch,switch-port,server,server-port
access1a,1,server6,eth0
access1a,2,server4,eth0
access1a,3,server1,eth0

however my current output is the following:

#!/bin/sh
DBFILE=$(cat /tmp/routers.all | awk -F: '{print $1}'| grep access)
for OUTPUT in $DBFILE
do
 /usr/bin/snmpwalk -Os -c pass -v 2c $OUTPUT iso.0.8802.1.1.2.1.4.1.1.8.0 | tr -d "\"" | sed -r 's/ /./g' |awk -F. '{print "'"$OUTPUT"'"","$13","$22","}'
 /usr/bin/snmpwalk -Os -c pass -v 2c $OUTPUT iso.0.8802.1.1.2.1.4.1.1.9.0 | tr -d "\"" | sed -r 's/ /./g' |awk -F. '{print "'"$OUTPUT"'"","$13","$17","}'
done

access1a,1,server6,
access1a,2,server4,
access1a,3,server1,
access1a,1,eth0,
access1a,2,eth0,
access1a,3,eth0,

I've tried many different variations of arrays and for loops and either I only get the very last query or no success at all; I figured I'd ask as I can't seem to find methods to accomplish the above. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116127/"
]
} |
204,607 | When I use grep -o to search in multiple files, it outputs each result prefixed with the file name. How can I prevent this prefix? I want the results without the file names. | With the GNU implementation of grep (the one that also introduced -o ) or compatible, you can use the -h option. -h, --no-filename Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search. With other implementations, you can always concatenate the files with cat and grep that output: cat ./*.txt | grep regexp Or use sed or awk instead of grep : awk '/regexp/' ./*.txt (extended regexps like with grep -E ). sed '/regexp/!d' ./*.txt (basic regexps like with grep without -E . Many sed implementations now also support a -E option for extended regexps). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/204607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11086/"
]
} |
204,614 | I want to exclude the file ./test/main.cpp from my search. Here's what I'm seeing:

$ grep -r pattern --exclude=./test/main.cpp
./test/main.cpp:pattern
./lib/main.cpp:pattern
./src/main.cpp:pattern

I know it is possible to get the output that I want by using multiple commands in a pipes-and-filters arrangement, but is there some quoting/escaping that will make grep understand what I want natively? | grep can't do this for a file in one certain directory; if you have more files with the same name in different directories, use find instead: find . -type f \! -path './test/main.cpp' -exec grep pattern {} \+ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6764/"
]
} |
204,625 | I am trying to git clone a repo off of bitbucket. I use git clone {https} temp . This gives me an error of refs not found . The https address is the one I get off of bitbucket, but the one they provide uses hg instead of git . Why is this happening? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116146/"
]
} |
204,641 | I currently have an extra HDD which I am using as my workspace. I am trying to get it to mount automatically on reboots using the following line added to /etc/fstab /dev/sdb1 /media/workspace auto defaults 0 1 This works to auto mount it, however I would like to restrict read/write access to users belonging to a specific group. How would I go about doing this in /etc/fstab? Can I simply just use chown or chmod to control the access? | If the filesystem type is one that doesn't have permissions, such as FAT, you can add umask , gid and uid to the fstab options. For example: /dev/sdb1 /media/workspace auto defaults,uid=1000,gid=1000,umask=022 0 1 uid=1000 is the user id. gid=1000 is the group id. umask=022 this will set permissions so that the owner has read, write, execute. Group and Others will have read and execute. To see your changes you do not need to reboot. Just umount and mount again without arguments. For example: umount /media/workspacemount /media/workspace But make sure to do not have any process (even your shell) using that directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/204641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116153/"
]
} |
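The question asked to restrict access to a specific group; a variant of the same fstab line that shuts out everyone else (gid 1000 is a placeholder for the group in question):

```
/dev/sdb1 /media/workspace auto defaults,gid=1000,umask=007 0 1
```

With umask=007 the owner and group get full access and others get none. Note this applies to filesystems without Unix permissions, as the answer says; on something like ext4 you would instead use chgrp and chmod on the mounted directory.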
204,661 | My OpenVAS isn't starting in Kali Linux. root@kali:~# openvas-mkcertOne or more files do already exist and would be overriden: /var/lib/openvas/CA/cacert.pem /var/lib/openvas/private/CA/cakey.pem /var/lib/openvas/CA/servercert.pem /var/lib/openvas/private/CA/serverkey.pemYou need to remove or rename them and re-run openvas-mkcert.If you run openvas-mkcert with '-f', the files will be overwritten.root@kali:~# openvas-nvt-sync[i] This script synchronizes an NVT collection with the 'OpenVAS NVT Feed'.[i] The 'OpenVAS NVT Feed' is provided by 'The OpenVAS Project'.[i] Online information about this feed: 'http://www.openvas.org/openvas-nvt-feed.html'.[i] NVT dir: /var/lib/openvas/pluginsOpenVAS feed server - http://www.openvas.org/This service is hosted by Intevation GmbH - http://intevation.de/All transactions are logged.Please report synchronization problems to [email protected] you have any other questions, please use the OpenVAS mailing listsor the OpenVAS IRC chat. See http://www.openvas.org/ for details.[i] Feed is already current, no synchronization necessary.root@kali:~# openvas-mkcert-client -n om -iGenerating RSA private key, 1024 bit long modulus...++++++.......................++++++e is 65537 (0x10001)You are about to be asked to enter information that will be incorporatedinto your certificate request.What you are about to enter is what is called a Distinguished Name or a DN.There are quite a few fields but you can leave some blankFor some fields there will be a default value,If you enter '.', the field will be left blank.-----Country Name (2 letter code) [DE]:State or Province Name (full name) [Some-State]:Locality Name (eg, city) []:Organization Name (eg, company) [Internet Widgits Pty Ltd]:Organizational Unit Name (eg, section) []:Common Name (eg, your name or your server's hostname) []:Email Address []:Using configuration from /tmp/openvas-mkcert-client.3524/stdC.cnfCheck that the request matches the signatureSignature okThe Subject's Distinguished Name is as followscountryName :PRINTABLE:'DE'localityName :PRINTABLE:'Berlin'commonName :PRINTABLE:'om'Certificate is to be certified until May 19 17:49:55 2016 GMT (365 days)Write out database with 1 new entriesData Base UpdatedYour client certificates are in /tmp/openvas-mkcert-client.3524 .You will have to copy them by hand.root@kali:~# openvasmd --rebuildroot@kali:~# openvasmd --backuproot@kali:~# openvasad -c 'add_user' -n openvasadmin -rbash: openvasad: command not foundroot@kali:~# openvasad -c 'add_user' -n openvasadmin -r adminbash: openvasad: command not foundroot@kali:~# openvassdroot@kali:~# openvas-mkcertOne or more files do already exist and would be overriden: /var/lib/openvas/CA/cacert.pem /var/lib/openvas/private/CA/cakey.pem /var/lib/openvas/CA/servercert.pem /var/lib/openvas/private/CA/serverkey.pemYou need to remove or rename them and re-run openvas-mkcert.If you run openvas-mkcert with '-f', the files will be overwritten.root@kali:~# openvas-mkcert -f------------------------------------------------------------------------------- Creation of the OpenVAS SSL Certificate-------------------------------------------------------------------------------This script will now ask you the relevant information to create the SSL certificate of OpenVAS.Note that this information will *NOT* be sent to anybody (everything stays local), but anyone with the ability to connect to your OpenVAS daemon will be able to retrieve this information.CA certificate life time in days [1460]: Server certificate life time in days [365]: 
Your country (two letter code) [DE]: PLYour state or province name [none]: Your location (e.g. town) [Berlin]: WroclawYour organization [OpenVAS Users United]: ------------------------------------------------------------------------------- Creation of the OpenVAS SSL Certificate-------------------------------------------------------------------------------Congratulations. Your server certificate was properly created.The following files were created:. Certification authority: Certificate = /var/lib/openvas/CA/cacert.pem Private key = /var/lib/openvas/private/CA/cakey.pem. OpenVAS Server : Certificate = /var/lib/openvas/CA/servercert.pem Private key = /var/lib/openvas/private/CA/serverkey.pemPress [ENTER] to exitroot@kali:~# openvas-nvt-sync[i] This script synchronizes an NVT collection with the 'OpenVAS NVT Feed'.[i] The 'OpenVAS NVT Feed' is provided by 'The OpenVAS Project'.[i] Online information about this feed: 'http://www.openvas.org/openvas-nvt-feed.html'.[i] NVT dir: /var/lib/openvas/pluginsOpenVAS feed server - http://www.openvas.org/This service is hosted by Intevation GmbH - http://intevation.de/All transactions are logged.Please report synchronization problems to [email protected] you have any other questions, please use the OpenVAS mailing listsor the OpenVAS IRC chat. See http://www.openvas.org/ for details.[i] Feed is already current, no synchronization necessary.root@kali:~# openvas-mkcert-client -n om -iGenerating RSA private key, 1024 bit long modulus.............................++++++..++++++e is 65537 (0x10001)You are about to be asked to enter information that will be incorporatedinto your certificate request.What you are about to enter is what is called a Distinguished Name or a DN.There are quite a few fields but you can leave some blankFor some fields there will be a default value,If you enter '.', the field will be left blank.-----Country Name (2 letter code) [DE]:State or Province Name (full name) [Some-State]:Locality Name (eg, city) []:Organization Name (eg, company) [Internet Widgits Pty Ltd]:Organizational Unit Name (eg, section) []:Common Name (eg, your name or your server's hostname) []:Email Address []:Using configuration from /tmp/openvas-mkcert-client.3871/stdC.cnfCheck that the request matches the signatureSignature okThe Subject's Distinguished Name is as followscountryName :PRINTABLE:'DE'localityName :PRINTABLE:'Berlin'commonName :PRINTABLE:'om'Certificate is to be certified until May 19 17:59:47 2016 GMT (365 days)Write out database with 1 new entriesData Base UpdatedYour client certificates are in /tmp/openvas-mkcert-client.3871 .You will have to copy them by hand.root@kali:~# openvasmd --rebuildroot@kali:~# openvassdbind() failed : Address already in useroot@kali:~# This is not working: [i] This script synchronizes an NVT collection with the 'OpenVAS NVT Feed'.[i] The 'OpenVAS NVT Feed' is provided by 'The OpenVAS Project'.[i] Online information about this feed: 'http://www.openvas.org/openvas-nvt-feed.html'.[i] NVT dir: /var/lib/openvas/pluginsOpenVAS feed server - http://www.openvas.org/This service is hosted by Intevation GmbH - http://intevation.de/All transactions are logged.Please report synchronization problems to [email protected] you have any other questions, please use the OpenVAS mailing listsor the OpenVAS IRC chat. 
See http://www.openvas.org/ for details.
@ERROR: max connections (200) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
[e] Error: rsync failed.
[i] This script synchronizes a SCAP data directory with the OpenVAS one.
[i] SCAP dir: /var/lib/openvas/scap-data
[i] Will use rsync
[i] Using rsync: /usr/bin/rsync
[i] Configured SCAP data rsync feed: rsync://feed.openvas.org:/scap-data
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected]
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
@ERROR: max connections (200) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
[e] Error: rsync failed. Your SCAP data might be broken now.
[i] This script synchronizes a CERT advisory directory with the OpenVAS one.
[i] CERT dir: /var/lib/openvas/cert-data
[i] Will use rsync
[i] Using rsync: /usr/bin/rsync
[i] Configured CERT data rsync feed: rsync://feed.openvas.org:/cert-data
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected]
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
@ERROR: max connections (200) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
Error: rsync failed. Your CERT data might be broken now.
Stopping OpenVAS Manager: openvasmd.
Stopping OpenVAS Scanner: openvassd.

And the terminal freezes at this point.

Starting OpenVas Services
Starting Greenbone Security Assistant: ERROR.
Starting OpenVAS Scanner: ERROR.
Starting OpenVAS Manager: ERROR.
root@kali:~#

How to solve this problem? | If the filesystem type is one that doesn't have permissions, such as FAT, you can add umask, gid and uid to the fstab options. For example:

/dev/sdb1 /media/workspace auto defaults,uid=1000,gid=1000,umask=022 0 1

uid=1000 is the user id. gid=1000 is the group id. umask=022 will set permissions so that the owner has read, write and execute, while group and others have read and execute. To see your changes you do not need to reboot; just umount and mount again without arguments. For example:

umount /media/workspace
mount /media/workspace

But make sure that no process (even your shell) is using that directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/204661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
204,683 | Let's say I want to write a shell script that executes just one command. But this command is poorly designed. It doesn't offer any command line options; instead it asks some questions and waits for user input. Is there a way to prepare this input in the script, so the questions are answered automatically? | If the command is not very picky, it should work with something like this:

command > /dev/null << EOF
<answer 1>
<answer 2>
<answer 3>
EOF

This requires that you know the exact answers beforehand. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204683",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116180/"
]
} |
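Building on the here-document answer above: if the answers have to be computed at run time, piping them in works just as well. A minimal sketch, assuming the interactive program reads its prompts from stdin (the placeholder answers are made up):

printf '%s\n' 'answer 1' 'answer 2' 'answer 3' | command > /dev/null

For programs that read from the terminal device directly rather than from stdin, this will not work, and a tool such as expect is the usual fallback.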
204,689 | If you search something in Vim by prepending searchterm with a forward slash, e.g. /searchterm , Vim puts that searchterm into the search string history table . You then are able to navigate through past search terms by typing in forward slash ( / ) and using Up / Down arrow keys. That search string history table is persistent across Vim restarts. Everything above is also true for command (typed with : prepended) history table. How do I clear those history tables? | The history is persisted in the viminfo file; you can configure what (and how many of them) is persisted via the 'viminfo' (and 'history' ) options. You can clear the history via the histdel() function, e.g. for searches: :call histdel('/') You can even delete just certain history ranges or matching lines. Alternatively, you could also just edit the ~/.viminfo file directly (when Vim is closed, and either with another editor, or with vim -i NONE ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/204689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58428/"
]
} |
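The same histdel() call clears the other history tables too; a short sketch, assuming a Vim recent enough to have them all (the trailing :wviminfo! rewrites the viminfo file so the cleared state survives a restart):

:call histdel('/')   " search history
:call histdel(':')   " command-line history
:call histdel('=')   " expression history
:wviminfo!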
204,749 | I was asked the following question in a test on shell scripting at my university, which never gave an answer, and google is of little help. Q: What is the line separator that should be used to end this here-document? fff=filexyz <<\\$fff... | The line which ends the here document is \$fff From the man bash section on Here Documents: The format of here-documents is: <<[-]word here-document delimiter No parameter and variable expansion , command substitution, arithmetic expansion, or pathname expansion is performed on word . If any characters in word are quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion, the character sequence \ newline is ignored, and \ must be used to quote the characters \ , $ , and ` . word does undergo quote removal, so \\$fff is dequoted to \$fff . But, as the manpage says, no variable expansion is done so it stays that way. The body of a here document might or might not undergo variable expansion and backslash interpretation. In this case, since word contains a quoted character (that is, the backslash), parameter expansion and backslash dequoting are not performed on the text of the here document. However, the input is compared with the terminating sequence before variable expansion, so it is not necessary to backslash-escape the \ nor the $ in the terminating line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90729/"
]
} |
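A small demonstration of the rule described above, assuming bash; because the delimiter word \\$fff contains a quoted character, the body is taken literally, and the terminator must be the literal string \$fff:

fff=filexyz
cat <<\\$fff
$fff is printed literally, no expansion happens here
\$fff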
204,753 | My system booted up relatively fast while running Debian 7 Wheezy, but after an upgrade to Debian 8 Jessie, and therefore from SysVinit to systemd, it became way, way slower. The thing slowing down the booting is the network: waiting for the network interfaces to come up takes more than 1 minute. I don't know what in /etc/network/interfaces is affecting the boot-up process, so here it is in its entirety. /etc/network/interfaces:

allow-auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
    address 192.168.150.1
    netmask 255.255.255.0

auto eth1
iface eth1 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    down ifconfig $IFACE down

auto eth2
iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    down ifconfig $IFACE down

auto eth0
iface eth0 inet dhcp
    post-up brctl addbr br0
    post-up brctl addif br0 eth1 eth2
    post-up ifconfig br0 192.168.10.1
    pre-down ifconfig br0 0.0.0.0
    pre-down brctl delif br0 eth1 eth2
    pre-down ifconfig br0 down
    pre-down brctl delbr br0

Any suggestions on how to speed things up? | The solution is fairly easy: just replace auto with allow-hotplug. So I ended up with this:

allow-hotplug lo
iface lo inet loopback

allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.150.1
    netmask 255.255.255.0

allow-hotplug eth1
iface eth1 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    down ifconfig $IFACE down

allow-hotplug eth2
iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    down ifconfig $IFACE down

allow-hotplug eth0
iface eth0 inet dhcp
    post-up brctl addbr br0
    post-up brctl addif br0 eth1 eth2
    post-up ifconfig br0 192.168.10.1
    pre-down ifconfig br0 0.0.0.0
    pre-down brctl delif br0 eth1 eth2
    pre-down ifconfig br0 down
    pre-down brctl delbr br0

Now the system boots really fast. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42158/"
]
} |
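To measure where boot time actually goes before and after a change like this, systemd's own timing tools are enough; a quick check, assuming systemd is PID 1:

systemd-analyze blame | head
systemd-analyze critical-chain networking.service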
204,796 | I want to know the current system time with microsecond resolution. date +%s returns the time in seconds since the epoch (1-1-1970). How can I get the time with microsecond resolution? How much delay is there in querying this value? By delay I mean: suppose at time t seconds I query and it gives me the value t + t'; what is t'? My use case: I am recording videos using multiple Raspberry Pis simultaneously. Now I want to timestamp each frame of the videos so that I can align them. Currently the timestamp uses boot time (time since boot). Boot time is accurate, but it's different for each Raspberry Pi. I have configured all Pis to use an NTP server, so all have the same system time. So basically I want the timestamp of system time, not boot time. How can I do that? | date +%s%N will give the nanoseconds since the epoch. To get the microseconds, just divide by 1000: expr `date +%s%N` / 1000 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116254/"
]
} |
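Shell arithmetic avoids spawning the extra expr process; a sketch assuming GNU date (the %N format is a GNU extension and is not available everywhere, e.g. not on stock BSD date):

echo $(( $(date +%s%N) / 1000 ))      # microseconds since the epoch
echo $(( $(date +%s%N) / 1000000 ))   # milliseconds since the epoch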
204,833 | Is there a way to make a modern shell's history feature be scoped to a path? My working contexts are split up as paths on the file system, and the pattern of shell activity, such as repeatedly issued commands, tends to be distinct to each 'project'. It would be nice if I could scope the history feature to commands issued from the current path (or sub-path). | With zsh, you could do:

mkdir -p ~/.zsh/dirhist

And add to your ~/.zshrc:

HISTSIZE=1000
SAVEHIST=10000
setopt HIST_SAVE_NO_DUPS INC_APPEND_HISTORY
HISTFILE=~/.zsh/dirhist/${PWD//\//@}
chpwd() {
  [[ $PWD = $OLDPWD ]] || fc -Pp ~/.zsh/dirhist/${PWD//\//@}
}

chpwd() is called whenever the current directory changes. There, we reset the history file to something like ~/.zsh/dirhist/@foo@bar when you cd to /foo/bar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115224/"
]
} |
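A rough bash equivalent of the same idea, using PROMPT_COMMAND instead of zsh's chpwd hook; the directory ~/.bash_dirhist and the function name are invented for the example:

mkdir -p ~/.bash_dirhist
_dir_history() {
  local f=~/.bash_dirhist/${PWD//\//@}
  [ "$HISTFILE" = "$f" ] && return
  history -a        # append this session's new lines to the old file
  HISTFILE=$f
  history -c        # clear the in-memory list
  history -r        # read the new directory's history
}
PROMPT_COMMAND=_dir_history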
204,866 | I have the following folders on a linux machine ./myFolder ./tmp ./packages ./zips compress.sh I run tar -czf ./zips/someFile.tar.gz ./tmp/someFolder from compress.sh . The folder is created, but the resulting nesting occurs inside the zip file: . -> tmp -> someFolder How can I zip this file so that when I open it, it's just someFolder , and not someFolder inside tmp , inside . ? | You can try following: tar -czf ./zips/someFile.tar.gz -C ./tmp/ someFolder | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105228/"
]
} |
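To verify the result, list the archive; with -C the paths should be rooted at someFolder rather than tmp/someFolder:

tar -tzf ./zips/someFile.tar.gz | head
# someFolder/
# someFolder/...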
204,868 | I've just been looking through a few man pages for a few different commands including grep and ifconfig . I've noticed over a few pages, the content uses a strange syntax to notate what i think are quotations (back-tick followed by a single or double quote): `text' Why can't they use ' or " to open and close quotations? Update I now realise that this should be bolding out the characters instead of noting quotes. Is there any reason my system is ignoring these when formatting? I'm using OSX. | Man pages historically have been written in the troff/nroff markup language, although there are alternatives now such as DocBook . Troff, which is meant for preparing output to a phototypesetter (or to files in formats such as PostScript or PDF), will automatically change the ` and ' characters in the input into curved quotation marks, ‘ and ’ . (See the Troff User’s Manual , section 2.1). Nroff, which is what the man command runs when the output is to a terminal, will pass those characters through unchanged. Those quotes are actually in the man page sources for the older version of GNU grep (2.5.1) in FreeBSD and OSX: .B GREP_COLORenvironment variable. WHEN may be `never', `always', or `auto' More recent versions of GNU grep do not have those quotes in the man page sources : .I WHENis.BR never ", " always ", or " auto . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65992/"
]
} |
204,906 | Sounds strange, but I have a shell script that gets triggered by udev rules to mount an attached USB device into the system file tree. The script runs when a USB device attaches to the system, so the rules seem to be fine. I monitor how the script progresses via syslog, and it also goes fine; even the mount command returns zero, and it says: root[1023]: mount: /dev/sda1 mounted on /media/partitionlabel. But at the end the device is not mounted: it is not listed in /etc/mtab - /proc/mounts - findmnt - mount. And if I run umount on the device, it also says the device is not mounted. However, if I run the script manually as root from a terminal, then it works perfectly and the device gets mounted, but not when it is run by udev. I've added 8 seconds of sleep time to the start of the script, to make sure it's not a timing problem, and also removed the number from the rules file name to make sure udevd would put the new rules at the bottom of the rules queue and the script would run after other system rules, but no success. The syslog (right after the device attached):

kernel: usb 1-1.2: new high-speed USB device number 12 using dwc_otg
kernel: usb 1-1.2: New USB device found, idVendor=058f, idProduct=6387
kernel: usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
kernel: usb 1-1.2: Product: Mass Storage
kernel: usb 1-1.2: Manufacturer: Generic
kernel: usb 1-1.2: SerialNumber: 24DCF568
kernel: usb-storage 1-1.2:1.0: USB Mass Storage device detected
kernel: scsi host6: usb-storage 1-1.2:1.0
kernel: scsi 6:0:0:0: Direct-Access Generic Flash Disk 8.07 PQ: 0 ANSI: 4
kernel: sd 6:0:0:0: [sda] 1968128 512-byte logical blocks: (1.00 GB/961 MiB)
kernel: sd 6:0:0:0: [sda] Write Protect is off
kernel: sd 6:0:0:0: [sda] Mode Sense: 23 00 00 00
kernel: sd 6:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
kernel: sda: sda1
kernel: sda: p1 size 1968126 extends beyond EOD, enabling native capacity
kernel: sda: sda1
kernel: sda: p1 size 1968126 extends beyond EOD, truncated
kernel: sd 6:0:0:0: [sda] Attached SCSI removable disk
root[1004]: /usr/local/sbin/udev-auto-mount.sh - status: started to automount sda1
root[1019]: /usr/local/sbin/udev-auto-mount.sh - status: Device Label is partitionlabel and Filesystem is vfat.
root[1021]: /usr/local/sbin/udev-auto-mount.sh - status: mounting the device sda1 by filesystem vfat to /media/partitionlabel.
root[1023]: mount: /dev/sda1 mounted on /media/partitionlabel.
root[1024]: /usr/local/sbin/udev-auto-mount.sh status: mount command proceed for vfat, retval is 0
root[1025]: /usr/local/sbin/udev-auto-mount.sh - status: succeed!

Configs: /etc/udev/rules.d/local-rules. The defined rule in udev is:

# /etc/udev/rules.d/local-rules
ENV{ID_BUS}=="usb", ACTION=="add", ENV{DEVTYPE}=="partition", \
    RUN+="/usr/local/sbin/udev-automounter.sh %k $ENV{ID_FS_LABEL_ENC}"

udev-auto-mount.sh: this script is started by another script, which is defined in the udev rule. It is quite straightforward: it makes the mount point directory and mounts the USB device onto the mount point using its file system type and some regular options. I've added the "-v" option to the mount command to be more verbose and also redirected all output to syslog, so I can see how it runs, but it doesn't say very much.
#!/bin/sh
#
# /usr/local/sbin/udev-auto-mount.sh
#
logger -s "$0 - status: started to automount ${1}"
DEVICE=$1
sleep 8
#...
#...
# Checking inputs, getting filesystem type (ID_FS_TYPE), partition label
# (ID_FS_LABEL) and ...
mkdir "/media/${ID_FS_LABEL}"
logger -s "$0 - status: mounting the device ${DEVICE} by filesystem ${ID_FS_TYPE} to /media/${ID_FS_LABEL}."
case $ID_FS_TYPE in
    vfat) mount -v -t vfat -o sync,noatime,nosuid,nodev /dev/${DEVICE} "/media/${ID_FS_LABEL}" 2>&1 | logger
          let retVal=$?
          logger -s "$0 status: mount command proceed for vfat, retval is ${retVal}"
          ;;
    *)    mount -v -t auto -o sync,noatime /dev/${DEVICE} "/media/${ID_FS_LABEL}"
          ;;
esac
if [ ${retVal} -eq 0 ]; then
    logger -s "$0 - status: succeed!"
    exit 0
else
    logger -s "$0 Error: unable to mount the device ${DEVICE}, retval is ${retVal}"
    rmdir "/media/${ID_FS_LABEL}"
fi
exit 0

Maybe it helps: sometimes, after the script fails to mount the USB device, when I detach the device, errors like these come to syslog:

kernel: usb 1-1.2: USB disconnect, device number 11
systemd-udevd[143]: error: /dev/sda: No such file or directory
systemd-udevd[977]: inotify_add_watch(7, /dev/sda, 10) failed: No such file or directory

Edit: This is the 'mount' version:

$ mount -V
mount from util-linux 2.27.1 (libmount 2.27.0: assert, debug)

| On a system with systemd, this problem can be encountered when you reformat a partition and try to mount it back. I moved a disk from encrypted to unencrypted, which caused systemd's generated mnt-disk.mount unit (where mnt-disk is the mount path from /etc/fstab) to refer to the old path that didn't exist any more, making mount go haywire. Just doing systemctl daemon-reload and then doing the mount makes things work. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114909/"
]
} |
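A quick way to see whether a stale generated unit is the culprit; the unit name mnt-disk.mount mirrors the answer's example and is otherwise hypothetical:

systemctl list-units --type=mount   # what systemd currently believes is mounted
systemctl cat mnt-disk.mount        # inspect the unit generated from /etc/fstab
systemctl daemon-reload             # regenerate units after editing fstab
mount /mnt/disk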
204,907 | I'm trying to integrate ipython and vim through a single tmux session using the popular vim plugin "vim-slime." The problem is that while I can get it to work fine provided that vim is opened in a separate tmux window (or using gvim), if I try to send lines of code to a different pane in the same window, at best I wind up sending it to the vim session I'm currently using. Really what I want in my setup is vim on the right-hand side of the screen, ipython on the upper-left, and a regular command-line on the bottom left. I don't really want to be opening and managing multiple sessions and windows. Is there a simple way to do this that I just don't know about because of my relative inexperience? | Okay, so I was having exactly the same problem, which is what brought me to this question. I have a split session, vim code on the left and a scheme prompt on the right. My problem was that I thought the session name was the socket name, but they are two different things. I had named the session '0', for the 0-th window, but in fact the SOCKET is named 'default' despite the session name I specified. To get a list of the tmux sockets, run:

lsof -U | grep "^tmux"

I found that from this answer: https://stackoverflow.com/questions/11333291/is-it-possible-to-find-tmux-sockets-currently-in-use The above was helpful to see the actual names of the sockets. That is what you put in the first prompt. I was putting '0', which was the name of my session, but it was not working. 'default' is what is needed there despite the fact I had named the session. Then, at the second prompt, I entered (index-0 window, index-1 pane):

:0.1

Voila! Finally! It was working. Brilliant, now side-by-side editing! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116322/"
]
} |
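The whole layout and the plugin wiring can be scripted so the socket and pane never have to be typed in again. A sketch, assuming the usual vim-slime configuration variables; the session name and pane address here are just examples:

tmux new-session -s work \; split-window -h \; select-pane -t 0 \; split-window -v

and in ~/.vimrc:

let g:slime_target = "tmux"
let g:slime_default_config = {"socket_name": "default", "target_pane": ":0.1"}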
204,922 | I don't normally post here, but I am ripping my hair out over this one. I have a Python script that forks when it launches, and is responsible for starting a bunch of other processes. This script used to be launched at startup via sysvinit, but recently I upgraded to Debian Jessie, so I have adapted it to launch via systemd. Unfortunately, I'm running into an issue I can't work out. When you launch the script directly in a user shell, it launches its child processes correctly, and when the script exits the child processes are orphaned and continue to run. When launched via systemd, if the parent process exits, the children all exit too (well, the screens that they launch in die and appear as Dead). Ideally I need to be able to restart the parent script without killing all the child processes. Is there something that I am missing? Thanks!

[Unit]
Description=Server commander
After=network.target

[Service]
User=serveruser
Type=forking
PIDFile=/var/Server/Server.pid
ExecStart=/var/Server/Server.py
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target

Edit: It's probably relevant for me to point out that the Python script is essentially a 'controller' for its child processes. It starts and stops servers in GNU screens as requested from a central server. It is normally always running; it doesn't spawn services and exit. There are cases, however, where I would like to be able to reload the script without killing child processes, even if that means the processes are orphaned off to pid 1. In fact, it wouldn't even matter if the Python script started off processes as a parent process, if that is even possible. A better explanation of how it works:

1. systemd spawns Server.py
2. Server.py forks and writes the pid file for systemd
3. Server.py then spawns server processes in GNU screen based on its instructions
4. Server.py continues to run to perform any restarts requested from the server

When launching without systemd, Server.py can be restarted and the GNU screens it launches are unaffected. When launching with systemd, when Server.py shuts down, instead of those screen processes being orphaned off to pid 1, they are killed. | I managed to fix this simply by setting KillMode to process instead of control-group (the default). Thanks all! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116330/"
]
} |
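Concretely, that is one extra line in the [Service] section of the unit shown in the question, followed by a reload (the unit name in the restart command is hypothetical):

[Service]
KillMode=process

systemctl daemon-reload
systemctl restart server.service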
204,938 | I recently installed Linux Ubuntu 14.04 on my computer. To enable the internet connection I needed to change my IP and gateway address. I did the following as the root user:

# ifconfig eth0 "my ip address here" netmask 255.255.255.0 up
# route add default gw "gw address here"

It works fine for a couple of minutes but then goes back to the previous settings every time. So, how can I change the IP and gateway addresses permanently? | As stated by jpkotta, network-manager is likely the culprit. You can see its status by running ps -aux | grep network-manager | grep <username>. If you get a result, it is running; otherwise it isn't. It will keep overwriting any changes you make with ifconfig as long as it is running. Kill network-manager by running sudo service network-manager stop. You can bring it back up any time with sudo service network-manager start. Once it is disabled, use ifconfig to set your static address, OR edit your /etc/network/interfaces file to include something like:

auto eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8

Finally, run ifup -a to bring up the interfaces you have in your /etc/network/interfaces file. All of this can be avoided, though, if you'd rather not mess around with killing network manager: just click on its icon in the taskbar and click 'Edit connections'. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116341/"
]
} |
204,949 | Some shell scripts I have come across use the following syntax when defining variables: file_list_1=$(echo "list.txt") or file_list_2=$(find ./) I would have used: file_list_1="list.txt" and file_list_2=`find ./` I'm not sure which if any of the above are better or safer. What is the benefit of using the syntax x=$( ) when setting a variable? | From the manual ( man bash ): $( command ) or ` command ` Bash performs the expansion by executing command and replacing the command substitution with the standard output of the command, with any trailing newlines deleted. Embedded newlines are not deleted, but they may be removed during word splitting. The command substitution $(cat file ) can be replaced by the equivalent but faster $(< file ) . When the old-style backquote form of substitution is used, backslash retains its literal meaning except when followed by $ , ` , or \ . The first backquote not preceded by a backslash terminates the command substitution. When using the $( command ) form, all characters between the parentheses make up the command; none are treated specially. The POSIX standard defines the $() form of command substitution. $() allows nested commands and looks better (legibility). It should be available on all Bourne shells . You can read more on IEEE Std 1003.1, Shell Command Language, Section 2.6.3 Command Substitution . At least one Unix, AIX, has documented that backticks are obsolete . From that link: Although the backquote syntax is accepted by ksh, it is considered obsolete by the X/Open Portability Guide Issue 4 and POSIX standards. These standards recommend that portable applications use the $(command) syntax. However, /bin/sh does not have to be POSIX compliant. So there is still sometimes a case for backticks in the real world, as @Jeight points out. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/204949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116348/"
]
} |
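The nesting advantage is easiest to see side by side; both lines below should do the same thing, but the backquote form needs backslash escaping for the inner substitution:

outer=$(dirname $(readlink -f "$0"))
outer=`dirname \`readlink -f "$0"\``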
204,956 | I am trying to run an update of FreeBSD 10 and I am being asked for the kernel sources:

===>>> Launching child to update lsof-4.89.b,8 to lsof-4.89.d,8
===>>> All >> lsof-4.89.b,8 (9/9)
===>>> Currently installed version: lsof-4.89.b,8
===>>> Port directory: /usr/ports/sysutils/lsof
===>>> This port is marked IGNORE
===>>> requires kernel sources
===>>> If you are sure you can build it, remove the IGNORE line in the Makefile and try again.
===>>> Update for lsof-4.89.b,8 failed
===>>> Aborting update

but sysinstall no longer exists:

sysinstall: not found

What is the new method of installing the kernel sources in FreeBSD 10? I thought bsdinstall, but it only tries to chop up my disk, which I do not want. | You can manually download and extract a tarball of the full source tree for your specific release from ftp://ftp.freebsd.org/pub/FreeBSD/releases/ E.g.

fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.2-RELEASE/src.txz
tar -C / -xzvf src.txz

10.2-RELEASE MUST be replaced with the correct version of your OS. One can find the version using the command:

freebsd-version -k

The minor versions should be ignored when fetching from the above URL. For example: if it is 10.2-RELEASE-p1, just use 10.2-RELEASE. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104719/"
]
} |
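The release string and architecture can be filled in automatically; a sketch assuming the same FTP layout as in the answer (the sed expression just strips a -pN patch suffix, as advised above):

rel=$(freebsd-version -k | sed 's/-p[0-9]*$//')
fetch "ftp://ftp.freebsd.org/pub/FreeBSD/releases/$(uname -m)/${rel}/src.txz"
tar -C / -xzvf src.txz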
204,970 | How can I create a new system user, an exact copy of another one (having the same groups, permissions, privileges and settings), but with a different username, password and home directory? | This script will do it (updated as per comments):

#!/bin/bash
SRC=$1
DEST=$2
SRC_GROUPS=$(id -Gn ${SRC} | sed "s/ /,/g" | sed -r 's/\<'${SRC}'\>\b,?//g')
SRC_SHELL=$(awk -F : -v name=${SRC} '(name == $1) { print $7 }' /etc/passwd)
sudo useradd --groups ${SRC_GROUPS} --shell ${SRC_SHELL} --create-home ${DEST}
sudo passwd ${DEST}

It gets the source user's groups (not including the group that's the same as their login) and shell, then creates a new user with the same shell and secondary groups. Usage:

clone-user src_user_name new_user_name

There is no error checking; it's just a quick and dirty clone script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/204970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80070/"
]
} |
204,985 | I was recently given username/password access to a list of servers and want to propagate my SSH public key to these servers, so that I can login more easily. So that it's clear: There is not any pre-existing public key on the remote servers that I can utilize to automate this This constitutes the very first time I'm logging into these servers, and I'd like to not have to constantly type my credentials in to access them Nor do I want to type in my password over and over using ssh-copy-id in a for loop. | Rather than type your password multiple times you can make use of pssh and its -A switch to prompt for it once, and then feed the password to all the servers in a list. NOTE: Using this method doesn't allow you to use ssh-copy-id , however, so you'll need to roll your own method for appending your SSH pub key file to your remote account's ~/.ssh/authorized_keys file. Example Here's an example that does the job: $ cat ~/.ssh/my_id_rsa.pub \ | pssh -h ips.txt -l remoteuser -A -I -i \ ' \ umask 077; \ mkdir -p ~/.ssh; \ afile=~/.ssh/authorized_keys; \ cat - >> $afile; \ sort -u $afile -o $afile \ 'Warning: do not enter your password if anyone else has superuserprivileges or access to your account.Password:[1] 23:03:58 [SUCCESS] 10.252.1.1[2] 23:03:58 [SUCCESS] 10.252.1.2[3] 23:03:58 [SUCCESS] 10.252.1.3[4] 23:03:58 [SUCCESS] 10.252.1.10[5] 23:03:58 [SUCCESS] 10.252.1.5[6] 23:03:58 [SUCCESS] 10.252.1.6[7] 23:03:58 [SUCCESS] 10.252.1.9[8] 23:03:59 [SUCCESS] 10.252.1.8[9] 23:03:59 [SUCCESS] 10.252.1.7 The above script is generally structured like so: $ cat <pubkey> | pssh -h <ip file> -l <remote user> -A -I -i '...cmds to add pubkey...' High level pssh details cat <pubkey> outputs the public key file to pssh pssh uses the -I switch to ingest data via STDIN -l <remote user> is the remote server's account (we're assuming you have the same username across the servers in the IP file) -A tells pssh to ask for your password and then reuse it for all the servers that it connects to -i tells pssh to send any output to STDOUT rather than store it in files (its default behavior) '...cmds to add pubkey...' - this is the trickiest part of what's going on, so I'll break this down by itself (see below) Commands being run on remote servers These are the commands that pssh will run on each server: ' \ umask 077; \ mkdir -p ~/.ssh; \ afile=~/.ssh/authorized_keys; \ cat - >> $afile; \ sort -u $afile -o $afile \' In order: set the remote user's umask to 077, this is so that any directories or files we're going to create, will have their permissions set accordingly like so: $ ls -ld ~/.ssh ~/.ssh/authorized_keysdrwx------ 2 remoteuser remoteuser 4096 May 21 22:58 /home/remoteuser/.ssh-rw------- 1 remoteuser remoteuser 771 May 21 23:03 /home/remoteuser/.ssh/authorized_keys create the directory ~/.ssh and ignore warning us if it's already there set a variable, $afile , with the path to authorized_keys file cat - >> $afile - take input from STDIN and append to authorized_keys file sort -u $afile -o $afile - uniquely sorts authorized_keys file and saves it NOTE: That last bit is to handle the case where you run the above multiple times against the same servers. This will eliminate your pubkey from getting appended multiple times. Notice the single ticks! Also pay special attention to the fact that all these commands are nested inside of single quotes. That's important, since we don't want $afile to get evaluated until after it's executing on the remote server. ' \ ..cmds... 
\'

I've expanded the above so it's easier to read here, but I generally run it all on a single line like so:

$ cat ~/.ssh/my_id_rsa.pub | pssh -h ips.txt -l remoteuser -A -I -i 'umask 077; mkdir -p ~/.ssh; afile=~/.ssh/authorized_keys; cat - >> $afile; sort -u $afile -o $afile'

Bonus material

By using pssh you can forgo having to construct files and either provide dynamic content using -h <(...some command...) or you can create a list of IPs using another of pssh's switches, -H "ip1 ip2 ip3". For example:

$ cat .... | pssh -h <(grep -A1 dp15 ~/.ssh/config | grep -vE -- '#|--') ...

The above could be used to extract a list of IPs from my ~/.ssh/config file. You can of course also use printf to generate dynamic content too:

$ cat .... | pssh -h <(printf "%s\n" srv0{0..9}) ....

For example:

$ printf "%s\n" srv0{0..9}
srv00
srv01
srv02
srv03
srv04
srv05
srv06
srv07
srv08
srv09

You can also use seq to generate formatted number sequences!

References & similar tools to pssh: if you don't want to use pssh as I've done above, there are some other options available: sshpt and Ansible's authorized_key_module | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/204985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
205,010 | I'm renaming network interfaces by modifying the files in /etc/sysconfig/network-scripts: eth0 -> nic0, eth1 -> nic1. The content of the network scripts looks like this, after modification:

# cat /etc/sysconfig/network-scripts/ifcfg-nic0
DEVICE=nic0
BOOTPROTO=static
ONBOOT=yes
HWADDR=xx:xx:xx:xx:xx:xx
USERCTL=no
IPV6INIT=no
MASTER=bond0
SLAVE=yes

A reboot activates the new config. But how do I activate this configuration without rebooting? A systemctl restart network doesn't do the trick. I can shut down one interface by its old name (ifdown eth0), but ifup results in the below message no matter if the old or new name was provided:

ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device nic0 does not seem to be present, delaying initialization.

/etc/init.d/network status shows this output:

Configured devices:
lo bond0 nic0 nic1
Currently active devices:
lo eth0 eth1 bond0

Both ifconfig and ip a show the old interface names. | You can rename the device using the ip command:

/sbin/ip link set eth1 down
/sbin/ip link set eth1 name eth123
/sbin/ip link set eth123 up

Edit: I am leaving the below for the sake of completeness and posterity (and for informational purposes), but I have confirmed swill's comment and Marco Macuzzo's answer that simply changing the name and device of the interface in /etc/sysconfig/network-scripts/ifcfg-eth0 (and renaming the file) will cause the device to be named correctly as long as the hwaddr= field is included in the configuration file. I recommend using this method instead after the referenced update. You may also want to make sure that you configure a udev rule, so that this will work on the next reboot too. The path for udev moved in CentOS 7 to /usr/lib/udev/rules.d/60-net.rules, but you are still able to manage it the same way. If you added "net.ifnames=0 biosdevname=0" to your kernel boot string to return to the old naming scheme for your NICs, you can remove

ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{type}=="1", PROGRAM="/lib/udev/rename_device", RESULT=="?*", NAME="$result"

and replace it with

ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:3f:a7", NAME="eth123"

You need one entry per NIC. Be sure to use the correct MAC address and update the NAME field. If you did not use "net.ifnames=0 biosdevname=0", be careful, as there could be unintended consequences. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/205010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101263/"
]
} |
205,016 | While I am connecting to my server I get:

-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable

And when I try the following commands, the result is the same.

-bash-4.1$ df -h
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$
-bash-4.1$ ls -lrth
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Interrupted system call
-bash-4.1$
-bash-4.1$ ps -aef | grep `pwd`
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$

Why is this happening, and how can I resolve it? | This could be due to some resource limit, either on the server itself or specific to your user account. The limits in your shell can be checked via ulimit -a. Especially check ulimit -u, the maximum number of user processes: if you have reached the maximum, fork is unable to create any new process and fails with that error. This could also be due to a swap/memory resource issue. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/205016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105701/"
]
} |
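A few concrete checks along those lines, assuming you can still get one more process to run; the limits.conf values are only examples:

ulimit -u                       # current max user processes
ps -u "$USER" -o pid= | wc -l   # processes currently owned by you

# /etc/security/limits.conf - permanent fix on PAM-based systems
someuser soft nproc 4096
someuser hard nproc 8192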
205,022 | I have a multiline variable, and I only want the first line in that variable. The following script demonstrates the issue:

#!/bin/bash
STRINGTEST="Onlygetthefirstline
butnotthesecond
orthethird"
echo " Take the first line and send to standard output:"
echo ${STRINGTEST%%$'\n'*}
# Output is as follows:
# Onlygetthefirstline
echo " Set the value of the variable to the first line of the variable:"
STRINGTEST=${STRINGTEST%%$'\n'*}
echo " Send the modified variable to standard output:"
echo $STRINGTEST
# Output is as follows:
# Onlygetthefirstline butnotthesecond orthethird

Question: Why does ${STRINGTEST%%$'\n'*} return the first line when placed after an echo command, but replace newlines with spaces when placed after assignment? | Maybe there is another way to achieve what you want to do, but this works:

#!/bin/bash
STRINGTEST="Onlygetthefirstline
butnotthesecond
orthethird"
STRINGTEST=(${STRINGTEST[@]})
echo "${STRINGTEST[0]}"

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116403/"
]
} |
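For the record, the parameter expansion in the question does store only the first line; in many cases the surprising spaces come from the unquoted echo $STRINGTEST, which lets the shell word-split the value. Quoting, or reading just the first line directly, sidesteps the array trick entirely; a sketch in plain bash:

echo "$STRINGTEST"                         # quoted: newlines survive
firstline=${STRINGTEST%%$'\n'*}            # same expansion as in the question
IFS= read -r firstline <<< "$STRINGTEST"   # or let read take line one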
205,058 | Environment:

Distribution: Arch Linux
Display Manager: GDM 3.16.x
Desktop Environment: Gnome 3.16

Question: How to disable the user list displayed on the login screen? Clarification of the wanted result: in effect, from the user's perspective, being presented with a box that requests a username upon reaching the login screen. Not a solution: making the given users into system users is not a very good solution. Preferred method of achieving the wanted result: what exact packages do I need to install or disable? If not through packages, then what utilities should I use to configure the needed setting? If lower-level configuration is required, what manual settings do I need to change in what files (filepaths please)? | This should work with gdm >= 3.12 (tested on archlinux with gdm 3.16.1):

1. Switch to a VT (e.g. Ctrl + Alt + F3), log in as root and run:

su - gdm -s /bin/sh

to switch user to gdm.

2. Then run:

export $(dbus-launch)

and:

GSETTINGS_BACKEND=dconf gsettings set org.gnome.login-screen disable-user-list true

3. Run exit or hit Ctrl + D to return to the root account.

4. Restart the display manager:

systemctl restart gdm

Reverting is pretty much the same, just change true to false at step 2. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114267/"
]
} |
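The same key can also be set system-wide without switching to a VT by shipping a dconf keyfile, the method the GNOME documentation describes for login-screen settings; the 00-login-screen file name is arbitrary:

# /etc/dconf/profile/gdm
user-db:user
system-db:gdm

# /etc/dconf/db/gdm.d/00-login-screen
[org/gnome/login-screen]
disable-user-list=true

Then run dconf update as root and restart gdm.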
205,076 | I'm using the timeout command on Debian to wait 5 seconds for my script. It works great, but the problem I have is that I need a return value: like 1 for a timeout and 0 for no timeout. How am I going to do this? Have a look at my code:

timeout 5 /some/local/script/connect_script -x 'status' > output.txt
# here i need the return of timeout

As you see, my connect_script -x 'status' returns the status as a string and prints it to the screen (probably you can't see this). The background of this issue is that if the server (for connect_script) freezes, the script does nothing. That's why I need the timeout around that. And when it times out I want to restart the server. I can do that, but I have no idea how I can see whether it timed out or not... | If timeout times out, it exits with status 124; you can check this to determine whether the script timed out or not. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/205076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116430/"
]
} |
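Put together with the command from the question, the check looks like this; restart_server is a placeholder for whatever restart logic applies:

timeout 5 /some/local/script/connect_script -x 'status' > output.txt
if [ $? -eq 124 ]; then
    restart_server   # hypothetical
fi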
205,095 | I have a list of sound sources that I'm processing with a script. An example would be:

alsa_input.usb-AVEO_Technology_Corp._USB2.0_Camera-02-Camera.analog-mono
alsa_input.pci-0000_00_14.2.analog-stereo
alsa_input.usb-Plantronics_Plantronics_GameCom_780-00-P780.analog-stereo

I'd like to sort them by substrings that are in arbitrary positions. For example I'd like sort --by usb file.txt to put the USB devices first (while otherwise preserving their order):

alsa_input.usb-AVEO_Technology_Corp._USB2.0_Camera-02-Camera.analog-mono
alsa_input.usb-Plantronics_Plantronics_GameCom_780-00-P780.analog-stereo
alsa_input.pci-0000_00_14.2.analog-stereo

And I'd like to be able to specify multiple substrings to get finer-grained priority. So sort --by Plantronics --by usb file.txt would put any line containing "Plantronics" first, followed by the lines containing "usb", followed by the rest of the lines. Can I accomplish this with any sort of command line utility? | It sounds like you want a scoring system. Write a script to assign a score to each line, indicating how early in the output you want to see it. awk seems well suited to this job. For your example:

#!/bin/sh
awk '{score=0}
     /usb/         {score=1}
     /Plantronics/ {score=2}
     {print score, NR, $0}' "$@"

This assigns a score of 0 to every line by default, and then overrides it with a 1 if the line contains usb and 2 if the line contains Plantronics. I have placed the usb and Plantronics statements in that order so, if a line contains both strings, the final value will be 2. Then

./score file.txt | sort -k1nr -k2n | cut -d" " -f3-

(where score is the name of the script). sort -k1nr means sort based on the first field (the score), treating it as a number and sorting higher values first (because the score script assigned high scores to the lines you're most interested in). -k2n means, for lines that have the same value in the first field, sort by the second field, as a number in normal, ascending order. The second field is NR, the record number (a.k.a. line number). This ensures that lines with the same score (e.g., those that contain usb but not Plantronics) come out in their original order. If you don't care about that, delete the NR, from the print statement, delete the -k2n from the sort command, and change the -f3- to -f2-. (Actually, sort may preserve order like that by default, so you might not need that at all.) Of course the cut -d" " -f3- strips off the numbers that the score script prepended to the data. If you don't fully understand how this is working, try running

./score file.txt

and

./score file.txt | sort -k1nr -k2n

This approach is quite flexible. For example, the above code will produce, in order, all lines containing Plantronics, all lines containing usb (but not Plantronics), and all lines containing neither of the above, with each group sorted in order of appearance in the input file. But, by changing the score script as follows,

#!/bin/sh
awk '{score=0}
     /usb/         {score+=1}
     /Plantronics/ {score+=2}
     {print score, NR, $0}' "$@"

we can assign a score of 3 to lines that contain both strings, so now we have all lines containing Plantronics and usb, followed by all lines containing Plantronics (but not usb), followed by all lines containing usb (but not Plantronics), and then all lines containing neither of the above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31848/"
]
} |
205,135 | I'm trying to recursively download website which is normally available only when you login. I have valid username and password, but the problem is that I need to login through web interface, so using --user=user and --password=password doesn't help. wget downloads only one webpage with text: Sorry this page is not available, maybe you've forgotten to login? Is it possible to download? I can't use --user, --password even at the login page because there is no FTP/HTTP file retrieval login as mentioned in man wget : --user=user--password=password Specify the username user and password password for both FTP and HTTP file retrieval. Classic graphical login is there. If I try to do this: wget --save-cookies coookies --keep-session-cookies --post-data='j_username=usr&j_password=pwd' 'https://idp2.civ.cvut.cz/idp/Authn/UserPassword' . Using POST method to login and trying to save cookies, the coookies file is empty and the saved page is some error page. The URL is https://idp2.civ.cvut.cz/idp/Authn/UserPassword . Actually, when I want to log in, it redirects me to this page and when I successfully log in, it redirects me back to the page where I was before or some page where I wanted to be after logging in (example: https://progtest.fit.cvut.cz/ . | The session information is probably saved in a cookie to allow you to navigate to other pages after you have logged in. If this is the case, you could do this in two steps : Use wget 's --save-cookies mycookies.txt and --keep-session-cookies options on the login page of the website along with your --username and --password options Use wget 's --load-cookies mycookies.txt option on the subsequent pages you are trying to retrieve. EDIT If the --password and --username option doesn't work, you must find out the info sent to the server by the login page and mimic it : For a GET request, you can add the GET parameters directly in the address wget must fetch (make sure you properly quote the & , = and other special characters). The url would probably look something like https://the_url?user=foo&pass=bar . For a POST request you can use wget 's --post-data=the_needed_info option to use the post method on the needed login info. EDIT 2 It seems that you indeed need the POST method with the j_username and j_password set. Try --post-data='j_username=yourusername&j_password=yourpassword option to wget . EDIT 3 With the page of origin, I was able to understand a little more of what is happening. That being said, I cannot make sure that it works because, well, I don't have (nor do I want) valid credentials. That being said, here is what's happening : The page https://progtest.fit.cvut.cz/ sets a PHPSESSID cookie and present you with login options. Clicking the login button sends a request to https://progtest.fit.cvut.cz/shibboleth-fit.php which takes the PHPSESSID cookie (not sure if it uses it) and redirects you to the SSO engine with a specially crafted url just for you which looks like this : https://idp2.civ.cvut.cz/idp/profile/SAML2/Redirect/SSO?SAMLRequest=SOME_VERY_LONG_AND_UNIQUE_ID The SSO response sets a new cookie named _idp_authn_lc_key and redirects you to the page https://idp2.civ.cvut.cz:443/idp/AuthnEngine which redirects you again to https://idp2.civ.cvut.cz:443/idp/Authn/UserPassword (the real login page) You enter your credentials and send the post data j_username and j_password along with the cookie from the SSO response ??? 
The first four steps can be done with wget like this:

origin='https://progtest.fit.cvut.cz/'
# Get the PHPSESSID cookie
wget --save-cookies phpsid.cki --keep-session-cookies "$origin"
# Get the _idp_authn_lc_key cookie
wget --load-cookies phpsid.cki --save-cookies sso.cki --keep-session-cookies --header="Referer: $origin" 'https://progtest.fit.cvut.cz/shibboleth-fit.php'
# Send your credentials
wget --load-cookies sso.cki --save-cookies auth.cki --keep-session-cookies --post-data='j_username=usr&j_password=pwd' 'https://idp2.civ.cvut.cz/idp/Authn/UserPassword'

Note that wget follows redirection all by itself, which helps us quite a bit in this case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114844/"
]
} |
205,141 | I'm seeing very high RX dropped packets in the output of ifconfig: thousands of packets per second, an order of magnitude more than regular RX packets.

wlan0 Link encap:Ethernet HWaddr 74:da:38:3a:f4:bb
      inet addr:192.168.99.147 Bcast:192.168.99.255 Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:31741 errors:0 dropped:646737 overruns:0 frame:0
      TX packets:18424 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:90393262 (86.2 MiB) TX bytes:2348219 (2.2 MiB)

I'm testing WiFi dongles. Both have this problem, and the one with the higher drop rate actually performs better in ping floods. The one with low dropped packets suffers from extreme ping RTTs, while the other never skips a beat. What does Linux consider a dropped packet? Why am I seeing so many of them? Why doesn't it seem to affect performance? There are lots of questions around with answers that say a dropped packet could be one of the following, but that doesn't help me very much, because those possibilities don't seem to make sense in this scenario. | Packets dropped as seen from ifconfig can be due to many reasons; you should dig deeper into the NIC statistics to figure out the real reason. Below are some general causes:

- NIC ring buffers getting full and unable to cope with incoming bursts of traffic
- the CPU receiving NIC interrupts is very busy and unable to process them
- some cable/hardware/duplex issues
- some bug in the NIC driver

Look at the output of ethtool -S wlan0, iwconfig wlan0, and the content of /proc/net/wireless for any further information. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78754/"
]
} |
205,142 | On my system I have some amount of swap used:

undefine@uml:~$ free
             total       used       free     shared    buffers     cached
Mem:      16109684   15848264     261420     633496      48668    6096984
-/+ buffers/cache:    9702612    6407072
Swap:     15622140        604   15621536

How can I check what is in swap? I tried to check it via processes, but for every pid on the system VmSwap is 0:

undefine@uml:~$ awk '/VmSwap/ {print $2}' /proc/*/status | uniq
0

What else can be in swap? I thought about tmpfs, but I reread all files on the tmpfs mounts and it doesn't flush the swap usage. | smem is the standard tool for this. It's clean and simple. On a Debian based system, install it via the package manager:

sudo apt-get install smem

A sample (clipped) output from my system:

$ smem -s swap -t -k -n
  PID User     Command                         Swap      USS      PSS      RSS
  831 1000     /bin/bash                          0     3.8M     3.8M     5.5M
 3931 1000     bash /usr/bin/sage -c noteb   276.0K     4.0K    20.0K     1.2M
17201 1000     /usr/bin/dbus-launch --exit   284.0K     4.0K     8.0K   500.0K
17282 1000     /usr/bin/mate-settings-daem   372.0K    11.0M    11.7M    21.8M
17284 1000     marco                         432.0K    16.7M    18.1M    29.5M
17053 1000     mate-session                  952.0K     3.3M     3.5M     9.2M
 3972 1000     python /usr/lib/sagemath/sr     2.7M   101.8M   102.1M   104.3M
-------------------------------------------------------------------------------
  141 1                                        5.2M     3.9G     4.0G     4.5G

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85895/"
]
} |
205,158 | I'd like to restrict the columns shown by the ls -l command, by eliminating the first four columns. ls -lh shows:

drwxr-sr-x 20 gamma alpha 4.0K May 22 13:18 Desktop
drwxr-sr-x  3 gamma alpha   22 Oct  6  2014 Eclipse
-rw-r--r--  1 gamma alpha  28K Jul 11  2014 fire
drwxr-sr-x  5 gamma alpha   48 Mar 31  2014 lb_deployment

To eliminate the first four columns, I tried ls -lh | cut -d " " -f5-. But it doesn't behave as desired:

4.0K May 22 13:18 Desktop
alpha 22 Oct 6 2014 Eclipse
alpha 28K Jul 11 2014 fire
alpha 48 Mar 31 2014 lb_deployment

I'd like it to look like this:

4.0K May 22 13:18 Desktop
  22 Oct  6  2014 Eclipse
 28K Jul 11  2014 fire
  48 Mar 31  2014 lb_deployment

The reason it does not behave as desired is that in cut I defined the delimiter to be a blank space (-d " "), but since the number of links (second column of \ls -lh) of the first file is a 2-digit number (20), ls -lh adds another blank space to the link counter of the files with a 1-digit link counter to align the columns. And this causes cut to misbehave. Any ideas on how to fix this? | Pass the -o and -g options to omit the user and group columns. Since user and group names can contain spaces, you can't reliably edit them out. There's no option to omit the permissions and link count columns. Since the first column you want to keep can start with whitespace (for right alignment), you can't use the whitespace-to-non-whitespace transition as the start criterion. Instead, use the right edge of the last column to eliminate the columns you don't want. This is safe because the first two columns can't contain embedded whitespace.

ls -lhog | sed 's/^[^ ][^ ]* *[^ ][^ ]* //'

Explanation of the sed command:

- s/REGEXP/REPLACEMENT/ replaces the first occurrence of the specified regular expression on each line with the specified replacement text. Here the replacement text is empty.
- ^ at the beginning of the regexp makes it match only at the beginning of the line.
- [^ ][^ ]* matches any non-empty sequence of characters other than a space.

Thus the sed command removes the first and second non-whitespace sequences, as well as the following spaces (but only one space at the end). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40343/"
]
} |
205,170 | I have an array: CATEGORIES=(one two three four) I can prepend to each array member using parameter expansion: echo ${CATEGORIES[@]/#/foo } I can append to each array member the same way: echo ${CATEGORIES[@]/%/ bar} How can I do both? None of these work: echo ${CATEGORIES[@]/(.*)/foo \1 bar}echo ${CATEGORIES[@]/(.*)/foo $1 bar}echo ${CATEGORIES[@]/(.*)/foo ${BASH_REMATCH[1]} bar} | Depending on what your ultimate aim is, you could use printf : $ a=(1 2 3)$ printf "foo %s bar\n" "${a[@]}"foo 1 barfoo 2 barfoo 3 bar printf re-uses the format string until all the arguments are used up, so it provides an easy way to apply some formatting to a set of strings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22172/"
]
} |
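If the goal is to modify the array itself rather than just print it, a plain loop over the indices applies both edits at once:

a=(1 2 3)
for i in "${!a[@]}"; do
    a[i]="foo ${a[i]} bar"
done
printf '%s\n' "${a[@]}"
# foo 1 bar
# foo 2 bar
# foo 3 bar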
205,177 | I got a new Windows laptop, and I wanted to dual boot with Linux. I installed Fedora, before changing my mind and going back to Mint. I'd like to keep Win 8.1 and Mint. However, now my UEFI boot menu contains five entries: the first two take me to Grub, which I guess is left over from my Fedora install. The next two both take me to Linux Mint, and the last takes me to Win 8.1. I'd like to remove both Fedora entries and one Linux Mint entry. The "Setup" interface makes it pretty simple to understand how, but: I'd like to make sure deleting those entries isn't something stupid. I don't know how to handle those remnants of Grub that are left from the Fedora install. Should I delete them? Ignore them? If I do delete the Grub remnants, I'm not sure how to do so, or even which partition it's on. Here's a look at my partition table in Gparted and my partition table in Windows. Last but not least, here's what EasyBCD shows:

There are a total of 5 entries listed in the bootloader.
Default: Windows 8.1
Timeout: 30 seconds
EasyBCD Boot Device: C:\

Entry #1
Name: Fedora
BCD ID: {51954931-ff5c-11e4-8caa-f68841e7e615}
Device: \Device\HarddiskVolume1
Bootloader Path: \EFI\FEDORA\SHIM.EFI

Entry #2
Name: ubuntu
BCD ID: {51954933-ff5c-11e4-8caa-f68841e7e615}
Device: \Device\HarddiskVolume1
Bootloader Path: \EFI\UBUNTU\SHIMX64.EFI

Entry #3
Name: UEFI OS
BCD ID: {51954932-ff5c-11e4-8caa-f68841e7e615}
Device: \Device\HarddiskVolume1
Bootloader Path: \EFI\BOOT\BOOTX64.EFI

Entry #4
Name: ubuntu
BCD ID: {51954934-ff5c-11e4-8caa-f68841e7e615}
Device: \Device\HarddiskVolume1
Bootloader Path: \EFI\UBUNTU\GRUBX64.EFI

Entry #5
Name: Windows 8.1
BCD ID: {current}
Drive: C:\
Bootloader Path: \Windows\system32\winload.efi

How should I handle these extra boot options without bricking my laptop? | In Linux use the command efibootmgr.

efibootmgr -v

lists the entries.

efibootmgr -b 0002 -B

would remove entry number 2 from the menu. In case you want to regenerate these values after they are deleted: first mount your ESP, usually to /boot/efi, but /mnt is fine too. Then

grub-install --target=x86_64-efi --efi-directory=[ESP mount] --bootloader-id=[name]

(this does not reconfigure GRUB). If you need to re-enter the entry for Fedora or Ubuntu using Shim instead of GRUB:

sudo efibootmgr -c -L Fedora -d /dev/sdX -p Y -l \\EFI\\fedora\\shim.efi

X is the device and Y is the partition number of the EFI system partition (ESP). Also, note that \EFI\BOOT\BOOTX64.EFI will be loaded when you select the hard disk from UEFI instead of one of the NVRAM entries. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116479/"
]
} |
205,180 | I have MySQL password saved on a file foo.php , for example P455w0rd , when I try to use it: $ cat foo.php | grep '$dbpwd=' | cut -d '"' -f 2 | mysql -U root -p mydb -h friendserverEnter password: (holds)$ echo P455w0rd | mysql -u root -p mydb -h friendserverEnter password: (holds) Both option still ask for password, what's the correct way to send password from stdin ? | You have to be very careful how you pass passwords to command lines as, if you're not careful, you'll end up leaving it open to sniffing using tools such as ps . The safest way to do this would be to create a new config file and pass it to mysql using either the --defaults-file= or --defaults-extra-file= command line option. The difference between the two is that the latter is read in addition to the default config files whereas with the former, only the one file passed as the argument is used. Your additional configuration file should contain something similar to: [client]user=foopassword=P@55w0rd Make sure that you secure this file. Then run: mysql --defaults-extra-file=<path to the new config file> [all my other options] | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/205180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27996/"
]
} |
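A sketch of the full workflow from the answer to 205,180 above, reusing the question's credentials; the file path is an arbitrary choice, and umask 077 ensures the credentials file is created unreadable by other users. Note that --defaults-extra-file must come before any other mysql option:

umask 077
cat > "$HOME/.my_extra.cnf" <<'EOF'
[client]
user=root
password=P455w0rd
EOF
mysql --defaults-extra-file="$HOME/.my_extra.cnf" mydb -h friendserver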
205,191 | I know we can concatenate files with cat file [file] [[file] ...] > joined-file . I have a directory which contains lakhs (hundreds of thousands) of files, each very small in size (1-4 KB), and I want to concatenate them in sets of 1000 into one file per set, irrespective of their names and order, so that it's easy for another service to read them and hold all the file names in memory. This is what I have tried: for i in /var/abc/*.csv; do "$i" > file1.csv; rm -rf "$i"; done but then I need another variable to keep track of the count. What would be an efficient method? I can't directly concatenate all the files at once; I batch in thousands to make sure that no single output file grows beyond a limit. I have tried this with your answers: cd /var/abc for file in $(ls -p | grep -v / | tail -1000); do cat "$file" >>"/var/abcd/xigzag"$tick".csv" && rm -rf "$file"; done | You don't need to loop, you can tell cat to read all the files: cat /var/abc/*.csv > file1.csv && rm /var/abc/*.csv as long as there aren't too many files (but the limit is huge). Using && between the two commands ensures the files are only deleted if they were successfully "copied". There are a few caveats though: you mustn't run this in the same folder as the original files you're concatenating, otherwise the rm will delete the aggregate and you'll lose everything; if new CSV files appear between the start of the cat and the expansion of rm 's arguments, they'll be deleted without being copied; if any of the CSV files are modified after they have been concatenated, those modifications will be lost. You can mitigate the first two caveats by storing the list of files before creating the output file: set -- /var/abc/*.csvcat -- "$@" > file1.csv && rm -- "$@" This will still lose any changes made to files after they have been copied. To concatenate files 1000 at a time (so one resulting CSV per 1000 original CSVs), with any number of files you'd proceed as follows, in the target directory: find /var/abc -maxdepth 1 -type f -name \*.csv | split -d -l 1000 - csvlistsfor file in csvlists*; do cat $(cat $file) > concat${file##csvlists}.csv && rm $(cat $file); done This will find all the files in /var/abc named *.csv , and list them 1000 at a time in files starting with csvlists ( csvlists00 , csvlists01 ...). Then the for loop reads each file list and concatenates the listed CSV files into a file named concat00.csv etc. to match the list. Once each set of files is copied, the original files are deleted. This version assumes that the CSV files' names don't contain spaces, newlines and so on. A pure-bash batching sketch follows below. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45726/"
]
} |
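As an alternative to the find | split pipeline in the answer to 205,191 above, here is a hedged pure-bash sketch that batches 1000 files per output without writing list files. It assumes bash and that you run it from a directory other than /var/abc, so the out_N.csv files cannot match the source glob; like the original answer, it deletes each batch only after the cat succeeds:

n=0 batch=()
flush() {
    [ "${#batch[@]}" -eq 0 ] && return 0
    cat -- "${batch[@]}" > "out_$((n += 1)).csv" && rm -- "${batch[@]}"
    batch=()
}
for f in /var/abc/*.csv; do
    batch+=("$f")
    [ "${#batch[@]}" -eq 1000 ] && flush
done
flush   # write out the final, possibly short, batch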
205,217 | I find useful the gnome-terminal feature of editing (and creating) profiles with the option of holding the terminal open after the command exists. (I like to use context menu file manager to run commands to display info about a file in a terminal, to show info in a terminal while processing, etc.) I wasn't able to find the same feature in other terminals, so I have to install gnome-terminal even when it's not the default terminal. Are there other terminal emulators with this feature? Is there a command to be used in a given terminal that would have the same effect? I want, with a single line (to be added as context menu entry), to open the terminal, run a command and display info in the terminal window that stays open. Example: in pantheon-files (elementary os) I add a context menu entry for media info using a contractor file with a line like Exec=xterm -hold -e "mediainfo -i %f" (according to a comment below) or Exec=gnome-terminal --window-with-profile=new1 -e "mediainfo -i %f" . | You can achieve this in any terminal emulator by the simple expedient of arranging for the program not to exit without user confirmation. Tell the terminal to run terminal_shell_wrapper which is a script containing something like #!/bin/shif [ $# -eq 0 ]; then "${SHELL:-sh}"; else "$@"; fiecho "The command exited with status $?. Press Enter to close the terminal."read line If you want any key press to close the terminal change read line to stty -icanon; dd ibs=1 count=1 >/dev/null 2>&1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
205,233 | I am using Ubuntu 14.04 with Cinnamon desktop. After trying to create a shortcut for a PDF file to Cinnamon's Taskbar, I found maybe I should have searched for a folder containing the Taskbar's configuration information and create a launcher there. And by the way I don't know if I've guessed right or if yes, where would it be! How would I add the shortcut to the pdf file and then place it in the Taskbar? | A simple GUI method: Right-click Menu and then click Configure . Click Open the Menu Editor . Optionally create a new folder for your custom links. Create a new item that opens the file, using the command, evince /path/to/file.pdf , or whichever PDF viewer you want to use. Close the menu editor and right-click on your new menu item, selecting Add to Panel . If you chose to make a new folder in the menu, it exists in ~/.local/share/desktop-directories/ as a file with the extension, .directory . If you chose to make a new menu item, it exists in ~/.local/share/applications/ as a file with the extension, .desktop . These were created by alacarte . They are regular text files; and, now that you know their location, you could do this manually, too. The rest of the files for the menu are located in /usr/share/desktop-directories and /usr/share/applications . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/205233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49856/"
]
} |
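For reference, a minimal example of what a generated launcher for 205,233 above can look like (the names and the PDF path here are made up; adjust to your file and viewer). Saved, for example, as ~/.local/share/applications/my-report.desktop:

[Desktop Entry]
Type=Application
Name=My Report
Comment=Open my report PDF
Exec=evince /home/user/Documents/report.pdf
Icon=application-pdf
Terminal=false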
205,276 | I have a broken disk where I need to copy a 60G file from. From time to time the disk resets and I can't finish the copy. I would like to try and copy partial slices and put them all together. How can I do this? | Use ddrescue , which is designed for this type of scenario. It uses a log file to keep track of the parts of the data that it has successfully copied - or otherwise. As a result you can stop and restart it as many times as necessary, provided that the log file is maintained. See Ddrescue - Data recovery tool | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49309/"
]
} |
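A sketch of a typical two-step ddrescue invocation for 205,276 above; the source and destination paths are assumptions, and ddrescue works on plain files as well as whole devices. Because the same log file is reused, every run resumes exactly where the previous one stopped, which is what makes the copy survive the disk resets:

# first run: copy everything that reads easily, record progress in the log
ddrescue -n /mnt/failing/bigfile.dat /safe/bigfile.dat rescue.log
# later runs: go back over the bad areas, retrying each up to 3 times
ddrescue -r3 /mnt/failing/bigfile.dat /safe/bigfile.dat rescue.log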
205,299 | I have an array whose elements may contain spaces: set ASD "a" "b c" "d" How can I convert this array to a single string of comma-separated values? # what I want:"a,b c,d" So far the closest I could get was converting the array to a string and then replacing all the spaces. The problem is that this only works if the array elements don't contain spaces themselves (echo $ARR | tr ' ' ',') | Since fish 2.3.0 you can use the string builtin: string join ',' $ASD The rest of this answer applies to older versions of fish. One option is to use variable catenation: echo -s ,$ASD This adds an extra comma to the beginning. If you want to remove it, you can use cut : echo -s ,$ASD | cut -b 2- For completeness, you can also put it after and use sed : echo -s $ASD, | sed 's/,$//' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
205,354 | I have a large number of pictures from an old hard drive that I'm trying to organize. If I run ls -l , I notice all of these files have a creation date of 2012 or before. Ideally, I'd like to move these to my computer's second hard drive, which is not set to mount automatically. Preferably, I could do this all as a batch with some commands linked together. So far, I have ls -l | grep -i 2012 which spits out only the files with 2012 in the date provided by ls -l . Now, the trick would be cp 'ing all of those files to the new directory. I'm not sure where to go next with this because each file would have to be copied. What would be my next set of commands? | Do not use ls . It's not recommended to use in such cases. Moreover using grep to filter according to date is not a good idea. You filename might itself contain 2012 string, even though it was not modified in 2012. Use find command and pipe its output. find . -newermt 20120101 -not -newermt 20130101 -print0 | xargs -0 cp -t /your/target/directory Here, -newermt 20120101 ==> File's modified date should be newer than 01 Jan 2012-not ==> reverses the following condition. Hence file should be older than 01 Jan 2013-print0 and -0 ==> Use this options so that the command doesn't fail when filenames contain spaces | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205354",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88561/"
]
} |
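One caveat with the xargs cp -t form in the answer to 205,354 above: it flattens everything into the target directory. If the old pictures live in subdirectories and you want to keep that layout, GNU cp's --parents flag can recreate the relative paths. A sketch, assuming GNU coreutils and made-up paths:

mkdir -p /mnt/second-drive/pictures-2012
cd /path/to/pictures &&
find . -type f -newermt 20120101 -not -newermt 20130101 -print0 |
    xargs -0 cp --parents -t /mnt/second-drive/pictures-2012/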
205,383 | I've followed these instructions to use google-drive-ocamlfuse to mount Google Drive folders on a headless server But I've encountered an issue, unless I run the command to mount my ~/drive folder as root (via sudo) it throws an error. (precise)lukes@localhost:~$ google-drive-ocamlfuse -label me ~/drive/fuse: failed to exec fusermount: No such file or directory So I figured I'd require root privileges and ran sudo google-drive-ocamlfuse -label me /home/lukes/drive (precise)lukes@localhost:~$ sudo google-drive-ocamlfuse -label me /home/lukes/drive/[sudo] password for lukes: (precise)lukes@localhost:~$ ls -lls: cannot access drive: Permission deniedtotal 4drwx--x--- 3 lukes 1001 4096 May 24 17:00 Downloadsd????????? ? ? ? ? ? drive Huh? thats a wierd looking output from ls ,so I figured because I mounted it as root I need to run sudo ls -l (precise)lukes@localhost:~$ sudo ls -ltotal 8drwx--x--- 3 lukes 1001 4096 May 24 17:00 Downloadsdrwxrwxr-x 2 lukes lukes 4096 May 24 18:29 drive So the drive folder is owned correctly. Not sure what I can do to fix the fact I can't cd into it. N.B.I can sudo su and then cd drive && ls no problems, but I can't edit any of the files that are in my Google Drive folder, which defeats the point of having mounted them in the first place. | When you mount a FUSE filesystem, by default, only the user doing the mounting can access it. You can override this by adding the allow_other mount option, but this is a security risk if the filesystem wasn't designed for it (and most filesystems accessed via FUSE aren't): what are the file permissions going to allow other users to do? Furthermore only root can use allow_other , unless explicitly authorized by root. Anyway, you should do the mounting as your ordinary user, not as root. FUSE is designed to be used as an ordinary user. Depending on your distribution and how your system is configured, you may need to be in the fuse group. Check the permissions on /dev/fuse : you can use FUSE iff you have read-write access to it. Anyway, the error you got doesn't indicate a permission problem. The command fusermount should be in /bin or /usr/bin , on every user's $PATH . If you don't have it, the most likely explanation is that you need to install it. For example, on Debian/Ubuntu/…, install the fuse package. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116604/"
]
} |
205,403 | How can the NFS server on a Debian 8 system be limited to NFSv3? By default, shares can be mounted with both vers=3 and vers=4. /etc/default/nfs-kernel-server: # To disable NFSv4 on the server, specify '--no-nfs-version 4' here#RPCMOUNTDOPTS="--manage-gids"RPCMOUNTDOPTS="--manage-gids --no-nfs-version 4" This option does not seem to have any effect (rpcinfo still shows nfs accepting version 4). | Turns out modifying the RPCMOUNTDOPTS variable as described in /etc/default/nfs-kernel-server does not work and there's a bug report for that: #738063 This variable is used in the rpc.mountd call: # systemctl status nfs-kernel-server● nfs-kernel-server.service - LSB: Kernel NFS server support Loaded: loaded (/etc/init.d/nfs-kernel-server) Active: active (running) since Sun 2016-06-12 19:46:01 CEST; 6s ago Process: 15110 ExecStop=/etc/init.d/nfs-kernel-server stop (code=exited, status=0/SUCCESS) Process: 15119 ExecStart=/etc/init.d/nfs-kernel-server start (code=exited, status=0/SUCCESS) CGroup: /system.slice/nfs-kernel-server.service └─15167 /usr/sbin/rpc.mountd --manage-gids --port 2048 --no-nfs-version 4 However, clients are still able to mount using -o vers=4 . Instead, this option must be passed to rpc.nfsd .Looking at the init script /etc/init.d/nfs-kernel-server , it seems like the RPCNFSDCOUNT variable is the only variable that's passed to rpc.nfsd. It's not intended for that purpose, but it works and it seems to be the only option short of editing the init script. Solution : In /etc/default/nfs-kernel-server , add the --no-nfs-version 4 option to RPCNFSDCOUNT instead of RPCMOUNTDOPTS : # Number of servers to start up#RPCNFSDCOUNT=8RPCNFSDCOUNT="8 --no-nfs-version 4" Restart the NFS service: # systemctl restart nfs-kernel-server Test it: # mount -t nfs -o vers=4 SERVER:/data/public /mntmount.nfs: Protocol not supported Version 3 still works: # mount -t nfs -o vers=3 SERVER:/data/public /mnt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205403",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58393/"
]
} |
205,450 | I have a folder in which I have many subfolders. The Root folder name is allCSV and sub foldername is will be like a_date(s), b_date(s), c_date(s) ... I want a file which is in a_date(s) and ends with .csv . I tried with: find ../ -name '[a_]*' -a -name '*[.csv]' But it is showing all the files ending with .csv | The pattern [a_]* matches names that start with either of the characters a or _ . The pattern *[.csv] matches names that end with one of the characters . , c , s or v . To match names that start with a_ , use -name 'a_*' . To match names that end with .csv , use -name '*.csv' . find ../ -name 'a_*' -a -name '*.csv' or equivalently find ../ -name 'a_*.csv' matches files whose name starts with a_ and ends with .csv . This does not filter on the directories traversed to reach the file. If the files are in subdirectories of the parent directory (e.g. ../a_foo/wibble.csv ), you don't need find : the find command is only useful to search directory trees recursively. You can use echo or ls : ls ../a_*/*.csv If the files can be in subdirectories below the a_* directories (e.g. ../a_foo/wibble.csv or ../a_foo/bar/wibble.csv but not ../qux/a_foo/wibble.csv ), then call find and tell it to search the a_* directories. find ../a_* -name '*.csv' Alternatively, instead of using find , you can use the ** wildcard to search in subdirectories recursively. In ksh93, you need to enable this pattern with set -o globstar first. In bash, you need to enable this pattern with shopt -s globstar first. In zsh, this pattern is enabled by default. Other shells such as plain sh don't have ** . ls ../a_*/**/*.csv If the a_* directories can themselves be at any depth below the parent directory, you can either use find -path or ** : find .. -path '*/a_*/*.csv'ls ../**/a_*/**/*.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77586/"
]
} |
205,529 | I want to mount the NFS share of a Zyxel NSA310s NAS. Showmount, called on the client machine, shows the share: $ showmount 10.0.0.100 -eExport list for 10.0.0.100:/i-data/7fd943bf/nfs/zyxelNFS * The client's /etc/fstab contains the line: 10.0.0.100:/i-data/7fd943bf/nfs/zyxelNFS /media/nasNFS nfs rw 0 0 But mounting does not work: sudo mount /media/nasNFS/ -vmount.nfs: timeout set for Mon May 25 17:34:46 2015mount.nfs: trying text-based options 'vers=4,addr=10.0.0.100,clientaddr=10.0.0.2'mount.nfs: mount(2): Protocol not supportedmount.nfs: trying text-based options 'addr=10.0.0.100'mount.nfs: prog 100003, trying vers=3, prot=6mount.nfs: trying 10.0.0.100 prog 100003 vers 3 prot TCP port 2049mount.nfs: portmap query retrying: RPC: Program/version mismatchmount.nfs: prog 100003, trying vers=3, prot=17mount.nfs: trying 10.0.0.100 prog 100003 vers 3 prot UDP port 2049mount.nfs: portmap query failed: RPC: Program/version mismatchmount.nfs: Protocol not supported nfs-common is installed. What else can be missing? | To summarize the steps taken to get to the answer: According to the output given the NFS server does not like NFSv4 nor UDP. To see the capabilities of the NFS server you can use rpcinfo 10.0.0.100 (you might extend the command to filter for nfs by: |egrep "service|nfs" ) Apparently the only version supported by the server is version 2: rpcinfo 10.0.0.100 |egrep "service|nfs"program version netid address service owner100003 2 udp 0.0.0.0.8.1 nfs unknown100003 2 tcp 0.0.0.0.8.1 nfs unknown Solution to mount the export is to use mount option vers=2 either on the commandline: mount -o rw,vers=2 10.0.0.100:/i-data/7fd943bf/nfs/zyxelNFS /media/nasNFS or by editing the /etc/fstab: 10.0.0.100:/i-data/7fd943bf/nfs/zyxelNFS /media/nasNFS nfs rw,vers=2 0 0 Another approach may be to change the NFS server to support version 3 (or even 4). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116670/"
]
} |
205,541 | I am writing some shell scripts to handle some disk image stuff, and I need to use loop devices to access some disk images. However, I am not sure how to properly allocate a loop device without exposing my program to a race condition. I know that I can use losetup -f to get the next unallocated loop device, and then allocate that loop device like this: ld=$(losetup -f)sudo losetup $ld myfile.imgdostuffwith $ld However, in the case where I want to run multiple instances of the program at the same time, this is almost a textbook example of a race condition, and that bothers me quite a lot. If I had multiple instance of this program running, or other programs trying to also get a loop device, then each process might not be able to allocate the loop device before the next one calls losetup -f , in which case both processes would think that the same loop device is available, but only one can get it. I could use external synchronization for this, but I would like to (if possible) avoid additional complexity. Also, other programs that use loop devices wouldn't likely respect whatever synchronization I might come up with. How can I avoid this potential race condition? Ideally, I'd like to be able to discover and bind the loop device atomically, for instance with a command like: ld=$(sudo losetup -f myfile.img)dostuffwith $ld However, when I do that, $ld does not get assigned to the loop device path, and moving the sudo out, as in sudo ld=$(losetup -f myfile.img) gives permission errors. | This is a classic problem in concurrency: when allocating a resource, you need to atomically determine that the resource is free and reserve it, otherwise another process could reserve the resource between the time you check that it's free and the time you reserve it. Do use losetup 's automatic allocation mode ( -f ), and pass the --show option to make it print the loop device path. ld=$(sudo losetup --show -f /tmp/1m) This option has been present in util-linux since version 2.13 ( initially added as -s , but --show has been supported in all released versions and recent versions have dropped the -s option name). Unfortunately the BusyBox version doesn't have it. Version 3.1 of the Linux kernel introduced a method to perform the loop device allocation operation directly in the kernel, via the new /dev/loop-control device. This method is only supported since util-linux 2.21. With kernel <3.1 or util-linux <2.21, the losetup program enumerates the loop device entries to reserve one. I can't see a race condition in the code though; it should be safe but it might have a small window during which it will incorrectly report that all devices are allocated even though this is not the case. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29620/"
]
} |
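Putting the accepted approach for 205,541 above together with cleanup, a small sketch (dostuffwith is the question's placeholder): the trap guarantees the loop device is detached again even if the script fails part-way through:

#!/bin/bash
set -e
ld=$(sudo losetup --show -f myfile.img)
trap 'sudo losetup -d "$ld"' EXIT   # always detach on exit
dostuffwith "$ld"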
205,546 | One thing I really miss in Midnight Commander (compared to some GUI file explorers, e.g. Thunar) is the ability to go to a certain directory by just typing a prefix of its name. For example, for a current directory containing: filesothermanymany_othersome typing man would take me to (focus) the directory many . Is there any plugin that would let me configure MC that way? | You don't need any plugins. You have two options: In the current directory panel, type Alt + s or Ctrl + s , then type your search pattern; the cursor will jump to the matches sequentially. To toggle through all results that match the current pattern, repeat the keystroke. Note: The Ctrl + s combination will freeze many terminal implementations (press Ctrl + q to unfreeze), so use Alt + s instead if that happens to you. Disable Command prompt in Options/Layout . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/205546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
205,567 | ssh has an annoying feature in that when you run: ssh user@host cmd and "here's" "one arg" Instead of running that cmd with its arguments on host , it concatenates that cmd and arguments with spaces and runs a shell on host to interpret the resulting string (I guess that's why its called ssh and not sexec ). Worse, you don't know what shell is going to be used to interpret that string as that's the login shell of user which is not even guaranteed to be Bourne like as there are still people using tcsh as their login shell and fish is on the rise. Is there a way around that? Suppose I have a command as a list of arguments stored in a bash array, each of which may contain any sequence of non-null bytes,is there any way to have it executed on host as user in a consistent way regardless of the login shell of that user on host (which we'll assume is one of the major Unix shell families: Bourne, csh, rc/es, fish)? Another reasonable assumption that I should be able to make is that there be a sh command on host available in $PATH that is Bourne-compatible. Example: cmd=( 'printf' '<%s>\n' 'arg with $and spaces' '' # empty $'even\n* * *\nnewlines' "and 'single quotes'" '!!') I can run it locally with ksh / zsh / bash / yash as: $ "${cmd[@]}"<arg with $and spaces><><even* * *newlines><and 'single quotes'><!!> or env "${cmd[@]}" or xterm -hold -e "${cmd[@]}"... How would I run it on host as user over ssh ? ssh user@host "${cmd[@]}" obviously won't work. ssh user@host "$(printf ' %q' exec "${cmd[@]}")" would only work if the login shell of the remote user was the same as the local shell (or understands quoting in the same way as printf %q in the local shell produces it) and runs in the same locale. | I don't think any implementation of ssh has a native way to pass a command from client to server without involving a shell. Now, things can get easier if you can tell the remote shell to only run a specific interpreter (like sh , for which we know the expected syntax) and give the code to execute by another mean. That other mean can be for instance standard input or an environment variable . When neither can be used, I propose a hacky third solution below. Using stdin If you don't need to feed any data to the remote command, that's the easiest solution. If you know the remote host has an xargs command that supports the -0 option and the command is not too large, you can do: printf '%s\0' "${cmd[@]}" | ssh user@host 'xargs -0 env --' That xargs -0 env -- command line is interpreted the same with all those shell families. xargs reads the null-delimited list of arguments on stdin and passes those as arguments to env . That assumes the first argument (the command name) does not contain = characters. Or you can use sh on the remote host after having quoted each element using sh quoting syntax. shquote() { LC_ALL=C awk -v q=\' ' BEGIN{ for (i=1; i<ARGC; i++) { gsub(q, q "\\" q q, ARGV[i]) printf "%s ", q ARGV[i] q } print "" }' "$@"}shquote "${cmd[@]}" | ssh user@host sh Using environment variables Now, if you do need to feed some data from the client to the remote command's stdin, the above solution won't work. Some ssh server deployments however allow passing of arbitrary environment variables from the client to the server. For instance, many openssh deployments on Debian based systems allow passing variables whose name starts with LC_ . 
In those cases you could have a LC_CODE variable for instance containing the shquoted sh code as above and run sh -c 'eval "$LC_CODE"' on the remote host after having told your client to pass that variable (again, that's a command-line that's interpreted the same in every shell): LC_CODE=$(shquote "${cmd[@]}") ssh -o SendEnv=LC_CODE user@host ' sh -c '\''eval "$LC_CODE"'\' Building a command line compatible to all shell families If none of the options above are acceptable (because you need stdin and sshd doesn't accept any variable, or because you need a generic solution), then you'll have to prepare a command line for the remote host that is compatible with all supported shells. That is particularly tricky because all those shells (Bourne, csh, rc, es, fish) have their own different syntax, and in particular different quoting mechanisms and some of them have limitations that are hard to work around. Here is a solution I came up with, I describe it further down: #! /usr/bin/perlmy $arg, @ssh, $preamble =q{printf '%.0s' "'\";set x=\! b=\\\\;setenv n "\";set q=\';printf %.0s "\""'"';q='''';n=``()echo;x=!;b='\'printf '%.0s' '\'';set b \\\\;set x !;set -x n \n;set q \'printf '%.0s' '\'' #'"\"'";export n;x=!;b=\\\\;IFS=.;set `echo;echo \.`;n=$1 IFS= q=\'};@ssh = ('ssh');while ($arg = shift @ARGV and $arg ne '--') { push @ssh, $arg;}if (@ARGV) { for (@ARGV) { s/'/'\$q\$b\$q\$q'/g; s/\n/'\$q'\$n'\$q'/g; s/!/'\$x'/g; s/\\/'\$b'/g; $_ = "\$q'$_'\$q"; } push @ssh, "${preamble}exec sh -c 'IFS=;exec '" . join "' '", @ARGV;}exec @ssh; That's a perl wrapper script around ssh . I call it sexec . You call it like: sexec [ssh-options] user@host -- cmd and its args so in your example: sexec user@host -- "${cmd[@]}" And the wrapper turns cmd and its args into a command line that all shells end up interpreting as calling cmd with its args (regarless of their content). Limitations: The preamble and the way the command is quoted means the remote command line ends up being significantly larger which means the limit on the maximum size of a command line will be reached sooner. I've only tested it with: Bourne shell (from heirloom toolchest), dash, bash, zsh, mksh, lksh, yash, ksh93, rc, es, akanga, csh, tcsh, fish as found on a recent Debian system and /bin/sh, /usr/bin/ksh, /bin/csh and /usr/xpg4/bin/sh on Solaris 10. If yash is the remote login shell, you can't pass a command whose arguments contain invalid characters, but that's a limitation in yash that you can't work around anyway. Some shells like csh or bash read some startup files when invoked over ssh. We assume those don't change the behaviour dramatically so that the preamble still works. beside sh , it also assumes the remote system has the printf command. To understand how it works, you need to know how quoting works in the different shells: Bourne: '...' are strong quotes with no special character in it. "..." are weak quotes where " can be escaped with backslash. csh . Same as Bourne except that " cannot be escaped inside "..." . Also a newline character has to be entered prefixed with a backslash. And ! causes problems even inside single quotes. rc . The only quotes are '...' (strong). A single quote within single quotes is entered as '' (like '...''...' ). Double quotes or backslashes are not special. es . Same as rc except that outside quotes, backslash can escape a single quote. fish : same as Bourne except that backslash escapes ' inside '...' . 
With all those contraints, it's easy to see that one cannot reliably quote command line arguments so that it works with all shells. Using single quotes as in: 'foo' 'bar' works in all but: 'echo' 'It'\''s' would not work in rc . 'echo' 'foobar' would not work in csh . 'echo' 'foo\' would not work in fish . However we should be able to work around most of those problems if we manage to store those problematic characters in variables, like backslash in $b , single quote in $q , newline in $n (and ! in $x for csh history expansion) in a shell independant way. 'echo' 'It'$q's''echo' 'foo'$b would work in all shells. That would still not work for newline for csh though. If $n contains newline, in csh , you have to write it as $n:q for it to expand to a newline and that won't work for other shells. So, what we end-up doing instead here is calling sh and have sh expand those $n . That also means having to do two levels of quoting, one for the remote login shell, and one for sh . The $preamble in that code is the trickiest part. It makes use of the various different quoting rules in all shells to have some sections of the code interpreted by only one of the shells (while it's commented out for the others) each of which just defining those $b , $q , $n , $x variables for their respective shell. Here's the shell code that would be interpreted by the login shell of the remote user on host for your example: printf '%.0s' "'\";set x=\! b=\\;setenv n "\";set q=\';printf %.0s "\""'"';q='''';n=``()echo;x=!;b='\'printf '%.0s' '\'';set b \\;set x !;set -x n \n;set q \'printf '%.0s' '\'' #'"\"'";export n;x=!;b=\\;IFS=.;set `echo;echo \.`;n=$1 IFS= q=\'exec sh -c 'IFS=;exec '$q'printf'$q' '$q'<%s>'$b'n'$q' '$q'arg with $and spaces'$q' '$q''$q' '$q'even'$q'$n'$q'* * *'$q'$n'$q'newlines'$q' '$q'and '$q$b$q$q'single quotes'$q$b$q$q''$q' '$q''$x''$x''$q That code ends up running the same command when interpreted by any of the supported shells. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
205,635 | I converted a simple binary file into a text file with: od –t x1 Check.tar | cut –c8- > Check.txt Which gives a content similar to: 64 65 76 2f 6e 75 6c 6c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [...] What is the opposite way -- to convert Check.txt to Check.tar as the original file? | od -An -vtx1 Check.tar > Check.txt You need -v or od will condense sequences of identical bytes. For the reverse: LC_ALL=C tr -cd 0-9a-fA-F < Check.txt | xxd -r -p > Check.tar Or: perl -ape '$_=pack "(H2)*", @F' Check.txt > Check.tar If your purpose is to transfer files over a channel that only supports ASCII text, then there are dedicated tools for that like uuencode : tar cf - myfiles.* | xz | uuencode myfiles.tar.xz | that-channel And to recover those files on the other end: uudecode < file.uu would recreate myfiles.tar.xz . Or: uudecode -o - < file.uu | xz -d | tar xf - To extract the files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
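A quick round-trip check tying together the two commands from the answer to 205,635 above; xxd -r -p ignores the whitespace between hex pairs, so the od output can be fed back in directly:

od -An -vtx1 Check.tar > Check.txt
xxd -r -p Check.txt Check2.tar
cmp Check.tar Check2.tar && echo "round trip OK"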
205,642 | I have ±10,000 files ( res.1 - res.10000 ), all consisting of one column and an equal number of rows. What I want is, in essence, simple: merge all files column-wise into a new file final.res . I have tried using: paste res.* However, although this seems to work for a small subset of result files, it gives the following error when performed on the whole set: Too many open files . There must be an 'easy' way to get this done, but unfortunately I'm quite new to unix. Thanks in advance! PS: To give you an idea of what (one of my) datafile(s) looks like: 0.50.50.038250.510211.045710227.8469-5102.52280.07423.0944... | If you have root permissions on that machine you can temporarily increase the "maximum number of open file descriptors" limit: ulimit -Hn 10240 # The hard limitulimit -Sn 10240 # The soft limit And then paste res.* >final.res After that you can set it back to the original values. A second solution, if you cannot change the limit: for f in res.*; do cat final.res | paste - $f >temp; cp temp final.res; done; rm temp It calls paste once per file, and at the end there is a huge file with all columns (it takes a while; a batched variant is sketched below). Edit : Useless use of cat ... Not ! As mentioned in the comments, the usage of cat here ( cat final.res | paste - $f >temp ) is not useless. The first time the loop runs, the file final.res doesn't exist yet. paste would then fail and the file is never filled, nor created. With my solution only cat fails the first time with No such file or directory and paste reads from stdin just an empty file, but it continues. The error can be ignored. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116795/"
]
} |
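If raising the descriptor limit for 205,642 above is not an option, a middle ground is to paste in batches small enough to stay under the default limit and then paste the intermediate results. A sketch, assuming the res.* names contain no whitespace; note that ls sorts res.10 before res.2, so generate the list differently (e.g. printf 'res.%d\n' $(seq 1 10000)) if column order matters:

ls res.* | split -l 500 - grp_
for g in grp_*; do paste $(cat "$g") > "merged_$g"; done
paste merged_grp_* > final.res
rm grp_* merged_grp_*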
205,650 | I have an 8G usb stick (I'm on linux Mint), and I'm trying to copy a 5.4G file into it, but getting No space left on device The filesize of the copied file before failing is always 3.6G An output of the mounted stick shows.. df -T/dev/sdc1 ext2 7708584 622604 6694404 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fedf -h/dev/sdc1 7.4G 608M 6.4G 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fedu -h --max-depth=188K ./.sshls -h myfile -rw-r--r-- 1 moo moo 5.4G May 26 09:35 myfile So a 5.4G file, won't seem to go on an 8G usb stick. I thought there wasn't issues with ext2, and it was only problems with fat32 for file sizes and usb sticks ? Would changing the formatting make any difference ? Edit: Here is an report from tunefs for the drive sudo tune2fs -l /dev/sdd1 Filesystem volume name: Last mounted on: /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9feFilesystem UUID: ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9feFilesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: ext_attr resize_inode dir_index filetype sparse_super large_fileFilesystem flags: signed_directory_hash Default mount options: (none)Filesystem state: not clean with errorsErrors behavior: ContinueFilesystem OS type: LinuxInode count: 489600Block count: 1957884Reserved block count: 97894Free blocks: 970072Free inodes: 489576First block: 0Block size: 4096Fragment size: 4096Reserved GDT blocks: 477Blocks per group: 32768Fragments per group: 32768Inodes per group: 8160Inode blocks per group: 510Filesystem created: Mon Mar 2 13:00:18 2009Last mount time: Tue May 26 12:12:59 2015Last write time: Tue May 26 12:12:59 2015Mount count: 102Maximum mount count: 26Last checked: Mon Mar 2 13:00:18 2009Check interval: 15552000 (6 months)Next check after: Sat Aug 29 14:00:18 2009Lifetime writes: 12 GBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Default directory hash: half_md4Directory Hash Seed: 249823e2-d3c4-4f17-947c-3500523479fdFS Error count: 62First error time: Tue May 26 09:48:15 2015First error function: ext4_mb_generate_buddyFirst error line #: 757First error inode #: 0First error block #: 0Last error time: Tue May 26 10:35:25 2015Last error function: ext4_mb_generate_buddyLast error line #: 757Last error inode #: 0Last error block #: 0 | Your 8GB stick has approximately 7.5 GiB and even with some file system overhead should be able to store the 5.4GiB file. You use tune2fs to check the file sytem status and properties: tune2fs -l /dev/<device> By default 5% of the space is reserved for the root user. Your output lists 97894 blocks, which corresponds to approximately 385MiB and seems to be the default value. You might want to adjust this value using tune2fs if you don't need that much reserved space. Nevertheless, even with those 385MiB the file should fit on the file system. Your tune2fs output shows an unclean file system with errors. So please run fsck on the file system. This will fix the errors and possibly place some files in the lost+found directory. You can delete them if you're not intending to recover the data. This should fix the file system and copying the file will succeed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83959/"
]
} |
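Concretely, for 205,650 above (double-check the device name with lsblk first; sdc1 is taken from the question and may differ on your system, and fsck must run on an unmounted filesystem):

sudo umount /dev/sdc1
sudo fsck.ext2 -f /dev/sdc1    # repair the errors tune2fs reported
sudo tune2fs -m 1 /dev/sdc1    # optional: shrink the root reserve to 1%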
205,664 | I am using debian8 (jessie) and I went to find read the manpage for open.instead I got a warning: $ man 3 openNo manual entry for open in section 3See 'man 7 undocumented' for help when manual pages are not available. I have the manpage-dev package installed,so where is the programmers manpage (man 3) for open? | You want man 2 open for the C library interface, not man 3 open .It is indeed in manpages-dev (not manpage-dev ). man 3 open gives a Perl manual page. # Show the corresponding source groff fileman -w 2 open /usr/share/man/man2/open.2.gz# Show which package this file belongs todpkg -S /usr/share/man/man2/open.2.gzmanpages-dev: /usr/share/man/man2/open.2.gz# Or use dlocate to show which package this file belongs todlocate /usr/share/man/man2/open.2.gzmanpages-dev: /usr/share/man/man2/open.2.gz | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
205,666 | On my LAN (Debian server, Windows client) I use the rsync:// protocol with this command: cwRsync\rsync.exe -avLP --force --append-verify rsync://rsync-usb@server/rsync-usb/* usb/ I have read that rsync in daemon mode with the rsync:// protocol does not use encryption, so if I want to use rsync over the internet, I want to add encryption. I like the simplicity of rsync://user@server/module. I do not want to use an ssh user with login allowed, because I do not want to give shell access to my server. I have created one user with login blocked: rsyncssh:x:1001:1007:,,,:/tmp:/bin/false Is there a way to tunnel the rsync:// protocol over ssh, for example with plink.exe (PuTTY Link) and a private user key? | I often use SSH port tunneling to create an encrypted channel. Since you're using an rsync:// URL I assume you have the rsync daemon running on TCP port 873 on the remote server. We can forward this port as follows: ssh -N -L 873:localhost:873 rsyncssh@server The -N option prevents the execution of a remote command, which would in your case disconnect the session since there's no valid shell. You can now sync with your remote server by connecting to it over the forwarded port on the local host, i.e., running your old command but replacing @server with @localhost . You can also add the -f option to the SSH command to move SSH to the background, which might be handy if you want to leave the connection open at all times (a complete sequence is sketched below). I'd recommend logging in with a private key for convenience and security. In case you don't have an rsync daemon running on the remote server, set one up and use the documentation in man 5 rsyncd.conf if necessary. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116812/"
]
} |
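Tying the answer to 205,666 above back to the original command, a sketch of the complete sequence. It uses an unprivileged local port, since binding local port 873 itself would need root on a Unix client, and rsync:// URLs accept an explicit port:

ssh -f -N -L 10873:localhost:873 rsyncssh@server
rsync -avLP rsync://rsync-usb@localhost:10873/rsync-usb/ usb/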
205,706 | I can't find a way to toggle mc's internal editor into hex mode. Here it says to use F4 , but that key opens the replace dialog instead. How do I do it? | You can open the file with F3 (view). Toggle hex view with F4 . Start editing with F2 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50426/"
]
} |
205,708 | I have a PC(kernel 3.2.0-23-generic ) which has 192.168.1.2/24 configured to eth0 interface and also uses 192.168.1.1 and 192.168.1.2 addresses for tun0 interface: root@T42:~# ip addr show1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:16:41:54:01:93 brd ff:ff:ff:ff:ff:ff inet 192.168.1.2/24 scope global eth0 inet6 fe80::216:41ff:fe54:193/64 scope link valid_lft forever preferred_lft forever3: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff4: irda0: <NOARP> mtu 2048 qdisc noop state DOWN qlen 8 link/irda 00:00:00:00 brd ff:ff:ff:ff5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:13:ce:8b:99:3e brd ff:ff:ff:ff:ff:ff inet 10.30.51.53/24 brd 10.30.51.255 scope global eth1 inet6 fe80::213:ceff:fe8b:993e/64 scope link valid_lft forever preferred_lft forever6: tun0: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc pfifo_fast state DOWN qlen 100 link/none inet 192.168.1.1 peer 192.168.1.2/32 scope global tun0root@T42:~# ip route show dev eth0192.168.1.0/24 proto kernel scope link src 192.168.1.2 root@T42:~# As seen above, tun0 is administratively disabled( ip link set dev tun0 down ). Now when I receive ARP requests for 192.168.1.2 , the PC does not reply to those requests: root@T42:~# tcpdump -nei eth0tcpdump: verbose output suppressed, use -v or -vv for full protocol decodelistening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes15:30:34.875427 00:1a:e2:ae:cb:b7 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.1, length 4615:30:36.875268 00:1a:e2:ae:cb:b7 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.1, length 4615:30:39.138651 00:1a:e2:ae:cb:b7 > 00:1a:e2:ae:cb:b7, ethertype Loopback (0x9000), length 60:^C3 packets captured3 packets received by filter0 packets dropped by kernelroot@T42:~# Only after I delete the tun0 interface( ip link del dev tun0 ) the PC will reply to ARP request for 192.168.1.2 on eth0 interface. Routing table looks exactly alike before and after ip link del dev tun0 : root@T42:~# netstat -rnKernel IP routing tableDestination Gateway Genmask Flags MSS Window irtt Iface0.0.0.0 10.30.51.254 0.0.0.0 UG 0 0 0 eth110.30.51.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1192.168.1.0 192.168.1.2 255.255.255.0 UG 0 0 0 eth0root@T42:~# ip link del dev tun0root@T42:~# netstat -rnKernel IP routing tableDestination Gateway Genmask Flags MSS Window irtt Iface0.0.0.0 10.30.51.254 0.0.0.0 UG 0 0 0 eth110.30.51.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1192.168.1.0 192.168.1.2 255.255.255.0 UG 0 0 0 eth0root@T42:~# Routing entry below is removed already with ip link set dev tun0 down command: Destination Gateway Genmask Flags MSS Window irtt Iface192.168.1.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 However, while routing tables are exactly alike before and after the ip link del dev tun0 command, the actual routing decisions kernel will make are not: T42:~# ip route get 192.168.1.1local 192.168.1.1 dev lo src 192.168.1.1 cache <local> T42:~# ip link del dev tun0T42:~# ip route get 192.168.1.1192.168.1.1 dev eth0 src 192.168.1.2 cache ipid 0x8390T42:~# Is this an expected behavior? 
Why does kernel ignore the routing table? | Your routing table isn't being ignored, exactly. It's being overruled by a higher-priority routing table. What's Going On The routing table you see when you type ip route show isn't the only routing table the kernel uses. In fact, there are three routing tables by default, and they are searched in the order shown by the ip rule command: # ip rule show0: from all lookup local32766: from all lookup main32767: from all lookup default The table you're most familiar with is main , but the highest-priority routing table is local . This table is managed by the kernel to keep track of local and broadcast routes: in other words, the local table tells the kernel how to route to the addresses of its own interfaces. It looks something like this: # ip route show table localbroadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1broadcast 192.168.1.0 dev eth0 proto kernel scope link src 192.168.1.2local 192.168.1.1 dev tun0 proto kernel scope host src 192.168.1.1local 192.168.1.2 dev eth0 proto kernel scope host src 192.168.1.2broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.2 Check out that line referencing tun0 . That's what's causing your strange results from route get . It says 192.168.1.1 is a local address, which means if we want to send an ARP reply to 192.168.1.1, it's easy; we send it to ourself. And since we found a route in the local table, we stop searching for a route, and don't bother checking the main or default tables. Why multiple tables? At a minimum, it's nice to be able to type ip route and not see all those "obvious" routes cluttering the display (try typing route print on a Windows machine). It can also serve as some minimal protection against misconfiguration: even if the main routing table has gotten mixed up, the kernel still knows how to talk to itself. (Why keep local routes in the first place? So the kernel can use the same lookup code for local addresses as it does for everything else. It makes things simpler internally.) There are other interesting things you can do with this multiple-table scheme. In particular, you can add your own tables, and specify rules for when they are searched. This is called "policy routing", and if you've ever wanted to route a packet based on its source address, this is how to do it in Linux. If you're doing especially tricky or experimental things, you can add or remove local routes yourself by specifying table local in the ip route command. Unless you know what you're doing, though, you're likely to confuse the kernel. And of course, the kernel will still continue to add and remove its own routes, so you have to watch to make sure yours don't get overwritten. Finally, if you want to see all of the routing tables at once: # ip route show table all For more info, check out the ip-rule(8) man page or the iproute2 docs . You might also try the Advanced Routing and Traffic Control HOWTO for some examples of what you can do. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/205708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
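To make the policy-routing remark in the answer to 205,708 above concrete, a tiny sketch of adding your own table (run as root; all addresses and interface names here are invented): packets from 10.9.0.0/24 get their own table with its own default gateway, consulted before the main table:

ip rule add from 10.9.0.0/24 table 100
ip route add default via 192.168.2.1 dev eth1 table 100
ip route show table 100   # verify the new table's contents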
205,759 | I pressed ~ Tab Tab on the bash command prompt and got an unexpected set of completions. First it looked like all the folks in the /Users directory, and a lot more. Then I thought it was doing the reverse lookup of folks with "home" directories in /etc/password , or perhaps the ones that were /var/empty -- this seems about right. What I'm curious about is what's really going on and why this works as it does. | I don't have an OSX system handy to check on but on all *nixes, ~foo is a shorthand for the home directory of user foo . For example, this command will move into my user's $HOME ( cd ~ alone will move into your home directory): cd ~terdon So, ~ and Tab will expand to all possible user names. The list should be the same as the list of users in /etc/passwd . I can confirm that that is exactly what happens when I try this on my Debian. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/205759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13077/"
]
} |
205,788 | I have redirected my output to /dev/null in a bash script, but it is still throwing an error. The code is as follows: ps -p $proc | fgrep $proc> /dev/nullif [ $? -ne '0' ] ; then......fi and below is the error: error: list of process IDs must follow -pUsage: ps [options] Try 'ps --help <simple|list|output|threads|misc|all>' or 'ps --help <s|l|o|t|m|a>' for additional help text.For more details see ps(1).Usage: fgrep [OPTION]... PATTERN [FILE]...Try 'fgrep --help' for more information. How can I suppress this error without affecting the $? output? | You can use command grouping: { ps -p "$proc" | fgrep "$proc";} >/dev/null 2>&1 or wrap the pipeline in a subshell: (ps -p "$proc" | fgrep "$proc") >/dev/null 2>&1 A guarded variant is sketched below. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45726/"
]
} |
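Worth noting for 205,788 above: those usage errors typically mean $proc was empty when the pipeline ran, so besides silencing stderr it is safer to guard the test. Also, ps -p already sets a useful exit status, so the fgrep is not needed at all. A sketch:

if [ -n "$proc" ] && ps -p "$proc" > /dev/null 2>&1; then
    echo "process $proc is running"
else
    echo "process $proc is not running (or \$proc is empty)"
fi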
205,799 | I installed Debian today, and during installation I skipped the option to create a root user and chose to use sudo instead. Now I would like to enable the root account. Is there any way to do that? | The root account is always there; you just need to set a password for it: sudo passwd root and enter a password when prompted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116932/"
]
} |
205,813 | I'm trying to update my application to use it with systemd. When I used Upstart, I just created an /etc/init.d/myService script: #!/bin/bash#chkconfig: 2345 90 10#description: myDescription### BEGIN INIT INFO# Provides: myService# Required-Start: sshd# Required-Stop: sshd# Default-Start: 2 3 4 5# Default-Stop: 0 1 6# Short-Description: start myService# Description:### END INIT INFOSCRIPT=$(readlink -f $0)lockfile="/var/lock/subsys/myService"do_start() { if [ -d "/var/lock/subsys" ]; then touch $lockfile fi ...}do_stop() { ... if [ -d "/var/lock/subsys" ]; then if [ -f "$lockfile" ]; then rm -f $lockfile fi fi}do_status() { ...}case "$1" in start) do_start exit 0 ;; stop) do_stop exit 0 ;; status) do_status exit 0 ;; restart) do_stop do_start exit 0 ;; *) echo "Usage: $SCRIPTNAME {start|stop|status|restart}" >&2 exit 3 ;;esac And all was fine. Note that this script spawns some subprocesses which keep executing in the background. To use it with systemd, I made the following service file ( myService.service ): [Unit]Description=My DescriptionRequires=sshd.serviceAfter=sshd.serviceBefore=shutdown.target reboot.target halt.target[Service]Type=oneshotExecStart=/etc/init.d/myService startExecStop=/etc/init.d/myService stopRemainAfterExit=yesKillMode=none[Install]WantedBy=multi-user.target If I run systemctl stop myService.service all works fine: my application is stopped successfully by the /etc/init.d/myService stop command. But I've got the following issue: when I reboot the system and /etc/init.d/myService stop executes, the processes that the myService script should stop have already been killed. There are many processes that I need to control (around 7), and the system should not terminate them itself. I've tried using Type=forking and specifying PIDFile as the pidfile of the process with the longest lifetime (it should start first and stop last); however, all my processes were terminated again. Is there any simple way to avoid killing my subprocesses? If it matters, the application runs as another user (not root). | With KillMode=none , systemd itself should not signal your processes when the unit stops, and systemctl stop works for you, so the problem at reboot is likely about where the processes live, not about the unit's kill settings: systemd only leaves alone the processes it tracks inside your service's own control group. A common cause of this exact symptom is that the init script switches to the application user with su (or another PAM-aware tool); pam_systemd then moves the children into that user's session scope, and session scopes are torn down early during shutdown, before your ExecStop runs. Check where the processes actually end up with systemd-cgls or systemctl status myService.service : if they sit under user.slice instead of under myService.service , that is the problem. Keep the daemons in the service's own cgroup, either by setting User= in the unit and dropping the su call from the script, or by switching user with a non-PAM tool such as runuser or setpriv . You can also drop the Before=shutdown.target reboot.target halt.target line: ordinary services already get Conflicts=shutdown.target and Before=shutdown.target through their default dependencies, so they are stopped before shutdown proceeds. A sketch of such a unit is shown below. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116940/"
]
} |
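A sketch of a unit along the lines discussed in 205,813 above (the user name and timeout are assumptions; the point is that the daemons are started directly as the target user, so they stay in this unit's cgroup rather than in a session scope):

[Unit]
Description=My Description
Requires=sshd.service
After=sshd.service

[Service]
Type=oneshot
RemainAfterExit=yes
KillMode=none
# run directly as the application user instead of using su in the script
User=appuser
ExecStart=/etc/init.d/myService start
ExecStop=/etc/init.d/myService stop
TimeoutStopSec=120

[Install]
WantedBy=multi-user.target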
205,820 | I am using sshfs over an ssh tunnel, because I do not have direct access to the machine whose file system I want to mount. This command establishes the tunnel: ssh -o TCPKeepAlive=no -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -L 2222:192.168.1.55:22 root@beast -Nf And this one mounts the remote file system: sudo sshfs -o idmap=user,allow_other,reconnect,TCPKeepAlive=no,ServerAliveInterval=15,ServerAliveCountMax=3 waktana@localhost:/home/wakatana /ubuntu -p 2222 The problem is that I am on an unreliable link and the connection often hangs (I am not able to access /ubuntu, a text editor with files open from /ubuntu freezes, etc.) despite the options I've tried. I've read about mosh and would like to give it a chance, but I do not know how to create a tunnel using mosh. | Not yet possible: mosh currently has no port-forwarding support. There is a pull request for OOB data which seems to work for some people, though: mosh/pull/583 All credit goes to the user tribut from the #mosh channel on Freenode. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48949/"
]
} |
205,823 | Sometimes I have seen when one program (say A ) is running in the foreground and printing its output to stdout, I can type another command (say B ), and when I press Enter it is run, even though I had not been prompted to type B since A had not finished executing yet. I could do this in the tcsh shell and the end result was that B was executed after A . What is this feature called? Is this shell specific? How does it work without me typing the command at the prompt? | This is called typeahead , and it's not shell specific. What you type ends up being buffered in the terminal, and the next time a program running in the terminal is ready for input it reads what is waiting in the buffer. In your example that program is the shell, so it executes the command you typed as if you'd waited for A to finish before typing it. Some programs will explicitly clear the buffer before accepting input; for example most programs which ask for a password will clear the buffer, to make sure the user knows what was typed (since the input isn't echoed). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
205,830 | EDIT The issue as exposed here is solved (about file modes of the .ssh folder). But another issue persists, so I created a new question: > Unable to login with SSH-RSA key I can no longer connect with an ssh-rsa key for a specific user, but it still works for other users. The git user is defined as follows: # cat /etc/passwd | grep gitgit:x:1002:1002:,,,:/var/git:/bin/bash So you noticed that this is the git user, thus its home is /var/git , not in /home . Now, ssh always prompts me for a password: $ ssh git@srvgit@srv's password: I checked the logs: # tail -n 1 /var/log/auth.log[...] Authentication refused: bad ownership or modes for file /var/git/.ssh/authorized_keys So authorized_keys has some ownership or mode misconfiguration. I don't understand, because here is the configuration for this file: # ls -l /var/git/.ssh/ | grep auth-rw-rw-r-- 1 git git 394 mai 22 17:39 authorized_keys And here is (in case...) the parent .ssh dir: # ls -al /var/git/ | grep sshdrwxrwxr-x 2 git git 4096 mai 22 17:39 .ssh And the $HOME directory: # ls -l /var/ | grep gitdrwxr-xr-x 7 git git 4096 mai 27 10:49 git So the owners are always git , as are the owner groups. And the files are readable, so where could be the trick? | The problem is that the file and directory permissions do not meet the requirements of StrictModes , which in OpenSSH is yes by default and should not be changed. Try setting the permissions of authorized_keys to 0600 and the .ssh directory to 0700 : # chmod 0700 .../.ssh/# chmod 0600 .../.ssh/authorized_keys Note that the ... will differ based on installation (e.g., in this question it is /var/git/ but for users it will be /home/username/ ). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/205830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80244/"
]
} |
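A small hedged helper that applies all the modes sshd's StrictModes check expects; the /var/git path and the git user come from this question, so adjust them for other accounts:

H=/var/git
chmod go-w "$H"                       # home must not be group/world-writable
chmod 0700 "$H/.ssh"
chmod 0600 "$H/.ssh/authorized_keys"
chown -R git:git "$H/.ssh"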
205,842 | This post follows this question : Authentication refused: bad ownership or modes for file /var/git/.ssh/authorized_keys . The issue as described there is solved (it was about the file modes of the .ssh folder), but another issue persists, so I created a new question : When I try to log in (with verbose options), all seems to work fine, but here is what happens at the end : debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/remi/.ssh/id_rsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/remi/.ssh/id_dsa debug1: Trying private key: /home/remi/.ssh/id_ecdsa debug1: Trying private key: /home/remi/.ssh/id_ed25519 debug2: we did not send a packet, disable method debug1: Next authentication method: password I don't understand, because these lines seem like nonsense to me : we sent a publickey packet, wait for reply we did not send a packet, disable method | You will get this behaviour if the file mode of the user's home directory on the destination host is not set correctly. It's not just the mode of the .ssh directory that has to be correctly set! ssh to the host and give your password to log in, then chmod 755 ~ logout Then ssh again and, assuming you have everything else set up correctly (see the other answers), you should be able to log in. This is what it looks like when the home directory is wide open (777). Note that the server does not accept the rsa key: ssh -v user@host debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/iwoolf/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /home/iwoolf/.ssh/id_dsa debug1: Trying private key: /home/iwoolf/.ssh/id_ecdsa debug1: Trying private key: /home/iwoolf/.ssh/id_ed25519 debug1: Next authentication method: password... Then with the home directory permissions set correctly (755): ssh -v user@host debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/iwoolf/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 279 debug1: Authentication succeeded (publickey). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/205842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80244/"
]
} |
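When key authentication still falls back to password, a hedged debugging step is to run a second sshd in the foreground with debug output; it usually names the offending file or directory directly. The sshd path and the spare port are assumptions; adjust as needed:

# On the server, as root:
/usr/sbin/sshd -d -p 2222
# From the client, against the debug instance:
ssh -v -p 2222 git@srv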
205,867 | Is there a way to view iptables rules in a bit more detail? I recently added a masquerade rule for a range of IPs: iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE service iptables save service iptables restart This has done what I wanted, but when I use iptables -L I get the same output as I normally get: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination How can I see the rules, including the ones I added? (The system is CentOS 6.) | When using the -L , --list option to list the current firewall rules, you also need to specify the appropriate Netfilter table (one of filter , nat , mangle , raw or security ). So, if you’ve added a rule for the nat table, you should explicitly specify this table using the -t , --table option: iptables --table nat --list Or using the options short form: iptables -t nat -L If you don’t specify a specific table, the filter table is used as the default. For faster results, it can be useful to also include the -n , --numeric option to print numeric IP addresses instead of hostnames, thus avoiding the need to wait for reverse DNS lookups. You can get even more information by including the -v , --verbose option. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/205867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
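Two further hedged ways to inspect things: iptables-save dumps every table in one compact, greppable listing, and the verbose per-chain listing adds packet counters plus line numbers that are handy for later -D / -I operations:

iptables-save                                    # all tables at once
iptables -t nat -L POSTROUTING -n -v --line-numbers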
205,876 | I'm trying to work remotely on a project that I have stored on a server, but the computer I am on belongs to the university and I don't have any keys nor permission to install anything. Could it be possible to log into my account using Eclipse, gedit or something similar? Or to somehow create a local folder connected to the remote one (since I'm using a guest account)? I have been able to connect using Firefox, but it doesn't allow me to work remotely. Update: The host is active24.com (owned by Mamut, I think); it is simple web hosting with FTP and MySQL. It's running on Linux. I'm the owner, but I don't administer the server, only the domain and db. I need the FTP access for editing the web files, because the website is not yet ready and I want to modify it, so I want to either create a remote folder (which I don't think is possible) or to log in remotely to the files. I thought Eclipse would allow this, but it requires installing the Remote System Explorer. I have also tried logging in with ssh, but the host doesn't allow it. | When using the -L , --list option to list the current firewall rules, you also need to specify the appropriate Netfilter table (one of filter , nat , mangle , raw or security ). So, if you’ve added a rule for the nat table, you should explicitly specify this table using the -t , --table option: iptables --table nat --list Or using the options short form: iptables -t nat -L If you don’t specify a specific table, the filter table is used as the default. For faster results, it can be useful to also include the -n , --numeric option to print numeric IP addresses instead of hostnames, thus avoiding the need to wait for reverse DNS lookups. You can get even more information by including the -v , --verbose option. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/205876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116986/"
]
} |
205,883 | As I understand it, the Linux kernel logs to the /proc/kmsg file (mostly hardware-related messages) and the /dev/log socket? Anywhere else? Are other applications also able to send messages to /proc/kmsg or /dev/log ? Last but not least, am I correct that it is the syslog daemon ( rsyslog , syslog-ng ) which checks messages from those two places and then distributes them to various files like /var/log/messages or /var/log/kern.log , or even to a central syslog server? | Simplified, it goes more or less like this: The kernel logs messages (using the printk() function) to a ring buffer in kernel space. These messages are made available to user-space applications in two ways: via the /proc/kmsg file (provided that /proc is mounted), and via the sys_syslog syscall. There are two main applications that read (and, to some extent, can control) the kernel's ring buffer: dmesg(1) and klogd(8) . The former is intended to be run on demand by users, to print the contents of the ring buffer. The latter is a daemon that reads the messages from /proc/kmsg (or calls sys_syslog , if /proc is not mounted) and sends them to syslogd(8) , or to the console. That covers the kernel side. In user space, there's syslogd(8) . This is a daemon that listens on a number of UNIX domain sockets (mainly /dev/log , but others can be configured too), and optionally on UDP port 514, for messages. It also receives messages from klogd(8) ( syslogd(8) doesn't care about /proc/kmsg ). It then writes these messages to some files in /var/log , or to named pipes, or sends them to some remote hosts (via the syslog protocol, on UDP port 514), as configured in /etc/syslog.conf . User-space applications normally use the libc function syslog(3) to log messages. libc sends these messages to the UNIX domain socket /dev/log (where they are read by syslogd(8) ), but if an application is chroot(2) -ed the messages might end up being written to other sockets, for instance to /var/named/dev/log . It is, of course, essential for the applications sending these logs and syslogd(8) to agree on the location of these sockets. For this reason syslogd(8) can be configured to listen to additional sockets aside from the standard /dev/log . Finally, the syslog protocol is just a datagram protocol. Nothing stops an application from sending syslog datagrams to any UNIX domain socket (provided that its credentials allow it to open the socket), bypassing the syslog(3) function in libc completely. If the datagrams are correctly formatted, syslogd(8) can use them as if the messages were sent through syslog(3) . Of course, the above covers only the "classic" logging theory. Other daemons (such as rsyslog and syslog-ng , as you mention) can replace the plain syslogd(8) , and do all sorts of nifty things, like send messages to remote hosts via encrypted TCP connections, provide high-resolution timestamps, and so on. And there's also systemd , which is slowly phagocytosing the UNIX part of Linux. systemd has its own logging mechanisms, but that story would have to be told by somebody else. :) Differences with the *BSD world: On *BSD there is no klogd(8) , and /proc either doesn't exist (on OpenBSD) or is mostly obsolete (on FreeBSD and NetBSD). syslogd(8) reads kernel messages from the character device /dev/klog , and dmesg(1) uses /dev/kmem to decode kernel names. Only OpenBSD has a /dev/log . FreeBSD uses two UNIX domain sockets, /var/run/log and /var/run/logpriv , instead, and NetBSD has a /var/run/log . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/205883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
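A few hedged one-liners that exercise the paths described above; the last one assumes the OpenBSD variant of nc, which can write datagrams to UNIX domain sockets:

logger -p user.info "hello from the shell"   # sent via /dev/log to the syslog daemon
dmesg | tail -n 3                            # read the kernel ring buffer
sudo cat /proc/kmsg                          # blocks and consumes messages; normally only klogd reads this
# Raw syslog datagram, bypassing libc's syslog(3); <14> encodes user.info:
printf '<14>myapp: raw datagram' | nc -uU /dev/log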
205,936 | Each line contains text and a number in one column. I need to calculate the sum of the numbers across all rows. How can I do that? Thanks. example.log contains: time=31sec time=192sec time=18sec time=543sec The answer should be 784 | With a newer version (4.x) of GNU awk : awk 'BEGIN {FPAT="[0-9]+"}{s+=$1}END{print s}' With other awk s try: awk -F '[a-z=]*' '{s+=$2}END{print s}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/205936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117040/"
]
} |
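An alternative hedged sketch with the same result, extracting the numbers with grep and letting bc do the addition:

grep -Eo '[0-9]+' example.log | paste -sd+ - | bc   # prints 784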
206,011 | Looking for a way to invoke more than one command in an xargs one-liner, I found the recommendation in findutils to invoke the shell from xargs like this: $ find ... | xargs sh -c 'command $@' The funny thing is, if I use xargs like that, for some reason it skips the first argument: $ seq 10 | xargs bash -c 'echo $@' 2 3 4 5 6 7 8 9 10 $ seq 10 | xargs -n2 bash -c 'echo $@' 2 4 6 8 10 Is something wrong with my shell or xargs version? Is that documentation inaccurate? Using xargs (GNU findutils) 4.4.2 and GNU bash, version 4.3.11(1)-release . | The [bash] man page says: " -c string If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0. " - The key is $0 ; it means that the command name shall be the first argument. seq 10 | xargs sh -c 'echo $@; echo $0' sh 1 2 3 4 5 6 7 8 9 10 sh | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/206011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12954/"
]
} |
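The practical fix that follows from this: pass a dummy value to occupy $0 so that no real argument is swallowed, and quote "$@" to keep the arguments intact. A sketch:

seq 10 | xargs sh -c 'echo "$@"' sh        # 1 2 3 4 5 6 7 8 9 10
seq 10 | xargs -n2 sh -c 'echo "$@"' sh    # 1 2, then 3 4, and so on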
206,121 | In Debian Stretch , when I try to install the python package python-constraint via pip install python-constraint I get the following error: Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 290, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1178, in prepare_files url = finder.find_requirement(req_to_install, upgrade=self.upgrade) File "/usr/lib/python2.7/dist-packages/pip/index.py", line 292, in find_requirement elif is_prerelease(version) and not (self.allow_all_prereleases or req.prereleases): File "/usr/lib/python2.7/dist-packages/pip/util.py", line 739, in is_prerelease return any([any([y in set(["a", "b", "c", "rc", "dev"]) for y in x]) for x in parsed]) TypeError: 'int' object is not iterable Storing debug log for failure in /home/von/.pip/pip.log In Debian Jessie the same command is successful. Where is the problem? How do I solve it? $ python --version Python 2.7.9 $ pip --version pip 1.5.6 from /usr/lib/python2.7/dist-packages (python 2.7) | The error is related to the bug https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=786580 The solution is to downgrade python-distlib and python-distlib-whl to the jessie version. wget http://ftp.debian.org/debian/pool/main/d/distlib/python-distlib_0.1.9-1_all.deb wget http://ftp.debian.org/debian/pool/main/d/distlib/python-distlib-whl_0.1.9-1_all.deb dpkg -i python-distlib_0.1.9-1_all.deb dpkg -i python-distlib-whl_0.1.9-1_all.deb After that, running pip install is successful. $ sudo pip install python-constraint Downloading/unpacking python-constraint Downloading python-constraint-1.2.tar.bz2 Running setup.py (path:/tmp/pip-build-JeOIzg/python-constraint/setup.py) egg_info for package python-constraint Installing collected packages: python-constraint Running setup.py install for python-constraint Successfully installed python-constraint Cleaning up... Put the packages on hold, and wait for an official bug fix. sudo aptitude hold python-distlib python-distlib-whl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/206121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74926/"
]
} |
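On systems without aptitude, a hedged equivalent of the hold step uses apt-mark:

sudo apt-mark hold python-distlib python-distlib-whl
# later, once the fix lands upstream:
sudo apt-mark unhold python-distlib python-distlib-whl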
206,175 | I have to change a properties file containing the property: ro.product.firmware=0.0.1 so that it gets a new value coming from a function called in a different section of my bash script. I cannot get the regex to work properly. For this particular case I need the value to be changed from 0.0.1 to $1 , but the value will not always be 0.0.1. The regex I currently have is: sed -i 's/^(ro\.product\.firmware).*$/(ro\.product\.firmware="$1")' | This should work for your case: sed -ri 's/^(ro\.product\.firmware\=)(.*)$/\1'"$1"'/g' file.txt Here, -r ==> for using extended regex \1 ==> for the first captured group | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/206175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117200/"
]
} |
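A hedged sketch of how the substitution might sit inside the larger script; the function name and the second parameter (the properties file) are hypothetical:

set_firmware_version() {
    # $1 = new version string, $2 = properties file
    sed -ri 's/^(ro\.product\.firmware=).*$/\1'"$1"'/' "$2"
}
set_firmware_version "0.0.2" build.prop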
206,217 | I just installed CentOS 7 on VMware 8 and I am not able to connect it to a network. I checked the VM network and it's mapped to the physical NIC. The same settings work like a charm on my CentOS 5 running on VMware 8. Running the ip a command shows the following output (screenshot omitted): | You have to activate the interface. One way of doing that is with Network Manager's utility nmtui . Open nmtui with: $ sudo nmtui And you'll get a text-based interface like this (screenshot omitted): Navigate by using TAB and ENTER . In nmtui you can activate your interface, edit connections and set the hostname. After you're done, restart the network with: $ sudo systemctl restart network | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/206217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73029/"
]
} |
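A non-interactive hedged alternative to nmtui; the interface name ens33 is an assumption, so check yours with ip a first:

nmcli connection up ens33          # activate the interface now
# make it come up at boot, then restart networking:
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network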
206,224 | My Python code: import sys print "i am a daemon" print "i will be run using nohup" sys.stderr.write("i am an error message inside nohup process\n") When I run the code as python a.py , it shows: i am a daemon i will be run using nohup i am an error message inside nohup process When I run the code as nohup python a.py > a.log 2>&1 < /dev/null & , a.log shows: i am an error message inside nohup process i am a daemon i will be run using nohup Why do the stderr logs get flushed/written before the stdout logs when using nohup ? | I don't think it's got anything to do with nohup . You get the same behavior when you do python a.py > a.log 2>&1 . Python is most likely using C stdio underneath. With that, stdout , when connected to a terminal, will be line-buffered, and fully buffered when stdout is a file. stderr is always unbuffered. Redirecting stdout to a file will switch stdout 's buffering from line-buffered to fully buffered and cause the print ed strings to be stuck in the buffer, which only gets flushed when your program (the stream) closes. The stderr stream makes it to the file faster because it's unbuffered. You can use stdbuf to tweak standard buffering, forcing the lines to print in the correct order: stdbuf -o0 python a.py >a.log 2>&1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/206224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
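Two hedged alternatives that avoid stdbuf by telling the Python interpreter itself not to buffer its streams:

nohup python -u a.py > a.log 2>&1 < /dev/null &
PYTHONUNBUFFERED=1 nohup python a.py > a.log 2>&1 < /dev/null &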
206,255 | How do I search a text file and list all the words beginning with Q? The file I am searching is Python source code. The closest I have found to a solution is this question, but that is not what I want. | You probably want something like grep -o '\bQ\w*' The \b matches a word boundary, i.e. the beginning of a word. Then it has to start with Q followed by any number of other word characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/206255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105436/"
]
} |
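If the goal is a de-duplicated list across several source files, a hedged extension of the same pattern (GNU grep assumed, since \b and \w are GNU extensions):

grep -ohE '\bQ\w*' *.py | sort -u   # -h drops filenames, sort -u de-duplicates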
206,289 | This is something I haven't been able to find much info on, so any help would be appreciated. My understanding is thus. Take the following file: -rw-r----- 1 root adm 69524 May 21 17:31 debug.1 The user phil cannot access this file: phil@server:/var/log$ head -n 1 debug.1 cat: debug.1: Permission denied If phil is added to the adm group, it can: root@server:~# adduser phil adm Adding user `phil' to group `adm' ... Adding user phil to group adm Done. phil@server:/var/log$ head -n 1 debug.1 May 21 11:23:15 server kernel: [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5.1-0-g8936dbb-20141113_115728-nilsson.home.kraxel.org 04/01/2014 If, however, a process is started whilst explicitly setting the user:group to phil:phil , it cannot read the file. The process is started like this: nice -n 19 chroot --userspec phil:phil / sh -c "process" If the process is started as phil:adm , it can read the file: nice -n 19 chroot --userspec phil:adm / sh -c "process" So the question really is: What is special about running a process with a specific user/group combo that prevents the process from being able to access files owned by supplementary groups of that user, and is there any way around this? | A process is run with a uid and a gid. Both have permissions assigned to them. You could call chroot with a userspec of a user and group where the user is actually not in that group. The process would then be executed with the user's uid and the given group's gid. See an example. I have a user called user , and he is in the group student : root@host:~$ id user uid=10298(user) gid=20002(student) groups=20002(student) I have a file as follows: root@host:~$ ls -l file -rw-r----- 1 root root 9 Mai 29 13:39 file He cannot read it: user@host:~$ cat file cat: file: Permission denied Now, I can execute the cat process in the context of the user user AND the group root . Now the cat process has the necessary permissions: root@host:~$ chroot --userspec user:root / sh -c "cat file" file contents It's interesting to see what id says: root@host:~$ chroot --userspec user:root / sh -c "id" uid=10298(user) gid=0(root) groups=20002(student),0(root) Hm, but the user user is not in that group ( root ). Where does id get its information from? If called without an argument, id uses the system calls getuid() , getgid() and getgroups() . So the process context of id itself is printed. That context we have altered with --userspec . When called with an argument, id just determines the group assignments of the user: root@host:~$ chroot --userspec user:root / sh -c "id user" uid=10298(user) gid=20002(student) groups=20002(student) To your question: What is special about running a process with a specific user/group combo that prevents the process from being able to access files owned by supplementary groups of that user, and is there any way around this? You can set whatever security process context is needed to solve the task the process has to do. Every process has a uid and gid under which it runs. Normally the process "takes" the calling user's uid and gid as its context. By "takes" I mean the kernel does this; otherwise it would be a security problem. So, it's actually not the user that has no permission to read the file; it's the process ( cat ) that lacks the permissions. And the process runs with the uid/gid of the calling user. So you don't have to be in a specific group for a process to run with your uid and the gid of that group. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/206289",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117275/"
]
} |
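As for a way around it: one hedged option on systems with a recent util-linux is setpriv, which (unlike chroot --userspec) can initialise the supplementary groups from /etc/group before executing the command:

sudo setpriv --reuid=phil --regid=phil --init-groups \
    sh -c 'id; head -n 1 /var/log/debug.1'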
206,309 | I have a path on a Linux machine (Debian 8) which I want to share with Samba 4 to Windows computers (Win7 and 8 in a domain). In my smb.conf I did the following: [myshare] path = /path/to/share writeable = yes browseable = yes guest ok = yes public = yes I have perfect read access from Windows. But to have write access, I need to do chmod -R 777 /path/to/share . What I want is write access from Windows after I provide the Linux credentials of the Linux owner of /path/to/share . I already tried: [myshare] path = /path/to/share writeable = yes browseable = yes Then Windows asks for credentials, but no matter what I enter, access is always denied. What is the correct way to gain write access to Samba shares from a Windows domain computer without granting 777 permissions? | I recommend creating a dedicated user for that share and specifying it in force user (see docs) . Create a user ( shareuser for example) and set the owner of everything in the share folder to that user: adduser --system shareuser chown -R shareuser /path/to/share Then add force user and permission mask settings in smb.conf : [myshare] path = /path/to/share writeable = yes browseable = yes public = yes create mask = 0644 directory mask = 0755 force user = shareuser Note that guest ok is a synonym for public . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/206309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50666/"
]
} |
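Hedged sanity checks after editing smb.conf: validate the syntax, reload, and try an authenticated write from the server itself (the phil account is an assumption):

testparm -s                      # parse smb.conf and print the effective config
smbcontrol all reload-config     # or restart the smbd service
smbclient //localhost/myshare -U phil -c 'put /etc/hostname test.txt'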
206,315 | Previously all the unit files were in /etc/systemd/system/ , but now some are showing up in /usr/lib/systemd/system (on CentOS; /lib/systemd/system on Debian/Ubuntu). What is the difference between these folders? | This question is already answered in man 7 file-hierarchy which comes with systemd (there is also an online version ): /etc System-specific configuration. (…) VENDOR-SUPPLIED OPERATING SYSTEM RESOURCES /usr Vendor-supplied operating system resources. Usually read-only, but this is not required. Possibly shared between multiple hosts. This directory should not be modified by the administrator, except when installing or removing vendor-supplied packages. Basically, files that ship in packages downloaded from the distribution repository go into /usr/lib/systemd/ . Modifications done by the system administrator (user) go into /etc/systemd/system/ . System-specific units override units supplied by vendors. Using drop-ins, you can override only specific parts of unit files, leaving the rest to the vendor (drop-ins have been available since the very beginning of systemd, but were properly documented only in v219; see man systemd.unit ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/206315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109720/"
]
} |
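A few hedged commands for working with the two trees; nginx.service stands in for any unit:

systemctl cat nginx.service        # show the vendor file plus any overrides in effect
sudo systemctl edit nginx.service  # create a drop-in under /etc/systemd/system/nginx.service.d/
systemd-delta --type=overridden    # list vendor units shadowed by copies in /etc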
206,322 | I have a Netgear wireless router, a single web server, and 100 clients on the 192.168.0.0/24 network. I don't have an Internet connection and I am not connected to the outside world. My goal is to give the server's IP a name by installing and configuring bind on that same server. This means a single server acting as both DNS server and web server. Observe the scenario: my server gets its IP and every other setting from the router, so the server's IP always changes dynamically. In this type of situation, how can I configure bind on that server with the dynamic IP it gets from the router? Is it possible for the server's IP and the primary DNS to have the same address? If yes, how will the router deliver this particular configuration to the server? Will the router assign a configuration like this to the server? IP: 192.168.0.101 broadcast: 192.168.0.255 Primary DNS: 192.168.0.101 default route: 192.168.0.1 | This question is already answered in man 7 file-hierarchy which comes with systemd (there is also an online version ): /etc System-specific configuration. (…) VENDOR-SUPPLIED OPERATING SYSTEM RESOURCES /usr Vendor-supplied operating system resources. Usually read-only, but this is not required. Possibly shared between multiple hosts. This directory should not be modified by the administrator, except when installing or removing vendor-supplied packages. Basically, files that ship in packages downloaded from the distribution repository go into /usr/lib/systemd/ . Modifications done by the system administrator (user) go into /etc/systemd/system/ . System-specific units override units supplied by vendors. Using drop-ins, you can override only specific parts of unit files, leaving the rest to the vendor (drop-ins have been available since the very beginning of systemd, but were properly documented only in v219; see man systemd.unit ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/206322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89716/"
]
} |