source_id (int64, 1 to 74.7M) | question (string, 0 to 40.2k) | response (string, 0 to 111k) | metadata (dict) |
---|---|---|---|
151,342 | I'd like to remove from a given column ($2 in the example) the duplicate fields (comma separated), but only if the ID column ($1 in the example) is the same. Input: A 1,2,3,4A 8,9,10,11A 2,3,4,11,12B 4,5,6,7B 6,8,9,10 Expected output: A 1,2,3,4A 8,9,10,11A 12B 4,5,6,7B 8,9,10 | This is easy enough in awk using an array , split , and a regular loop : { split($2, elements, ",") out = "" for (i in elements) { el = elements[i] key = $1 " " el if (!(key in used)) { out = out el "," } used[key] = 1 } sub(/,$/, "", out) $2 = out}1 For each line, we split the second column by commas and save the bits into an array elements . We then build up the new value for that column with a loop, checking whether we've seen the value before or not. We're keeping the set of values we've already seen in the (associative) array used . We construct a string key containing both the value of the first column and the value from the second we're currently looking at. If key is in used , we've seen this one before for this ID and shouldn't put it in the output; otherwise, it's new, and we concatenate the value to out . So that we don't use it again, we store key (which will be something like " A 3 ") into our set of seen elements. Finally, we put the constructed list back into the second column. This is essentially the approach you'd take in any other language. Put the code above into a file and run it with awk -f , or single-quote all of it as an argument on the command line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
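For readability, here is the awk program from the answer above laid out as a runnable script; the logic is exactly what the answer describes (split column 2, track seen ID/value pairs in `used`), and the input file name `input.txt` is only an assumption for the demo.

```sh
#!/bin/sh
# Same logic as the answer: keep only the first occurrence of each ID/value pair.
awk '{
    split($2, elements, ",")
    out = ""
    for (i in elements) {
        el = elements[i]
        key = $1 " " el
        if (!(key in used)) out = out el ","   # first time we see this ID/value pair
        used[key] = 1
    }
    sub(/,$/, "", out)                         # strip the trailing comma
    $2 = out
}
1' input.txt
```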
151,367 | We know to overwrite bar.txt , we can use this: echo "foo" > bar.txt but how can we overwrite to the standard output? I tried this command: echo "foo" > but it failed. I want this because I want to print something like this: Hello worldI'm HERE one word at a time and I want to delete the previous word (one word at a time on standard output). | echo always outputs to its stdout 1 . echo foo always does a write(1, "foo\n") . When you do echo foo > file , the shell changes the stdout for the echo command to a new file file. echo foo then still does a write(1, "foo\n") but this time its stdout (its file descriptor 1) points to file , not to the stdout it inherits from the shell. If you want to write to stdout without that stdout being redirected , just write: echo foo If you want the resource pointed to by stdout to be reopen (and truncated) before each echo , then that's trickier. On Linux (and Linux only), you can do: echo foo > /dev/fd/1echo bar > /dev/fd/1# now the file stdout points to (if it's a regular file) contains "bar" On Linux, /dev/fd/1 is a symlink to the file that is open on stdout. So opening it again with > will truncate it (on other Unices, > /dev/fd/n is like dup2(n,1) so it won't truncate the file). That won't work if stdout goes to a socket though. Otherwise, you can call ftruncate() on stdout (fd 1) to truncate it: perl -e 'truncate STDOUT,0'echo fooperl -e 'truncate STDOUT,0'echo bar If instead, you want those words, when stdout is a terminal to overwrite each other on the screen, you can use terminal control characters to affect the cursor positioning. printf %s FOO printf '\r%s' BAR will have BAR overwrite FOO because \r is the control characters that instructs the terminal to move the cursor to the beginning of the line. If you do: printf %s FOOBARprintf '\r%s' BAZ however, you'll see BAZBAR on the screen. You may want to use another control sequence to clear the line. While \r as meaning carriage return is universal, the control sequence to clear the screen varies from one terminal to the next. Best is to query the terminfo database with tput to find it out: clr_eol=$(tput el)printf %s FOOBARprintf '\r%s%s' "$clr_eol" BAZ 1 if using the Korn or Z shell, another outputting command that supports writing to other file descriptors is print : print -u2 FOO for instance writes FOO to print 's stderr. With echo and in Bourne-like shells, you can do echo >&2 FOO to achieve the same result. echo is still writing to its stdout, but its stdout has been made a duplicate of the shell's stderr using the >&2 shell redirection operator | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78050/"
]
} |
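A small, self-contained rendering of the terminal-control approach from the answer above: it prints a few words one at a time, each overwriting the previous one. The word list and the sleep delay are only there for the demo; `\r` and `tput el` are what the answer relies on.

```sh
#!/bin/sh
clr_eol=$(tput el)                  # terminal-specific "clear to end of line" sequence
for word in Hello world "I'm" HERE; do
    printf '\r%s%s' "$clr_eol" "$word"   # back to column 0, clear, print the new word
    sleep 1
done
printf '\n'
```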
151,369 | I updated tmux on my VPS with: apt-get install -t squeeze-backports tmux -V And needed to get back to previous version to recover some running sessions, so I went to look which version of tmux is the right for Debian and then downgraded with: apt-get install tmux=1.3-2+squeeze1 But it took me a lot of time to figure out this specific command and look for a proper version. Is there a shortcut that automatically gets the stable version? I tried different combinations for -t flag, but it didn't help. apt-cache policy tmux : tmux: Installed: 1.6-2~bpo60+1 Candidate: 1.6-2~bpo60+1 Version table: *** 1.6-2~bpo60+1 0 200 http://www.backports.org/debian/ squeeze-backports/main amd64 Packages 100 /var/lib/dpkg/status 1.3-2+squeeze1 0 500 http://debian.newdream.net/ squeeze/main amd64 Packages 500 http://security.debian.org/ squeeze/updates/main amd64 Packages | The easier way is using the release option on apt, example: sudo apt-get install tmux/stable or, in case you are using the name of the release instead of the tier (ie. squeeze, jeesie, sid instead of stable, testing, unstable) you should use that name instead: sudo apt-get install tmux/squeeze This will install the latest version available in the specified suite (stable, testing, unstable, stable-backports, sid, etc.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47022/"
]
} |
151,390 | How to check whether or not a particular directory is a mount point?For instance there is a folder named /test that exists, and I want to check if it is a mount point or not. | If you want to check it's the mount point of a file system, that's what the mountpoint command (on most Linux-based systems) is for: if mountpoint -q -- "$dir"; then printf '%s\n' "$dir is a mount point"fi It does that by checking whether . and .. have the same device number ( st_dev in stat() result). So if you don't have the mountpoint command, you could do: perl -le '$dir = shift; exit(1) unless (@a = stat "$dir/." and @b = stat "$dir/.." and ($a[0] != $b[0] || $a[1] == $b[1]))' "$dir" Like mountpoint , it will return true for / even if / is not a mount point (like when in a chroot jail), or false for a mount point of a bind mount of the same file system within itself. Contrary to mountpoint , for symbolic links, it will check whether the target of the symlink is a mountpoint. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/151390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81544/"
]
} |
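If the `mountpoint` command is unavailable, the device-number comparison the answer describes can be sketched with GNU `stat`. This simplified version does not treat `/` specially and ignores the bind-mount and chroot corner cases the answer mentions, so treat it as an approximation.

```sh
#!/bin/sh
dir=${1:-/test}
# A directory is (usually) a mount point when it sits on a different device than its parent.
if [ "$(stat -c %d -- "$dir/.")" != "$(stat -c %d -- "$dir/..")" ]; then
    printf '%s\n' "$dir is a mount point"
else
    printf '%s\n' "$dir is not a mount point"
fi
```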
151,437 | When running echo abcd | wc -c it returns 5 . But the word abcd is only 4 characters long. Is echo printing some special character after the word abcd ? And can I prevent echo from printing that? | echo prints a newline ( \n ) at the end of the line echo abcd | xxd0000000: 6162 6364 0a abcd. With some echo implementations, you can use -n : -n do not output the trailing newline and test: echo -n abcd | wc -c4 With some others, you need the \c escape sequence: \c : Suppress the <newline> that otherwise follows the final argument in the output. All characters following the '\c' in the arguments shall be ignored. echo -e 'abcd\c' | wc -c4 Portably, use printf : printf %s abcd | wc -c4 (note that wc -c counts bytes, not characters (though in the case of abcd they are generally equivalent). Use wc -m to count characters). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/151437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78050/"
]
} |
151,463 | There are several people with root access to a particular VM I am in charge of. I would like to find out which IP address was used to log into root. | This depends on your distribution or OS. sshd will log each login somewhere, and will include the relevant IP address in the login a format like this: Aug 20 15:56:53 machine sshd[2728]: Accepted publickey for root from 192.168.1.2 port 49297 That part is consistent, but how you get there can vary. On systems based on systemd , use journalctl : journalctl /usr/bin/sshd to list out all log messages from the sshd executable. You can grep that out for root logins or other criteria, and limit it by date with --since and --until (see man journalctl ). Alternatively and historically, messages will be logged into (usually) somewhere in /var/log . Commonly sshd messages go into /var/log/auth.log , but the exact file can vary substantially. Whichever one it is: grep sshd /var/log/auth.log will give you broadly equivalent output to the journalctl version. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42896/"
]
} |
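Building on the log format shown in the answer, one way to pull out just the source addresses of successful root logins might look like the sketch below; the path `/var/log/auth.log` is Debian/Ubuntu-style and may differ on other systems.

```sh
# List unique IP addresses that successfully logged in as root over ssh.
grep 'sshd.*Accepted .* for root from' /var/log/auth.log |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' |
    sort -u
```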
151,469 | I am using watch to monitor the progress of a MySQL replication slave. The way I have it set up now, the password shows up in the header of the watch output. I am not exceedingly worried that someone will steal the password, but on principle I would prefer not to be showing a naked password to anyone who happens to wander up and look over my shoulder. The command line I am currently using with watch is: watch -n1 -d mysql -pmypass -e "'show slave status\G'" Should I just bury the mysql command in a tiny little shell script? | A better option than providing the password on the command line at all is to make a ~/.my.cnf file with the credentials in it: [client]password=something That way they are also protected against someone looking at the ps output, or your shell history. That said, you can turn off the watch title entirely with the -t or --no-title option , which will: Turn off the header showing the interval, command, and current time at the top of the display, as well as the following blank line. You do lose a little more information than you were wanting, but it's not hugely vital. Otherwise, a shell script also works, as you suggested. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38382/"
]
} |
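Putting the two suggestions together — credentials moved into `~/.my.cnf` and the header switched off — the watch invocation could shrink to something like this (untested sketch):

```sh
chmod 600 ~/.my.cnf        # keep the credentials file private
watch -t -n1 -d mysql -e "'show slave status\G'"
```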
151,479 | I have few big folders "cosmo_sim_9", "cosmo_sim_10".... in one of my external hard disk, and a old copy of this on another external hard disk. I want to Synchronize old directories with the new one(recursively), but without overwriting already existing files(for saving time). How can I do this? My os is Fedora 20. | use rsync : rsync -a --ignore-existing cosmo_sim_9 /dest/disk/cosmo_sim_9 --ignore-existing will cause it to skip existing files on the destination, -a will make it recursive, preserving if possible permission/ownership/group/timestamp/links/special devices. you can do it for all directories by using a bash for loop: for dir in cosmo_sim_* ; dorsync -a --ignore-existing "$dir" "/dest/disk/$dir"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81594/"
]
} |
151,501 | I am looking for a specific file in OS/X. I can see the file by using: ls -alR | grep "mkmf.*" This shows my that the file exists. How do I find what directory the file is located. Many thanks | Use find which is better suited for your intended purpose: find . -name "mkmf*" It will list all appearances of your pattern including the relative path. For more information look at manual page of find with man find or go to http://www.gnu.org/software/findutils/manual/html_mono/find.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29998/"
]
} |
151,504 | I'm new on linux server. I have 2 question that I'm confuse about this. 1 1. user:group now I chown my /var/www/html like this. my nginx.conf is set server{ user www-data } and in terminal I set chown -R root:www-data /var/www/htmlfind /var/www/html -type d -exec chmod 775 {} +find /var/www/html -type f -exec chmod 664 {} +find /var/www/html/uploads/images -type d -exec chmod 775 {} + Is I'm do the right thing ? or it need to set to www-data:www-data ? 2 2. about crontab it a lot of TUT but it is not clear about user who ran crontab. The question is If I login with adam user and my server is own by root:www-data or www-data:www-data how can I give the crontab to that user not adam user ? because it need perm to write the files like backup. | Use find which is better suited for your intended purpose: find . -name "mkmf*" It will list all appearances of your pattern including the relative path. For more information look at manual page of find with man find or go to http://www.gnu.org/software/findutils/manual/html_mono/find.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81042/"
]
} |
151,510 | How can I find out the total memory allocated for a particular process in Ubuntu? | Try: pidof bash | xargs ps -o rss,sz,vsz To find the memory usage of your current bash shell (assuming you're using bash ). Change bash to whatever you're investigating. If you're after one specific process, simply use on it's own: ps -o rss,sz,vsz <process id> From the man page: RSS : resident set size, the non-swapped physical memory that a task has used (in kiloBytes). SZ : size in physical pages of the core image of the process. This includes text, data, and stack space. VSZ : virtual memory size of the process in KiB (1024-byte units). The man page for ps will list all the possible arguments to the -o option (there are quite a few to choose from). Instead of -o rss,sz you could use the BSD style v option (no dash) which shows an alternative memory layout. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81620/"
]
} |
151,523 | I do not have enough space on my root partition to store the database. $ df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/ManjaroVG-ManjaroRoot 29G 21G 6.4G 77% /dev 1.9G 0 1.9G 0% /devrun 1.9G 852K 1.9G 1% /runtmpfs 1.9G 76K 1.9G 1% /dev/shmtmpfs 1.9G 0 1.9G 0% /sys/fs/cgrouptmpfs 1.9G 596K 1.9G 1% /tmp/dev/mapper/ManjaroVG-ManjaroHome 197G 54G 133G 29% /home/dev/sda1 247M 88M 147M 38% /boottmpfs 383M 8.0K 383M 1% /run/user/1000 Changing datadir in my. cnf to a new location caused a permission problem ([Warning] Can't create test file /home/u/tmp/mysql/acpfg.lower-test ) How is it possible change the directory where MariaDB/MySQL stores the database under Linux (for example to /home/u/tmp/mysql)? | You can either reconfigure MySQL to look for the data directory in a different location, or bind mount a new location over the original. Make sure that the mysql service is stopped before you carry out these changes. Then, move all the files and sub-directories from the original location into your new location. Reconfigure MySQL edit /etc/my.cnf and change datadir to: datadir=/home/u/tmp/mysql or... Bind Mount Use a bind mount to mount your new location over the original: mount --bind /home/u/tmp/mysql /var/lib/mysql Once you're happy that everything works, edit your /etc/fstab to make it permanent: /home/u/tmp/mysql /var/lib/mysql none bind 0 0 File Permissions Regardless of which method you choose, you'll need to ensure that the permissions on your new location are correct, as follows: The top level directory ( /home/u/tmp/mysql ) and everything below should be owned by user and group mysql (assuming mysql runs as these on Arch Linux): # chown -R mysql. /home/u/tmp/mysql All files are: # find /home/u/tmp/mysql/ -type f -exec chmod 0660 {} \; All directories are: # find /home/u/tmp/mysql/* -type d -exec chmod 0700 {} \; The top level directory is: # chmod 0755 /home/u/tmp/mysql | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34872/"
]
} |
151,528 | In our Company we have an internal-wikipedia-like help site. The Problem is, the MindTouch-Wiki is out of date, so I have to update it! The Problem is, it's running virtual. It's Ubuntu 7.7 In Microsoft Server 2003/2008, I can't connect to the machine using "Connect to virtual machine" I'm not familiar with Linux, is there a way to connect to the Ubuntu that is open by default? Like RDP over VNC in Windows? | You can either reconfigure MySQL to look for the data directory in a different location, or bind mount a new location over the original. Make sure that the mysql service is stopped before you carry out these changes. Then, move all the files and sub-directories from the original location into your new location. Reconfigure MySQL edit /etc/my.cnf and change datadir to: datadir=/home/u/tmp/mysql or... Bind Mount Use a bind mount to mount your new location over the original: mount --bind /home/u/tmp/mysql /var/lib/mysql Once you're happy that everything works, edit your /etc/fstab to make it permanent: /home/u/tmp/mysql /var/lib/mysql none bind 0 0 File Permissions Regardless of which method you choose, you'll need to ensure that the permissions on your new location are correct, as follows: The top level directory ( /home/u/tmp/mysql ) and everything below should be owned by user and group mysql (assuming mysql runs as these on Arch Linux): # chown -R mysql. /home/u/tmp/mysql All files are: # find /home/u/tmp/mysql/ -type f -exec chmod 0660 {} \; All directories are: # find /home/u/tmp/mysql/* -type d -exec chmod 0700 {} \; The top level directory is: # chmod 0755 /home/u/tmp/mysql | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81626/"
]
} |
151,540 | I have changed the user's (user1) password as root using passwd user1 but when I tried to ssh user@localhost or to log in through gdm or a tty it always failed. The journalctl log shows Failed password for user1 from ::1 FAILED LOGIN 1 FROM tty3 FOR user1, Authentication failure pam_unix(gdm-password:auth): conversation failed pam_unix(gdm-password:auth): auth could not identify password for [user1] I'm using a fresh Arch Linux installation (64-bit) | The solution was to change user1 's shell in the /etc/passwd file from /usr/bin/bash to /bin/bash | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27996/"
]
} |
151,547 | How to change the system date in Linux ? I want to change: Only Year Only Month Only Date Any combination of above three | Use date -s : date -s '2014-12-25 12:34:56' Run that as root or under sudo . Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock. To change only part of the time, you can use command substitution in the date string: date -s "2014-12-25 $(date +%H:%M:%S)" will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y , %m , %d , %H , %M , and %S . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/151547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
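As a concrete example of the command-substitution trick in the answer, changing only the year (to 2015, say) while keeping the current month, day and time could look like this; run as root, and note that the target year is just an example.

```sh
# Keep month/day and the current time, replace only the year.
sudo date -s "2015-$(date +%m-%d) $(date +%H:%M:%S)"
```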
151,574 | I have a configuration file in the following format. <Title> [part1] A.File = a A.Val = val1 B.File = a B.Val = val1 [part2] A.File = a1 A.Val = val2 B.File = a B.Val = val1 I want to extract values from first part only. #!/bin/sh getCalibDate(){ file="/path/of/config/file" value=`cat ${file} | grep Val | cut -d'=' -f2` for v in $value do echo $v done}getCalibDate Above script will return all the values.How can I get values from only first part (part1) ? | If you have only 4 lines after [part1] you can use -A4 option with grep : cat ${file} | grep -A4 "part1" | cut -d'=' -f2` For general case (more than 4 lines after [part1]) use sed to get the text between two parts: cat ${file} | sed -n "/part1/,/part2/p" | head -n-1 head is to delete additional part2 at the end As terdon said you don't have to use cat , you can do the following instead: grep -A4 "part1" ${file} | cut -d'=' -f2` OR: sed -n "/part1/,/part2/p" ${file} | head -n-1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67219/"
]
} |
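An alternative that does not depend on a fixed number of lines, or on a `[part2]` section existing at all, is to let awk track which section it is currently in. This is a different technique from the answer's `grep -A4`/`sed` approach, offered only as a sketch.

```sh
file="/path/of/config/file"
awk -F'=' '
    /\[/ { inpart = /\[part1\]/; next }     # a new [section] starts; remember if it is part1
    inpart && /Val/ {
        gsub(/^[ \t]+|[ \t]+$/, "", $2)     # trim spaces around the value
        print $2
    }
' "$file"
```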
151,592 | How can I get the part of the output of a command between two specific lines? A dummy example: Command: git for-each-ref --sort='*authordate' --format='%(tag)' refs/tags | grep -v '^$' Output: 0.1.00.2.01.0.01.0.11.0.21.1.01.2.01.2.11.3.01.4.01.4.1 I want to get the part of this output, between two specific lines (not based on line number, based on content): 0.1.00.2.01.0.01.0.11.0.2 | You can pipe output to awk : $ ... | awk '/0\.1\.0/,/1\.0\.2/'0.1.00.2.01.0.01.0.11.0.2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151592",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81655/"
]
} |
151,614 | I'm looking for a simpler way to have a cross-os-unix file size check. I could use wc -c but I'm concerned the performance may suck on large files (I'm assuming it just counts chars and doesn't do a stat under the covers?) The below works for linux and macos (perhaps bsd). Is there a simpler well performing approach? function filesize{ local file=$1 size=`stat -c %s $file 2>/dev/null` # linux if [ $? -eq 0 ]; then echo $size return 0 fi eval $(stat -s $file) # macos if [ $? -eq 0 ]; then echo $st_size return 0 fi echo 0 return -1} | From the source of wc ( coreutils/src/wc.c ) in GNU coreutils (i.e. the verion on non-embedded Linux and Cygwin): When counting only bytes, save some line- and word-counting overhead. If FD is a 'regular' Unix file, using lseek is enough to get its 'size' in bytes. So using wc -c to count the bytes will perform well. You can easily test this optimisation on a large file (i.e. one that would take some time reading). wc -c on a 9.9Gb file took 0.015s of real time on a file that is located on my server and I would rejoice if the whole file would have been transferred in that time, but my gigabit ethernet is unfortunately not that fast (it takes 21s to copy that file to /dev/null over the network). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81674/"
]
} |
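A related detail worth noting: redirecting the file into `wc -c` (rather than passing its name) prints the count alone, with no filename to strip, and per the source excerpt above it should still hit the lseek fast path when stdin is a regular file. A hedged sketch of the question's helper rewritten that way:

```sh
filesize() {
    # Prints the size in bytes of $1; relies only on wc -c, no stat flags.
    local size
    size=$(wc -c < "$1") || return 1
    echo $((size))        # arithmetic expansion strips any leading blanks some wc's print
}
filesize /etc/hosts
```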
151,654 | I'm trying to check if an input is an integer and I've gone over it a hundred times but don't see the error in this. Alas it does not work, it triggers the if statement for all inputs (numbers/letters) read scaleif ! [[ "$scale" =~ "^[0-9]+$" ]] then echo "Sorry integers only"fi I've played around with the quotes but either missed it or it did nothing. What do I do wrong? Is there an easier way to test if an input is just an INTEGER? | Remove quotes if ! [[ "$scale" =~ ^[0-9]+$ ]] then echo "Sorry integers only"fi | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79435/"
]
} |
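If you ever need the same check in plain POSIX `sh`, where `[[ ... =~ ... ]]` is unavailable, a `case` pattern does the job; this is an alternative to the answer's fix, not a replacement for it.

```sh
read scale
case $scale in
    ''|*[!0-9]*) echo "Sorry integers only" ;;   # empty, or contains a non-digit
    *)           echo "OK: $scale" ;;
esac
```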
151,658 | I am working on mac with sed, perl, awk, bash.. I have a large-ish (10GB) text file which has 13 fields (columns) of TAB delimited data. Unfortunately some of these lines have extraneous TABs , so I want to delete the entire line where we have extra TABs , and thus unequal fields. (I don't mind discarding the lines in their entirety) What I currently have writes the number of fields into another file. awk -F'\t' '{print NF}' infile > fieldCounthead fieldCount13131013131314131313 I would like to construct a short script that removes any line with more (or less) than 13 proper fields (from the original file). speed is helpful as I have to do this on multiple files doing it in one sweep would be cool I currently am porting the fieldCount file into Python, trying to load with line by line. EDIT: vaild (13 columns) a b c d e f g h i j k l m invalid (14 columns) a b c d e f g h i j k l m n | You almost have it already: awk -F'\t' 'NF==13 {print}' infile > newfile And, if you're on one of those systemswhere you're charged by the keystroke ( :) )you can shorten that to awk -F'\t' 'NF==13' infile > newfile To do multiple files in one sweep,and to actually change the files (and not just create new files),identify a filename thats not in use (for example, scharf ),and perform a loop, like this: for f in list do awk -F'\t' 'NF==13 {print}' "$f" > scharf && mv -f -- scharf "$f"done The list can be one or more filenamesand/or wildcard filename expansion patterns; for example, for f in blue.data green.data *.dat orange.data red.data /ultra/violet.dat The mv command overwrites the input file (e.g., blue.data )with the temporary scharf file(which has only the lines from the input file with 13 fields). (Be sure this is what you want to do, and be careful. To be safe, you should probably back up your data first.) The -f tells mv to overwrite the input file,even though it already exists. The -- protects you against weirdnessif any of your files has a name beginning with - . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81699/"
]
} |
151,659 | I need the latest libpcre3-dev library to compile a software from source, however, the current distribution of the OS (Ubuntu) on my server only has the older version of libpcre3-dev and no backport is available. I am thinking to compile the binary on a separate server with the latest version of libpcre3-dev and install the binary back to my actual server. I have two questions: Does this work? My main concern is that the libpcre3 on my server is still the older version, does the binary still need the latest corresponding libpcre3 at runtime even if it is compiled with the latest libpcre3-dev ? What is the best way of installing the binary back to my server? Simply copy the binary or make it into a .deb package and then install using the package manager (if possible)? | You almost have it already: awk -F'\t' 'NF==13 {print}' infile > newfile And, if you're on one of those systemswhere you're charged by the keystroke ( :) )you can shorten that to awk -F'\t' 'NF==13' infile > newfile To do multiple files in one sweep,and to actually change the files (and not just create new files),identify a filename thats not in use (for example, scharf ),and perform a loop, like this: for f in list do awk -F'\t' 'NF==13 {print}' "$f" > scharf && mv -f -- scharf "$f"done The list can be one or more filenamesand/or wildcard filename expansion patterns; for example, for f in blue.data green.data *.dat orange.data red.data /ultra/violet.dat The mv command overwrites the input file (e.g., blue.data )with the temporary scharf file(which has only the lines from the input file with 13 fields). (Be sure this is what you want to do, and be careful. To be safe, you should probably back up your data first.) The -f tells mv to overwrite the input file,even though it already exists. The -- protects you against weirdnessif any of your files has a name beginning with - . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81705/"
]
} |
151,666 | $ ffmpeg -v debug ... Coloured output. $ ffmpeg -v debug ... |& less -R Dull output. How do I make the output coloured while piping it to something? | For commands that do not have an option similar to --color=always , you can do, e.g. with your example: script -c "ffmpeg -v debug ..." /dev/null < /dev/null |& less -R What script does is that it runs the command in a terminal session. EDIT: Instead of a command string, if you want to be able to provide an array, then the following zsh wrapper script seems to work: #!/usr/bin/env zshscript -c "${${@:q}}" /dev/null < /dev/null |& less -R | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17594/"
]
} |
151,669 | We can see from the nginx logs that there is an IP address doing nasty things. How can we block it with a pf command and then later permanently with the /etc/pf.log ? How can we block a x.x.x.x/24 for that IP? It is example: 1.2.3.4 UPDATE: no, looks like OpenBSD doesn't have allow/deny file in /etc. And AFAIK the best advise for blocking abusive IP addresses are using pf. # cd /etc # ls -la|egrep -i 'deny|allow'# uname -aOpenBSD foo.com 5.4 GENERIC.MP#0 amd64# | The best way to do this is to define a table and create a rule to block the hosts, in pf.conf : table <badhosts> persistblock on fxp0 from <badhosts> to any And then dynamically add/delete IP addresses from it: $ pfctl -t badhosts -T add 1.2.3.4$ pfctl -t badhosts -T delete 1.2.3.4 Other 'table' commands include flush (remove all), replace and show . See man pfctl for more. If you want a more permanent list you can keep it in one (or more) files. In pf.conf : table <badhosts> persist file "/etc/badguys1" file "/etc/badguys2"block on fxp0 from <badhosts> to any You can also add hostnames instead of IP addresses. See the "Tables" section of man pf.conf and man pfctl . Note : The examples above assume that the internet-facing interface is fxp0 , please change according to your setup. Also, keep in mind that the rules in pf.conf are evaluated sequentially and for block or pass rules its the last matching rule that applies. With this ruleset table <badhosts> persistblock on fxp0 from <badhosts> to anypass inet tcp from 192.168.0.0/24 to any port 80 and after adding 1.2.3.4 and 192.168.0.10 to the badhosts table $ pfctl -t badhosts -T add 1.2.3.4$ pfctl -t badhosts -T add 192.168.0.10 all traffic from 1.2.3.4 and 192.168.0.10 will be blocked but the second host will be able to make connections to other machines' port 80 because the pass rule matches and overrides the block rule. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81044/"
]
} |
151,682 | Is there a nice alternative for this? I always use du -shc * to check the size of all files and folders in the current directory. But it would be nice to have a colored and nicely formatted view (for example like dfc for viewing the sizes of partitions). | This is not coloured, but also really nicely ordered by size and visualized: ncdu - NCurses Disk Usage apt-get install ncdu SYNOPSIS ncdu [options] dir DESCRIPTION ncdu (NCurses Disk Usage) is a curses-based version of the well-known 'du', and provides a fast way to see what directories are using your disk space. Output looks like this: ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help --- /var/www/freifunk ------------------------------------------------------------- 470,7MiB [##########] /firmware 240,8MiB [##### ] /ffki-firmware 157,9MiB [### ] /gluon-alfred-vis 102,6MiB [## ] chaosradio_162.mp3 100,2MiB [## ] /ffki-startseite 99,6MiB [## ] /ffki-startseite-origin 72,3MiB [# ] /startseite 66,2MiB [# ] /metameute-startseite 35,2MiB [ ] /startseite_site 11,9MiB [ ] /jungebuehne ncdu is nice, cause you can install it via apt on debian. Only colors would be cool and an export function that does not use the whole screen. gt5 - a diff-capable 'du-browser' gt5 looks quite the same, and there are some colors, but they have no meaning (only all files and folders are green). gt5 is also available via apt: sudo apt-get install gt5 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
151,689 | If there are two (or more) versions of a given RPM available in a YUM repository, how can I instruct yum to install the version I want? Looking through the Koji build service I notice that there are several versions. | To see what particular versions are available to you via yum you can use the --showduplicates switch . It gives you a list like "package name.architecture version": $ yum --showduplicates list httpd | expandLoaded plugins: fastestmirror, langpacks, refresh-packagekitLoading mirror speeds from cached hostfile * fedora: mirror.steadfast.netAvailable Packageshttpd.x86_64 2.4.6-6.fc20 fedora httpd.x86_64 2.4.10-1.fc20 updates As far as installing a particular version? You can append the version info to the name of the package, removing the architecture name, like so: $ sudo yum install <package name>-<version info> For example in this case if I wanted to install the older version, 2.4.6-6 I'd do the following: $ sudo yum install httpd-2.4.6-6 You can also include the release info when specifying a package. In this case since I'm dealing with Fedora 20 (F20) the release info would be "fc20", and the architecture info too. $ sudo yum install httpd-2.4.6-6.fc20$ sudo yum install httpd-2.4.6-6.fc20.x86_64 repoquery If you're ever unsure that you're constructing the arguments right you can consult with repoquery too. $ sudo yum install yum-utils # (to get `repoquery`)$ repoquery --show-duplicates httpd-2.4*httpd-0:2.4.6-6.fc20.x86_64httpd-0:2.4.10-1.fc20.x86_64 downloading & installing You can also use one of the following options to download a particular RPM from the web, and then use yum to install it. $ yum --downloadonly <package>-or-$ yumdownloader <package> And then install it like so: $ sudo yum localinstall <path to rpm> What if I want to download everything that package X requires? $ yumdownloader --resolve <package> Example $ yumdownloader --resolve vim-X11Loaded plugins: langpacks, presto, refresh-packagekitAdding en_US to language list--> Running transaction check---> Package vim-X11.x86_64 2:7.3.315-1.fc14 set to be reinstalled--> Finished Dependency Resolutionvim-X11-7.3.315-1.fc14.x86_64.rpm | 1.1 MB 00:01 Notice it's doing a dependency check, and then downloading the missing pieces. See my answer that covers it in more details here: How to download a file from repo, and install it later w/o internet connection? . References Get yum to install a specific package version | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/151689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
151,700 | I keep my digital music and digital photos in directories in a Windows partition, mounted at /media/win_c on my dual-boot box.I'd like to include those directories—but only those directories—in the locate database. However, as far as I can make out, updatedb.conf only offers options to exclude directories, not add them.Of course, I could remove /media from PRUNEPATHS , and then add a whole bunch of subdirectories ( /media/win_c/Drivers , /media/win_c/ProgramData ...) but this seems a very clunky way of doing it—surely there's a more elegant solution? (I tried just creating soft links to the Windows directories from an indexed linux partition, but that doesn't seem to help.) | There's no option for that in updatedb.conf . You'll have to arrange to pass options to updatedb manually. With updatedb from GNU findutils , pass --localpaths . updatedb --localpaths '/ /media/win_c/somewhere/Music /media/win_c/somewhere/Photos' With updatedb from mlocate , there doesn't appear a way to specify multiple roots or exclude a directory from pruning, so I think you're stuck with one database per directory. Set the environment variable LOCATE_PATH to the list of databases: updatedb --output ~/.media.mlocate.db --database-root /media/win_c/somewhere --prunepaths '/media/win_c/somewhere/Videos'export LOCATE_PATH="$LOCATE_PATH:$HOME/.media.mlocate.db" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81731/"
]
} |
151,729 | I want to sync some files from a remote server to my local computer. How can I make rsync to just copy the files with a certain file extension in the directory but no subdirectories? I assumed this to be an easy task, but embarassingly I'm not getting it for nearly 2 hours. So could someone give me an example? I did various experiments with something like the following command: rsync -a --include=what? --exclude=what? -e ssh [email protected]:/test /test | If you just want one extension, in one directory, why not just use regular globbing? rsync /home/you/rsync_this/*.jpg user@server:/remote/folder/ You can even copy multiple extensions with: rsync /home/you/rsync_this/*.{jpg,png,gif} user@server:/remote/folder/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243890/"
]
} |
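If you would rather stay with rsync's filter rules (as the question attempted), one combination that copies only top-level `*.jpg` files and never recurses might look like this; filter order matters, and the host and paths are placeholders.

```sh
# Pull only *.jpg from the top level of the remote directory;
# order matters: skip all subdirectories, keep jpg, drop the rest.
rsync -av -e ssh --exclude='*/' --include='*.jpg' --exclude='*' \
    user@example.com:/test/ /test/
```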
151,757 | Below is the script. I wanted to login several servers and check for the kernel version. #!/bin/bash#input server names line by line in server.txtcat server.txt | while read linedosshpass -p password ssh root@$line << EOFhostnameuname -rEOFdone I would expect output which goes like.. server1_hostnamekernel_versionserver2_hostnamekernel_version and so on.. I ran this script with about 80 servers in server.txt And the output i got was like..... Pseudo-terminal will not be allocated because stdin is not a terminal. Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.======================================================================================================== WARNING =======================================================================================================This system is solely for the use of authorized personnel. Individualsusing this system are subject to having some or all of their activitiesmonitored and recorded. Anyone using this system expressly consents tosuch monitoring and is advised that any unauthorized or improper use ofthis system may result in disciplinary action up to and includingtermination of employment. Violators may also be subject to civil and/orcriminal penalties.========================================================================Warning: no access to tty (Bad file descriptor).Thus no job control in this shell.xxxxdev012.6.32-431.23.3.el6.x86_64Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal.Pseudo-terminal will not be allocated because stdin is not a terminal. Here I got output for only 1 host, which is xxxxdev01 and that too comes with the ssh banner and other warning. I need output of all other hosts and without ssh banner.. What is going wrong here? | I can't tell you why you're not getting the expected output from the hostname and uname commands, but I can help with the extraneous text. The "Pseudo-terminal" lines are being printed by ssh because it tries to allocate a TTY by default when no command to be executed was provided on the command line. You can avoid that message by adding "-T" to the ssh command: sshpass -p password ssh -T root@$line The "Warning: no access to tty" line is coming from the shell on the remote system. csh and tcsh will print that message under certain circumstances. It's possible that it's triggered by something in the .cshrc or similar file on the remote system, trying to access some feature which requires a TTY. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43502/"
]
} |
151,763 | I was running a script that iterated over all the files on my Linux system and created some metadata about them, and it threw an error when it hit a broken symbolic link. I am newish to *nix, but I get the main idea behind linking files and how broken links come to exist. As far as I know, they are like the equivalent of litter in the street. Things that a program I'm removing wasn't smart enough to tell the package manager existed, and belonged to it, or something that got left behind in an upgrade. At first, I started to tweak the script I'm running to skip them, then I thought, 'well we could always delete them while we're down here...' I'm running Ubuntu 14.04 (Trusty Tahr). I can't see any reason not to, but before I go ahead and run this over my development system, is there any reason this might actually be a terrible idea? Do broken symlinks serve some purpose I am not aware of? | There are many reasons for broken symbolic links: A link was created to a target which no longer exists. Resolution: remove the broken symlink. A link was created for a target which has been moved. Or it's a relative link that's been moved relative to its target. (Not to imply that relative symlinks are a bad idea — quite the opposite: absolute symlinks are more prone to going stale because their target moved.) Resolution: find the intended target and fix the link. There was a mistake when creating the link. Resolution: find the intended target and fix the link. The link is to a file which is on a removable disk, network filesystem or other storage area which is not currently mounted.Resolution: none, the link isn't broken all the time. The link will work when the storage area is mounted. The link is to a file which exists only some of the time, by design. For example, the file is the cached output of a process, which is deleted when the information goes stale but only re-created upon explicit request. Or the link is to an inbox which is deleted when empty. Or the link is to a device file which is only present when the corresponding peripheral is attached.Resolution: none, the link isn't broken all the time. The link is only valid in a different storage hierarchy. For example, it is valid only in a chroot jail, or it's exported by an NFS server and only valid on the server or on some of its clients. Resolution: none, the link isn't broken everywhere. The link is broken for you, because you lack the permission to traverse a directory to reach the target, but it isn't broken for users with appropriate privilege. Resolution: none, the link isn't broken for everybody. The link is used to store information, as in the Firefox lock example cited by vinc17 . One reason to do it this way is that it's easier to populate a symlink atomically — there's no other way, whereas populating a file atomically is more complex: you need to create the file content under a temporary name, then move it into place, and handle stale temporary files left behind by a crash. Another reason is that symlinks are typically stored directly inside their inode on some filesystems, which makes reading them faster than reading the content of a file. Resolution: none. In this case, removing the link would be detrimental. If you can determine that a symlink falls into the first category, then sure, go ahead and delete it. Otherwise, abstain. A program that traverses directories recursively and cares about file contents should usually ignore broken symbolic links. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80706/"
]
} |
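If you do decide to hunt broken links down case by case, GNU find can list dangling symlinks without deleting anything, so each one can be reviewed against the categories above before acting; `-xtype l` is GNU-specific.

```sh
# List (but do not remove) symlinks whose target is currently missing.
find /home -xtype l 2>/dev/null
# After reviewing the list, the confirmed litter could be removed with: find ... -xtype l -delete
```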
151,779 | I have more than 90 subdirectories and inside each one, there will be a number of .txt files. What I need to do is to copy all those txt files out to one single directory. How can I do that? | use command : find . -name "*.txt" -exec cp {} /path/to/destination \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30324/"
]
} |
151,785 | I installed Raspbmc on my Raspberry Pi. It's working fine but I'm not able to find the terminal. How can I go about finding it? | use command : find . -name "*.txt" -exec cp {} /path/to/destination \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70720/"
]
} |
151,807 | I'm trying to pass multiple arguments to a function, but one of them consists of two words and I want the shell function to treat it as one arg: args=("$@")function(){ echo ${args[0]} echo ${args[1]} echo ${args[2]}} when I call this command sh shell hi hello guys bye I get this hihelloguys But what I really want is: hi hello guysbye | You should just quote the second argument. myfunc(){ echo "$1" echo "$2" echo "$3"}myfunc hi "hello guys" bye | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/151807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78050/"
]
} |
151,812 | Each device node under /dev has its own major/minor number pair. I know that we can retrieve this pair of numbers from the device node by means of stat , like this: stat -c 'major: %t minor: %T' <file> Or, ls -l also shows these numbers. But how can we get device node(s) by given major and minor numbers? The only way I'm aware of is some kind of ls -l + awk trick, but I really hope there is better solution. | I found a simpler approach using the sys pseudofilesystem, at /sys/dev you have the devices ordered by type an then by major/minor, the file uevent contains the device name and a bunch of other info. So for example, for file in $(find /sys/dev/ -name 7:0); do source ${file}/uevent; echo $DEVNAME; done; Echoes, loop0vcs Note: This was tested in Debian Wheezy | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46011/"
]
} |
151,815 | I have a folder with 36,348 files gz files. I want to unzip all of them. Running: gunzip ./* results in -bash: /usr/bin/gunzip: Argument list too long What's the easiest way to get around this? | Try: find . -type f -exec gunzip {} + This assumes that current directory only contains files that you want to unzip. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81787/"
]
} |
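Another way around the limit, since `printf` is a shell builtin and therefore not subject to the exec argument-length cap, is to stream the names to xargs; this assumes an xargs with `-0` (GNU or BSD) and that only the wanted `.gz` files sit in the current directory.

```sh
# Decompress every .gz in the current directory without hitting ARG_MAX.
printf '%s\0' ./*.gz | xargs -0 gunzip
```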
151,826 | A new Guix release came out some time ago. And I got the idea that if I can bootstrap glibc, gcc, and guix to HURD and Mach, I can have a non-Linux GNU system. But I also need some software like bash, emacs, binutils, coreutils, an init system. Do any of those have any system calls that are linux dependent? Would I be able to do it like in LFS? | Try: find . -type f -exec gunzip {} + This assumes that current directory only contains files that you want to unzip. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70332/"
]
} |
151,833 | I would like to use a Linux distro without a desktop environment, but I need to print out homework that I type up. I could always email it to myself and print from another computer, but it would be nice if I could just do something like print homework.txt from a bash prompt. Does anyone have a way of doing this? | CUPS understands many different types of files directly, including text, PostScript, PDF, and image files. This allows you to print from inside your applications or at the command-line, whichever is most convenient! Type either of the following commands to print a file to the default (or only) printer on the system: lp filename lpr filename Use the -d option with the lp command to print to a specific printer: lp -d printer filename Or the -P option with the lpr command: lpr -P printer filename Printing the Output of a Program Both the lp and lpr commands support printing from the standard input: program | lpprogram | lp -d printerprogram | lprprogram | lpr -P printer If the program does not provide any output, then nothing will be queued for printing. More advanced options can be added to the print job with the -o options . For exampling stapling: lpr -P printer -o StapleLocation=UpperLeft Source and more Details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68781/"
]
} |
151,844 | For the purpose of backups, I'd like to transfer (several) whole disk partitions over an ssh link. The source is a block special device and the target should be a regular file. Common tools seem ill-suited for this, though: scp will complain not a regular file tar will try to recreate device inodes on the target side rsync says skipping non-regular file My best bet currently is nc over a port forwarding, or one cat invocation on the remote side per partition, which means one password entry per partition unless one sets up public keys. Is there a more elegant solution? Environment would be any reasonable Linux live system. Currently I happen to have a Debian wheezy lying around, but it should not be too specific to that. | You could pipe through SSH. Example using dd : dd bs=1M if=/dev/disk | ssh -C target dd bs=1M of=disk.img If the network connection breaks during transfer, you can resume if you know how much was copied. For example if you're sure at least 1000MiB were transferred already (check the file size of disk.img ): dd bs=1M skip=1000 if=/dev/disk | ssh -C target dd bs=1M seek=1000 of=disk.img dd is just an example, it works just as well with other commands, as long as they work with pipes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20807/"
]
} |
151,850 | According to this answer and my own understanding, the tilde expands to the home directory: $ echo ~/home/braiam Now, whenever I want the shell expansion to work, i. e. using variable names such $FOO , and do not break due unexpected characters, such spaces, etc. one should use double quotes " : $ FOO="some string with spaces"$ BAR="echo $FOO"$ echo $BARecho some string with spaces Why doesn't this expansion works with the tilde? $ echo ~/some/path/home/braiam/some/path$ echo "~/some/path"~/some/path | The reason, because inside double quotes, tilde ~ has no special meaning, it's treated as literal. POSIX defines Double-Quotes as: Enclosing characters in double-quotes ( "" ) shall preserve the literal value of all characters within the double-quotes, with the exception of the characters dollar sign, backquote, and backslash, ... The application shall ensure that a double-quote is preceded by a backslash to be included within double-quotes. The parameter '@' has special meaning inside double-quotes Except $ , ` , \ and @ , others characters are treated as literal inside double quotes. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41104/"
]
} |
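A practical consequence of this: when a path needs both expansion and protection from word splitting, keep the tilde outside the quotes or use `$HOME` inside them. Two equivalent illustrations:

```sh
echo ~/"some dir with spaces/file"       # tilde left unquoted, the rest quoted
echo "$HOME/some dir with spaces/file"   # $HOME expands inside double quotes
```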
151,860 | I am on OS X trying to ssh into a ubuntu 12.04 server. I was able to SSH in -- until abruptly stuff stopped working. I've read online to use the -v to debug this. Output is shown below. If I ssh into a different box and then ssh from that box to the server I am able to login. I have no idea how to debug this problem but would like to learn. $ ssh -v me@serverOpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011debug1: Reading configuration data /etc/ssh_configdebug1: /etc/ssh_config line 20: Applying options for *debug1: /etc/ssh_config line 53: Applying options for *debug1: Connecting to server [IP] port 22.debug1: Connection established.debug1: identity file /Users/me/.ssh/id_rsa type 1debug1: identity file /Users/me/.ssh/id_rsa-cert type -1debug1: identity file /Users/me/.ssh/id_dsa type -1debug1: identity file /Users/me/.ssh/id_dsa-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.2ssh_exchange_identification: read: Connection reset by peer So far (on advice of message boards) I have looked for a hosts deny file -- but there is no such file on my machine. $ cat /etc/hosts.deny cat: /etc/hosts.deny: No such file or directory I have admin access on client machine but not on server. | The abrupt change could be the result of a change in the configuration file on the servers sshd configuration, but you indicate cannot check or alter that without admin right. You can still try the following if the server's admins cannot be reached (in time). Your log only indicates the local version string, you should check the versions of sshd running on the server and the intermediate machine. If these versions differ (especially between the local machine and the server and less between the intermediate machine and the server) there might be some negotiation incompatibility, this has happened before in ssh . The solution used to be to shorten the Ciphers, HostKeyAlgorithms and/or MACs entries, either on the commandline ( ssh -c aes256-ctr , etc.) or on in your /etc/ssh/ssh_config . You should look in the debug information (from connecting via the intermediate to the server) for appropriate values as argument for the -c / Ciphers , -o HostKeyAlgorithms / HostKeyAlgorithms and -m / MACs commandline resp. ssh_config changes. I haven't had this problem myself for a while, but IIRC when I did it was enough to manually force the Ciphers and HostKeyAlgorithms setting, after which I could update the server's sshd version and the problem went away. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18366/"
]
} |
151,867 | I installed Debian in VirtualBox (for various experiments which usually broke my system) and tried to launch the VirtualBox guest addon script. I logged in as root and tried to launch autorun.sh , but I got «Permission denied». ls -l shows that the script have an executable rights. Sorry, that I can't copy the output -- VirtualBox absolutely have no use without the addon, as neither a shared directory, nor a shared clipboard works. But just for you to be sure, I copied the rights by hands: #ls -l ./autorun.sh-r-xr-xr-x 1 root root 6966 Mar 26 13:56 ./autorun.sh At first I thought that it may be that the script executes something that gave the error. I tried to replace /bin/sh with something like #/pathtorealsh/sh -xv , but I got no output — it seems the script can't even be executed. I have not even an idea what could cause it. | Maybe your file system is mounted with noexec option set, so you can not run any executable files. From mount documentation: noexec Do not allow direct execution of any binaries on the mounted filesystem. (Until recently it was possible to run binaries anyway using a command like /lib/ld*.so /mnt/binary. This trick fails since Linux 2.4.25 / 2.6.0.) Try: mount | grep noexec Then check if your file system is listed in output. If yes, you can solve this problem, by re-mounting file system with exec option: mount -o remount,exec filesystem | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59928/"
]
} |
151,883 | I operate a Linux system which has a lot of users but sometimes an abuse occurs; where a user might run a single process that uses up more than 80% of the CPU/Memory. So is there a way to prevent this from happening by limiting the amount of CPU usage a process can use (to 10% for example)? I'm aware of cpulimit , but it unfortunately applies the limit to the processes I instruct it to limit (e.g single processes). So my question is, how can I apply the limit to all of the running processes and processes that will be run in the future without the need of providing their id/path for example? | While it can be an abuse for memory, it isn't for CPU: when a CPU is idle, a running process (by "running", I mean that the process isn't waiting for I/O or something else) will take 100% CPU time by default. And there's no reason to enforce a limit. Now, you can set up priorities thanks to nice . If you want them to apply to all processes for a given user, you just need to make sure that his login shell is run with nice : the child processes will inherit the nice value. This depends on how the users log in. See Prioritise ssh logins (nice) for instance. Alternatively, you can set up virtual machines. Indeed setting a per-process limit doesn't make much sense since the user can start many processes, abusing the system. With a virtual machine, all the limits will be global to the virtual machine. Another solution is to set /etc/security/limits.conf limits; see the limits.conf(5) man page. For instance, you can set the maximum CPU time per login and/or the maximum number of processes per login. You can also set maxlogins to 1 for each user. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/151883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67864/"
]
} |
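To make the limits.conf route from 151,883 concrete, a hedged sketch of entries for /etc/security/limits.conf; the group name and the numbers are invented for illustration, and limits.conf(5) documents the exact meaning of each item.
@students   hard   nproc      100        # at most 100 processes per user
@students   hard   cpu        60         # kill a process after 60 CPU-minutes
@students   hard   as         2000000    # roughly 2 GB of address space per process (KB)
@students   -      maxlogins  1          # one concurrent login per user
These limits are applied by PAM when a session is opened, so they only take effect for logins that happen after the change.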
151,889 | Given the following sourceable bar Bash script ... echo :$#:"$@": ... and the following executable foo Bash script: echo -n source bar:source barecho -n source bar foo:source bar foofunction _import { source "$@"}echo -n _import bar:_import barecho -n _import bar foo:_import bar foo I get the following output when running the foo Bash script, i.e. ./foo : source bar::0::source bar foo::1:foo:_import bar::1:bar:_import bar foo::1:foo: Here are my questions: Why does it make a difference when I call Bash's source command from the _import function as opposed to directly? How can I normalize the behavior of Bash's source command? I'm using Bash version 4.2.47(1)-release on Fedora version 20. | Gnouc's answer explains my first question: "Why does it make a difference when I call Bash's source command from the _import function as opposed to directly?" Regarding my second question: "How can I normalize the behavior of Bash's source command?" I think I found the following answer: By changing the _import function to: function _import { local -r file="$1" shift source "$file" "$@"} I get the following output when running the foo Bash script, i.e. ./foo : source bar::0::source bar foo::1:foo:_import bar::0::_import bar foo::1:foo: Rationale behind my question and this answer: An "imported" Bash script should be able to evaluate its own set of arguments via Bash's positional and special parameters even when none were given while importing it. Any arguments that MAY be passed to the importing Bash script MUST NOT be implicitly passed to the imported Bash script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27412/"
]
} |
151,911 | When I use below code in SSH terminal for CentOS it works fine: paste <(printf "%s\n" "TOP") But if I place the same line code in a shell script (test.sh) and run shell script from terminal, it throws error as this ./test.sh: line 30: syntax error near unexpected token (' ./test.sh: line 30: paste <(printf "%s\n" "TOP") How can I fix this problem? | Process substitution is not specified by POSIX, so not all POSIX shells support it, only some shells like bash , zsh , ksh88 , ksh93 . In CentOS system, /bin/sh is a symlink to /bin/bash . When bash is invoked with name sh , bash enters posix mode ( Bash Startup Files - Invoked with name sh ). In bash versions prior to 5.1, process substitution support was disabled when invoked in posix mode, causing a syntax error. The script should work if you call bash directly: bash test.sh . If not, maybe bash has entered posix mode. This can occur if you start bash with the --posix argument or if the variable POSIXLY_CORRECT is set when bash starts: $ bash --posix test.sh test.sh: line 54: syntax error near unexpected token `('test.sh: line 54: `paste <(printf "%s\n" "TOP")'$ POSIXLY_CORRECT=1 bash test.sh test.sh: line 54: syntax error near unexpected token `('test.sh: line 54: `paste <(printf "%s\n" "TOP") Or bash is built with --enable-strict-posix-default option. Here, you don't need process substitution, you can use standard shell pipes: printf "%s\n" "TOP" | paste - - is the standard way to tell paste to read the data from stdin. With some paste implementations, you can omit it though that's not standard. Where it would be useful is when pasting the output of more than one command like in: paste <(cmd1) <(cmd2) On systems that support /dev/fd/n , that can be done in sh with: { cmd1 4<&- | { cmd2 3<&- | paste /dev/fd/3 -; } 3<&0 <&4 4<&-; } 4<&0 (it's what <(...) does internally). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/151911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81390/"
]
} |
151,916 | I am trying to copy files over SSH , but cannot use scp due to not knowing the exact filename that I need. Although small binary files and text files transfer fine, large binary files get altered. Here is the file on the server: remote$ ls -la-rw-rw-r-- 1 user user 244970907 Aug 24 11:11 foo.gzremote$ md5sum foo.gz 9b5a44dad9d129bab52cbc6d806e7fda foo.gz Here is the file after I've moved it over: local$ time ssh [email protected] -t 'cat /path/to/foo.gz' > latest.gzreal 1m52.098suser 0m2.608ssys 0m4.370slocal$ md5sum latest.gz76fae9d6a4711bad1560092b539d034b latest.gzlocal$ ls -la-rw-rw-r-- 1 dotancohen dotancohen 245849912 Aug 24 18:26 latest.gz Note that the downloaded file is bigger than the one on the server! However, if I do the same with a very small file, then everything works as expected: remote$ echo "Hello" | gzip -c > hello.txt.gzremote$ md5sum hello.txt.gz08bf5080733d46a47d339520176b9211 hello.txt.gzlocal$ time ssh [email protected] -t 'cat /path/to/hello.txt.gz' > hi.txt.gz real 0m3.041suser 0m0.013ssys 0m0.005s local$ md5sum hi.txt.gz08bf5080733d46a47d339520176b9211 hi.txt.gz Both file sizes are 26 bytes in this case. Why might small files transfer fine, but large files get some bytes added to them? | TL;DR Don't use -t . -t involves a pseudo-terminal on the remote host and should only be used to run visual applications from a terminal. Explanation The linefeed character (also known as newline or \n ) is the one that when sent to a terminal tells the terminal to move its cursor down. Yet, when you run seq 3 in a terminal, that is where seq writes 1\n2\n3\n to something like /dev/pts/0 , you don't see: 1 2 3 but 123 Why is that? Actually, when seq 3 (or ssh host seq 3 for that matters) writes 1\n2\n3\n , the terminal sees 1\r\n2\r\n3\r\n . That is, the line-feeds have been translated to carriage-return (upon which terminals move their cursor back to the left of the screen) and line-feed. That is done by the terminal device driver. More exactly, by the line-discipline of the terminal (or pseudo-terminal) device, a software module that resides in the kernel. You can control the behaviour of that line discipline with the stty command. The translation of LF -> CRLF is turned on with stty onlcr (which is generally enabled by default). You can turn it off with: stty -onlcr Or you can turn all output processing off with: stty -opost If you do that and run seq 3 , you'll then see: $ stty -onlcr; seq 31 2 3 as expected. Now, when you do: seq 3 > some-file seq is no longer writing to a terminal device, it's writing into a regular file, there's no translation being done. So some-file does contain 1\n2\n3\n . The translation is only done when writing to a terminal device. And it's only done for display. similarly, when you do: ssh host seq 3 ssh is writing 1\n2\n3\n regardless of what ssh 's output goes to. What actually happens is that the seq 3 command is run on host with its stdout redirected to a pipe. The ssh server on host reads the other end of the pipe and sends it over the encrypted channel to your ssh client and the ssh client writes it onto its stdout, in your case a pseudo-terminal device, where LF s are translated to CRLF for display. Many interactive applications behave differently when their stdout is not a terminal. For instance, if you run: ssh host vi vi doesn't like it, it doesn't like its output going to a pipe. It thinks it's not talking to a device that is able to understand cursor positioning escape sequences for instance. 
So ssh has the -t option for that. With that option, the ssh server on host creates a pseudo-terminal device and makes that the stdout (and stdin, and stderr) of vi . What vi writes on that terminal device goes through that remote pseudo-terminal line discipline and is read by the ssh server and sent over the encrypted channel to the ssh client. It's the same as before except that instead of using a pipe , the ssh server uses a pseudo-terminal . The other difference is that on the client side, the ssh client sets the terminal in raw mode (and disables local echo ). That means that no translation is done there ( opost is disabled and also other input-side behaviours). For instance, when you type Ctrl-C , instead of interrupting ssh , that ^C character is sent to the remote side, where the line discipline of the remote pseudo-terminal sends the interrupt to the remote command. When you do: ssh -t host seq 3 seq 3 writes 1\n2\n3\n to its stdout, which is a pseudo-terminal device. Because of onlcr , that gets translated on host to 1\r\n2\r\n3\r\n and sent to you over the encrypted channel. On your side there is no translation ( onlcr disabled), so 1\r\n2\r\n3\r\n is displayed untouched (because of the raw mode) and correctly on the screen of your terminal emulator. Now, if you do: ssh -t host seq 3 > some-file There's no difference from above. ssh will write the same thing: 1\r\n2\r\n3\r\n , but this time into some-file . So basically all the LF in the output of seq have been translated to CRLF into some-file . It's the same if you do: ssh -t host cat remote-file > local-file All the LF characters (0x0a bytes) are being translated into CRLF (0x0d 0x0a). That's probably the reason for the corruption in your file. In the case of the second smaller file, it just so happens that the file doesn't contain 0x0a bytes, so there is no corruption. Note that you could get different types of corruption with different tty settings. Another potential type of corruption associated with -t is if your startup files on host ( ~/.bashrc , ~/.ssh/rc ...) write things to their stderr, because with -t the stdout and stderr of the remote shell end up being merged into ssh 's stdout (they both go to the pseudo-terminal device). You don't want the remote cat to output to a terminal device there. You want: ssh host cat remote-file > local-file You could do: ssh -t host 'stty -opost; cat remote-file' > local-file That would work (except in the writing to stderr corruption case discussed above), but even that would be sub-optimal as you'd have that unnecessary pseudo-terminal layer running on host . Some more fun: $ ssh localhost echo | od -tx10000000 0a0000001 OK. $ ssh -t localhost echo | od -tx10000000 0d 0a0000002 LF translated to CRLF $ ssh -t localhost 'stty -opost; echo' | od -tx10000000 0a0000001 OK again. $ ssh -t localhost 'stty olcuc; echo x'X That's another form of output post-processing that can be done by the terminal line discipline. $ echo x | ssh -t localhost 'stty -opost; echo' | od -tx1Pseudo-terminal will not be allocated because stdin is not a terminal.stty: standard input: Inappropriate ioctl for device0000000 0a0000001 ssh refuses to tell the server to use a pseudo-terminal when its own input is not a terminal. You can force it with -tt though: $ echo x | ssh -tt localhost 'stty -opost; echo' | od -tx10000000 x \r \n \n0000004 The line discipline does a lot more on the input side. Here, echo doesn't read its input nor was asked to output that x\r\n\n so where does that come from? 
That's the local echo of the remote pseudo-terminal ( stty echo ). The ssh server is feeding the x\n it read from the client to the master side of the remote pseudo-terminal. And the line discipline of that echoes it back (before stty opost is run which is why we see a CRLF and not LF ). That's independent from whether the remote application reads anything from stdin or not. $ (sleep 1; printf '\03') | ssh -tt localhost 'trap "echo ouch" INT; sleep 2'^Couch The 0x3 character is echoed back as ^C ( ^ and C ) because of stty echoctl and the shell and sleep receive a SIGINT because stty isig . So while: ssh -t host cat remote-file > local-file is bad enough, but ssh -tt host 'cat > remote-file' < local-file to transfer files the other way across is a lot worse. You'll get some CR -> LF translation, but also problems with all the special characters ( ^C , ^Z , ^D , ^? , ^S ...) and also the remote cat will not see eof when the end of local-file is reached, only when ^D is sent after a \r , \n or another ^D like when doing cat > file in your terminal. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
151,920 | I've read several guides on mounting samba shares, but no luck yet. I'm able to "login" to my samba share with the following command: smbclient //vvlaptop/Documents It asks for password, but there is no password so I just press Enter. It then successfully logs me in with the prompt smb: \> . For some reason I'm unable to mount the share. This is the command I'm using: mount -t cifs //vvlaptop/Documents /mnt/virginiamount error: could not resolve address for vvlaptop: Unknown error How can I mount this device successfully? | smbclient is able to look up host names mount is Not able to look up host names To mount by name you have to use a local DNS service like Avahi. Without a local DNS, you have to specify the IP address when connecting. You can use nmblookup -S WORKGROUP to discover the IP address. mount -t cifs //192.168.0.123/Documents /mnt/virginia Usually a better way to access shares is by using smbnetfs . This will allow you to mount many shares without root permission. smbnetfs ~/mountdirfusermount -u ~/mountdir # To unmount. The manpage for smbnetfs will tell you more. If a share requires login and password, then follow these steps. mkdir ~/.smbcp /etc/samba/smb.conf /etc/smbnetfs.conf ~/.smb/touch ~/.smb/smbnetfs.authchmod 600 ~/.smb/* Edit the file ~/.smb/smbnetfs.auth to insert credentials. File format auth "hostname" "username" "password" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/151920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23373/"
]
} |
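A sketch tying the two halves of 151,920 together — mounting by IP with mount.cifs options. The IP address, share name and mount point are the ones from the question; the credentials file (and the user name "me") is a hypothetical addition for shares that do require a login.
# password-less share, with the mounted files owned by the invoking user
sudo mount -t cifs //192.168.0.123/Documents /mnt/virginia -o guest,uid=$(id -u)
# share that needs a login: keep the secrets out of the command line
printf 'username=me\npassword=secret\n' > /home/me/.smbcred && chmod 600 /home/me/.smbcred
sudo mount -t cifs //192.168.0.123/Documents /mnt/virginia -o credentials=/home/me/.smbcred,uid=$(id -u)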
151,937 | Quite often times when I do a reboot, I get the following error message: kernel: watchdog watchdog0: watchdog did not stop! I tried to find out more about watchdog by doing man watchdog , but it says no manual entry. I tried yum list watchdog and found that it was not installed. However, when I look at the /dev directory, I actually found two watchdogs: watchdog and watchdog0 I am curious. Do I actually own any watchdogs? Why does the kernel complain that it did not stop when I do a reboot? | Most modern PC hardware includes watchdog timer facilities. You can read more about them here via wikipedia: Watchdog Timers . Also from the Linux kernel docs: excerpt - https://www.kernel.org/doc/Documentation/watchdog/watchdog-api.txt A Watchdog Timer (WDT) is a hardware circuit that can reset the computer system in case of a software fault. You probably knew that already. Usually a userspace daemon will notify the kernel watchdog driver via the /dev/watchdog special device file that userspace is still alive, at regular intervals. When such a notification occurs, the driver will usually tell the hardware watchdog that everything is in order, and that the watchdog should wait for yet another little while to reset the system. If userspace fails (RAM error, kernel bug, whatever), the notifications cease to occur, and the hardware watchdog will reset the system (causing a reboot) after the timeout occurs. The Linux watchdog API is a rather ad-hoc construction and different drivers implement different, and sometimes incompatible, parts of it. This file is an attempt to document the existing usage and allow future driver writers to use it as a reference. This SO Q&A titled, Who is refreshing hardware watchdog in Linux? , covers the linkage between the Linux kernel and the hardware watchdog timer. What about the watchdog package? The description in the RPM makes this pretty clear, IMO. The watchdog daemon can either act as a software watchdog or can interact with the hardware implementation. excerpt from RPM description The watchdog program can be used as a powerful software watchdog daemon or may be alternately used with a hardware watchdog device such as the IPMI hardware watchdog driver interface to a resident Baseboard Management Controller (BMC). watchdog periodically writes to /dev/watchdog; the interval between writes to /dev/watchdog is configurable through settings in the watchdog sysconfig file. This configuration file is also used to set the watchdog to be used as a hardware watchdog instead of its default software watchdog operation. In either case, if the device is open but not written to within the configured time period, the watchdog timer expiration will trigger a machine reboot. When operating as a software watchdog, the ability to reboot will depend on the state of the machine and interrupts. When operating as a hardware watchdog, the machine will experience a hard reset (or whatever action was configured to be taken upon watchdog timer expiration) initiated by the BMC. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/151937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24509/"
]
} |
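To tie the API excerpt in 151,937 back to the boot-time warning: the "watchdog did not stop!" message normally means the device was closed without the magic-close character. A purely illustrative shell sketch follows — opening /dev/watchdog arms the hardware timer, so do not run this casually on a machine you care about.
exec 3>/dev/watchdog     # opening the device starts the countdown
for i in 1 2 3; do
    printf '\0' >&3      # any write is a keepalive ("petting the dog")
    sleep 10
done
printf 'V' >&3           # magic close: ask the driver to stop the timer
exec 3>&-                # close without the 'V' (or with a NOWAYOUT driver)
                         # and the kernel logs "watchdog did not stop!"
This is the same dance a watchdog daemon performs on your behalf; at reboot the message simply indicates that whatever had the device open (often systemd or such a daemon) released it without the magic close.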
151,951 | Assuming you know the target is a symbolic link and not a file, is there any difference between using rm and unlink to remove the link? | Anytime you have these types of questions it's best to conceive of a little test to see what's actually happening. For this you can use strace . unlink $ touch file1$ strace -s 2000 -o unlink.log unlink file1 rm $ touch file1$ strace -s 2000 -o rm.log rm file1 When you take a look at the 2 resulting log files you can "see" what each call is actually doing. Breakdown With unlink it's invoking the unlink() system call: ....mmap(NULL, 106070960, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f6d025cc000close(3) = 0unlink("file1") = 0close(1) = 0close(2) = 0exit_group(0) = ?.... With rm it's a slightly different path: ....ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0newfstatat(AT_FDCWD, "file1", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0geteuid() = 1000newfstatat(AT_FDCWD, "file1", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0faccessat(AT_FDCWD, "file1", W_OK) = 0unlinkat(AT_FDCWD, "file1", 0) = 0lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)close(0) = 0close(1) = 0close(2) = 0exit_group(0) = ?+++ exited with 0 +++... The system calls unlink() and unlinkat() are essentially the same except for the differences described in this man page: http://linux.die.net/man/2/unlinkat . excerpt The unlinkat() system call operates in exactly the same way as either unlink(2) or rmdir(2) (depending on whether or not flags includes the AT_REMOVEDIR flag) except for the differences described in this manual page. If the pathname given in pathname is relative, then it is interpreted relative to the directory referred to by the file descriptor dirfd (rather than relative to the current working directory of the calling process, as is done by unlink(2) and rmdir(2) for a relative pathname). If the pathname given in pathname is relative and dirfd is the special value AT_FDCWD, then pathname is interpreted relative to the current working directory of the calling process (like unlink(2) and rmdir(2)). If the pathname given in pathname is absolute, then dirfd is ignored. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
151,969 | I have a recently-compiled Linux kernel image (vmlinuz file) and I want to boot into it. I am aware that this won't give me a familiar Linux system, but I am hoping to be able to at least run some basic "Hello world" program as the init process. Is this even possible, and if so, how? So far I have tried to do this by installing GRUB on a USB which had an ext2 filesystem with the vmlinuz file in /boot. It must have loaded the kernel image because it ended in a kernel panic message: "VFS: Unable to mount root fs on unknown-block(0,0)" Here is the entry in grub.cfg: menuentry 'linux' --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0)' search --no-floppy --fs-uuid --set=root <my USB drive's UUID> linux /boot/vmlinuz root=UUID=<my USB drive's UUID> ro $vt_handoff} Thanks for any help. | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/151969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81868/"
]
} |
152,003 | I'm using find -mtime -2 to find files modified in the past 24 hours or less.The output looks like this. /home/user/logs/file-2014-08-22.log/home/user/logs/file-2014-08-23.log I need to save the first line of the output to a variable and then save the second line to a separate variable. I can't just pipe it. I know you might suggest | grep 22 or 23 but this is part of a bash script that will run many times, there will be a different set of files with different names next time so grep would be too specific. Could awk accomplish this? If so, how? . | Assuming that there are no spaces or the like in any of your filenames, there are a couple of ways of doing this. One is just to use an array: files=( $(find -mtime -2) )a=${files[1]}b=${files[2]} files will be an array of all the paths output by find in order, indexed from zero. You can get whichever lines you want out of that. Here I've saved the second and third lines into a and b , but you could use the array elements directly too. An alternative if you have GNU find or another with the printf option is to use it in combination with read and process substitution : read junk a b junk < <(find -printf '%p ') This one turns all of find 's output into a single line and then provides that line as the input to read , which saves the first word (path) into junk, the second into a , the third into b , and the rest of the line into junk again. Similarly, you can introduce the paste command for the same effect on any POSIX-compatible system: read junk a b junk < <(find -mtime -2 | paste -s) paste -s will convert its input into a single tab-separated line, which read can deal with again. In the general case, if you're happy to execute the main command more than once (not necessary here), you can use sed easily: find | sed -n 2p That will print only the second line of the output, by suppressing ordinary output with -n and selecting line 2 to p rint. You can also stitch together head and tail for the same effect, which will likely be more efficient in a very long file. All of the above have the same effect of storing the second and third lines into a and b , and all still have the assumption that there are no spaces, tabs, newlines, or any other characters that happen to be in your input field separator ( IFS ) value in any of the filenames. Note though that the output order of find is undefined, so "second file" isn't really a useful identifier unless you're organising them to be ordered some other way. It's likely to be something close to creation order in many cases, but not all. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80015/"
]
} |
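A small, hedged addition to 152,003: on bash 4 and later, mapfile avoids the word-splitting caveat of the unquoted array assignment, because it stores one line of find output per element (file names containing embedded newlines will still break it).
mapfile -t files < <(find . -mtime -2)
a=${files[0]}    # first path printed by find
b=${files[1]}    # second path printed by find
printf 'a=%s\nb=%s\n' "$a" "$b"
The ordering caveat from the answer still applies: find's output order is not guaranteed, so sort it first if "second file" needs to mean something specific.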
152,031 | I am trying to run a job using at , and I have a bash script (I do have #!/bin/bash as the first line), but the job fails because of use of [[ and ]] in the script, I see the following in the mail that it generates: =sh: 58: [[: not found=sh: 58: [[: not found=sh: 58: [[: not found=sh: 58: [[: not found=sh: 58: [[: not found The /bin/sh is a link to dash , but if I have the she-bang line as #!/bin/bash in my script - should it not use bash instead? P.S If I just run the script on my bash prompt - it works fine with no errors (and it has executable permissions). --EDIT-- I am adding the job like this: $ at 1:30 am today -f /path/to/my/script.sh$ which bash /bin/bash$ ls -l /bin/shlrwxrwxrwx 1 root root 4 Mar 30 2012 /bin/sh -> dash$ uname -aLinux server1 3.5.0-46-generic #70~precise1-Ubuntu SMP Thu Jan 9 23:55:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | POSIX defined that at can use SHELL environment variable as alternative to /bin/sh , but did not restrict it: SHELL Determine a name of a command interpreter to be used to invoke the at-job. If the variable is unset or null, sh shall be used. If it is set to a value other than a name for sh, the implementation shall do one of the following: use that shell; use sh; use the login shell from the user database; or any of the preceding accompanied by a warning diagnostic about which was chosen. Some implementation of at can give you ability to chose which shell you want to run, like -k for Korn shell, -c for C-shell. And not all implementation of at allow SHELL to substitute sh . So POSIX also guaranteed the reliable way to use another shell is explicit call it : Some implementations do not allow substitution of different shells using SHELL. System V systems, for example, have used the login shell value for the user in /etc/passwd. To select reliably another command interpreter, the user must include it as part of the script, such as: $ at 1800 myshell myscript EOT job ... at ... $ A simple way, using bash to run your script is passing bash script as stdin to at : echo "bash /path/to/yourscript" | at <time> Example: echo "bash /path/to/yourscript" | at 16:30 will run bash /path/to/yourscript at 16:30 today. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47055/"
]
} |
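One more hedged variation on 152,031: a here-document keeps multi-line at jobs readable, and calling bash explicitly inside the job sidesteps the dash limitation the same way the echo pipeline does.
at 01:30 <<'EOF'
bash /path/to/my/script.sh
EOF
As with any at job, anything the script writes to stdout or stderr is mailed to the submitting user rather than shown on a terminal.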
152,039 | When my debian jessie desktop box wakes up from sleep (via the new shiny systemd) my mouse settings are returned to their defaults, having reset my customisation xinput set-prop 12 'Device Accel Constant Deceleration' 2.5 which runs when I log in. how can I run an arbitrary user script on wakeup? (assume that the user is the owner of the X session) As far as I can recall, the following is the only customisation I've made of the systemd setup (yes, I know it's completely wrong because it doesn't work for arbitrary users, but I've not worked out how to do that yet... this is somewhat related) additionally, how can I run an arbitrary user script before wakeup, as the user who is currently using the X screen? cat /etc/systemd/system/i3lock.service #systemctl enable i3lock.service[Unit]Description=i3lockBefore=sleep.target[Service]User=fommilType=forkingEnvironment=DISPLAY=:0ExecStart=/usr/bin/i3lock -c 000000[Install]WantedBy=sleep.target | This answer is based on askubuntu.com/a/661747/394818 (as also referred to in the comment by @sun-bear), askubuntu.com/q/616272/394818 and superuser.com/a/1269158/585953 . Using a system service: Create the file /etc/systemd/system/my_user_script.service : [Unit]Description=Run my_user_scriptAfter=suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target[Service]ExecStart=/path/to/my_user_script#User=my_user_name#Environment=DISPLAY=:0[Install]WantedBy=suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target Remove suspend/hibernate/hybrid in case the service should only be executed after waking up from a specific type of sleep. In case the service needs to be run by a specific user, uncomment the User= and Environment= lines and replace the relevant user name. Install the service file with: sudo systemctl enable my_user_script Using a user service will not work: In order to avoid setting a hard coded user name with User= , one could create the exact same service file at ~/.config/systemd/user/my_user_script.service and activate with systemctl --user enable my_user_script However, that will not work. @grawity explains in more detail at unix.stackexchange.com/a/174837/163108 why that is: sleep.target is specific to system services. The reason is, sleep.target is not a magic target that automatically gets activated when going to sleep. It's just a regular target that puts the system to sleep – so the 'user' instances of course won't have an equivalent. (And unfortunately the 'user' instances currently have no way to depend on systemwide services.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22890/"
]
} |
152,042 | I have a preprocess command to output a file ./preprocess.sh > preprocessed_file and the preprocessed_file will be used like this: while read line; do ./research.sh $line & done < preprocessed_file rm -f preprocessed_file Is there any way to direct the output to the while read line part instead of outputting to the preprocessed_file? I think there should be a better way other than using this temp preprocessed_file . | You can use bash process substitution : while IFS= read -r line; do ./research.sh "$line" & done < <(./preprocess.sh) Some advantages of process substitution: No need to save temporary files. Better performance: reading from another process is often faster than writing to disk and then reading back in. It can also save time, since process substitution is performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56579/"
]
} |
152,081 | The mysql user cannot use ports below 1024 because these are reserved for the root user. Apache, on the other hand, can use port 80. Apache runs as root before it runs as Apache and thus it can use port 80. It can even listen to port 81 and any other port. However, when I tried to get Apache to listen on port 79, it did not work. I tried to listen on port 1 too, and that did not work either. When I change the Apache settings, Apache restarts just fine, but it doesn’t actually work on the web. Can I use port 1 on the web? | I'm going to use Firefox as an example, because its open source and easy to find the information for, but this applies (probably with slightly different lists of ports) to other browsers, too. In August 2001, CERT issued a vulnerability note about how a web browser could be used to send near-arbitrary data to TCP ports chosen by an attacker, on any arbitrary IP address. This could be used to, for example, send emails which would appear to come from the user running the web browser. In order to mitigate this, Mozilla (as well as many other vendors) blocked Firefox from accessing certain ports . The two ports you tried, 79 and 1, happen to be on the blocklist. The source contains the full list of blocked ports . You can (on your browser) override this list using the preferences network.security.ports.banned.override and network.security.ports.banned . This isn't useful on the Internet in general, as you'd have to convince everyone who might visit your site to go to about:config and change them. (Note: Current versions of Firefox will give an error message explaining that if you try to browse to a site on a blocked port.) In general, there is little reason to use additional HTTP ports, at least externally. If you have to, prefer traditional extra ports like 8080, 8000, etc. that are far less likely to be blocked or at least ones outside of the IANA-assigned system ports range (0-1023). See the IANA port registry for more details. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152081",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81947/"
]
} |
152,084 | I want to run something like % sudo grep -i -a 'some-string' <drive device> On Linux, <drive device> would be something like /dev/sda1 . How can I find the equivalent on Darwin? (In case it's not obvious, I'm on a last-ditch-effort hunt for accidentally deleted data.) | If the filesystem takes over the whole disk, OS X currently uses a name like /dev/disk5 . If the disk is partitioned, it adds an s# suffix, like /dev/disk5s2 for the second partition. ( s is short for "slice," a BSDism functionally equivalent to a partition.) Disks are numbered sequentially in discovery order by the OS, on boot, so you may have to experiment a bit to figure out which device is which. If you say diskutil list at the command line, you get a detailed list of available disks, including their /dev nodes and volume names: /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *121.3 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_CoreStorage 121.0 GB disk0s2 3: Apple_Boot Boot OS X 134.2 MB disk0s3/dev/disk1 ...etc../dev/disk7 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS Time Machine *999.5 GB disk7 In the case of disk0 above, there are three slices on the disk, so there is a /dev/disk0 for the whole disk, plus /dev/disk0s1 through ...s3 . In the case of disk7 , there are no slices reported, so you would access that one through the whole-disk /dev/disk7 node. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
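Putting the answer to 152,084 back into the original recovery hunt, with the identifiers from the diskutil listing standing in as examples:
diskutil list                                  # find the right identifier first
sudo grep -i -a 'some-string' /dev/disk0s2     # scan a single slice
sudo grep -i -a 'some-string' /dev/disk7       # or a whole, unsliced disk
For a long sequential scan the raw nodes ( /dev/rdisk0s2 and friends) are usually noticeably faster, since they bypass the buffer cache.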
152,103 | Is this a problem on Linux like it is one Windows? Installing and uninstalling things that end up leaving behind little bits and pieces that accumulate and have a negative effect? If so, what can I do to prevent this? | Yes and no. *nix has a huge advantage over Windows in package management. Unlike in Windows where you must rely on third-party packages to have sane (un)installers, *nix distributions offer package managers that take care of installation and uninstallation in a unified manner. As a result, when you remove a package, all the system-level files for that package will be removed; you do not need to worry about this clutter. However, there is one place that programs might create files which won't be removed with the package: your $HOME directory. Many files keep configuration, save-games, etc. in $HOME , but package managers should never touch anything in $HOME . As a result, when you remove a package, any files it created in your home directory will persist. There is a silver lining; if you really want to clean out all left-over files from a package that you've uninstalled, the nuclear option isn't a reinstallation, it'd be wiping your $HOME . Now, this would typically still be an over-reaction because most programs tend to store their files in a single directory under $HOME (often $HOME/.name-of-app/ or $HOME/.config/name-of-app/ ). The ideal spring cleaning of these files would just be to remove the per-program directory—that, coupled with the standard uninstallation of the package, should be enough to rid your system of any files created/owned by the package. Note: YMMV | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81819/"
]
} |
152,171 | I have a 4TB drive with a 4096 byte block size. I want to check a very specific set of blocks, around the 700,000,000th block or so for bad sectors. However, badblocks seems to only support int32 as the stop and start block counts, which means that it's impossible for me to specify this range of blocks. Is there another way I can scan this drive for badblocks? I don't want to wait the 7 hours it's going to take to test the whole drive. It is a single drive from an mdadm array so it does not contain a usable file system. | Tell badblocks to use the larger block size and it will work above 2TB. I used this on a WD 6TB drive: badblocks -b 4096 -v /dev/sda | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4772/"
]
} |
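To address the original wish in 152,171 — scanning only the region around the 700,000,000th 4096-byte block instead of the whole drive — badblocks accepts an optional last-block and first-block after the device, counted in units of the -b block size. A sketch with an invented window and a placeholder device name:
# syntax: badblocks [-b size] device [last-block [first-block]]
badblocks -b 4096 -v /dev/sdX 700100000 699900000
That restricts the read test to blocks 699,900,000 through 700,100,000, which takes seconds rather than hours.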
152,178 | I'm attempting to replace a pattern in a file with the complete contents of another file using sed. Currently I am using: sed "s/PATTERN/`cat replacement.txt`/g" "openfile.txt" > "output.txt" However, if the replacement file contains any characters such as ' , " , or / I start getting errors due to input not being sanitized. I have been trying to use this guide to aid me, but I am finding it quite hard to follow. If I try the suggested r file command, I just end up with a string. What is the best way to approach this? | This should work for you. I did not vote to close as duplicate since you had specified you have ' , / and " characters in your file. So, I did the testing as below. I have file1 contents as below. cat file1ramesh' has " and /in this fileand trying"to replace'the contents in /other file. Now, file2 is as below. cat file2This is file2PATTERN After pattern contents go here. As per the shared link, I created the script.sed as below. cat script.sed/PATTERN/ { r file1 d} Now, when I run the command as sed -f script.sed file2 , I get the output as, This is file2ramesh' has " and /in this fileand trying"to replace'the contents in /other file.After pattern contents go here. EDIT : This works well with multiple patterns in the file as well. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55796/"
]
} |
152,186 | While starting MariaDB I got [Warning] Could not increase number of max_open_files to more than 1024 (request: 4607) $ sudo systemctl status mysqld● mysqld.service - MariaDB database server Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled) Active: activating (start-post) since Tue 2014-08-26 14:12:01 EST; 2s agoMain PID: 8790 (mysqld); : 8791 (mysqld-post) CGroup: /system.slice/mysqld.service ├─8790 /usr/bin/mysqld --pid-file=/run/mysqld/mysqld.pid └─control ├─8791 /bin/sh /usr/bin/mysqld-post └─8841 sleep 1Aug 26 14:12:01 acpfg mysqld[8790]: 140826 14:12:01 [Warning] Could not increase number of max_open_files to more than 1024 (request: 4607) I tried unsuccessfully to fix the problem with max_open_files inside this file: $ sudo nano /etc/security/limits.conf mysql hard nofile 8192mysql soft nofile 1200 I even restarted the computer again, but I got the same problem. The /etc/mysql/my.cnf looks like this: [mysql]# CLIENT #port = 3306socket = /home/u/tmp/mysql/mysql.sock[mysqld]# GENERAL #user = mysqldefault-storage-engine = InnoDBsocket = /home/u/tmp/mysql/mysql.sockpid-file = /home/u/tmp/mysql/mysql.pid# MyISAM #key-buffer-size = 32Mmyisam-recover = FORCE,BACKUP# SAFETY #max-allowed-packet = 16Mmax-connect-errors = 1000000skip-name-resolvesql-mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BYsysdate-is-now = 1innodb = FORCEinnodb-strict-mode = 1# DATA STORAGE #datadir = /home/u/tmp/mysql/# BINARY LOGGING #log-bin = /home/u/tmp/mysql/mysql-binexpire-logs-days = 14sync-binlog = 1# CACHES AND LIMITS #tmp-table-size = 32Mmax-heap-table-size = 32Mquery-cache-type = 0query-cache-size = 0max-connections = 500thread-cache-size = 50open-files-limit = 65535table-definition-cache = 1024table-open-cache = 2048# INNODB #innodb-flush-method = O_DIRECTinnodb-log-files-in-group = 2innodb-log-file-size = 128Minnodb-flush-log-at-trx-commit = 1innodb-file-per-table = 1innodb-buffer-pool-size = 2G# LOGGING #log-error = /home/u/tmp/mysql/mysql-error.loglog-queries-not-using-indexes = 1slow-query-log = 1slow-query-log-file = /home/u/tmp/mysql/mysql-slow.log How is it possible to fix the problem with max_open_files? | Edit /etc/security/limits.conf and add the following lines mysql soft nofile 65535mysql hard nofile 65535 then reboot. Then edit /usr/lib/systemd/system/mysqld.service or /usr/lib/systemd/system/mariadb.service and add LimitNOFILE=infinityLimitMEMLOCK=infinity Then restart the db service: systemctl reload mariadb.service | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34872/"
]
} |
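A sketch of an alternative for 152,186 that avoids editing the unit file under /usr/lib (which a package upgrade can overwrite): a systemd drop-in carries the same LimitNOFILE override. The unit may be called mariadb.service or mysqld.service depending on the distribution.
sudo mkdir -p /etc/systemd/system/mariadb.service.d
sudo tee /etc/systemd/system/mariadb.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65535
EOF
sudo systemctl daemon-reload
sudo systemctl restart mariadb.service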
152,208 | I am trying to use awk to print variables from various columns in a line delimited by , . The columns and pattern which I want to print are stored in another variable b . The pattern a and print pattern b are given below: a='15986327,415532694,850257614,875121642,20140819' b='$1","$2","$4' I am trying to get the output as below. 15986327,415532694,875121642 I am trying to pass the pattern b to awk using the -v switch but am not sure how to use that with print inside awk . I tried the commands below, but this is not the output I was trying to get: echo $a |awk -v ba="${b}" -F"," '{print ba}' which prints $1","$2","$4 and echo $a |awk -v ba="${b}" -F"," '{print $ba}' which prints 15986327,415532694,850257614,875121642,20140819 | You don't need to use variable ba , try: $ echo $a | awk -F',' '{print '"$b"'}' which outputs 15986327,415532694,875121642 With this, $b is expanded by the shell, not by awk . And the rest of the awk statement is not affected, because it is enclosed in single quotes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26374/"
]
} |
152,222 | In Ubuntu (and I guess in Debian too) there is a system script named update-grub which automatically executes grub-mkconfig -o with the correct path for the GRUB configuration file. Is there a similar command for Red Hat-based distributions? If not, how does the system know where the GRUB configuration file is to update when a new kernel version is installed? | After analyzing the scripts in Fedora, I realize that the configuration file path is read from the symlink /etc/grub2.conf . The correct grub2-mkconfig line is thus: grub2-mkconfig -o "$(readlink -e /etc/grub2.conf)" As noted in comments, it might be /etc/grub2.cfg , or /etc/grub2-efi.cfg on a UEFI system. Actually, both links might be present at the same time and pointing to different locations . The -e flag to readlink will error out if the target file does not exist, but on my system both existed... Check your commands, I guess. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22966/"
]
} |
152,260 | I am using Ubuntu 14.04 . I want to change the http proxy settings from the command line. This should be equivalent to changing in the GUI(All Settings->Network->Network Proxy) and clicking the button Apply System Wide . I don't want to restart/logout the system as I am planning to change the settings dynamically from a script( bash ). | From what I understand, setting proxies system-wide via that GUI does three things: Set the corresponding values in the dconf database. Set the values in /etc/environment . Set the values in /etc/apt/apt.conf . 1 and 3 take effect immediately. /etc/environment is parsed on login, so you will need to logout and login for that to take effect. (Note that this is login proper, not merely running a login shell.)The following script should be equivalent (assuming http/https proxies): #! /bin/bashHTTP_PROXY_HOST=proxy.example.comHTTP_PROXY_PORT=3128HTTPS_PROXY_HOST=proxy.example.comHTTPS_PROXY_PORT=3128gsettings set org.gnome.system.proxy mode manualgsettings set org.gnome.system.proxy.http host "$HTTP_PROXY_HOST"gsettings set org.gnome.system.proxy.http port "$HTTP_PROXY_PORT"gsettings set org.gnome.system.proxy.https host "$HTTPS_PROXY_HOST"gsettings set org.gnome.system.proxy.https port "$HTTPS_PROXY_PORT"sudo sed -i.bak '/http[s]::proxy/Id' /etc/apt/apt.confsudo tee -a /etc/apt/apt.conf <<EOFAcquire::http::proxy "http://$HTTP_PROXY_HOST:$HTTP_PROXY_PORT/";Acquire::https::proxy "http://$HTTPS_PROXY_HOST:$HTTPS_PROXY_PORT/";EOFsudo sed -i.bak '/http[s]_proxy/Id' /etc/environmentsudo tee -a /etc/environment <<EOFhttp_proxy="http://$HTTP_PROXY_HOST:$HTTP_PROXY_PORT/"https_proxy="http://$HTTPS_PROXY_HOST:$HTTPS_PROXY_PORT/"EOF Even though it requires a re-login for PAM to apply /etc/environment everywhere, in a current shell you can still extract the values in that file: export http_proxy=$(pam_getenv http_proxy) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73864/"
]
} |
152,264 | I have a added a local proxy for all my hosts in my .ssh config, however I want to shell into my local vm without the proxy command. Output of my ssh attempt: debug1: /Users/bbarbour/.ssh/config line 1: Applying options for local.devdebug1: /Users/bbarbour/.ssh/config line 65: Applying options for * Given the following ssh config how do I prevent the ProxyCommand from being applied to the local.dev entry? Host local.dev HostName dev.myserver.com User developer...Host * ProxyCommand /usr/local/bin/corkscrew 127.0.0.1 8840 %h %p | You can exclude local.dev from ProxyCommand, using ! before it: Host * !local.dev ProxyCommand /usr/local/bin/corkscrew 127.0.0.1 8840 %h %p From ssh_config documentation: If more than one pattern is provided, they should be separated by whitespace. A pattern entry may be negated by prefixing it with an exclamation mark (`!') . If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. The documentation also said: For each parameter, the first obtained value will be used . The configuration files contain sections separated by ``Host'' specifications, and that section is only applied for hosts that match one of the patterns given in the specification. The matched host name is the one given on the command line. So, you can also disable ProxyCommand for local.dev by override value that you have defined in Host * : Host local.dev HostName dev.myserver.com User developer ProxyCommand none | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67441/"
]
} |
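A quick way to confirm the result of the negation pattern in 152,264, assuming a reasonably recent OpenSSH client (6.8 or later, so newer than the one in the question): -G prints the configuration that would apply to a host without actually connecting.
ssh -G local.dev | grep -i proxycommand         # no output here means no proxy will be used
ssh -G some.other.host | grep -i proxycommand   # hosts matched by "Host *" should show the corkscrew line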
152,294 | Is there a way to detect that a Zenity dialog has lost focus? I would like to keep the dialog box in the foreground unless the user presses ESC . I am attempting to add it to this script : #!/bin/bash# requires these packages from ubuntu repository:# wmctrl, zenity, x11-utils # and the script mouse-speed# This procect on git: https://github.com/rubo77/mouse-speed######## configuration ########### seconds between micro breaksmicrobreak_time=$(( 10 * 60 ))# micro break duration in secondsmicrobreak_duration=15# seconds between long breakslongbreak_time=$(( 120 * 60 ))# message to display message="Try focussing a far object outside the window with the eye to relax!"longbreak_message="Change your seating or continue work in a standing/sitting position"#postpone labelpostpone="Postpone"window_title="typebreak"# global zoom of your window manager:ZOOM=2# height in px of the top system-bar:TOPMARGIN=57# sum in px of all horizontal borders:HORIZONTALMARGIN=40# get width of screen and height of screenSCREEN_WIDTH=$(xwininfo -root | awk '$1=="Width:" {print $2}')SCREEN_HEIGHT=$(xwininfo -root | awk '$1=="Height:" {print $2}')# width and heightW=$(( $SCREEN_WIDTH / $ZOOM - 2 * $HORIZONTALMARGIN ))H=$(( $SCREEN_HEIGHT / $ZOOM - 2 * $TOPMARGIN ))function slow_down(){ #zenity --warning --text "slow down mouse"; mouse-speed -d 30}while true; do # short loop every few minutes to look around sleep $microbreak_time ( echo "99" sleep $(( $microbreak_duration - 2 )) echo "# Mouse speed reset to 100%" sleep 2 echo "100" ) | if ( sleep 1 && wmctrl -F -a "$window_title" -b add,maximized_vert,maximized_horz && sleep 3 && wmctrl -F -a "$window_title" -b add,above ) & ( zenity --progress --text "$message" --percentage=0 --auto-close --height=$H --width=$W --pulsate --title="$window_title" --cancel-label="$postpone" ); then #zenity --info --text "Maus normal speed!" mouse-speed -r else slow_down fidone &while true; do # second long loop to change seat position sleep $longbreak_time zenity --warning --text "$longbreak_message" --title="$window_title - long break"done | #!/bin/bash# This will wait one second and then steal focus and make the Zenity dialog box always-on-top (aka. 'above').(sleep 1 && wmctrl -F -a "I am on top" -b add,above) &(zenity --info --title="I am on top" --text="How to help Zenity to get focus and be always on top") Source: http://wp.shaibn.com/how-to-help-zenity-to-get-focus-and-be-always-on-top and http://pastebin.com/VUsBevqy | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
152,299 | I am trying to retrieve memory used(RAM) in percentage using Linux commands. My cpanel shows Memory Used which I need to display on a particular webpage. From forums, I found out that correct memory can be found from the following: free -m Result: -/+ buffers/cache: 492 1555 -/+ buffers/cache: contains the correct memory usage. I don't know how to parse this info or if there is any different command to get the memory used in percentage. | Here is sample output from free: % free total used free shared buffers cachedMem: 24683904 20746840 3937064 254920 1072508 13894892-/+ buffers/cache: 5779440 18904464Swap: 4194236 136 4194100 The first line of numbers ( Mem: ) lists total memory used memory free memory usage of shared usage of buffers usage filesystem caches ( cached ) In this line used includes the buffers and cache and this impacts free.This is not your "true" free memory because the system will dump cache if needed to satisfy allocation requests. The next line ( -/+ buffers/cache: ) gives us the actual used and free memory as if there were no buffers or cache. The final line ( Swap ) gives the usage of swap memory. There is no buffer or cache for swap as it would not make sense to put these things on a physical disk. To output used memory (minus buffers and cache) you can use a command like: % free | awk 'FNR == 3 {print $3/($3+$4)*100}'23.8521 This grabs the third line and divides used/total * 100. And for free memory: % free | awk 'FNR == 3 {print $4/($3+$4)*100}' 76.0657 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82109/"
]
} |
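One caveat worth adding to 152,299: newer procps-ng releases (3.3.10 and later) drop the -/+ buffers/cache line entirely, so the FNR == 3 trick stops matching there. A rough equivalent against the newer layout uses the available column instead (column numbers assume the default free output):
free | awk '/^Mem/ {printf "%.2f\n", ($2-$7)/$2*100}'   # percent used, ignoring reclaimable caches
free | awk '/^Mem/ {printf "%.2f\n", $7/$2*100}'         # percent available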
152,310 | I find it hard to phrase the question precisely but I will give my best. I use dwm as my default window manager and dmenu as my application launcher. I hardly use GUI applications aside from my browser. Most of my work is done directly from the command line. Furthermore, I'm a great fan of minimalism regarding operating systems, applications etc. One of the tools I never got rid of was an application launcher. Mainly because I lack a precise understanding of how application launchers work/what they do. Even extensive internet search only shows up vague explanation. What I want to do is get rid even of my application launcher because apart from actually spawning the application I have absolutely no use for it. In order to do this I would really like to know how to "correctly" start applications from the shell. Whereby the meaning of "correctly" can be approximated by "like an application launcher would do". I do not claim that all application launchers work the same way because I do not understand them well enough. I know about the following ways to spawn processes from the shell: exec /path/to/Program replace shell with the specified command without creating a new process sh -c /path/to/Program launch shell dependent process /path/to/Program launch shell dependent process /path/to/Program 2>&1 & launch shell independent process nohup /path/to/Program & launch shell independent process and redirect output to nohup.out Update 1: I can illustrate what e.g. dmenu does reconstructing it from repeated calls to ps -efl under different conditions. It spawns a new shell /bin/bash and as a child of this shell the application /path/to/Program . As long as the child is around so long will the shell be around. (How it manages this is beyond me...) In contrast if you issue nohup /path/to/Program & from a shell /bin/bash then the program will become the child of this shell BUT if you exit this shell the program's parent will be the uppermost process. So if the first process was e.g. /sbin/init verbose and it has PPID 1 then it will be the parent of the program. 
Here's what I tried to explain using a graph: chromium was launched via dmenu , firefox was launched using exec firefox & exit : systemd-+-acpid |-bash---chromium-+-chrome-sandbox---chromium-+-chrome-sandbox---nacl_helper | | `-chromium---5*[chromium-+-{Chrome_ChildIOT}] | | |-{Compositor}] | | |-{HTMLParserThrea}] | | |-{OptimizingCompi}] | | `-3*[{v8:SweeperThrea}]] | |-chromium | |-chromium-+-chromium | | |-{Chrome_ChildIOT} | | `-{Watchdog} | |-{AudioThread} | |-3*[{BrowserBlocking}] | |-{BrowserWatchdog} | |-5*[{CachePoolWorker}] | |-{Chrome_CacheThr} | |-{Chrome_DBThread} | |-{Chrome_FileThre} | |-{Chrome_FileUser} | |-{Chrome_HistoryT} | |-{Chrome_IOThread} | |-{Chrome_ProcessL} | |-{Chrome_SafeBrow} | |-{CrShutdownDetec} | |-{IndexedDB} | |-{LevelDBEnv} | |-{NSS SSL ThreadW} | |-{NetworkChangeNo} | |-2*[{Proxy resolver}] | |-{WorkerPool/1201} | |-{WorkerPool/2059} | |-{WorkerPool/2579} | |-{WorkerPool/2590} | |-{WorkerPool/2592} | |-{WorkerPool/2608} | |-{WorkerPool/2973} | |-{WorkerPool/2974} | |-{chromium} | |-{extension_crash} | |-{gpu-process_cra} | |-{handle-watcher-} | |-{inotify_reader} | |-{ppapi_crash_upl} | `-{renderer_crash_} |-2*[dbus-daemon] |-dbus-launch |-dhcpcd |-firefox-+-4*[{Analysis Helper}] | |-{Cache I/O} | |-{Cache2 I/O} | |-{Cert Verify} | |-3*[{DOM Worker}] | |-{Gecko_IOThread} | |-{HTML5 Parser} | |-{Hang Monitor} | |-{Image Scaler} | |-{JS GC Helper} | |-{JS Watchdog} | |-{Proxy R~olution} | |-{Socket Thread} | |-{Timer} | |-{URL Classifier} | |-{gmain} | |-{localStorage DB} | |-{mozStorage #1} | |-{mozStorage #2} | |-{mozStorage #3} | |-{mozStorage #4} | `-{mozStorage #5} |-gpg-agent |-login---bash---startx---xinit-+-Xorg.bin-+-xf86-video-inte | | `-{Xorg.bin} | `-dwm-+-dwmstatus | `-xterm---bash-+-bash | `-pstree |-systemd---(sd-pam) |-systemd-journal |-systemd-logind |-systemd-udevd |-wpa_actiond `-wpa_supplicant Update 2: I guess the question can also be boiled down to: What should be the parent of a process? Should it e.g. be a shell or should it be the init process i.e. the process with PID 1 ? | Well, you seem to have a pretty good understanding of it. To clarify some of what you have, sh -c /path/to/Program is fairly similar to $ sh % /path/to/Program % Ctrl + D (or you could type “ exit ”) $ where you start a new shell process,provide the application command path to the new shell,and then let the new shell terminate. I have shown the new shell giving a different prompt for illustration purposes;this probably wouldn’t happen in real life. The sh -c " command " construct is mostly usefulfor doing tricky stuff, like wrapping multiple commands into a bundle,so they look like a single command (sort of a single-use unnamed script),or building complicated commands, possibly from shell variables. You would hardly ever use it just for running a single program with simple argument(s). 2>&1 means redirect the standard error to the standard output.This doesn’t really have much to do with & ;rather, you use it when a command sends error messages to the screeneven if you say command > file and you want to capture the error messages in the file. Redirecting output to nohup.out is a trivial side-effect of nohup . The primary purpose of nohup command & is to run command asynchronously(commonly known as “in the background”,or as a “shell independent process”, to use your words)and configure it so it has a better chance of being able to continue to runif you terminate the shell (e.g., logout) while the command is still running. 
bash(1) and the Bash Reference Manual are good sources of information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
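A minimal launcher helper in the spirit of the answer above, for starting GUI programs from an interactive shell so that they survive the terminal closing. This is only a sketch: it assumes setsid from util-linux is available, and the function name launch is arbitrary.
launch() {
    # Start the program in a new session, detached from this terminal's
    # stdin/stdout/stderr, so closing the shell does not kill it.
    setsid "$@" >/dev/null 2>&1 </dev/null &
}
launch firefox
launch chromium --incognito
Because the new session has no controlling terminal, the launched program typically ends up reparented to init (or to a subreaper) once the shell exits, much like the chromium branch in the first pstree.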
152,312 | I am trying to remove all instances of a pattern match from a file if it matches a pattern. If there is a match, the (complete) line with the matching pattern and the next line get removed. The next line always appears after the line with the pattern match, but in addition it appears in other areas of the file. I am using grep and it is deleting all occurrences of the next line in the file, as expected. Is there a way I can remove that next line if and only if it is after the line with the pattern match? | You can use sed with the N and d commands and a {} block : sed -e '/pattern here/ { N; d; }' For every line that matches pattern here , the code in the {} gets executed. N takes the next line into the pattern space as well, and then d deletes the whole thing before moving on to the next line. This works in any POSIX-compatible sed . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82116/"
]
} |
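A quick way to try the sed answer above on a throw-away file; the pattern and file name are just placeholders.
$ printf '%s\n' keep1 'pattern here' 'next line' keep2 'next line' > sample.txt
$ sed -e '/pattern here/ { N; d; }' sample.txt
keep1
keep2
next line
The first 'next line' disappears because it was pulled into the pattern space together with the matching line, while the later, stand-alone occurrence is left untouched.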
152,331 | I have a Dell XPS 13 ultrabook which has a wifi nic, but no physical ethernet nic (wlan0, but no eth0). I need to create a virtual adapter for using Vagrant with NFS, but am finding that the typical ifup eth0:1... fails with ignoring unknown interface eth0:1=eth0:1 . I also tried creating a virtual interface against wlan0 , but received the same result. How can I create a virtual interface on this machine with no physical interface? | Setting up a dummy interface If you want to create network interfaces, but lack a physical NIC to back it, you can use the dummy link type. You can read more about them here: iproute2 Wikipedia page . Creating eth10 To make this interface you'd first need to make sure that you have the dummy kernel module loaded. You can do this like so: $ sudo lsmod | grep dummy$ sudo modprobe dummy$ sudo lsmod | grep dummydummy 12960 0 With the driver now loaded you can create what ever dummy network interfaces you like: $ sudo ip link add eth10 type dummy NOTE: In older versions of ip you'd do the above like this, appears to have changed along the way. Keeping this here for reference purposes, but based on feedback via comments, the above works now. $ sudo ip link set name eth10 dev dummy0 And confirm it: $ ip link show eth106: eth10: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether c6:ad:af:42:80:45 brd ff:ff:ff:ff:ff:ff Changing the MAC You can then change the MAC address if you like: $ sudo ifconfig eth10 hw ether 00:22:22:ff:ff:ff$ ip link show eth106: eth10: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:22:22:ff:ff:ff brd ff:ff:ff:ff:ff:ff Creating an alias You can then create aliases on top of eth10. $ sudo ip addr add 192.168.100.199/24 brd + dev eth10 label eth10:0 And confirm them like so: $ ifconfig -a eth10eth10: flags=130<BROADCAST,NOARP> mtu 1500 ether 00:22:22:ff:ff:ff txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0$ ifconfig -a eth10:0eth10:0: flags=130<BROADCAST,NOARP> mtu 1500 inet 192.168.100.199 netmask 255.255.255.0 broadcast 192.168.100.255 ether 00:22:22:ff:ff:ff txqueuelen 0 (Ethernet) Or using ip : $ ip a | grep -w inet inet 127.0.0.1/8 scope host lo inet 192.168.1.20/24 brd 192.168.1.255 scope global wlp3s0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 inet 192.168.100.199/24 brd 192.168.100.255 scope global eth10:0 Removing all this? If you want to unwind all this you can run these commands to do so: $ sudo ip addr del 192.168.100.199/24 brd + dev eth10 label eth10:0$ sudo ip link delete eth10 type dummy$ sudo rmmod dummy References MiniTip: Setting IP Aliases under Fedora Linux Networking: Dummy Interfaces and Virtual Bridges ip-link man page iproute2 HOWTO iproute2 cheatsheet | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/152331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26343/"
]
} |
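The individual steps from the answer above, condensed into one script for convenience. The interface name, MAC address and IP range are arbitrary examples; run it as root.
#!/bin/bash
set -e
modprobe dummy                                    # make sure the dummy driver is loaded
ip link add eth10 type dummy                      # create the interface
ip link set dev eth10 address 00:22:22:ff:ff:ff   # optional: pick a MAC address
ip link set dev eth10 up
ip addr add 192.168.100.199/24 brd + dev eth10 label eth10:0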
152,346 | I was wondering: when installing something, there's an easy way of double clicking an install executable file, and on the other hand, there is a way of building it from source. The latter one, downloading a source bundle, is really cumbersome. But what is the fundamental difference between these two methods? | All software starts out as source code, distributed as a source package, and every source package has to be built before it can run on your system. A binary package is one that has already been built from that source by someone else, with a general set of features and parameters chosen so that a large number of users can install and use it. Binary packages are easy to install, but they may not offer every option the upstream package supports. To install from source, you build the source code yourself. That means you have to take care of the dependencies yourself, and you need to know the package's features so that you can build it accordingly. Advantages of installing from source: you can install the latest version and always stay up to date, whether it is a security patch or a new feature; you can trim down the features at build time to suit your needs, or enable features that are not provided in the binary; you can install it in a location of your choice; and for some software you can supply hardware-specific information for a better-tuned installation. In short, installing from source gives you heavy customization options but takes a lot of effort, while installing from a binary is easier but you may not be able to customize as you wish. Update: adding the security argument from the comments below. It is true that with a binary you cannot verify the integrity of the source code yourself, but it depends on where you got the binary from. There are plenty of trusted sources from which you can get binaries for almost any project; the only drawback is time. It may take a while for binaries of updates, or of a brand-new project, to appear in trusted repositories. And above all, regarding software security, I'd like to highlight this hilarious page at bell-labs provided by Joe in the below comments. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/152346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72825/"
]
} |
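To make the "build it yourself" side of the answer above concrete, here is what a typical autotools-based build looks like. The package name foo-1.0 and the configure flags are made up for illustration; real projects document their own options.
$ tar xf foo-1.0.tar.gz
$ cd foo-1.0
$ ./configure --prefix="$HOME/.local" --disable-gui   # choose install location, trim features
$ make
$ make install
The --prefix and --disable-gui flags illustrate the customization points mentioned above: you pick where the software goes and which features get compiled in.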
152,347 | For example, I type sudo apt-get update and then a long list of stuff will come up for minutes.. if I want to escape from seeing this tiresome updating and take control of bash command line again(without quitting the original command), is there a hotkey or something like that to do it? | You can stop apt-get temporarily with Ctrl-Z and then restart the job in the background with the bg command . $ apt-get foo bar ...^Zbash: suspended apt-get$ bgbash: continued apt-get$ However, when you do you'll continue getting the output from apt-get on the terminal. It isn't possible to stop that after you've already started the command (other than with dirty hacks using a debugger). You can run other commands here, though. While the job is stopped, it is still alive but not running, so it won't make any progress but you won't see any output either. What you may find useful is the screen or tmux commands. They let you run multiple sessions with different terminals within the same physical terminal. If you're using X, you can run additional terminal emulators or perhaps create a new tab in your current one. Whichever way, you can run apt-get update in one shell and then switch to another to continue working. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72825/"
]
} |
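One way to follow the screen/tmux suggestion from the answer above: start the long-running job in a detached session so the current shell stays free. The session name and the placeholder long-running-command are arbitrary.
$ tmux new-session -d -s updates 'long-running-command 2>&1 | tee ~/update.log'
$ tmux attach -t updates      # look at the output whenever you like, detach again with Ctrl-b d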
152,358 | I have a Fedora 20 installation in a VirtualBox virtual machine. Now it notifies me of "OS Updates", that "Includes performance, stability and security improvements for all users", and I have the option to "Restart & Install". However, clicking on "OS Updates" brings up the contents of the "OS Updates", and I can't find a new kernel, libc, or systemd in the list of packages to update. So, what is it that calls for a restart? These packages are listed when I issue sudo yum update : ================================================================================Updating: chkconfig x86_64 1.3.62-1.fc20 updates 172 k chrony x86_64 1.30-2.fc20 updates 262 k emacs-filesystem noarch 1:24.3-24.fc20 updates 58 k file x86_64 5.19-4.fc20 updates 59 k file-libs x86_64 5.19-4.fc20 updates 401 k gdb x86_64 7.7.1-18.fc20 updates 2.6 M ghostscript x86_64 9.14-4.fc20 updates 4.4 M hwdata noarch 0.269-1.fc20 updates 1.3 M libndp x86_64 1.4-1.fc20 updates 30 k libreport x86_64 2.2.3-2.fc20 updates 405 k libreport-anaconda x86_64 2.2.3-2.fc20 updates 43 k libreport-cli x86_64 2.2.3-2.fc20 updates 47 k libreport-fedora x86_64 2.2.3-2.fc20 updates 40 k libreport-filesystem x86_64 2.2.3-2.fc20 updates 35 k libreport-gtk x86_64 2.2.3-2.fc20 updates 94 k libreport-plugin-bugzilla x86_64 2.2.3-2.fc20 updates 79 k libreport-plugin-kerneloops x86_64 2.2.3-2.fc20 updates 45 k libreport-plugin-logger x86_64 2.2.3-2.fc20 updates 48 k libreport-plugin-reportuploader x86_64 2.2.3-2.fc20 updates 52 k libreport-plugin-ureport x86_64 2.2.3-2.fc20 updates 52 k libreport-python x86_64 2.2.3-2.fc20 updates 63 k libreport-python3 x86_64 2.2.3-2.fc20 updates 49 k libreport-web x86_64 2.2.3-2.fc20 updates 46 k libserf x86_64 1.3.7-1.fc20 updates 53 k libteam x86_64 1.12-1.fc20 updates 46 k perl-Socket x86_64 1:2.015-1.fc20 updates 50 k poppler-data noarch 0.4.7-1.fc20 updates 2.2 M ppp x86_64 2.4.5-34.fc20 updates 359 k selinux-policy noarch 3.12.1-180.fc20 updates 351 k selinux-policy-targeted noarch 3.12.1-180.fc20 updates 3.8 M sqlite x86_64 3.8.6-2.fc20 updates 433 k teamd x86_64 1.12-1.fc20 updates 108 k tzdata noarch 2014f-1.fc20 updates 430 k tzdata-java noarch 2014f-1.fc20 updates 147 k vim-minimal x86_64 2:7.4.402-1.fc20 updates 439 k zeitgeist-libs x86_64 0.9.16-0.2.20140808.git.ce9affa.fc20 updates 141 kTransaction Summary================================================================================ | Fedora running GNOME uses simple heuristics to work out if an update is a OS/System update or an application update. If the package has a .desktop file (which are normally used to populate the DE's menus) it is considered an user application and can be updated without a reboot. Without this file, it is considered a OS or System update and a 'Update and Restart' is offered. You can avoid this by running yum update from the command prompt. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5923/"
]
} |
152,369 | My coworker has the following in the ~/.bash_profile of many of our servers: echo -e "\033]50;SetProfile=Production\a" The text doesn't seem to matter, since this also works: echo -e "\033]50;ANY_TEXT\a" But no text doesn't work; the \a is also required. This causes his terminal in OSX to change profiles (different colours, etc.); but in my xterm, it changes the font to huge; which I can't seem to reset. I have tried to reset this with: Setting VT fonts with shift+right click Do "soft reset" and "full reset" with shift+middle click Sending of various escape codes & commands: $ echo -e "\033c" # Reset terminal, no effect$ echo -e "\033[0;m" # Reset attributes, no effect$ tput sgr0 # No effect$ tput reset # No effect My questions: Why does this work on xterm & what exactly does it do? Code 50 is listed as "Reserved"? How do I reset this? Screenshot: | Looking at the list of xterm escape codes reveals that (esc)]50;name(bel) sets the xterm's font to the font name , or to an entry in the font menu if the first character of name is a # . The simplest way to reset it is to use the xterm's font menu ( Ctrl + right mouse click) and select an entry other than Default . Alternatively, you can find out which font the xterm uses on startup, and set that with the escape sequence. In the font menu you'll also find an option Allow Font Ops ; if you uncheck that, you cannot any more change the font using escape sequences. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33645/"
]
} |
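A small demonstration of the escape sequence discussed above, assuming the xterm still has "Allow Font Ops" enabled; per the answer, a name starting with '#' selects an entry from the VT Fonts menu, so '#0' should bring back the first (default) entry.
$ printf '\033]50;fixed\a'   # switch to the classic "fixed" font
$ printf '\033]50;#0\a'      # select font-menu entry 0, i.e. the default font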
152,372 | As soon as I execute the bash script, given that I have set -x , I can visually observe what is happening. But at certain parts, I want it to stop and ask me if I still want to continue executing the next lines. I know about set -e , which simply ensures the bash script exits on error, but I'd rather not use that. Rather, I want to divide my code into parts and make the bash script ask me every now and then whether it should proceed or not. | You can use read for interactive scripts. For example:
echo "Do you want to continue? (yes/no)"
read input
if [ "$input" == "yes" ]
then
    echo "continue"
fi
You can then have the rest of your script branch on the input provided. EDIT If you want to use this in multiple places, you can create a function and call it wherever you want the user intervention. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
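Following the EDIT in the answer above, the prompt can be wrapped in a small function so it is reusable at every checkpoint of the script; confirm is just an example name.
#!/bin/bash
confirm() {
    # Ask the given question and succeed only on an exact "yes".
    read -r -p "$1 (yes/no) " reply
    [ "$reply" = "yes" ]
}
set -x
echo "doing the first part"
confirm "Continue with the next part?" || exit 1
echo "doing the next part"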
152,379 | This question is closely related to How to "correctly" start an application from a shell but tries to tackle a more specific problem. How can I spawn an application from a shell and thereby making it a child of another process. Here is what I mean exemplified with two graphics: systemd-+-acpid |-bash---chromium-+-chrome-sandbox---chromium-+-chrome-sandbox---nacl_helper | | `-chromium---5*[chromium-+-{Chrome_ChildIOT}] | | |-{Compositor}] | | |-{HTMLParserThrea}] | | |-{OptimizingCompi}] | | `-3*[{v8:SweeperThrea}]] | |-chromium | |-chromium-+-chromium | | |-{Chrome_ChildIOT} | | `-{Watchdog} | |-{AudioThread} | |-3*[{BrowserBlocking}] | |-{BrowserWatchdog} | |-5*[{CachePoolWorker}] | |-{Chrome_CacheThr} | |-{Chrome_DBThread} | |-{Chrome_FileThre} | |-{Chrome_FileUser} | |-{Chrome_HistoryT} | |-{Chrome_IOThread} | |-{Chrome_ProcessL} | |-{Chrome_SafeBrow} | |-{CrShutdownDetec} | |-{IndexedDB} | |-{LevelDBEnv} | |-{NSS SSL ThreadW} | |-{NetworkChangeNo} | |-2*[{Proxy resolver}] | |-{WorkerPool/1201} | |-{WorkerPool/2059} | |-{WorkerPool/2579} | |-{WorkerPool/2590} | |-{WorkerPool/2592} | |-{WorkerPool/2608} | |-{WorkerPool/2973} | |-{WorkerPool/2974} | |-{chromium} | |-{extension_crash} | |-{gpu-process_cra} | |-{handle-watcher-} | |-{inotify_reader} | |-{ppapi_crash_upl} | `-{renderer_crash_} |-2*[dbus-daemon] |-dbus-launch |-dhcpcd |-firefox-+-4*[{Analysis Helper}] | |-{Cache I/O} | |-{Cache2 I/O} | |-{Cert Verify} | |-3*[{DOM Worker}] | |-{Gecko_IOThread} | |-{HTML5 Parser} | |-{Hang Monitor} | |-{Image Scaler} | |-{JS GC Helper} | |-{JS Watchdog} | |-{Proxy R~olution} | |-{Socket Thread} | |-{Timer} | |-{URL Classifier} | |-{gmain} | |-{localStorage DB} | |-{mozStorage #1} | |-{mozStorage #2} | |-{mozStorage #3} | |-{mozStorage #4} | `-{mozStorage #5} |-gpg-agent |-login---bash---startx---xinit-+-Xorg.bin-+-xf86-video-inte | | `-{Xorg.bin} | `-dwm-+-dwmstatus | `-xterm---bash-+-bash | `-pstree |-systemd---(sd-pam) |-systemd-journal |-systemd-logind |-systemd-udevd |-wpa_actiond `-wpa_supplicant The process tree shows chromium and firefox as children of the init process that starts at boot and has PID 1 . But what I want to achieve is to start firefox and chromium as children of dwm . 
Hence, I want a similar behaviour to what you can see under the weston part of the following process tree where firefox has weston-desktop as its parent: systemd-+-acpid |-bash---chromium-+-chrome-sandbox---chromium-+-chrome-sandbox---nacl_helper | | `-chromium-+-3*[chromium-+-{Chrome_ChildIOT}] | | | |-{Compositor}] | | | |-{HTMLParserThrea}] | | | |-{OptimizingCompi}] | | | `-3*[{v8:SweeperThrea}]] | | `-4*[chromium-+-{Chrome_ChildIOT}] | | |-{CompositorRaste}] | | |-{Compositor}] | | |-{HTMLParserThrea}] | | |-{OptimizingCompi}] | | `-3*[{v8:SweeperThrea}]] | |-{AudioThread} | |-3*[{BrowserBlocking}] | |-{BrowserWatchdog} | |-5*[{CachePoolWorker}] | |-{Chrome_CacheThr} | |-{Chrome_DBThread} | |-{Chrome_FileThre} | |-{Chrome_FileUser} | |-{Chrome_HistoryT} | |-{Chrome_IOThread} | |-{Chrome_ProcessL} | |-{Chrome_SafeBrow} | |-{Chrome_SyncThre} | |-{CrShutdownDetec} | |-{IndexedDB} | |-{NSS SSL ThreadW} | |-{NetworkChangeNo} | |-2*[{Proxy resolver}] | |-{WorkerPool/2315} | |-{WorkerPool/2316} | |-{WorkerPool/2481} | |-{chromium} | |-{extension_crash} | |-{gpu-process_cra} | |-{handle-watcher-} | |-{inotify_reader} | |-{renderer_crash_} | `-{sandbox_ipc_thr} |-2*[dbus-daemon] |-dbus-launch |-dhcpcd |-gpg-agent |-login---bash---startx---xinit-+-Xorg.bin-+-xf86-video-inte | | `-{Xorg.bin} | `-dwm-+-dwmstatus | `-xterm---bash |-login---bash---weston-launch---weston-+-Xwayland---4*[{Xwayland}] | |-weston-desktop--+-firefox-+-firefox | | | |-4*[{Analysis Helper}] | | | |-{Cache2 I/O} | | | |-{Cert Verify} | | | |-{DNS Resolver #1} | | | |-{DNS Resolver #2} | | | |-2*[{DOM Worker}] | | | |-{Gecko_IOThread} | | | |-{HTML5 Parser} | | | |-{Hang Monitor} | | | |-{Image Scaler} | | | |-{ImageDecoder #1} | | | |-{ImageDecoder #2} | | | |-{ImageDecoder #3} | | | |-{JS GC Helper} | | | |-{JS Watchdog} | | | |-{Socket Thread} | | | |-{Timer} | | | |-{URL Classifier} | | | |-{gmain} | | | |-{localStorage DB} | | | |-{mozStorage #1} | | | |-{mozStorage #2} | | | |-{mozStorage #3} | | | |-{mozStorage #4} | | | `-{mozStorage #5} | | `-weston-terminal---bash---pstree | `-weston-keyboard |-systemd---(sd-pam) |-systemd-journal |-systemd-logind |-systemd-udevd |-tmux---bash |-wpa_actiond `-wpa_supplicant One possible solution would be to use nsenter from util-linux . I could enter the namespace of the dwm process and fork a new firefox process which would then be the child of dwm . However, that seems like a lot of work. Is there some easier way to do this? | You can not start a process as the child of the shell, and then "reparent" it so another process becomes it's parent. So you need to use a parent process that explicitly starts the children. init with PID 1 is an exception, processes can become it's child as it collects processes that lost their original parent process. (With upstart, there can be multiple init processes, they do not share the PID 1, but otherwise the roles are very similar.) (See also PR_SET_CHILD_SUBREAPER in man 2 prctl ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
152,391 | Usually, $0 in a script is set to the name of the script, or to whatever it was invoked as (including the path). However, if I use bash with the -c option, $0 is set to the first of the arguments passed after the command string: bash -c 'echo $0' foo bar # foo In effect, it seems like positional parameters have been shifted, but including $0 . However shift in the command string doesn't affect $0 (as normal): bash -c 'echo $0; shift; echo $0' foo bar# foo# foo Why this apparently odd behaviour for command strings?Note that I am looking for the reason, the rationale, behind implementing such odd behaviour. One could speculate that such a command string wouldn't need the $0 parameter as usually defined, so for economy it is also used for normal arguments. However, in that case the behaviour of shift is odd. Another possibility is that $0 is used to define the behaviour of programs (a la bash called as sh or vim called as vi ), but that cannot be, since $0 here is only seen in the command string and not by programs called within it. I cannot think of any other uses for $0 , so I am at a loss to explain this. | That gives you an opportunity to set/choose $0 when using an inline script. Otherwise, $0 would just be bash . Then you can do for instance: $ echo foo > foo$ bash -c 'wc -c < "${1?}"' getlength foo4$ rm -f bar$ bash -c 'wc -c < "${1?}"' getlength bargetlength: bar: No such file or directory$ bash -c 'wc -c < "${1?}"' getlengthgetlength: 1: parameter not set Not all shells used to do that. The Bourne shell did. The Korn (and Almquist) shell chose to have the first parameter go to $1 instead. POSIX eventually went for the Bourne way, so ksh and ash derivatives reverted to that later (more on that at http://www.in-ulm.de/~mascheck/various/find/#shell ). That meant that for a long time for sh (which depending on the system was based on the Bourne, Almquist or Korn shell), you didn't know whether the first argument went into $0 or $1 , so for portability, you had to do things like: sh -c 'echo foo in "$1"' foo foo Or: sh -c 'shift "$2"; echo txt files are "$@"' tentative-arg0 3 2 *.txt Thankfully, POSIX has specified the new behavior where the first argument goes in $0 , so we can now portably do: sh -c 'echo txt files are "$@"' meaningful-arg0-for-error *.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70524/"
]
} |
152,412 | IINM my system is failing when bash ing for i in {0..10000000}; # Seven zeroes.do false;done # `bash` exited and its `tmux` pane/window was closed. or for i in $(seq 0 10000000); # Seven zeroes.do false;done # `bash` exited and its `tmux` pane/window was closed. but not when for i in {0..1000000}; # Six zeroes.do false;done # Finished correctly. Can you please briefly explain the internals of this behavior and prompt a workaround for getting the task done? | for i in {0..1000000} and for i in $(seq 1000000) both build up a big list and then loop over it. That's inefficient and uses a lot of memory. Use: for ((i = 0; i<= 1000000; i++)) instead. Or POSIXly: i=0; while [ "$i" -le 1000000 ]; do ... i=$(($i + 1))done Or: seq 1000000 | xargs... To get a file full of CRLFs: yes $'\r' | head -n 1000000 > file Generally, loops should be avoided when possible in shells. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41754/"
]
} |
152,416 | I really like tmux and use it often so I set the following in my .zprofile : [[ $TERM != "screen" ]] && exec tmux so when I open a new window, tmux will be there for me. However, there are some strange things that happen when tmux is running: gnuplot , octave and other programs that need to launch X11 to draw graphscan't seem to do so (or it takes really long - 10 minutes or so). When X11is already running, they don't have this problem though. Some scripts I write use osascript to alert me with a notification when something happens: osascript -e 'display notification "some text" with title "Foo"' this works fine when tmux is not active, but fails to do anything when called from within a tmux session. (note that other osascript actions do work) Does anyone have an idea why this might be and what might be done to fix this? Note: I've posted a similar question about the first problem on the apple.se sitesome time ago but got no answer. The second problem only sprung up recently so I thought I'd try my luck here. | for i in {0..1000000} and for i in $(seq 1000000) both build up a big list and then loop over it. That's inefficient and uses a lot of memory. Use: for ((i = 0; i<= 1000000; i++)) instead. Or POSIXly: i=0; while [ "$i" -le 1000000 ]; do ... i=$(($i + 1))done Or: seq 1000000 | xargs... To get a file full of CRLFs: yes $'\r' | head -n 1000000 > file Generally, loops should be avoided when possible in shells. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11279/"
]
} |
152,417 | I have the following setup on an Ubuntu machine: ~/dotfiles/authorized_keys2~/.ssh/authorized_keys2 -> /home/wayne/dotfiles/authorized_keys2 I had the same setup on my Arch machine, but when I connect with -v, debug1: Authentications that can continue: publickey,passworddebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/wayne/.ssh/id_rsadebug1: Authentications that can continue: publickey,password I found this page on the Arch Wiki, which has this line: $ chmod 600 ~/.ssh/authorized_keys So I added another symlink: authorized_keys -> /home/wayne/dotfiles/authorized_keys2 And yet still, no dice. And yes, I have ensured that the correct key is present in authorized_keys . Why can I not connect using my keys? Edit: My permissions are set correctly on my home and ssh folders (and key file): drwxr-x--x 150 wayne family 13k Aug 27 07:38 wayne/drwx------ 2 wayne family 4.1k Aug 27 07:24 .ssh/-rw------- 1 wayne family 6.4k Aug 20 07:01 authorized_keys2 | The permissions on your authorized_keys file and the directories leading to it must be sufficiently restrictive: they must be only writable by you or root (recent versions of OpenSSH also allow them to be group-writable if you are the single user in that group). See Why am I still getting a password prompt with ssh with public key authentication? for the full story. In your case, authorized_keys is a symbolic link. As of OpenSSH 5.9 (I haven't checked other versions), in that case, the server checks the permissions leading to the ultimate target of the symbolic link, with all intermediate symbolic links expanded (the canonical path). Assuming that all components of /home/wayne/dotfiles/authorized_keys2 are directories except for the last one which is a regular files, OpenSSH checks the permissions of /home/wayne , /home/wayne/dotfiles and /home/wayne/dotfiles/authorized_keys2 . If you have root access on the server, check the server logs for a message of the form bad ownership or modes for … . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5788/"
]
} |
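In the setup from the question above, the quickest fixes are usually along these lines; the log file path varies between distributions, so treat /var/log/auth.log as an example.
$ chmod go-w ~ ~/dotfiles
$ chmod 700 ~/.ssh
$ chmod 600 ~/dotfiles/authorized_keys2
$ sudo grep 'bad ownership or modes' /var/log/auth.log   # check whether the server still complains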
152,435 | I have a laptop with a built-in screen and an attached monitor. When I start a Google's video Hangout and share my desktop, I would like to be able to share only the attached screen, but I don't know how. Right now I have two monitors: LVDS1 corresponds to my laptop's screen, which is configured as the secondary screen and DP1 which is my primary screen. But the problem still remains if I change my laptop's screen to be the primary screen. $ xrandrScreen 0: minimum 320 x 200, current 3286 x 1468, maximum 8192 x 8192LVDS1 connected 1366x768+1920+700 (normal left inverted right x axis y axis) 344mm x 194mm 1366x768 60.06*+ 1024x768 60.00 800x600 60.32 56.25 640x480 59.94 VGA1 disconnected (normal left inverted right x axis y axis)HDMI1 disconnected (normal left inverted right x axis y axis)DP1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 475mm x 267mm 1920x1080 60.00*+ 1280x1024 75.02 60.02 1152x864 75.00 1024x768 75.08 60.00 800x600 75.00 60.32 640x480 75.00 60.00 720x400 70.08 Whenever I start sharing my desktop in the Hangout, only the built-in (smaller) screen is shared. Best thing would be to be able to chose which one to share, but if not, how could I share only the attached (bigger) screen? I bet Google's Hangout is looking for a configuration file to choose which screen to share, but don't know which file it is. NOTE Using Fedora 20, x86_64, Linux 3.15.10-200, GNOME Shell 3.10.4-8, Firefox 31. NOTE 2 Using Google Chrome makes Google Hangouts share both screens at the same time instead of only the laptop's screen, which I think is even worse. Still trying to find out how could I choose which screen to share. | Problem It turns out there's already an open issue in the Chromium tracker about this annoying inconvenience. Existing options offered by Hangouts have major drawbacks: Share Entire Screen: If you have multiple screens (I have three) and share "Entire Screen", other people in the hangout won't be able to see anything. Share Application: If you only share a specific application, then: You will have to manually switch to other apps while streaming by going back to hangouts and switching Screen Share on/off. In some applications, extra windows (such as dialogs for preferences, menus, popups, etc.) won't be captured as part of the app you're sharing. And most of the times it's these dialogs you want to focus on. Solution / workaround A very good workaround is at Comment 18 of this same discussion, so all the credits should go to the comment's author. I'll summarize the process here, which allows you to Share a Part/Area of your multi-monitor screen in Google Hangouts running in a Linux Machine . Open VLC in "Screen Capture" mode and tell it which part of your X11 screen you want it to capture, using the appropriate Screen Module command-line parameters . You can either do this through GUI configuration OR using the command line: vlc \ --no-video-deco \ --no-embedded-video \ --screen-fps=20 \ --screen-top=32 \ --screen-left=0 \ --screen-width=1920 \ --screen-height=1000 \ screen:// If VLC complains about not being able to open screen:// , please make sure you have correct module installed. For me, on ubuntu 19.10, I had to install an additional package vlc-plugin-extra-access by invoking apt install vlc-plugin-access-extra . Go back to Google Hangouts and share the newly opened VLC window, which now acts as your "portal" to the interesting part of your screen. 
Important notes Move the VLC window away from the part of the screen you are capturing to avoid inception effects . Do NOT resize OR minimize the VLC window because it will affect the resolution of your screen share. If you want to get it out of your way while streaming to hangouts, just move it off-screen WITHOUT resizing it, or just pretend it's not there. The mouse pointer is not captured by VLC in linux. The author of the workaround suggests a solution for this as well: ExtraMaus , a simple C programs which creates a "clone" of your mouse, but visible by VLC. [TL;DR] Explaining the values I chose in the example The screen:// parameter indicates we want to enable the Screen Capture module. You'll always use this parameter as is. The flags --no-video-deco and --no-embedded-video hide the window menu and video control toolbar respectively. You don't want to share these through Hangouts, so I suggest you always include these parameters. The --screen-fps=20 does not have to be 20. You can make it 30 or 10, since performance is primarily affected by how Chrome encodes the video stream. The area of the screen you want to captured follows the standard convention [ --screen-top , --screen-left , --screen-width , --screen-height ]. Supposing I had two monitors, each 1920x1080, giving a total 3840x1080 "virtual" screen when placed one next to the other, I could give the following coordinates: [ 0, 0, 1920, 1080] for my entire left screen [ 0, 1920, 1920, 1080] for my entire right screen [32, 0, 1920, 1000] for a part of my left screen which spans across its full width but trims 32 pixels from its top (where I usually have a window's title bar) and 1080-1000-32 = 48 pixels from its bottom (where I have my KDE taskbar). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66295/"
]
} |
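The long vlc invocation above is easier to reuse as a tiny wrapper script; the geometry values below describe a hypothetical second 1920x1080 monitor sitting to the right of the first and should be adjusted to your own layout.
#!/bin/bash
# Capture one region of the X11 screen in a VLC window that Hangouts can share.
TOP=0 LEFT=1920 WIDTH=1920 HEIGHT=1080 FPS=15
exec vlc --no-video-deco --no-embedded-video \
    --screen-fps="$FPS" \
    --screen-top="$TOP" --screen-left="$LEFT" \
    --screen-width="$WIDTH" --screen-height="$HEIGHT" \
    screen://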
152,442 | I've commonly seen references to a wheel user group online as well as when setting up my sudoers file. Does naming a group wheel imply something special about the group or is it just a name for a generic group used in the same manner that foo and bar are thrown about? | Rather than have to dole out individual permissions on a system, you can add users to the wheel group and they can gain access to administrator levels, simply by being in the wheel group. It's typically tied directly into sudo . ## Allows people in group wheel to run all commands%wheel ALL=(ALL) ALL Which means you can do anything on the system with sudo <cmd> . Previously you needed to be in the wheel group if you wanted to have access to use certain commands, such as su . excerpt - Wheel on Wikipedia Modern Unix systems use user groups to control access privileges. The wheel group is a special user group used on some Unix systems to control access to the su command, which allows a user to masquerade as another user (usually the super user). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34796/"
]
} |
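For completeness, putting an existing account into the wheel group looks like this on most Linux systems (the user name alice is just an example); the change takes effect at the next login.
$ sudo usermod -aG wheel alice
$ groups alice                 # verify the membership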
152,477 | I ran the following command to give the wheel group rwx permissions on new files and subdirectories created: [belmin@server1]$ ls -latotal 24drwxr-sr-x+ 2 guards guards 4096 Aug 27 15:30 .drwxr-xr-x 104 root root 12288 Aug 27 15:19 ..[belmin@server1]$ sudo setfacl -m d:g:wheel:rwX .[belmin@server1]$ getfacl .# file: .# owner: guards# group: guards# flags: -s-user::rwxgroup::r-xgroup:wheel:rwxother::r-xdefault:user::rwxdefault:group::r-xdefault:group:wheel:rwxdefault:mask::rwxdefault:other::r-x However, when I create a file as root, I am not completely clear how the effective permissions are calculated: [belmin@server1]$ sudo touch foo[belmin@server1]$ getfacl foo# file: foo# owner: root# group: guardsuser::rw-group::r-x #effective:r--group:wheel:rwx #effective:rw-group:guards:rwx #effective:rw-mask::rw-other::r-- Can someone elaborate on what this means? | effective permissions are formed by ANDing the actual (real?) permissions with the mask . Since the mask of your file is rw- , all the effective permissions have the x bit turned off. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2372/"
]
} |
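Continuing the example above: since the effective rights are the named-group rights ANDed with the mask, raising the mask restores them. The file name foo matches the one created in the question.
$ setfacl -m m::rwx foo       # set the mask back to rwx
$ getfacl foo                 # group:wheel and group:guards are effective rwx again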
152,484 | I want to use my script on many files sometimes. I used this for the purpose: for etminput in $1do #processdone But this just gives the first input.How can I do the process on every wildcard matches? | If you want to loop over all the arguments to your script, in any Bourne like shell, it's: for i do something with "$i"done You could also do: for i in "$@"; do something with "$i"done but it's longer and not as portable (though is for modern shells). Note that: for i; do something with "$i"done is neither Bourne nor POSIX so should be avoided (though it works in many shells) For completeness, in non-Bourne shells: csh/tcsh @ i = 1while ($i <= $#argv) something with $argv[$i]:q @ i++end You cannot use: foreach i ($argv:q) something with $i:qend because that skips empty arguments rc/akanga for (i) something with $i ( rc generally is what shells should be like). es for (i=$*) something with $i (es is rc on steroids). fish for i in $argv something with $iend zsh Though it will accept the Bourne syntax, it also supports shorter ones like: for i ("$@") something with "$i" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40417/"
]
} |
152,485 | I tried to create some udev rules to mount and unmount my USB flash drives;the rules for the moment are very simple: ACTION=="add",KERNEL=="sd[b-z]",RUN+="/root/scripts/plug_flash_drive.sh %k"ACTION=="remove",KERNEL=="sd[b-z]",RUN+="/root/scripts/unplug_flash_drive.sh %k" plug_flash_drive.sh is also very simple: device_name=$1mount_options="umask=000,utf8"if [ ! -e "/media/$device_name" ]; then mkdir "/media/$device_name"fisleep 1/usr/bin/mount "/dev/$device_name" "/media/$device_name" -o "$mount_options" unplug_flash_drive.sh: device_name=$1umount "/dev/$device_name"rmdir "/media/$device_name" I have done some tests so I can ascertain that: When plugged in, my flash drive is detected; a file is created in /dev plug_flash_drive.sh is called by udev the mkdir part of the script works however, it seems that the "mount" part of the script is not executed, so my drive is not mounted when I call my scripts on the command line, they perfectly work Does anybody know why mount is not executed when called by udev? EDIT 28/08/14:I added "grep -q /proc/mounts && echo success || echo failure" at the end of my script to check in my debug log if the device is actually mounted before the script ends. It appears that the device is mounted at that point even when the script is called by udev.So the real problem is now "my block device is seemingly unmounted after the mount script end when called through udev" :s | systemd-udevd runs in its own file system namespace and by default mounts done within udev .rules do not propagate to the host. To make your old scripts work you can set MountFlags=shared in /usr/lib/systemd/system/systemd-udevd.service or (better) creating and editing its copy at /etc/systemd/system/ See man 5 systemd.exec for more information, MountFlags option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82213/"
]
} |
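Rather than copying the whole unit file, the same setting can be applied with a systemd drop-in, which survives package updates; the paths below assume a typical systemd layout.
$ sudo mkdir -p /etc/systemd/system/systemd-udevd.service.d
$ sudo tee /etc/systemd/system/systemd-udevd.service.d/mountflags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart systemd-udevd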
152,512 | I am running FreeBSD 10 on an ASUS M50VM series laptop. I was following along with the handbook to the point at which it gets into using pkg to find software. Every time I run pkg, with or without options or arguments, I get the following output:
$ pkg
The package management tool is not yet installed on your system.
Do you want to fetch and install it now [y/N]: y
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
pkg: fail to extract pkg-static
$
My FreeBSD laptop is connected with an Ethernet cable to my router, which I know is providing Internet access, as the Windows desktop I am currently using to post this question is also connected with a similar cable to the same router. What am I missing? What are possible causes of this issue? What should I check? | That dollar sign ( $ ) in the command line prompt makes me suggest you try to run pkg as an ordinary user. Try to login as root (e.g., by pressing Alt + F2 ) and run pkg from that session. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82224/"
]
} |
152,513 | I want to delete old files in a directory that has a huge number of files in multiple subdirectories. I am trying to use the following - after some googling it seems to be the recommended and efficient way: find . -mindepth 2 -mtime +5 -print -delete My expectation is that, this should print a file that satisfies the conditions (modified more than 5 days ago and satisfies the mindepth condition) and then delete it, and then move on to the next file. However, as this command runs, I can see that the find's memory usage is increasing, but nothing has been printed (and therefore I think nothing has been deleted yet). This seems to imply that find is first collecting all files that satisfy the conditions and after traversing the whole filesystem tree, it will print and then delete the files. Is there a way to get it to delete it right away after running the tests on the file? This would help do the clean up incrementally - I can choose to kill the command and then rerun it later (which would effectively resume file deletion). This does not seem to happen currently because find has not begun deleting anything until its done traversing the gigantic filesystem tree. Is there any way around this? EDIT - Including requested data about my use case: The directories I have to clean up have a maximum depth of about 4; regular files are present only at the leaf of the filesystem. There are around about 600 million regular files, with the leaf directories containing at most 5 files. The directory fan-out at the lower levels is about 3. The fan-out is huge at the upper levels. Total space occupied is 6.5TB on a single 7.2TB LVM disk (with 4 physical ~2 TB HDDs) | The reason why the find command is slow That is a really interesting issue... or, honestly, mallicious : The command find . -mindepth 2 -mtime +5 -print -delete is very different from the usual tryout variant, leaving out the dangerous part, -delete : find . -mindepth 2 -mtime +5 -print The tricky part is that the action -delete implies the option -depth . The command including delete is really find . -depth -mindepth 2 -mtime +5 -print -delete and should be tested with find . -depth -mindepth 2 -mtime +5 -print That is closely related to the symtoms you see; The option -depth is changing the tree traversal algorithm for the file system tree from an preorder depth-first search to an inorder depth-first search . Before, each file or directory that was reached was immediately used, and forgotten about. Find was using the tree itself to find it's way. find will now need to collect all directories that could contain files or directories still to be found, before deleting the files in the deepest directoies first . For this, it needs to do the work of planing and remembering traversal steps itself, and - that's the point - in a different order than the filesystem tree naturally supports. So, indeed, it needs to collect data over many files before the first step of output work. Find has to keep track of some directories to visit later, which is not a problem for a few directories. But maybe with many directories, for various degrees of many. Also, performance problems outside of find will get noticable in this kind of situation; So it is possible it's not even find that's slow, but something else. The performance and memory impact of that depends on your directory structure etc. The relevant sections from man find : See the "Warnings": ACTIONS -delete Delete files; true if removal succeeded. If the removal failed, an error message is issued. 
If -delete fails, find's exit status will be nonzero (when it eventually exits). Use of -delete auto‐ matically turns on the -depth option. Warnings: Don't forget that the find command line is evaluated as an expression, so putting -delete first will make find try to delete everything below the starting points you specified. When testing a find command line that you later intend to use with -delete, you should explicitly specify -depth in order to avoid later surprises. Because -delete implies -depth, you cannot use‐ fully use -prune and -delete together. [ ... ] And, from a section further up: OPTIONS [ ... ] -depth Process each directory's contents before the directory itself. The -delete action also implies -depth. The faster solution to delete the files You do not really need to delete the directories in the same run of deleting the files, right? If we are not deleting directories, we do not need the whole -depth thing, we can just find a file and delete it, and go on to the next, as you proposed. This time we can use the simple print variant for testing the find , with implicit -print . We want to find only plain files, no symlinks, directories, special files etc: find . -mindepth 2 -mtime +5 -type f We use xargs to delete more than one file per rm process started, taking care of odd filenames by using a null byte as separator: Testing this command - note the echo in front of the rm , so it prints what will be run later: find . -mindepth 2 -mtime +5 -type f -print0 | xargs -0 echo rm The lines will be very long and hard to read; For an initial test it could help to get readable output with only three files per line by adding -n 3 as first arguments of xargs If all looks good, remove the echo in front of the rm and run again. That should be a lot faster ; In case we are talking about millions of files - you wrote it's 600 million files in total - there is something more to take into account: Most programs, including find , read directories using the library call readdir (3) . That usually uses a buffer of 32 KB to read directories;That becomes a problem when the directories, containing huge lists of possibly long filenames, are big. The way to work around it is to directly use the system call for reading directory entries, getdents (2) , and handle the buffering in a more suitable way. For details, see You can list a directory containing 8 million files! But not with ls.. (It would be interesting if you can add details to your question on the typical numbers of files per directroy, directories per directory, max depth of paths; Also, which filesystem is used.) (If it is still slow, you should check for filesystem performance problems.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23333/"
]
} |
152,553 | Given this piece of bash: PARMS='-rvu'PARMS+=" --delete --exclude='.git'"echo $PARMSrsync ${PARMS} . ${TARGET} The echo shows the PARMS string as expected, no error is displayed, but rsync silently acts as if the options added by the += did not exist. However, this works as expected: PARMS='-rvu'rsync ${PARMS} --delete --exclude='.git' . ${TARGET} I guess I screwed something up with bash quotes (always had problems with those), but not exactly sure what and why are the options ignored even though the string seems to have been built correctly. | There is a difference between: PARMS+="... --exclude='.git'" and ... --exclude='.git' In the first, the single quotes are inside quotes themselves, so they are literally present in the substituted text given to rsync as arguments. rsync gets an argument whose value is --exclude='.git' . In the second, the single quotes are interpreted by the shell at the time they're written, because they aren't inside quotes themselves, and rsync gets to see --exclude=.git . In this case, you don't need the single quotes at all — .git is a perfectly valid shell word on its own, with no special characters, so you can use it literally in the command. Better for this kind of thing, though, is an array : PARMS=(-rvu)PARMS+=(--delete --exclude='.git')rsync "${PARMS[@]}" This builds up your command as separate words, with whatever quoting you want interpreted at the time you write the array line. "${PARMS[@]}" expands to each entry in the array as a separate argument, even if the argument itself has special characters or spaces in it, so rsync sees what you wrote as you meant it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/152553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82253/"
]
} |
152,561 | I have small file that does initialize a tmux session and then creates some windows. After some debugging and tweaking things worked fine until I renamed the text file (with the tmux commands) from spam to xset : $ source xsetbash: source: /usr/bin/xset: cannot execute binary file I have now renamed the file back and source spam works again, but I am wondering why this is. The file is in my home directory, and not in /usr/bin . | the bash internal command source, first looks for the filename in PATH, unless there is a slash ( / ) in the filename. xset is an executable file in your PATH, hence the problem. You can either execute source ./xset or change the sourcepath option to off with: shopt -u sourcepath From the bash man-page: source filename [arguments] Read and execute commands from filename in the current shell environment and return the exit status of the last command exe‐ cuted from filename. If filename does not contain a slash, file names in PATH are used to find the directory containing file‐ name. The file searched for in PATH need not be executable. When bash is not in posix mode, the current directory is searched if no file is found in PATH. If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched. If any arguments are supplied, they become the posi‐ tional parameters when filename is executed. Otherwise the positional parameters are unchanged. The return status is the status of the last command exited within the script (0 if no commands are executed), and false if filename is not found or cannot be read. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82255/"
]
} |
152,653 | I'm learning flex and (for really the first time) using the command line. I created a Makefile to simplify my testing process but I don't understand why the commands are in reverse. For instance, in the terminal the order is as such: flex test.l this creates the file lex.yy.c then I compile that: g++ lex.yy.c -ll -o test The Makefile reads backwards: test: lex.yy.c g++ lex.yy.c -ll -o testlex.yy.c: test.l flex test.l So what happens specifically when I run make ? | Makefiles follow this format (Makefiles should always use tabs instead of spaces, since it is required in most (if not all) implementations of make ) : target: dependencies operations to build target The target is what you're willing to build/compile/create. There may be several of them, and they should be built in the Makefile order, unless dependencies need to be met first. The first target in your file is called the default target , it is what make tries to build when you call it without arguments. The dependencies are the different pieces required to build a target. In this Makefile, you have two targets: test lex.yy.c Since lex.yy.c is a dependency required to build test , it'll be built first from test.l . Once it is generated, it'll be possible to compile test . Basically, make ... : Tries to build test . Unresolved dependency. lex.yy.c does not exist (or has been updated), it needs to be built first. Reading operations to build lex.yy.c . Running flex test.l : lex.yy.c is built. All dependencies for test are met. Running g++ lex.yy.c -ll -o test . test is created. Additional info: For more information about makefiles, I would recommend The Linux Development Platform by Rafeeq Ur Rehman . Chapter 4: Using GNU Make . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77253/"
]
} |
152,656 | I have a list of files in a folder which I would like to rename according to a text file. For example, these are the 5 files in the folder:
101_T1.nii
107_T1.nii
109_T1.nii
118_T1.nii
120_T1.nii
I would like to have them renamed using a text file containing a list of new filenames in the same order, without the extension .nii :
n01
n02
n03
n04
n05
How may I go about doing so? | A one-liner: this command reads the 'list' text file and, for each of its lines, renames one file. for file in *.nii; do read line; mv -v "${file}" "${line}"; done < list | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82332/"
]
} |
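A cautious variant of the one-liner above that prints the planned renames before doing anything; it assumes simple file names and that the new names live in a file called list, one per line, in the same order as the glob expansion.
$ paste <(printf '%s\n' *.nii) list | while IFS=$'\t' read -r old new; do
      echo mv -v "$old" "$new"
  done
Once the printed commands look right, drop the echo to actually perform the renames.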
152,691 | I just installed the basic Linux CentOS 7 (no desktop) and am experimenting with the system. Every time I make a mistake (entering things that the command line doesn't like), the computer beeps and it's driving me crazy. What do I type in the command line to stop this annoying beep? [root@localhost /]# #what should I run here? | This should work: echo 'set bell-style none' >> ~/.inputrc Once that's done, open a new terminal and test it. Source Edit: changed > (overwrite/create file) to >> (append to file), since it is safer to use. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82350/"
]
} |
152,709 | I opened the Vim user manual with the :h user-manual | only command (to open it in one window) and now I want to enter/open the separate text files. I tried to put my cursor over a section, such as |usr_01.txt| , and press CTRL-j but nothing happened. Overview ~Getting Started|usr_01.txt| About the manuals|usr_02.txt| The first steps in Vim|usr_03.txt| Moving around|usr_04.txt| Making small changes|usr_05.txt| Set your settings|usr_06.txt| Using syntax highlighting |usr_07.txt| Editing more than one file|usr_08.txt| Splitting windows | To navigate to the section under the cursor, you use Ctrl ] (that's a right bracket, not a j ): JUMPING AROUND The text contains hyperlinks between the two parts, allowing you to quicklyjump between the description of an editing task and a precise explanation ofthe commands and options used for it. Use these two commands: Press CTRL-] to jump to a subject under the cursor.Press CTRL-O to jump back (repeat to go further back). http://vimdoc.sourceforge.net/vimum.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43461/"
]
} |
152,717 | If I do find . -exec echo {} + it prints all paths in one line, i.e. command echo is only executed once. But according to man find , -exec command {} + ... the number of invocations of the command will be much less than the number of matched files. ... It seems that in some circumstances the command will be executed multiple times. Am I right? Please exemplify. | POSIX defined find -exec utility_name [argument ...] {} + as: The end of the primary expression shall be punctuated by a <semicolon> or by a <plus-sign>. Only a <plus-sign> that immediately follows an argument containing only the two characters "{}" shall punctuate the end of the primary expression. Other uses of the <plus-sign> shall not be treated as special. If the primary expression is punctuated by a <semicolon>, the utility utility_name shall be invoked once for each pathname and the primary shall evaluate as true if the utility returns a zero value as exit status. A utility_name or argument containing only the two characters "{}" shall be replaced by the current pathname. If a utility_name or argument string contains the two characters "{}", but not just the two characters "{}", it is implementation-defined whether find replaces those two characters or uses the string without change. If the primary expression is punctuated by a <plus-sign>, the primary shall always evaluate as true, and the pathnames for which the primary is evaluated shall be aggregated into sets. The utility utility_name shall be invoked once for each set of aggregated pathnames. Each invocation shall begin after the last pathname in the set is aggregated, and shall be completed before the find utility exits and before the first pathname in the next set (if any) is aggregated for this primary, but it is otherwise unspecified whether the invocation occurs before, during, or after the evaluations of other primaries. If any invocation returns a non-zero value as exit status, the find utility shall return a non-zero exit status. An argument containing only the two characters "{}" shall be replaced by the set of aggregated pathnames, with each pathname passed as a separate argument to the invoked utility in the same order that it was aggregated. The size of any set of two or more pathnames shall be limited such that execution of the utility does not cause the system's {ARG_MAX} limit to be exceeded . If more than one argument containing the two characters "{}" is present, the behavior is unspecified. When length set of file name you found exceed system ARG_MAX , the command is executed. You can get ARG_MAX using getconf : $ getconf ARG_MAX2097152 On some system, actual value of ARG_MAX can be different, you can refer here for more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54511/"
]
} |
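A way to watch the aggregation happen: each output line below corresponds to one invocation of the child command, so a large tree typically shows only a handful of invocations, each receiving many pathnames. The directory is an arbitrary example.
$ find /usr/share/doc -type f -exec sh -c 'echo "one invocation, $# pathnames"' sh {} +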
152,738 | I have tried tmux -c "shell command" split-window but it does not seem to work. Using tmux split-window , one can split a new window. UPDATE : Using tmux split-window 'exec ping g.cn' can run the ping command , but when stoped the new window will be closed. | Use: tmux split-window "shell command" The split-window command has the following syntax: split-window [-dhvP] [-c start-directory] [-l size | -p percentage] [-t target-pane] [shell-command] [-F format] (from man tmux , section "Windows and Panes"). Note that the order is important - the command has to come after any of those preceding options that appear, and it has to be a single argument, so you need to quote it if it has spaces. For commands like ping -c that terminate quickly, you can set the remain-on-exit option first: tmux set-option remain-on-exit ontmux split-window 'ping -c 3 127.0.0.1' The pane will remain open after ping finishes, but be marked "dead" until you close it manually. If you don't want to change the overall options, there is another approach. The command is run with sh -c , and you can exploit that to make the window stay alive at the end: tmux split-window 'ping -c 3 127.0.0.1 ; read' Here you use the shell read command to wait for a user-input newline after the main command has finished. In this case, the command output will remain until you press Enter in the pane, and then it will automatically close. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/152738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46826/"
]
} |
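For completeness, split-window also takes direction and size options (they appear in the syntax quoted above), so the pane created for the command does not have to take half the window — a small sketch; the percentage, the target and the commands are arbitrary examples:

# horizontal split (side by side), new pane gets roughly 30% of the width
tmux split-window -h -p 30 'ping -c 3 127.0.0.1 ; read'
# vertical split in a specific target pane; mysession:0.1 is a placeholder for session:window.pane
tmux split-window -v -t mysession:0.1 'tail -f /var/log/syslog'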
152,773 | I wrote a simple script which echo -es its PID: #/bin/bashwhile true; do echo $$; sleep 0.5;done I'm running said script (it says 3844 over and over) in one terminal and trying to tail the file descriptor in another one: $ tail -f /proc/3844/fd/1 It doesn't print anything to the screen and hangs until ^c . Why? Also, all of the STD file descriptors (IN/OUT/ERR) link to the same pts: $ ls -l /proc/3844/fd/total 0lrwx------ 1 mg mg 64 sie 29 13:42 0 -> /dev/pts/14lrwx------ 1 mg mg 64 sie 29 13:42 1 -> /dev/pts/14lrwx------ 1 mg mg 64 sie 29 13:42 2 -> /dev/pts/14lr-x------ 1 mg mg 64 sie 29 13:42 254 -> /home/user/test.shlrwx------ 1 mg mg 64 sie 29 13:42 255 -> /dev/pts/14 Is this normal? Running Ubuntu GNOME 14.04. If you think this question belongs to SO or SU instead of UL, do tell. | Make an strace of tail -f ; it explains everything. The interesting part: 13791 fstat(3, {st_mode=S_IFREG|0644, st_size=139, ...}) = 013791 fstatfs(3, {...}) = 013791 inotify_init() = 413791 inotify_add_watch(4, "/path/to/file", IN_MODIFY|IN_ATTRIB|IN_DELETE_SELF|IN_MOVE_SELF) = 113791 fstat(3, {st_mode=S_IFREG|0644, st_size=139, ...}) = 013791 read(4, 0xd981c0, 26) = -1 EINTR (Interrupted system call) What does it do? It sets up an inotify watch on the file, and then waits until something happens with this file. If the kernel tells tail through this inotify watch that the file changed (normally, was appended), then tail 1) seeks 2) reads the changes 3) writes them out to the screen. /proc/3844/fd/1 on your system is a symbolic link to /dev/pts/14 , which is a character device. There is nothing like a "memory map" behind it that could be accessed that way. Thus, there is nothing whose changes could be signalled to inotify, because there is no disk or memory area which could be accessed by that. This character device is a virtual terminal, which practically works as if it were a network socket. Programs running on this virtual terminal are connecting to this device (just as if you telnet-ted into a tcp port), and writing what they want into it. There are more complex things as well, for example locking the screen, terminal control sequences and such; these are normally handled by ioctl() calls. I think you want to somehow watch a virtual terminal. It can be done on linux, but it is not so simple, it needs some network proxy-like functionality, and a little bit of tricky usage of these ioctl() calls. But there are tools which can do that. Currently I can't remember which debian package has the tool for this purpose, but with a little googling you could probably find it easily. Extension: as @Jajesh mentioned here (give him a +1 if you gave me one), the tool is named watch . Extension #2: @kelnos mentioned that a simple cat /dev/pts/14 would also be enough. I tried that, and yes, it worked, but not correctly. I didn't experiment a lot with that, but it seems to me as if output going into that virtual terminal went either to the cat command or to its original location, and never to both. But I am not sure. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82399/"
]
} |
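If the goal is simply to see what the process is writing to its terminal, one blunt but effective approach — a sketch, assuming you are allowed to trace the process (same user, and ptrace not restricted on your system) — is to attach strace and watch only the write calls:

# attach to PID 3844 and print every write() it makes, showing up to 200 bytes of data per call
strace -p 3844 -f -e trace=write -s 200

This does not redirect anything — the output still goes to /dev/pts/14 — it just shows you a copy of each write as it happens. Press Ctrl-C to detach.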
152,774 | I want to remove a newline character in a particular line through a perl one-liner command. Input: 1407233497,1407233514,bar1407233498,1407233515,foomingstats&fmt=n1407233499,1407233516,foobar Expected output: 1407233497,1407233514,bar1407233498,1407233515,foomingstats&fmt=n1407233499,1407233516,foobar What have I tried so far? This regex \n(?!\d+,\d+) matches exactly the newline that I want to remove. But I don't know how to implement it through a perl one-liner command. I tried, perl -pe 's/\n(?!\d+,\d+)//g' file But it removes all the newline characters in that file and finally prints the below in a single line, 1407233497,1407233514,bar1407233498,1407233515,foomingstats&fmt=n1407233499,1407233516,foobar I would be happy if the perl one-liner command uses the above regex... | Try: $ perl -00pe 's/\n(?!\d+,\d+)//g' file1407233497,1407233514,bar1407233498,1407233515,foomingstats&fmt=n1407233499,1407233516,foobar perl reads the file line by line by default with the -p option, so your regex cannot work as intended: each buffer ends with that line's own newline, nothing follows it, so the lookahead always succeeds and every newline gets removed. The -00 option turns paragraph slurp mode on, so your regex can now work across multiple lines. From perldoc perlrun: -0[octal/hexadecimal] specifies the input record separator ($/ ) as an octal or hexadecimal number. If there are no digits, the null character is the separator. Other switches may precede or follow the digits. For example, if you have a version of find which can print filenames terminated by the null character. ... The special value 00 will cause Perl to slurp files in paragraph mode. Any value 0400 or above will cause Perl to slurp files whole, but by convention the value 0777 is the one normally used for this purpose | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56901/"
]
} |
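If perl is not a requirement, the same join can be done in awk with a small buffer — a sketch assuming, as in the sample data, that every real record starts with two comma-separated numbers:

awk '/^[0-9]+,[0-9]+/ { if (buf != "") print buf; buf = $0; next }
     { buf = buf $0 }
     END { if (buf != "") print buf }' file

Lines that start a record flush the previous buffer and begin a new one; any other line is treated as a continuation and is appended to the buffer without a newline.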
152,792 | I get an error when converting a large file: $ iconv -f GB2312 -t UTF-8 2001.txt -o 2001_u.txticonv: illegal input sequence at position 245256667 What does the position mean in this error? I checked; it is not a line number. How can I get to that position in other tools or editors like emacs? | It's the byte offset 245256667 in the file. If you do a: dd if=2001.txt of=error.txt bs=1 count=10 skip=245256667 you should be able to see the invalid input sequence (bytes that are not valid GB2312) by doing a hexdump -C error.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38063/"
]
} |
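Two related sketches, assuming the position iconv reports is a 0-based byte offset (worth double-checking against your iconv version): one to peek at the offending bytes without dd's slow bs=1 copy, and one to simply skip invalid sequences if losing a few bytes is acceptable:

# show 16 bytes starting at the reported offset (tail -c +N is 1-based, hence the +1)
tail -c +245256668 2001.txt | head -c 16 | hexdump -C
# convert while silently discarding characters that cannot be converted
iconv -f GB2312 -t UTF-8 -c 2001.txt -o 2001_u.txt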
152,799 | My partition table looks like this: Device Boot Start End Blocks Id System/dev/sda1 * 2048 32505855 16251904 83 Linux/dev/sda2 32505856 33554431 524288 83 Linux When I went to lay down a filesystem on sda2 , it threw this error: sudo mkfs -t ext4 /dev/sda2mke2fs 1.42.9 (4-Feb-2014)mkfs.ext4: inode_size (128) * inodes_count (0) too big for afilesystem with 0 blocks, specify higher inode_ratio (-i) or lower inode count (-N). I have tried both with an extended partition and a primary partition and get the same error. I have Ubuntu 14.04 LTS. What to do? | 1: it doesn't have anything to do with primary/extended/logical partitions. 2: I think you wanted to say "logical" partition instead of "extended". 3: mkfs thinks your partition size is 0 bytes. That is almost surely because the kernel wasn't able to update its partition table after the repartitioning. After you edited the partition table, didn't you get some warnings that a reboot is needed? On Linux, there are two different partition tables: there is one on the zeroth block of the hard disk. And there is one in the kernel memory. You can read the first with an fdisk -l /dev/sda command. And the second you can read with a cat /proc/partitions command. These two need to be in sync, but it is not always possible. For example, you can't change the limits of a currently used partition. In this case, the kernel partition table won't be changed. You can let the kernel re-read the disk partition table with the command blockdev --rereadpt /dev/sda . Most partitioning tools execute this command after they write out your newly changed partition table to the disk. The problem is that only newer linux kernels are capable of re-reading the partition table of a hard disk that is in use. From this viewpoint, a hard disk is considered "used" if any partition on it is in use, either by a tool, as a mount point, or as an active swap partition. And even these newer kernels aren't able to change the limits of a partition currently being used. I think your root filesystem is on /dev/sda , thus you need to reboot after repartitioning. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81926/"
]
} |
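To confirm you are in this situation before rebooting, you can compare the two tables and then try to make the kernel re-read the disk — a sketch, run as root; the device name is the one from the question:

fdisk -l /dev/sda              # the on-disk partition table
cat /proc/partitions           # the kernel's in-memory view; sda2 missing or with the old size confirms the mismatch
blockdev --rereadpt /dev/sda   # ask the kernel to re-read; fails with "Device or resource busy" if the disk is in use
partprobe /dev/sda             # alternative from the parted package that can sometimes update the table anyway

If both re-read attempts fail, a reboot is the reliable way out.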
152,826 | I have multiple cron job entries configured under a single account: 0 0 * * * /foo/foo.sh0 2 * * * /foo/foo2.sh0 4 * * * /foo/foo3.sh Right now, if any of these encounter an error, an email is sent to the user account. Can I configure cron to send a notification to a different email address depending on which entry encounters the error? For example, if an error occurs while running /foo/foo.sh send to [email protected]. If an error occurs while running /foo/foo2.sh send to [email protected]. I know I can set [email protected] but I think that's for ALL cron entries (for the account I'm logged in as)... I don't think that's a per-entry setting. Hopefully this makes sense. Thanks. :-) | It is perfectly OK to use MAILTO= per-entry, i.e.: MAILTO="address1"0 0 * * * /foo/foo.shMAILTO="address2"0 2 * * * /foo/foo2.shMAILTO="address3"0 4 * * * /foo/foo3.shMAILTO="address4"0 6 * * * /foo/foo4.sh0 8 * * * /foo/foo5.sh0 10 * * * /foo/foo6.sh And so on. Each MAILTO setting applies to every entry below it until the next MAILTO line, so in this example address4 receives the mail for foo4.sh, foo5.sh and foo6.sh. Cheers, | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
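Two small variations on the same idea — an empty MAILTO, or silencing a job entirely (cron only sends mail when a job produces output, so redirecting the output away also stops the mail). The addresses and script paths below are just the examples from the question:

MAILTO="address1"
0 0 * * * /foo/foo.sh
# an empty MAILTO means: never send mail for the entries below
MAILTO=""
0 2 * * * /foo/foo2.sh
# alternatively, discard a job's output so cron has nothing to mail
0 4 * * * /foo/foo3.sh >/dev/null 2>&1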
152,857 | Can I use seq to go from 001 to 999? | Yes, using the -w parameter: -w, --equal-width equalize width by padding with leading zeroes e.g. seq -w 1 999 gives 001 002 ... 999 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82448/"
]
} |
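If your seq lacks -w, or you want explicit control over the width, a format string or plain printf does the same job — a couple of equivalent sketches:

seq -f '%03g' 1 999            # format each number as 3 digits, zero-padded
printf '%03d\n' $(seq 1 999)   # shell printf reuses the format for every argument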
152,864 | I'm trying to determine if, in Linux, environment variables for a process are observable by other (non-root) users. The immediate use case is putting secrets into environment variables. This is discussed in many places throughout the web as being insecure, but I haven't been able to zero in on the exact exposure point in Linux. Note that I am not talking about putting cleartext secrets into files. Also note that I am not talking about exposure to the root account (I view attempting to hide secrets from an adversary with root as a nonstarter). This question appears to address mine, with comments that classify environment variables as being completely without security, or only simply being obfuscated, but how does one access them? In my tests one unprivileged user can't observe environment variables for another user through the process table ('ps auxwwe'). The commands that set environment variables (e.g. export) are shell builtins which don't make it onto the process table and by extension aren't in /proc/$pid/cmdline. /proc/$pid/environ is only readable by the UID of the process owner. Perhaps the confusion is between different operating systems or versions. Various (recent) sources across the web decry the insecurity of environment variables, but my spot-checking of different linux versions seems to indicate that this isn't possible going back at least to 2007 (probably further but I don't have boxes on hand to test). In Linux, how can a non-privileged user observe environment variables for another's processes? | As Gilles explained in a very comprehensive answer to a similar question on security.stackexchange.com, process environments are only accessible to the user that owns the process (and root of course). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/152864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48658/"
]
} |
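You can verify this on any box by looking at how /proc exposes the environment — a quick sketch, run as an unprivileged user:

ls -l /proc/$$/environ             # your own shell: mode -r--------, owned by you
tr '\0' '\n' < /proc/$$/environ    # readable; entries are NUL-separated
# now try a PID belonging to another user, e.g. the root-owned init process
cat /proc/1/environ                # fails with "Permission denied" for a non-root user

The same restriction is why ps (with the e flag) only shows the environment for your own processes.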
152,886 | I'm trying to run a game called "Dofus", in Manjaro Linux. I've installed it with packer, which put it under the /opt/ankama folder. This folder (and every file inside it) is owned by the root user and the games group. As instructed by the installing package, I've added myself (user familia ) to the games group (by not doing so, "I would have to input my password every time I tried to run the updater"). However, when running the game, it crashes after inputting my password (which shouldn't be required). Checking the logs, I've got some errors like these: [29/08 20:44:07.114]{T001}INFO c/net/NetworkAccessManager.cpp L87 : Starting request GET http://dl.ak.ankama.com/updates/uc1/projects/dofus2/updates/check.9554275D[29/08 20:44:07.291]{T001}INFO c/net/NetworkAccessManager.cpp L313 : Request GET http://dl.ak.ankama.com/updates/uc1/projects/dofus2/updates/check.9554275D Finished (status : 200)[29/08 20:44:07.292]{T001}ERROR n/src/update/UpdateProcess.cpp L852 : Can not cache script data So, I suspect Permission Denied errors. An error message a moment after starting That translates to "An error has happened while writing to the disk - verify if you have the sufficient rights and enough disk space". Then, after some research, I came across "auditd" that can log file accesses in a folder. After setting it up, and seeing which file accesses were unsuccessful, this is the result . All of those errors actually refer to a unique file, /opt/ankama/transition/transition , with a syscall to ( open ). This file's permissions are rwxrwxr-x ( 775 ). So, I've rwx permissions to it, yet it gives me an error exit -13 , which is an EACCES error (Permission Denied). I've already tried to reboot the computer, to log in and log out. None of them worked. If I set the folder permissions to familia:games , it runs with no trouble, I don't even need to input my password. However, it doesn't seem right this way. Any ideas of why I get Permission Denied errors even though I have read/write/execute permissions? Mark has said that I could need +x permissions in all directories of the path prefix. The path itself is /opt/ankama/transition/transition . The permissions for the path prefixes are: /opt - drwxr-xr-x(755), ownership root:root /opt/ankama - drwxr-xr-x(755), ownership root:games /opt/ankama/transition - drwxrwxr-x(775), ownership root:games However, one thing that I've noticed is that all subfolders of /opt/ankama are 775 , even though the folder itself is 755 . I don't think this means anything, and changing the permissions to 775 doesn't work. Also, Giel suggested that I could have AppArmor running on my system. However, running # cat /sys/module/apparmor/parameters/enabled gives me N . | First, when you add yourself to a group, the change is not applied immediately. The easiest thing is to logout and log back in. Then there are write permissions of data files (as mentioned already in some of the comments). However, the solutions are not good for security. Add a group for the game. Do not add any user to this group. Make the game executable by chmod -R ugo+rX game-directory Give write permissions to group only and no-one else using chmod -R ug+w,o-w game-directory Add game to group chgrp -R game-group game-directory , chmod -R g+s game-directory or just addgroup game-group; chgrp -R game-group game-directory; chmod -R u=rwX,g=rwXs,o=rX game-directory If the game needs to change permissions then you can do the same but for a user instead of a group, i.e.
adduser game-owner; addgroup game-group; chown -R game-owner:game-group game-directory; chmod -R u=rwXs,g=rwXs,o=rX game-directory | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/152886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82468/"
]
} |
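One extra note on the first point above: if you do not want to log out, you can start a shell with the new group applied and launch the game from there — a sketch, assuming your user has already been added to games in /etc/group:

newgrp games    # starts a new shell whose process has the games group active
id              # check that "games" now shows up in the groups list
# launch the game from this shell so it inherits the group membership

This only affects that shell and its children; other sessions still need a fresh login.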