source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
208,499 | I am trying to figure out how to export the names of the files I have in a directory to be saved in a CSV file that I can edit in Excel. The directory looks like this: $ lsSample_38_41_1_A01 Sample_38_41_1_A11 Sample_38_41_1_B09Sample_38_41_1_C07 Sample_38_41_1_D05 Sample_38_41_1_E03Sample_38_41_1_F01 I want each row of the CSV file to have the name of a file found in that directory. It should look like this in Excel: A B C1 Sample_38_41_1_A012 Sample_38_41_1_A113 Sample_38_41_1_B09 4 Sample_38_41_1_C07 5 Sample_38_41_1_D056 Sample_38_41_1_E037 Sample_38_41_1_F018 ... | Since your example file names don't have any double quotes or commas in them, the solution is quite simple: $ 'ls' > files.csv There are a couple of subtleties here: You want to quote the ls command in case you have an alias that adds flags to it that affect the output, such as -F , which appends file type sigils, or -C , which forces multi-column output. Such aliases are quite common on modern Unix and Unix-like systems. When ls is writing to a pipeline instead of a terminal, it prints one file name per line instead of the multi-column output you show in your question. POSIX requires single-column output in this case , and GNU and BSD ls obey. This CSV file won't have a header line, but Excel can cope with that; not all CSV readers can. But What If There Are Special Characters? Double quotes and commas are special characters in CSV files, so if you try the above command on a directory containing files named using such characters, you won't get a valid CSV file. It's not too difficult to cope with these cases. First let's take the case of files that may only have commas in them. This is going to be a much more common case since double quotes have meaning in Unix command shells, so there is a strong disincentive to using them in file names: $ 'ls' | sed -e 's/^/"/' -e 's/$/"/' > files.csv These sed string replacement commands put double quotes at the beginning and end of each line, which prevents a CSV reader from treating commas as field separators. Another way to achieve the same end is perl -ne 'chomp ; print "\"$_\"\n"' If you really do have double-quotes in your file names, the sed solution extends naturally: $ 'ls' | sed -e 's/"/\\"/g' -e 's/^/"/' -e 's/$/"/' > files.csv That is to say, we escape any existing double-quote characters before wrapping the line in semantic quotes. Some CSV readers handle double-quote escaping differently, treating two double-quote characters in a row as a literal double-quote: $ 'ls' | sed -e 's/"/""/g' -e 's/^/"/' -e 's/$/"/' > files.csv | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118675/"
]
} |
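If aliases on ls are a concern, the shell's own globbing sidesteps ls entirely. A minimal sketch, assuming the same single-column, no-header layout as above and that no file name contains a newline:

    printf '%s\n' * > files.csv

printf is a bash built-in, so no alias can affect it; if names may contain commas, the same sed quoting shown above can still be applied to its output before the redirect.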
208,538 | I'm trying to run some benchmarks on a multicore machine and I'd like to tell the Linux kernel to simply avoid certain cores unless explicitly told to use them. The idea is that I could set aside a handful of cores (the machine has 6 physical cores) for benchmarking and use a CPU mask to allow only benchmark processes onto the given cores. Is this feasible? | You can isolate some CPU cores from kernel scheduling using the isolcpus kernel parameter. Add this parameter to your grub.conf and reboot for it to take effect. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87883/"
]
} |
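As an illustration of where the parameter goes (the core numbers 4 and 5 and the benchmark name are only placeholders), the GRUB entry and the matching CPU pinning might look like this sketch:

    # /etc/default/grub (older setups edit the kernel line in grub.conf directly)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=4,5"
    # after update-grub (or grub2-mkconfig) and a reboot, pin the benchmark onto the reserved cores
    taskset -c 4,5 ./my-benchmark

Everything the scheduler starts on its own will then stay off cores 4 and 5, while taskset places the benchmark processes onto them explicitly.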
208,548 | How can I tell whether my system is Unix or Linux? I am using a Macbook Pro of 2010 vintage. | POSIX defines uname ("Unix name") to provide information about the operating system and hardware platform; running uname gives the name of the implementation of the operating system (or according to the coreutils documentation, the kernel). You can do this interactively in a terminal, or use the output in a script. On Linux systems, uname will print Linux . On Mac OS X systems, uname will print Darwin . (Strictly speaking, any operating system with a Darwin kernel will produce this, but you're very unlikely to encounter anything other than Mac OS X in this case.) This will allow you to determine what any Mac is running. As Rob points out, if you're running Mac OS X ( Darwin as indicated by uname ), then you're running a certified version of Unix ; if you're running Linux then you're not. On a Mac there are many other possibilities; your script could end up running on Solaris ( uname will print SunOS then), on FreeBSD ( FreeBSD ), on Windows with Cygwin ( CYGWIN ), MSYS or MSYS2 ( MSYS ), a MinGW or MinGW-w64 shell ( MINGW64 , MINGW32 ), Interix ( Interix ), and probably others I'm not aware of. uname -a will print all the available information as determined by uname , but it's harder to parse. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118711/"
]
} |
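In a script, the usual pattern is a case statement on that output; a short sketch:

    case "$(uname)" in
        Linux)  echo "running on Linux" ;;
        Darwin) echo "running on Mac OS X (a certified UNIX)" ;;
        *)      echo "something else: $(uname)" ;;
    esac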
208,568 | I want to compile as fast as possible. Go figure. And would like to automate the choice of the number following the -j option. How can I programmatically choose that value, e.g. in a shell script? Is the output of nproc equivalent to the number of threads I have available to compile with? make -j1 make -j16 | nproc gives the number of CPU cores/threads available, e.g. 8 on a quad-core CPU supporting two-way SMT. The number of jobs you can run in parallel with make using the -j option depends on a number of factors: the amount of available memory the amount of memory used by each make job the extent to which make jobs are I/O- or CPU-bound make -j$(nproc) is a decent place to start, but you can usually use higher values, as long as you don't exhaust your available memory and start thrashing. For really fast builds, if you have enough memory, I recommend using a tmpfs , that way most jobs will be CPU-bound and make -j$(nproc) will work as fast as possible. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/208568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
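A hedged starting point for a build script: use nproc where it exists (GNU coreutils) and fall back to getconf, which many systems provide:

    jobs=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
    make -j"$jobs"

The echo 1 fallback just keeps the build working, serially, on systems where neither query is available.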
208,588 | I'm currently doing this to sort and uniq the output of two different commands: tshark -r sample.pcap -T fields -e eth.src -e ip.src > hellotshark -r sample.pcap -T fields -e eth.dst -e ip.dst >> hellosort < hello | uniq > hello_uniq In a nutshell, I'm outputting source MAC addresses and IPs into a file. I'm then appending destination MAC addresses and IPs to that same file. I then sort the file and input that into uniq to end up with a list of unique MAC to IP address mapping. Is there a way to do this in one line? (Note: the use of tshark is not really relevant here, my question applies to any two sources of output like that) | nproc gives the number of CPU cores/threads available, e.g. 8 on a quad-core CPU supporting two-way SMT. The number of jobs you can run in parallel with make using the -j option depends on a number of factors: the amount of available memory the amount of memory used by each make job the extent to which make jobs are I/O- or CPU-bound make -j$(nproc) is a decent place to start, but you can usually use higher values, as long as you don't exhaust your available memory and start thrashing. For really fast builds, if you have enough memory, I recommend using a tmpfs , that way most jobs will be CPU-bound and make -j$(nproc) will work as fast as possible. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/208588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89986/"
]
} |
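One common way to collapse the three commands in the question into a single line is to group the two tshark invocations and let sort -u do the sort-plus-uniq step; a sketch reusing the fields from the question:

    { tshark -r sample.pcap -T fields -e eth.src -e ip.src; tshark -r sample.pcap -T fields -e eth.dst -e ip.dst; } | sort -u > hello_uniq

sort -u is equivalent to sort | uniq here, and the { ...; } group feeds both outputs into the single pipeline.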
208,597 | I have the following snippet shown as my top command output. One real quick question here being, the values of Mem are shown in what granularity? Are they the number of bytes? Mem: 8191488k total, 4277448k used, 3914040k free, 292356k buffersSwap: 0k total, 0k used, 0k free, 3382180k cached Asking this question because, free -m command gives the output as total used free shared buffers cachedMem: 7999 4177 3822 0 285 3302-/+ buffers/cache: 588 7410Swap: 0 0 0 | nproc gives the number of CPU cores/threads available, e.g. 8 on a quad-core CPU supporting two-way SMT. The number of jobs you can run in parallel with make using the -j option depends on a number of factors: the amount of available memory the amount of memory used by each make job the extent to which make jobs are I/O- or CPU-bound make -j$(nproc) is a decent place to start, but you can usually use higher values, as long as you don't exhaust your available memory and start thrashing. For really fast builds, if you have enough memory, I recommend using a tmpfs , that way most jobs will be CPU-bound and make -j$(nproc) will work as fast as possible. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/208597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118741/"
]
} |
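The k suffix in top's Mem line is kibibytes, which is why the numbers look larger than those from free -m (mebibytes). A quick arithmetic check on the values in the question:

    $ echo $(( 8191488 / 1024 ))   # top's "total", converted from KiB to MiB
    7999
    $ echo $(( 4277448 / 1024 ))   # top's "used"
    4177

Both converted figures match the free -m output shown above.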
208,607 | When running this script , I run into an error on this line (relevant snippet below): ..._NEW_PATH=$("$_THIS_DIR/conda" ..activate "$@")if (( $? == 0 )); then export PATH=$_NEW_PATH # If the string contains / it's a path if [[ "$@" == */* ]]; then export CONDA_DEFAULT_ENV=$(get_abs_filename "$@") else export CONDA_DEFAULT_ENV="$@" fi # ==== The next line returns an error # ==== with the message: "export: not valid in this context /Users/avazquez/anaconda3" export CONDA_ENV_PATH=$(get_dirname $_THIS_DIR) if (( $("$_THIS_DIR/conda" ..changeps1) )); then CONDA_OLD_PS1="$PS1" PS1="($CONDA_DEFAULT_ENV)$PS1" fielse return $?fi... Why is that? I found this ticket , but I don't have that syntax error. I found reports of the same problem in GitHub threads (e.g. here ) and mailing lists (e.g. here ) | In zsh, Command Substitution result was performed word splitting if was not enclosed in double quotes. So if your command substitution result contain any whitespace, tab, or newline, the export command will be broken into parts: $ export a=$(echo 1 -2)export: not valid in this context: -2 You need to double quote command substitution to make it work, or using the safer syntax: PATH=$_NEW_PATH; export PATH or even: PATH=$_NEW_PATH export PATH | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
208,608 | Is there a command to show which btrfs subvolumes are mounted on each mountpoint? Alternatively, is there somewhere (e.g. in the /proc or /sys trees) where the information can be read? The output from the df command simply shows the filesystem's root device and /etc/mtab (aka /proc/mounts ) doesn't contain the subvol=... option. | findmnt -nt btrfs , the source subvolume is in [...] , the mountpoint is the first column. Alternatively, you could look into the file /proc/self/mountinfo yourself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208608",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21046/"
]
} |
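findmnt can also be told which columns to print, which is easier to parse than the bracketed notation; a sketch using the standard findmnt column names:

    findmnt -nt btrfs -o TARGET,SOURCE

The SOURCE column then carries the device plus the subvolume suffix, one mount per line.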
208,615 | When I use the type command to find out if cat is a shell built-in or an external program I get the output below: -$ type catcat is hashed (/bin/cat)-$ Does this mean that cat is an external program which is /bin/cat ? I got confused, because when I checked the output below for echo I got to see that it is a built-in but also a program /bin/echo -$ type echoecho is a shell builtin-$ which echo/bin/echo-$ So I could not use the logic that /bin/cat necessarily means an external program, because echo was /bin/echo but still a built-in. So how do I know what cat is? Built-in or external? | type tells you what the shell would use. For example: $ type echoecho is a shell builtin$ type /bin/echo/bin/echo is /bin/echo That means that if, at the bash prompt, you type echo , you will get the built-in. If you specify the path, as in /bin/echo , you will get the external command. which , by contrast is an external program that has no special knowledge of what the shell will do. On debian-like systems, which is a shell script which searches the PATH for the executable. Thus, it will give you the name of the external executable even if the shell would use a built-in. If a command is only available as a built-in, which will return nothing: $ type helphelp is a shell builtin$ which help$ Now, let;s look at cat : $ type catcat is hashed (/bin/cat)$ which cat/bin/cat cat is an external executable, not a shell builtin. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/208615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109220/"
]
} |
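When a name exists both as a builtin and as an external program, bash's type -a lists every match in lookup order, which makes the distinction explicit (the path shown depends on your $PATH):

    $ type -a echo
    echo is a shell builtin
    echo is /bin/echo

For cat there is no builtin, so only the external /bin/cat entry appears.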
208,633 | Suppose I run a command like this in the terminal: ~$ echo 'sleep 2; echo "hello!"' | sh then start typing the next line. After two seconds the words "hello!\n" will be inserted into whatever I'm writing. I know there is a workaround to this (pressing up then down which refreshes the prompt), however on other systems that don't have history---eg, using a MUD through telnet---this is not possible. Does anybody know of an ncurses app or terminal emulator that separates stdin from stdout? This seems pretty easy to make in ncurses, you just have to use some clever dup2s, but before I make it I want to know if someone has done it before. Any other solutions to the main problem are welcome, as well. | This isn't as easy as it sounds to change. It's related to the terminal cooked vs. raw mode and whether echo is enabled or not. When the terminal is in cooked mode (the default), the kernel reads everything that comes in as input and processes it using rudimentary line editing capabilities which include echoing normal text immediately, processing the erase and kill characters which erase a single character and the whole of the current line respectively, and a few other things. Lines of text only actually appear on the terminal's input when you press enter. During the whole time up until you press enter, everything happens entirely inside the kernel and no process running on the terminal receives a single byte , therefore the foreground application doesn't even know that the user is in the process of typing anything. A process running on the tty cannot suppress this echo even if it wants to just because the echo would come at an inopportune time (e.g. intermixed with output) because such a process is not even aware that the input is happening. You can set the terminal to raw mode instead with no echo to suppress this ( stty raw , or with termios), but then you lose the kernel's line editing capabilities completely — which means for example that you cannot correct a typo by pressing Ctrl - u and starting over. More importantly, you will have a lot of trouble using any program that depends on the kernel's cooked processing (basically anything that doesn't use readline or ncurses) because you will be typing completely blind at such programs! Oh, and also: without the terminal cooked processing you lose the kernel's interception of job control shortcuts to interrupt and suspend (by default Ctrl - c and Ctrl - z respectively). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15940/"
]
} |
208,656 | How can I put a bit mask on /dev/zero so that I can have a source not only for 0x00 but also for any byte between 0x01 and 0xFF? | The following bash code is set to work with the byte being represented in binary . However you can easily change it to handle octal , decimal or hex by simply changing the radix r value of 2 to 8 , 10 or 16 respectively and setting b= accordingly. r=2; b=01111110; printf -vo '\\%o' "$(($r#$b))"; </dev/zero tr '\0' "$o" EDIT - It does handle the full range of byte values: hex 00 - FF (when I wrote 00-7F below, I was considering only single-byte UTF-8 characters). If, for example, you only want 4 bytes (characters in the UTF-8 'ASCII'-only hex 00-7F range) , you can pipe it into head : ... | head -c4 Output (4 chars): ~~~~ To see the output in 8-bit format, pipe it into xxd (or any other 1's and 0's byte dump*): eg. b=10000000 and piping to: ... | head -c4 | xxd -b 0000000: 10000000 10000000 10000000 10000000 .... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
208,667 | I have a file like below: blablablablablabla***thingsIwantToRead1thingsIwantToRead2thingsIwantToRead3blablablablablabla I want to extract the paragraph with thingsIwantToRead . When I had to deal with such a problem, I used AWK like this: awk 'BEGIN{ FS="Separator above the paragraph"; RS="" } {print $2}' $file.txt | awk 'BEGIN{ FS="separator below the paragraph"; RS="" } {print $1}' And it worked. In this case, I tried to put FS="***" , "\*{3}" , "\*\*" (it is not working because AWK treats it like a normal asterisk), "\\*\\*" or whatever regex I could think of, but it's not working (it's printing nothing). Do you know why? If not, do you know another way to deal with my problem? Below an extract of the file I want to parse: 13.2000000000 , 3*0.00000000000 , 11.6500000000 , 3*0.00000000000 , 17.8800000000Blablabla SATELLITE EPHEMERIS ===================Output frame: Mean of J2000 Epoch A E I RA AofP TA Flight Ang*****************************************************************************************************************2012/10/01 00:00:00.000 6998.239 0.001233 97.95558 77.41733 89.98551 290.75808 359.933982012/10/01 00:05:00.000 6993.163 0.001168 97.95869 77.41920 124.72698 274.57362 359.933272012/10/01 00:10:00.000 6987.347 0.001004 97.96219 77.42327 170.94020 246.92395 359.947062012/10/01 00:15:00.000 6983.173 0.000893 97.96468 77.42930 224.76158 211.67042 359.97311 <np> ---------------- Predicted Orbit: ---------------- Blablabla And I want to extract: 2012/10/01 00:00:00.000 6998.239 0.001233 97.95558 77.41733 89.98551 290.75808 359.933982012/10/01 00:05:00.000 6993.163 0.001168 97.95869 77.41920 124.72698 274.57362 359.933272012/10/01 00:10:00.000 6987.347 0.001004 97.96219 77.42327 170.94020 246.92395 359.947062012/10/01 00:15:00.000 6983.173 0.000893 97.96468 77.42930 224.76158 211.67042 359.97311 And the command I tried to use to get the numbers after the line of *'s: `awk 'BEGIN{ FS="\\*{2,}"; RS="" } {print $2}' file | awk 'BEGIN{ FS="<np>"; RS="" } {print $1}'` | Tell awk to print between the two delimiters. Specifically: awk '/\*{4,}/,/<np>/' file That will also print the lines containing the delimiters, so you can remove them with: awk '/\*{4,}/,/<np>/' file | tail -n +2 | head -n -1 Alternatively, you can set a variable to true if a line matches the 1st delimiter and to false when it matches the second and only print when it is true: awk '/\*{4,}/{a=1; next}/<np>/{a=0}(a==1){print}' file The command above will set a to 1 if the current line matches 4 or more * and will also skip to the next line. This means that the *** line will never be printed. This was in answer to the original, misunderstood, version of the question. I'm leaving it here since it can be useful in a slightly different situation. First of all, you don't want FS (field separator), you want RS (record separator). Then, to pass a literal * , you need to escape it twice. Once to escape the * and once to escape the backslash (otherwise, awk will try to match it in the same way as \r or \t ). Then, you print the 2nd "line": $ awk -vRS='\\*\\*\\*' 'NR==2' filethingsIwantToRead1 thingsIwantToRead2 thingsIwantToRead3 To avoid the blank lines around the output, use: $ awk -vRS='\n\\*\\*\\*\n' 'NR==2' filethingsIwantToRead1 thingsIwantToRead2 thingsIwantToRead3 Note that this assumes a *** after each paragraph, not only after the first one as you show. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118797/"
]
} |
208,668 | Using Python, I scan Unix mounted volumes for files, then add or purge the filenames in a database, based on their existence. I've just realised that if the volume being scanned is unmounted for some reason, the scan will assume every filename on that volume should be purged! Yikes. Is there any better way to mount volumes or any suggestions at all? About the only thing I can think of is to put a permanent dummy file on each volume which I check for before scanning, thereby ensuring that the volume is only scanned if the dummy file can be located. | You can use mountpoint to check if the given directory is a mount point. Example mountpoint /mnt/foo; printf "$?\n"/dev/foo is a mountpoint0mountpoint /mnt/bar; printf "$?\n"/dev/bar is not a mountpoint1 As the return value indicates, this can easily by used in an if statement in a script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118627/"
]
} |
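With the -q (quiet) flag the if-statement usage mentioned above reads naturally; a sketch with a placeholder path:

    if mountpoint -q /mnt/foo; then
        echo "volume is mounted, safe to scan"
    else
        echo "/mnt/foo is not mounted; skipping the purge step" >&2
    fi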
208,680 | I'm having issues merging two files using the join command:The first file is a csv file: NAIN GENIEU 01/01/1900,A,BNAIN GENIEUR 01/01/1917,C,DNAINGENIEUR 21/01/1917,E,F The second file contains only the interesting id: NAIN GENIEUR 01/01/1917 I would like this as output: NAIN GENIEUR 01/01/1917,C,D Both files are sorted with bash sort command. When I use join without any argument it default to spaces so it joints by PSEUDO but doesn't account for BIRTHDAY or anything after a space in PSEUDO .When I use -t"," argument, I have no output at all (even though there should be) Any clue on how to solve this? BTW I use join v.8.4 EDIT I tried putting quotation marks around the first field (which may contain spaces) but it doesn't help. | You can use mountpoint to check if the given directory is a mount point. Example mountpoint /mnt/foo; printf "$?\n"/dev/foo is a mountpoint0mountpoint /mnt/bar; printf "$?\n"/dev/bar is not a mountpoint1 As the return value indicates, this can easily by used in an if statement in a script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208680",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118802/"
]
} |
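For the question itself, one simple approach is to treat the id file as a list of fixed strings and let grep pull the matching CSV rows; a sketch with illustrative file names (ids.txt holds the interesting ids, data.csv the full table):

    grep -F -f ids.txt data.csv

grep -F matches substrings, so if one id can be a prefix of another you would need anchoring, or a join -t, on two files sorted with exactly the same sort invocation.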
208,704 | I am trying to find all the files which have 'pillar' in the name (case insensitive) and do not contain 'cache' (also case insensitive) with find . -iname '*pillar*' -and -not -iname '*cache*' but it is not working as I find (among others) ./Caches/Metadata/Safari/History/https:%2F%2Fwww.google.ch%2Fsearch?q=pillars+of+eternity+dropbox&client=safari&rls=en&biw=1440&bih=726&ei=CnBDVbhXxulSnLaBwAk&start=10&sa=N%23.webhistory What am I doing wrong? | It looks like you want to avoid looking for files in *cache* directories more than finding files with *pillar* and not *cache* in their name. Then, just tell find not to bother descending into *cache* directories: find . -iname '*cache*' -prune -o -iname '*pillar*' -print Or with zsh -o extendedglob : ls -ld -- (#i)(^*cache*/)#*pillar* (not strictly equivalent as that would report a foo/pillar-cache file) Or (less efficient as it descends the whole tree like in @apaul's solution ): ls -ld -- (#i)**/*pillar*~*cache* Details on the zsh specific globs: (#i) : turn on case insensitive matching ^ : negation glob operator (...) : grouping (like @(...) in ksh ). <something># : zero or more of <something> (like * in regexps). ~ : and-not operator (matches on the whole path) **/ : 0 or more directory levels (short for (*/)# ). Add the (D) glob qualifier if you want to descend into hidden dirs and match hidden files like in the find solution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10616/"
]
} |
208,764 | I would like to extract a user name from the file between the keywords "/" and "NET" like below. How can I do so with programs like awk , sed or cut . from: audit: command=true rsa1/[email protected] runningaudit: command=true user2/[email protected] executing to: audit: command=true rsa1 runningaudit: command=true user2 executing | Using sed : < inputfile sed 's/\/.*NET//' > outputfile Using sed in-place: sed -i.bak 's/\/.*NET//' inputfile Command #1 breakdown : < inputfile : redirects the content of inputfile to sed 's stdin > outputfile : redirects the content of sed 's stdout to outputfile Command #2 breakdown : -i.bak : Forces sed to create an inputfile.bak backup file and to edit inputfile in-place inputfile : Forces sed to read the input from inputfile Regex breakdown : s : asserts to perform a substitution / : starts the search pattern \/ : matches a / character .*NET : matches any number of any character up to the end of a NET string / : stops the search pattern / starts the replace pattern / : stops the replace pattern Sample output: ~/tmp$ cat inputfileaudit: command=true rsa1/[email protected] runningaudit: command=true user2/[email protected] executing~/tmp$ < inputfile sed 's/\/.*NET//' > outputfile~/tmp$ cat outputfile audit: command=true rsa1 runningaudit: command=true user2 executing~/tmp$ sed -i.bak 's/\/.*NET//' inputfile~/tmp$ cat inputfile.bakaudit: command=true rsa1/[email protected] runningaudit: command=true user2/[email protected] executing~/tmp$ cat inputfileaudit: command=true rsa1 runningaudit: command=true user2 executing~/tmp$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118863/"
]
} |
208,772 | I am trying to install Kali Linux on a Virtual Machine using VirtualBox on an OS X host. The only issue is I get this error: Here are logs of my errors: hardware-summary partman syslog I had my hard drive formatted as / /temp /users etc. How would I install Kali Linux in VirtualBox with an OS X host? The image I used was kali-linux-1.1.0a-amd64.iso and I also checked that the sha1 hash was valid. Small print: The log information was found by using the 'web server' and has been uploaded to pastebin since this post would have been too big. The log names I have given are what was on the web server | Increase your virtual hard disk space to 12 GB or more. I faced a similar issue and the above resolved it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118866/"
]
} |
208,784 | I can change the master volume with these commands (from the command line), and it affects all applications, but how do I change the volume for just one application ( XMMS for example)? amixer -q set Master toggle # or pactl set-sink-mute 0 toggleamixer -q sset Master 5%+ unmute # or pactl set-sink-volume 0 -- -5%amixer -q sset Master 5%- unmute # or pactl set-sink-volume 0 -- +5% pacmd dump # is interesting, and there are GUI applications that can do this: gnome-control-center sound , pavucontrol | You can get the number of sink Input with pactl command. $ pactl list sink-inputs...Sink Input #7119 Driver: protocol-native.c Owner Module: 12 Client: 6298 Sink: 0...Properties: application.icon_name = "google-chrome" media.name = "Playback" application.name = "Chromium"... Using this number(#7119), you specify the sink Input. $ pactl set-sink-input-mute 7119 toggle It will identify the application with application.icon_name property.The following is a case to specify the Chromium. #!/bin/shLANGUAGE="en_US"app_name="Chromium"current_sink_num=''sink_num_check=''app_name_check=''pactl list sink-inputs |while read line; do \ sink_num_check=$(echo "$line" |sed -rn 's/^Sink Input #(.*)/\1/p') if [ "$sink_num_check" != "" ]; then current_sink_num="$sink_num_check" else app_name_check=$(echo "$line" \ |sed -rn 's/application.name = "([^"]*)"/\1/p') if [ "$app_name_check" = "$app_name" ]; then echo "$current_sink_num" "$app_name_check" pactl set-sink-input-mute "$current_sink_num" toggle fi fidone | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54259/"
]
} |
208,808 | I want to block all IPs except for my own home IP from doing anything on my server. How can I do this with iptables? For the example lets say my home IP is 1.2.3.4 My server still needs to be able to connect to various IPs. Also by doing this will this cause any general problems? Something like this? (doesn't work) /sbin/iptables -A INPUT -s 1.2.3.4 -j ACCEPTiptables -A INPUT -s 0.0.0.0/0 -j DROPiptables -A OUTPUT -d 0.0.0.0/0 -j DROP | Leave the OUTPUT chain untouched. Put these in your INPUT chain iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPTiptables -A INPUT -s 1.2.3.4 -j ACCEPTiptables -A INPUT -j DROP # or REJECT The first rule allows your iptables configuration to accept traffic for established connections (i.e. those initiated by your own server to other destinations). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102085/"
]
} |
208,815 | I use a script that writes all mountpoints of my devices to a textfile using df. How can I execute my script every time any device (especially USB) is mounted? script to execute: #!/bin/bash# save all mountpoints to textfiledf -h /dev/sd*| grep /dev/sd| awk '{print $6}' > /home/<user>/FirstTextfile# do somethingwhile read line do echo "mountpoint:${line%/*}/ devicename:${line##*/}}" >> home/<user>/AnotherTextfile Debian 8.0 (jessie), Linux 3.16.0, Gnome 3.14. | Leave the OUTPUT chain untouched. Put these in your INPUT chain iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPTiptables -A INPUT -s 1.2.3.4 -j ACCEPTiptables -A INPUT -j DROP # or REJECT The first rule allows your iptables configuration to accept traffic for established connections (i.e. those initiated by your own server to other destinations). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118897/"
]
} |
208,819 | Ansible variables come from a variety of sources. It is for example possible to provide host_vars and group_vars by creating YAML files in a subfolder named host_vars and group_vars respectively of the folder containing the inventory file. How can I list all of the variables Ansible would know about a group or host inside a playbook? Note: I tried ansible -m debug -e 'var=hostvars' host and ansible -m debug -e '- debug: var=hostvars' to no avail. Hint: ansible <group|host> -m setup is not the correct answer as it does not include all the variables that come from other sources (it only contains { "ansible_facts" : { ... } } . In fact it does not even include variables provided by a dynamic inventory script (via _meta and so on). Ansible version: 1.9.1. | ansible <host pattern> -m debug -a "var=hostvars[inventory_hostname]" seems to work. Replace <host pattern> by any valid host pattern . Valid variable sources ( host_vars , group_vars , _meta in a dynamic inventory, etc.) are all taken into account. With dynamic inventory script hosts.sh : #!/bin/shif test "$1" = "--host"; then echo {}else cat <<EOF{ "ungrouped": [ "x.example.com", "y.example.com" ], "group1": [ "a.example.com" ], "group2": [ "b.example.com" ], "groups": { "children": [ "group1", "group2" ], "vars": { "ansible_ssh_user": "user" } }, "_meta": { "hostvars": { "a.example.com": { "ansible_ssh_host": "10.0.0.1" }, "b.example.com": { "ansible_ssh_host": "10.0.0.2" } } }}EOFfi You can get: $ chmod +x hosts.sh$ ansible -i hosts.sh a.example.com -m debug -a "var=hostvars[inventory_hostname]"a.example.com | success >> { "var": { "hostvars": { "ansible_ssh_host": "10.0.0.1", "ansible_ssh_user": "user", "group_names": [ "group1", "groups" ], "groups": { "all": [ "x.example.com", "y.example.com", "a.example.com", "b.example.com" ], "group1": [ "a.example.com" ], "group2": [ "b.example.com" ], "groups": [ "a.example.com", "b.example.com" ], "ungrouped": [ "x.example.com", "y.example.com" ] }, "inventory_hostname": "a.example.com", "inventory_hostname_short": "a" } }} | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/208819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
208,824 | In the test environment, we have a directory structure for log files that looks like this: /var/Logs/int/app-id/region-code/log/file-name.log/var/Logs/sat/app-id/region-code/log/file-name.log/var/Logs/cat/app-id/region-code/log/file-name.log There are many app-ids per environment and several region-codes per app-id (depending on app). Is there a single command that will allow me to change directory from int to sat, keeping all of the rest of the path the same? Something like the following: $ pwd/var/Logs/int/abc/01/log$ cdswap int sat$ pwd/var/Logs/sat/abc/01/log$ cdswap abc def$ pwd/var/Logs/sat/def/01/log It would be a bonus if this also worked: $ cdswap def/01 ghi/02$ pwd/var/Logs/sat/ghi/02/log If there is no such command, could I set up an alias that would effectively do the same thing? How would that look? Thanks for your help! | In zsh, cdswap is… cd . When given two arguments, cd replaces the first argument by the second argument in the current directory and changes to the resulting directory. You can emulate this in bash by making cd a function. cd () { local i=1 while [[ "${!i}" = -* ]]; do ((++i)); done if ((i == $# - 1)); then local operands operands=("$@") operands[$i]=${PWD/${!i}/${!#}} if [[ "${operands[$i]}" == "$PWD" ]]; then echo >&2 "cd: string not in pwd: ${operands[$i]}" return 1 fi set -- "${operands[@]:$(($#-1))}" fi builtin cd "$@"} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118904/"
]
} |
208,838 | I'm trying to retrieve the group ID of two groups ( syslog and utmp ) by name using an Ansible task. For testing purposes I have created a playbook to retrieve the information from the Ansible host itself. ---- name: My playbook hosts: enabled sudo: True connection: local gather_facts: False tasks: - name: Determine GIDs shell: "getent group {{ item }} | cut -d : -f 3" register: gid_{{item}} failed_when: gid_{{item}}.rc != 0 changed_when: false with_items: - syslog - utmp Unfortunately I get the following error when running the playbook: fatal: [hostname] => error while evaluating conditional: gid_syslog.rc != 0 How can I consolidate a task like this one into a parametrized form while registering separate variables, one per item , for later use? So the goal is to have variables based on the group name which can then be used in later tasks. I'm using the int filter on gid_syslog.stdout and gid_utmp.stdout to do some calculation based on the GID in later tasks. I also tried using gid.{{item}} and gid[item] instead of gid_{{item}} to no avail. The following works fine in contrast to the above: ---- name: My playbook hosts: enabled sudo: True connection: local gather_facts: False tasks: - name: Determine syslog GID shell: "getent group syslog | cut -d : -f 3" register: gid_syslog failed_when: gid_syslog.rc != 0 changed_when: false - name: Determine utmp GID shell: "getent group utmp | cut -d : -f 3" register: gid_utmp failed_when: gid_utmp.rc != 0 changed_when: false | I suppose there's no easy way for that. And register with with_items loop just puts all results of them into an array variable.results . Try the following tasks: tasks: - name: Determine GIDs shell: "getent group {{ item }} | cut -d : -f 3" register: gids changed_when: false with_items: - syslog - utmp - debug: var: gids - assert: that: - item.rc == 0 with_items: gids.results - set_fact: gid_syslog: "{{gids.results[0]}}" gid_utmp: "{{gids.results[1]}}" - debug: msg: "{{gid_syslog.stdout}} {{gid_utmp.stdout}}" You cannot either use variable expansion in set_fact keys like this: - set_fact: "gid_{{item.item}}": "{{item}}" with_items: gids.results | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
208,846 | I was using lsof to track down deleted files that were still taking up space and I realized that I wasn't quite sure what an offset is with respect to a file. lsof 's man page was less than helpful in this regard and searching around I couldn't get a clear picture of what it is. What is a file offset and why is it useful to have that piece of information? | The offset is the current position in the file, as maintained by the kernel for a given file description (see the lseek(2) and open(2) manpages for details). As to why it's useful in lsof 's output, I'm not really sure. It can give some idea of a process's progress through a file, although it won't cover all cases (memory-mapped files won't show offset changes). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61730/"
]
} |
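lsof only shows the offset when asked for it; the -o flag switches the SIZE/OFF column to the current offset, so a per-process look at read/write progress might be as simple as this sketch (1234 is a placeholder PID):

    lsof -o -p 1234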
208,870 | We use && operator to run a command after previous one finishes. update-system-configuration && restart-service but how do I run a command only if previous command is unsuccessful ? For example, if I have to update system configuration and if it fails I need to send a mail to system admin? Edit: Come on this is not a duplicate question of control operators. This will help users who are searching for specifically this, I understand that answer to control operators will answer this question too, but people searching specifically for how to handle unsuccessful commands won't reach there directly, otherwise I would have got there before asking this question. | && executes the command which follow only if the command which precedes it succeeds. || does the opposite: update-system-configuration || echo "Update failed" | mail -s "Help Me" admin@host Documentation From man bash : AND and OR lists are sequences of one of more pipelines separated by the && and || control operators, respectively. AND and OR lists are executed with left associativity. An AND list has the form command1 && command2 command2 is executed if, and only if, command1 returns an exit status of zero. An OR list has the form command1 || command2 command2 is executed if and only if command1 returns a non-zero exit status. The return status of AND and OR lists is the exit status of the last command executed in the list. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/208870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64355/"
]
} |
208,894 | I want to use GPG for local encryption only, and after reading the man file, I'm doing the following in order to encrypt a whole directory: I zip the directory with a password " zip -r -e foo foo ", then I encrypt it with " gpg -c foo.zip " using a passphrase. Is this an elegant and secure way of encrypting directories? Am I using GPG's full cryptographic power? Are there better alternatives? So there's no a way to encrypt a whole directory without zip it or tar it? | Is this an elegant and secure way of encrypting directories? Elegant -- no. Secure -- as secure as gpg . Am I using GPG's full cryptographic power? Yes. Are there better alternatives? tar the directory first instead of zip . gpg compresses data anyway. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91570/"
]
} |
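A minimal sketch of the tar-then-encrypt route, keeping the symmetric -c mode the question already uses (the directory name foo is taken from the question):

    tar -cf - foo | gpg -c -o foo.tar.gpg     # pack and encrypt in one pipeline
    gpg -d foo.tar.gpg | tar -xf -            # decrypt and unpack

No intermediate cleartext archive is left on disk, which is one advantage over zipping first and encrypting afterwards.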
208,908 | When I scanned my $HOME directory with baobab (Disk Usage Analyzer), I found that ~/.cache is consuming about half a GB. I also tried to restart and again check size but no difference. So, I am planning to rm -rf ~/.cache . Let me know Is it safe to clear ~/.cache ? | It is safe to clear ~/.cache/ , new user accounts start with an empty directory anyway. You might want to log out after doing this though since programs might still use this directory. These programs can be found with this command: find ~/.cache -print0 | xargs -0 lsof -n In my case I would most likely be fine with just closing Firefox before removal. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/208908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
208,911 | I have a file like one below var 3 2014 stringvar1 4 2011 string4var2 6 1999 string2var3 1 2016 string6 Then i have this while read loop to compare one of the columns to a number then echo something. However, instead of echoing my desired phrase, it echoes something else. while read numdoif [ "$num" = "0" ]; thenecho "Number is equal to zero"elseecho "number is not equal to 0"fidone < home/dir/file.txt | awk '{print $2}' instead of echoing the above, it echoes the 2nd column of the file. | you should try awk '{print $2}' home/dir/file.txt | while read numdo if [ "$num" = "0" ]; then echo "Number is equal to zero" else echo "number is not equal to 0" fi done for a mixed awk/bash solution. As other have pointed out, awk redirection occur later. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118954/"
]
} |
208,933 | I need to go back to basics on a VPS but I don't want to wipe it clean. I understand that Debian stores apt-get commands somewhere, but these are separated in to separate log files. Is it possible to bring all these together to get a complete list of all apt-get commands that I've issued (basically so that I can reverse them)? I am looking for this output: $ blahapt-get install libpack-4 libpack-5 libpack-6 libpack-devapt-get purge libpack-4apt-get install blah-1 blah-2apt-get purge somepack-1 apt-get install libpack-4 libpack-5 libpack-6 libpack-devapt-get purge libpack-4apt-get install blah-1 blah-2apt-get purge somepack-1apt-get install libpack-4 libpack-5 libpack-6 libpack-devapt-get purge libpack-4apt-get install blah-1 blah-2apt-get purge somepack-1apt-get install libpack-4 libpack-5 libpack-6 libpack-devapt-get purge libpack-4apt-get install blah-1 blah-2apt-get purge somepack-1 | you should try awk '{print $2}' home/dir/file.txt | while read numdo if [ "$num" = "0" ]; then echo "Number is equal to zero" else echo "number is not equal to 0" fi done for a mixed awk/bash solution. As other have pointed out, awk redirection occur later. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/208933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118967/"
]
} |
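For the question itself (reconstructing past apt-get commands on Debian), the usual source is apt's own history logs rather than shell history; a sketch that pulls the recorded command lines out of the current and rotated logs:

    zgrep -h 'Commandline:' /var/log/apt/history.log*

Each matching line records the exact apt-get invocation, so the list can be reviewed and reversed by hand; entries older than the log rotation window are gone, and /var/log/dpkg.log is the fallback for per-package actions.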
208,960 | I have an Ubuntu server on DigitalOcean and I want to give someone a folder for their domain on my server. My problem is, I don't want that user to see my folders or files or to be able to move out of their folder. How can I restrict this user to their folder and not allow them to move out and see other files/directories? | I solved my problem this way: Create a new group $ sudo addgroup exchangefiles Create the chroot directory $ sudo mkdir /var/www/GroupFolder/ $ sudo chmod g+rx /var/www/GroupFolder/ Create the group-writable directory $ sudo mkdir -p /var/www/GroupFolder/files/ $ sudo chmod g+rwx /var/www/GroupFolder/files/ Give them both to the new group $ sudo chgrp -R exchangefiles /var/www/GroupFolder/ After that I went to /etc/ssh/sshd_config and added to the end of the file: Match Group exchangefiles # Force the connection to use SFTP and chroot to the required directory. ForceCommand internal-sftp ChrootDirectory /var/www/GroupFolder/ # Disable tunneling, authentication agent, TCP and X11 forwarding. PermitTunnel no AllowAgentForwarding no AllowTcpForwarding no X11Forwarding no Now I'm going to add a new user named obama to my group: $ sudo adduser --ingroup exchangefiles obama Now everything is done, so we need to restart the ssh service: $ sudo service ssh restart Note: the user now can't do anything outside the chroot directory; all of their files must live inside the files folder. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/208960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118981/"
]
} |
208,981 | This problem is related to my attempt to import questions and their answers in a Excel file into .txt -file which Anki flashcard program handles as described here .I cannot have more than 2 fields so I need to make options one field. Data stored as CSV from LibreOffice (semicolon as field separator - only distinction what the manual says) as instructed in Anki manual Question ipsun; option 1 ; option 2 ; option 3 ; option 4 ; ... ; option nQuestion ipsun; option 1 ; option 2 ; option 3 ; option 4 ; ... ; option n... where each entry with all options is in one line i.e. one "flashcard". In one card, front-part before semicolon, and back-part after semicolon. Second flashcard in newline and so on. Wanted output which should be in UTF-8 Question ipsun; option 1 | option 2 | option 3 | option 4 | ... | option nQuestion ipsun; option 1 | option 2 | option 3 | option 4 | ... | option n... My pseudocode in Perl based on this answer perl -00 -pe s/;/\0/; s/;/ |/g; s/\0/;/' file Commented perl -00 -pe ' # each record is separated by blank lines (-00) # read the file a record at a time and auto-print (-p) s/;/\0/; # turn the first semicolon into a null byte s/;/ |/g; # replace all other semicolons with " |" s/\0/;/ # restore the first semicolon' file How can you replace all semicolons after 1st semicolon? | sed 'y/|;/\n|/;s/|/;/;y/\n/|/' <<\INQuestion ipsun; option 1 ; option 2 ; option 3 ; option 4 ; ... ; option nIN Note that this does not use a regexp to handle the majority of the replacements, but rather uses a more basic (and far more performant) translation function to do so - and does so in a POSIX portable fashion. This should work on any machine with a POSIX sed installed. It translates ; semicolons to | pipes and | pipes to \n ewlines simultaneously. The | pipes are set aside as \n ewlines in case any occur on an input line. It then s/// ubstitutes the first occurring | pipe for a ; semicolon, and then translates all \n ewlines to | pipes - thus restoring any it might have set aside to robustly handle the single s/// ubstitution. While I use a <<\IN here-document for the sake of copy/pastable demonstration, you should probably use <infile >outfile . OUTPUT: Question ipsun; option 1 | option 2 | option 3 | option 4 | ... | option n | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
208,982 | I am trying to run a bash script I have via cron, and I am getting the following error at the beginning of the execution: tput: No value for $TERM and no -T specified Here is what is in my crontab: 0 8 * * 1-5 cd /var/www/inv/ && /var/www/inv/unitTest run all 2>&1| mail -r "[email protected]" -s "Daily Inventory Unit Test Results" [email protected] | Your unit test script probably calls tput in order to generate pretty output showing which tests pass and fail. Under cron there is no terminal and thus no terminal type ( $TERM ), so tput cannot control the nonexistent terminal. Your unit test script needs to have 2 modes: running on a terminal: it can call tput to generate pretty-looking output not running on a terminal: it should not call tput and instead generate a generic text-only output format that is suitable for piping into an email as you are doing here. The easiest way for the unit tests to know whether or not they are running on a terminal is to test whether the stdio file descriptors refer to a terminal. If it's a shell script, then: if [ -t 1 ]; then tput bold; echo pretty; tput sgr0; else echo ugly; fi Basically: do not call tput unless you are running on a terminal, and you will thus avoid the error you are getting, plus produce reasonable output in whichever mode you happen to be running under. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/208982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118894/"
]
} |
209,009 | I want to set up a directory where all new files and directories have a certain access mask and also the directories have the sticky bit set (the t one, which restricts deletion of files inside those directories). For the first part, my understanding is that I need to set the default ACL for the parent directory. However, new directories do not inherit the t bit from the parent. Hence, non-owners can delete files in the subdirectories. Can I fix that? | This is a configuration that allows members of a group, acltest , to createand modify group files while disallowing the deletion and renaming of filesexcept by their owner and "others," nothing. Using the username, lev andassuming umask of 022: groupadd acltestusermod -a -G acltest lev Log out of the root account and the lev account. Log in and become root or use sudo : mkdir /tmp/acltestchown root:acltest /tmp/acltestchmod 0770 /tmp/acltestchmod g+s /tmp/acltestchmod +t /tmp/acltestsetfacl -d -m g:acltest:rwx /tmp/acltestsetfacl -m g:acltest:rwx /tmp/acltest ACL cannot set the sticky bit, and the sticky bit is not copied to subdirectories. But, you might use inotify or similar software to detect changes in the file system, such as new directories, and then react accordingly. For example, in Debian: apt-get install inotify-tools Then make a script for inotify , like /usr/local/sbin/set_sticky.sh . #!/usr/bin/env bashinotifywait -m -r -e create /tmp/acltest |while read path event file; do case "$event" in *ISDIR*) chmod +t $path$file ;; esacdone Give it execute permission for root : chmod 0700 /usr/local/sbin/set_sticky.sh . Then run it at boot time from, say, /etc/rc.local or whichever RC file is appropriate: /usr/local/sbin/set_sticky.sh & Of course, in this example, /tmp/acltest should disappear on reboot. Otherwise, this should work like a charm. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16669/"
]
} |
209,053 | Is there a way to save the changes I made to my vim buffer as a patch file for the original file, without saving it as a separate file and using diff? | It's possible to do this without a plugin using the w command, so the buffer contents can be used in a shell command: :w !diff -au "%" - > changes.patch ( % is substituted with the path of the file being edited, - reads the buffer from stdin) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119031/"
]
} |
209,068 | I have a file named Element_query containing the result of a query : SQL> select count (*) from element;[Output of the query which I want to keep in my file]SQL> spool off; I want to delete 1st line and last line using shell command. | Using GNU sed : sed -i '1d;$d' Element_query How it works : -i option edit the file itself. You could also remove that option and redirect the output to a new file or another command if you want. 1d deletes the first line ( 1 to only act on the first line, d to delete it) $d deletes the last line ( $ to only act on the last line, d to delete it) Going further : You can also delete a range. For example, 1,5d would delete the first 5 lines. You can also delete every line that begins with SQL> using the statement /^SQL> /d You could delete every blank line with /^$/d Finally, you can combine any of the statement by separating them with a semi-colon ( statement1;statement2;satement3;... ) or by specifying them separately on the command line ( -e 'statement1' -e 'statement 2' ... ) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/209068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
209,086 | Pseudocode which is a continuation from this answer gsed 's/|/1)/g' 's/|/2)/2g' 's/|/3)/3g' 's/|/5)/4g' 's/|/5)/5g' <input.csv >output.csv which is of course not working. I am interested in how gsed can manage such an looping. How does gsed extend to such looping? | For the case above, you can do it like this: gsed 's/|/1)/; s/|/2)/; s/|/3)/; s/|/4)/; s/|/5)/' Example: $ echo '| | | | |' | sed 's/|/1)/; s/|/2)/; s/|/3)/; s/|/4)/; s/|/5)/'1) 2) 3) 4) 5) This works if you can estimate in advance the maximum number of | on a line, and add s/|/N)/ accordingly. If you can't estimate the maximum number of | on a line it can still be done with gsed , using a counter in the hold buffer, and incrementing it with this clever device by Bruno Haible. The actual implementation is a little tricky though, and thus I'll leave it to the masochistic astute reader. The easy way is, of course, to just use awk : awk '{ cnt = 0; while(sub(/\|/, ++cnt ")")); print }' Proof: $ echo '| | | | |' | awk '{ cnt = 0; while(sub(/\|/, ++cnt ")")); print }'1) 2) 3) 4) 5) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
209,123 | I obviously understand that one can add value to internal field separator variable. For example: $ IFS=blah$ echo "$IFS"blah$ I also understand that read -r line will save data from stdin to variable named line : $ read -r line <<< blah$ echo "$line"blah$ However, how can a command assign variable value? And does it first store data from stdin to variable line and then give value of line to IFS ? | In POSIX shells, read , without any option doesn't read a line , it reads words from a (possibly backslash-continued) line, where words are $IFS delimited and backslash can be used to escape the delimiters (or continue lines). The generic syntax is: read word1 word2... remaining_words read reads stdin one byte at a time¹ until it finds an unescaped newline character (or end-of-input), splits that according to complex rules and stores the result of that splitting into $word1 , $word2 ... $remaining_words . For instance on an input like: <tab> foo bar\ baz bl\ah blah\whatever whatever and with the default value of $IFS , read a b c would assign: $a ⇐ foo $b ⇐ bar baz $c ⇐ blah blahwhatever whatever Now if passed only one argument, that doesn't become read line . It's still read remaining_words . Backslash processing is still done, IFS whitespace characters² are still removed from the beginning and end. The -r option removes the backslash processing. So that same command above with -r would instead assign $a ⇐ foo $b ⇐ bar\ $c ⇐ baz bl\ah blah\ Now, for the splitting part, it's important to realise that there are two classes of characters for $IFS : the IFS whitespace characters² (including space and tab (and newline, though here that doesn't matter unless you use -d), which also happen to be in the default value of $IFS ) and the others. The treatment for those two classes of characters is different. With IFS=: ( : being not an IFS whitespace character), an input like :foo::bar:: would be split into "" , "foo" , "" , bar and "" (and an extra "" with some implementations though that doesn't matter except for read -a ). While if we replace that : with space, the splitting is done into only foo and bar . That is leading and trailing ones are ignored, and sequences of them are treated like one. There are additional rules when whitespace and non-whitespace characters are combined in $IFS . Some implementations can add/remove the special treatment by doubling the characters in IFS ( IFS=:: or IFS=' ' ). So here, if we don't want the leading and trailing unescaped whitespace characters to be stripped, we need to remove those IFS white space characters from IFS. Even with IFS-non-whitespace characters, if the input line contains one (and only one) of those characters and it's the last character in the line (like IFS=: read -r word on a input like foo: ) with POSIX shells (not zsh nor some pdksh versions), that input is considered as one foo word because in those shells, the characters $IFS are considered as terminators , so word will contain foo , not foo: . So, the canonical way to read one line of input with the read builtin is: IFS= read -r line (note that for most read implementations, that only works for text lines as the NUL character is not supported except in zsh ). Using var=value cmd syntax makes sure IFS is only set differently for the duration of that cmd command. History note The read builtin was introduced by the Bourne shell and was already to read words , not lines. There are a few important differences with modern POSIX shells. 
The Bourne shell's read didn't support a -r option (which was introduced by the Korn shell), so there's no way to disable backslash processing other than pre-processing the input with something like sed 's/\\/&&/g' there. The Bourne shell didn't have that notion of two classes of characters (which again was introduced by ksh). In the Bourne shell all characters undergo the same treatment as IFS whitespace characters do in ksh, that is IFS=: read a b c on an input like foo::bar would assign bar to $b , not the empty string. In the Bourne shell, with: var=value cmd If cmd is a built-in (like read is), var remains set to value after cmd has finished. That's particularly critical with $IFS because in the Bourne shell, $IFS is used to split everything, not only the expansions. Also, if you remove the space character from $IFS in the Bourne shell, "$@" no longer works. In the Bourne shell, redirecting a compound command causes it to run in a subshell (in the earliest versions, even things like read var < file or exec 3< file; read var <&3 didn't work), so it was rare in the Bourne shell to use read for anything but user input on the terminal (where that line continuation handling made sense) Some Unices (like HP/UX, there's also one in util-linux ) still have a line command to read one line of input (that used to be a standard UNIX command up until the Single UNIX Specification version 2 ). That's basically the same as head -n 1 except that it reads one byte at a time to make sure it doesn't read more than one line. On those systems, you can do: line=`line` Of course, that means spawning a new process, execute a command and read its output through a pipe, so a lot less efficient than ksh's IFS= read -r line , but still a lot more intuitive. ¹ though on seekable input, some implementations can revert to reading by blocks and seek-back afterwards as an optimisation. ksh93 goes even further and remembers what was read and uses it for the next read invocation, though that's currently broken ² IFS whitespace characters , per POSIX being the characters classified as [:space:] in the locale and that happen to be in $IFS though in ksh88 (on which the POSIX specification is based) and in most shells, that's still limited to SPC, TAB and NL. The only POSIX compliant shell in that regard I found was yash . ksh93 and bash (since 5.0) also include other whitespace (such as CR, FF, VT...), but limited to the single-byte ones (beware on some systems like Solaris, that includes the non-breaking-space which is single byte in some locales) | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/209123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
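A small, runnable illustration of the splitting rules described in the answer above; the bracket markers are only there to make stripped whitespace visible, and any POSIX shell should behave the same way:
printf '   foo   bar baz \n' | { read -r line; printf '[%s]\n' "$line"; }       # [foo   bar baz]  leading/trailing IFS whitespace stripped
printf '   foo   bar baz \n' | { IFS= read -r line; printf '[%s]\n' "$line"; }  # [   foo   bar baz ]  nothing stripped
printf 'foo::bar\n' | { IFS=: read -r a b c; printf 'a=%s b=%s c=%s\n' "$a" "$b" "$c"; }  # a=foo b= c=bar  the empty field between the colons survives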
209,152 | When I run update-grub or I try to reinstall it, I get a "syntax error". The output is somewhat like this: error: syntax error.error: Incorrect command.error: syntax error.error: line no: 262Syntax errors are detected in generated GRUB config file.Ensure that there are no errors in /etc/default/gruband /etc/grub.d/* files or please file a bug report with/boot/grub/grub.cfg.new file attached. Why is this happening? What can I do? Background After a Manjaro update, my system did no longer boot. It said "file /boot/vmlinuz-316-x86_64 not found". And then "you need to load the kernel first". I then booted from a usb stick (the manjaro live/installer disk), and followed the instructions from https://wiki.manjaro.org/index.php/Restore_the_GRUB_Bootloader (UEFI systems) with chroot and update-grub. In fact I first noticed the "syntax error" trouble in the step where I tried to reinstall grub, after I got "EFI variables are not supported on this system." I imagine (but don't know for sure) that this might have been going on for a while unnoticed. Any update to grub.cfg failed, but the old grub.cfg was still "good enough". But with the update, the vmlinuz file was renamed, and the grub.cfg referred to an old, no longer existing, vmlinuz file. This is why the boot failed. (I already know the answer while I am writing this. It may not be a complete explanation, but it was enough for me to fix it. I just want to share the result, to save others the trouble) | For me it was a very specific answer, but I want to explain in a more general way how to troubleshoot this. Actually a lot of the information is already in the error message, but to me it was not obvious at first. In short: Follow the line number, in /boot/grub/grub.cfg.new. Try to understand why what you find there is a syntax error. Follow the comment in this file, that points to either /etc/default/grub, or a specific file in /etc/grub/*. In case of a proxy script, follow the hint to a file in /etc/grub.d/proxifiedScripts/ . Troubleshooting steps, in detail The /boot/grub/grub.cfg is automatically created on "update-grub", based on a number of files: /etc/default/grub , and any files in /etc/grub.d/* . /boot/grub/grub.cfg.new However, in case of a syntax error (or any error, I suppose), the original /boot/grub/grub.cfg is NOT overwritten, but instead the new file is created in /boot/grub/grub.cfg.new . The error message contains a line number, in my case 262, that refers to this /boot/grub/grub.cfg.new file. In my case, this was 262. Looking at the file, I found this: ### BEGIN /etc/grub.d/60_memtest86+_proxy ###if [ "${grub_platform}" == "pc" ]; thenfi### END /etc/grub.d/60_memtest86+_proxy ### I learned that en empty if/then/fi block in shell script is not allowed, so this was the syntax error. Quite stupid language design imo, but this is how it is. I also found a fix, which is to add a meaningless statement in the block. A colon was suggested, but there might be other solutions. ### BEGIN /etc/grub.d/60_memtest86+_proxy ###if [ "${grub_platform}" == "pc" ]; then :fi### END /etc/grub.d/60_memtest86+_proxy ### Even better would be to remove this meaningless block completely. Now we don't really want to edit this file manually, because the changes would be wiped on the next update-grub (if successful, which is the goal). /etc/grub.d/* The snippet contains a hint where to look next: /etc/grub.d/60_memtest86+_proxy . 
This file says: #!/bin/sh#THIS IS A GRUB PROXY SCRIPT'/etc/grub.d/proxifiedScripts/memtest86+' | /etc/grub.d/bin/grubcfg_proxy "+*+#text-'Memory Tester (memtest86+)'~30b99791e52c3f0cb32601c5b8f57cc7~" /etc/grub.d/proxifiedScripts/* The relevant part of /etc/grub.d/proxifiedScripts/memtest86+ is this: [..] cat << EOFif [ "\${grub_platform}" == "pc" ]; then menuentry "Memory Tester (memtest86+)" ${CLASS} { search --fs-uuid --no-floppy --set=root ${_GRUB_MEMTEST_HINTS_STRING} ${_GRUB_MEMTEST_FS_UUID} linux16 ${_GRUB_MEMTEST_REL_PATH} ${GRUB_CMDLINE_MEMTEST86} }fiEOF[..] The file itself is a shell script, but then it has those "cat" statements. These print the shell script snippets that should finally go into /boot/grub/grub.cfg . With some modifications, maybe. In the /boot/grub/grub.cfg.new , we observe that the "menuentry ..." stuff is actually missing, and instead we get an empty then..fi block. Why the "menuentry ..." disappears, I don't know. Maybe grub thinks that it is not needed. Unfortunately, the removal breaks the script. Workaround The trick / workaround was to add a colon in this file, like this: if [ "\${grub_platform}" == "pc" ]; then : menuentry "Memory Tester (memtest86+)" ${CLASS} { search --fs-uuid --no-floppy --set=root ${_GRUB_MEMTEST_HINTS_STRING} ${_GRUB_MEMTEST_FS_UUID} linux16 ${_GRUB_MEMTEST_REL_PATH} ${GRUB_CMDLINE_MEMTEST86} } When running update-grub, this generates a grub.cfg with the workaround described above. Background / More investigation The /etc/grub.d/ folder on my system actually contained two files for memtest86+_proxy: 60_memtest86+_proxy and 62_memtest86+_proxy . I assume that one of them is a leftover of some sort. But both of them have the same updated timestamp, so I really don't know which of them would be safe to delete. A diff shows this: --- /etc/grub.d/60_memtest86+_proxy 2015-01-08 15:54:02.228927526 +0100+++ /etc/grub.d/62_memtest86+_proxy 2015-01-08 15:54:02.228927526 +0100@@ -1,6 +1,6 @@ #!/bin/sh #THIS IS A GRUB PROXY SCRIPT-'/etc/grub.d/proxifiedScripts/memtest86+' | /etc/grub.d/bin/grubcfg_proxy "+*-+#text--'Memory Tester (memtest86+)'~30b99791e52c3f0cb32601c5b8f57cc7~+'/etc/grub.d/proxifiedScripts/memtest86+' | /etc/grub.d/bin/grubcfg_proxy "+'Memory Tester (memtest86+)'~30b99791e52c3f0cb32601c5b8f57cc7~+-*+-#text "\ No newline at end of file So, both of the files refer to the same proxified script, but the result is piped through the grubcfg_proxy binary, with different parameters. These different parameters could be responsible for removing the "menuentry ..." stuff in case of the 60_memtest86+_proxy . Conclusion Others may have completely different problems. But the troubleshooting, at least the first steps, should be quite similar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29368/"
]
} |
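The root cause above (an empty then/fi block) is easy to reproduce outside of GRUB. sh -n parses without executing; the exact error wording varies between shells:
sh -n -c 'if [ "$grub_platform" = "pc" ]; then
fi'
# sh: syntax error near unexpected token `fi'   (bash wording; dash reports "fi" unexpected)
sh -n -c 'if [ "$grub_platform" = "pc" ]; then
  :
fi'
# parses cleanly once the ":" no-op gives the block a body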
209,165 | Just starting to learn about btrfs and considering switching. My current thought of btrfs is that operates pretty much like git, with everything tracked, and a commit happening every 30 seconds after changes. But, my gut is telling me I must be misunderstanding, or hard drive space would get used up much faster -- so I'm wondering if it's more like git, with everything tracked, files added to the staging area every 30 second after changes, and files only being committed on snapshots. If I don't do snapshots, can you roll back a single file to several changes ago? Or is that only kept if you do snapshots? i.e. If you run a for loop 10,000 times appending to a file with 31 second sleeps inbetween, are you going to see an ancestry tree for that file of 10,000 entries, and can you go back to each of those? Can btrfs snapshots of root be used and thought of just like VMware/VirtualBox snapshots? Where you can shutdown in one, save its state, move to another, boot, make changes so you have a diverging snapshot branch, and move wherever along the tree you want? If so, is there a bootloader that lets you pick a snapshot tree node? (Without making grub.cfg menu entries for each snapshot.) I label snapshot A, make changes and label it B. If I go back to snapshot A and make changes (even just by booting changing /var/log), are those changes made in a "detached" or "unlabeled" snapshot, so those changes would be invisible if going back to B? If so, what happens if I have changes in this "unlabeled" snapshot, and accidentally ask to change to another before labeling it? When deleting a file, is there "this file is deleted" metadata written, so space is still taken by all the versions of the file? Or, does it delete all previous versions, assuming there's no snapshot still pointing to it? If I build gcc from source, as an example, I think the build directory winds up being 5-8GB. If I build it periodically from source, I'm "chewing up" a bunch of hard drive space, right? (Even assuming delete removes everything for the file being deleted, I don't know how many files are deleted in the build process without a make clean -- whether existing object files are technically deleted or just "re-written inside" of them.) | I think that most of your questions can be answered simply by remembering that in Btrfs, a snapshot is not really special, it's just a Btrfs subvolume. It just happens that when it's created, it has initial contents instead of being empty, and the storage space for those initial contents is shared with whatever subvolume the snapshot came from. A snapshot is just like a (full) copy, except it's more economical because of the shared storage. If I don't do snapshots, can you roll back a single file to several changes ago? No. Just like with any regular filesystem, modifying files is destructive. You can't magically go back to an earlier version. Can btrfs snapshots of root be used and thought of just like VMware/VirtualBox snapshots? VM disk images are usually block devices, not filesystems or files on filesystems, so I think it's a little different. You could use a Btrfs file as backing store for a VM virtual block device, I guess. In which case the answer to that question is yes. Except if you use the NOCOW option (which is actually recommended for disk images). Then probably not, because copy-on-write is the magic that makes snapshots work. I label snapshot A, make changes and label it B. 
If I go back to snapshot A and make changes (even just by booting changing /var/log), are those changes made in a "detached" or "unlabeled" snapshot, so those changes would be invisible if going back to B? Every subvolume (including snapshots) in Btrfs has a name, so you cannot have an "unlabeled" snapshot. In general, any changes you make in one Btrfs subvolume (whether it was created as a snapshot or not) are absolutely not ever visible in another Btrfs subvolume. Just remember that a snapshot is just like a copy, but more economical. When deleting a file, is there "this file is deleted" metadata written, so space is still taken by all the versions of the file? When deleting a file, its directory entry is removed. That is a modification to the directory, and like all modifications, it will be private to the subvolume in which it occurred. Then after that, if and only if the storage space for the file is not used by any other part of the filesystem, it's freed. Deleting a file whose storage is shared between multiple snapshots is a lot like deleting a file in any regular filesystem when it has multiple (hard) links. The storage [inode] is freed iff it is not referenced anymore. If I build gcc from source, as an example, I think the build directory winds up being 5-8GB. If I build it periodically from source, I'm "chewing up" a bunch of hard drive space, right? If you build gcc multiple times in multiple different directories, then yeah, it will use more and more space. If you delete copies in between builds or overwrite the same build directory each time, then, no, there's no particular reason why it would keep using more and more space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117454/"
]
} |
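To see the "a snapshot is just an economical copy" point in practice, here is a minimal sketch; /mnt/data is a placeholder and is assumed to already be a Btrfs subvolume:
btrfs subvolume snapshot /mnt/data /mnt/data-snap     # writable snapshot, created instantly, storage shared
btrfs subvolume snapshot -r /mnt/data /mnt/data-ro    # read-only variant
btrfs subvolume list /mnt                             # snapshots show up as ordinary subvolumes
btrfs subvolume delete /mnt/data-snap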
209,183 | I have 5 million files which take up about 1TB of storage space. I need to transfer these files to a third party. What's the best way to do this? I have tried reducing the size using .tar.gz, but even though my computer has 8GB RAM, I get an "out of system memory" error. Is the best solution to snail-mail the files over? | Additional information provided in the comments reveals that the OP is using a GUI method to create the .tar.gz file. GUI software often includes a lot more bloat than the equivalent command line equivalent software, or performs additional unnecessary tasks for the sake of some "extra" feature such as a progress bar. It wouldn't surprise me if the GUI software is trying to collect a list of all the filenames in memory. It's unnecessary to do that in order to create an archive. The dedicated tools tar and gzip are defintely designed to work with streaming input and output which means that they can deal with input and output a lot bigger than memory. If you avoid the GUI program, you can most likely generate this archive using a completely normal everyday tar invocation like this: tar czf foo.tar.gz foo where foo is the directory that contains all your 5 million files. The other answers to this question give you a couple of additional alternative tar commands to try in case you want to split the result into multiple pieces, etc... | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209183",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4430/"
]
} |
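If a single terabyte-sized archive is awkward to hand over, the same streaming approach can be cut into fixed-size pieces; "foo", the 100G piece size and the part-name prefix are placeholders:
tar cf - foo | gzip | split -b 100G - foo.tar.gz.part-
cat foo.tar.gz.part-* | gzip -d | tar xf -    # reassemble and extract on the receiving side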
209,196 | I have file file1.txt whose contents are as follows: Date List----------- Quarter Date Year Date Month Date Now I want to read the non space elements from each row of file and to write to a variable.For example for row 2 variable should contain Quarter Year only after removing space. I tried: tail -1 file1.txt > variable1 But it doesn't work. | Using sed : variable1="$(< inputfile sed -n '3s/ *//p')" variable1="$([...])" : runs the command [...] in a subshell and assigns its output to the variable $variable < inputfile : redirects the content of inputfile to sed 's stdin -n : suppresses output sed command breakdown : 3 : asserts to perform the following command only on the 3rd line of input s : asserts to perform a substitution / : starts the search pattern * : matches zero or more characters / : stops the search pattern / starts the replacement string / : stops the replacement string (hence actually replacing with nothing) / starts the modifiers p : prints only the lines where the substitution succeeded | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108771/"
]
} |
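The same extraction can be written with awk if you prefer; this sketch strips only leading blanks from line 3, which matches what the sed command above does on this input:
variable1="$(awk 'NR==3 { sub(/^ +/, ""); print }' inputfile)"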
209,211 | I use trash-put to trash files from command line. Recently, I had aliased my rm command to trash-put so that I don't accidentally delete something important. However, what happened now was that I had to delete some files from my /var/log folder to free up some space on the / filesystem. I did this using sudo : sudo rm /var/log/somelog#Above command is equivalent to: sudo trash-put /var/log/somelog After doing this, there was no free space recovered on the partition since the files must have moved to some trash-can. However, when I checked my trash-can there were no files. I tried to see if there was .Trash-100 folder on the / partition, but even that was not there. So, where did my trashed file go? And how do I find it so that I can decimate it to recover some space? | Those files you removed may actually still be opened by another process. In that case the file space will become available when that process closes it's handle to the file. You can lookup these files with lsof : lsof |grep "var/log"|grep deleted | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89385/"
]
} |
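lsof can also filter directly on files that are unlinked but still open (link count below one), which is exactly the state described above; the grep just narrows the listing to /var/log:
lsof +L1 | grep '/var/log'
# restarting or signalling whichever service still holds the handle then releases the space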
209,231 | man 5 shadow says this about the 2 nd field: encrypted password Is that true nowadays? I think it should say "hashed password". Am I correct? | No, the shadow file does not contain encrypted passwords, not on any Unix variant that I've seen. That would require an encryption key somewhere — where would it be? Even the original version of the crypt function was in fact a hash function. It operated by using the password as a key for a variant of DES . The output of crypt is the encryption of a block with all bits zero. Although this uses an encryption function as part of the implementation, the crypt operation is not an encryption operation, it is a hash function : a function whose inverse is hard to compute and such that it is difficult to find two values producing the same output. Within its limitations, the original DES-based crypt implementation followed the basic principles of a good password hash function : irreversible function, with a salt , and a slow-down factor. It's the limitations, not the design, that make it unsuitable given today's computing power: maximum of 8 characters in the password, total size that makes it amenable to brute force, salt too short, iteration count too short. Because of the crypt name (due to the fact that crypt uses encryption internally), and because until recently few people were educated in cryptography, a lot of documentation of the crypt function and of equivalents in other environments describes it as “password encryption”. But it is in fact a password hash, and always has been. Modern systems use password hashing functions based on more robust algorithms. Although some of these algorithms are known as “MD5”, “SHA-256” and “SHA-512”, the hash computation is not something like MD5(password + salt) but an iterated hash which meets the slowness requirement (though common methods lack the memory hardness that protects against GPU-based acceleration). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47954/"
]
} |
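A quick way to convince yourself that the field is a salted hash rather than ciphertext; openssl passwd -6 needs OpenSSL 1.1.1 or newer, and the salt string here is arbitrary:
openssl passwd -6 -salt Xa3bQ9 'secret'
openssl passwd -6 -salt Xa3bQ9 'secret'                   # identical output both times: same input and salt, no key anywhere
sudo awk -F: -v u="$USER" '$1==u{print $2}' /etc/shadow   # $6$ = SHA-512 crypt, $5$ = SHA-256, $1$ = MD5, $y$ = yescrypt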
209,244 | I am trying to clear my filesystem cache from inside a docker container, like so: docker run --rm ubuntu:vivid sh -c "/bin/echo 3 > /proc/sys/vm/drop_caches" If I run this command I get sh: 1: cannot create /proc/sys/vm/drop_caches: Read-only file system which is expected, as I cannot write to /proc from inside the container. Now when I call docker run --rm --privileged ubuntu:vivid sh -c "/bin/echo 3 > /proc/sys/vm/drop_caches" it works, which also makes sense to me, as the --privileged container can do (almost) anything on the host. My question is: how do I find out, which Linux capability I need to set in the command docker run --rm --cap-add=??? ubuntu:vivid sh -c "/bin/echo 3 > /proc/sys/vm/drop_caches" in order to make this work without having to set --privileged ? | The proc filesystem doesn't support capabilities, ACL, or even changing basic permissions with chmod . Unix permissions determine whether the calling process gets access. Thus only root can write that file. With user namespaces, that's the global root (the one in the original namespace); root in a container doesn't get to change sysctl settings. As far as I know, the only solution to change a sysctl setting from inside a non-privileged namespace is to arrange a communication channel with the outside (e.g. a socket or pipe), and have the listening process run as root outside the container. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119152/"
]
} |
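One concrete shape for that root-owned channel, as a sketch only; the FIFO path and the "drop" keyword are arbitrary choices, and the listening loop must run as real root outside the container:
# on the host:
mkfifo /run/drop-caches.fifo
while read -r req < /run/drop-caches.fifo; do
  [ "$req" = drop ] && echo 3 > /proc/sys/vm/drop_caches
done
# in the container, with the FIFO bind-mounted in (e.g. -v /run/drop-caches.fifo:/run/drop-caches.fifo):
echo drop > /run/drop-caches.fifo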
209,249 | HP-UX ***** B.11.23 U ia64 **** unlimited-user license find . -type d -name *log* | xargs ls -la gives me the directory names (the ones which contain log in the directory name) followed by all files within that directory. The directories /var/opt/SID/application_a/log/ , /var/opt/SID/application_b/log/ , /var/opt/SID/application_c/log/ and so on contain log files. I want only the two latest logfiles to be listed by the ls command, which I usually find using ls -latr | tail -2 . The output has to be something like this.. /var/opt/SID/application_a/log/-rw-rw-rw- 1 user1 user1 59698 Jun 11 2013 log1-rw-rw-rw- 1 user1 user1 59698 Jun 10 2013 log2/var/opt/SID/application_b/log/-rw-rw-rw- 1 user1 user1 59698 Jun 11 2013 log1-rw-rw-rw- 1 user1 user1 59698 Jun 10 2013 log2/var/opt/SID/application_c/log/-rw-rw-rw- 1 user1 user1 59698 Jun 11 2013 log1-rw-rw-rw- 1 user1 user1 59698 Jun 10 2013 log2 find . -type d -name *log* | xargs ls -la | tail -2 does not give me the above result. What I get is a list of last two files of find . -type d -name *log* | xargs ls -la command. So can I pipe commands after a piped xargs ? How else do I query, to get the resultant list of files in the above format? find . -type d -name *log* | xargs sh -c "ls -ltr | tail -10" gives me a list of ten directory names inside the current directory which happens to be /var/opt/SID and that is also not what I want. | You are almost there. In your last command, you can use -I to do the ls correctly -I replace-str Replace occurrences of replace-str in the initial-arguments with names read from standard input. Also, unquoted blanks do not terminate input items; instead the separator is the newline character. Implies -x and -L 1 . So, with find . -type d -name "*log*" | xargs -I {} sh -c "echo {}; ls -la {} | tail -2" you will echo the dir found by find , then do the ls | tail on it. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/209249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102998/"
]
} |
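A variant of the same command that survives spaces in directory names (GNU find and xargs assumed); the directory is passed as a positional parameter instead of being spliced into the shell string:
find . -type d -name '*log*' -print0 |
  xargs -0 -I{} sh -c 'echo "$1"; ls -la "$1" | tail -n 2' sh {}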
209,317 | On a distributed System (NFS root), I'd like to change the way the desktopappears to users that have never logged in on the system before, i.e. have no settings set.I'd like to change some desktop symbols, change the default activity to desktop symbols and switch from the simple program launcher to kickoff. How would I do that? | Since plasma5's modularization the config files are not saved in a single folder anymore. You will find different files in ~/.config (typically ending with .rc) and some other parts in ~/.local (e.g. plasma themes). The plasmoids are saved in "~/.config/plasma-org.kde.plasma.desktop-appletsrc". It also saves the folder view desktop and the kickoff launcher. A template folder for new users is located under /etc/skel/ . The contained files will be copied to the users home directory while creating a new user. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119215/"
]
} |
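To turn a prepared desktop into the default for accounts created later, the relevant files can be seeded into /etc/skel; the applet file name comes from the answer above, everything else is an example:
sudo mkdir -p /etc/skel/.config
sudo cp ~/.config/plasma-org.kde.plasma.desktop-appletsrc /etc/skel/.config/
# any other ~/.config/*rc files (and theme pieces under ~/.local) can be copied the same way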
209,356 | I just installed Kali Linux on Dell Inspiron 1545, and am unable to get wireless connection. I attempted the following root@kali:~# lspci -knn | grep Net -A20c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] (rev 01) Subsystem: Dell Wireless 1397 WLAN Mini-Card [1028:000c]# apt-get install firmware-iwlwifi# modprobe -r iwlwifi; modprobe iwlwifi I even tried commands I used when I installed Ubuntu on the same laptop sudo apt-get install bcmwl-kernel-sourcesudo modprobe -r b43 bcmasudo modprobe wl How to fix? UPDATE I also ran sudo apt-get install kali-linux-wireless but I still cannot detect and connect to my home wi-fi. Thanks! Tried first solution I tried first solution, but when I go to Administration > System, I get package manager. I searched for everything Broadcom downloaded, installed, rebooted, but still no wireless. There has to be an easy solution ...... Am trying this, http://www.chokepoint.net/2014/04/installing-broadcom-bcm43142-drivers-on.html I was getting errors with install. After I reboot, still no wireless detector. Update I'm trying all your suggestions .... seems I will have to read the links you provided. Who knew getting wireless would be so difficult! I will extend bounty time if possible. | May be your wireless card is in turned off state, does the laptop have any dedicated physical switch or key combo(like Fn+F3 on my acer laptop) to turn on/off Wi-Fi ? most laptops also have a LED to show Wi-Fi card state. Device firmwares are not pre installed in kali-linux(my last used version 1.0.4 can't tell about latest versions) , so if not already installed, install them. sudo apt-get install firmware-linux firmware-linux-free firmware-linux-nonfree Install Broadcom wireless card firmware sudo apt-get install firmware-brcm80211 firmware-b43-installer firmware-b43legacy-installer broadcom-sta-dkms Then use proper kernel drivers, b43 or b43legacy, iwlwifi is Intel Wi-Fi card driver so firmware-iwlwifi is not necessary. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119230/"
]
} |
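Before chasing firmware it is also worth ruling out a blocked radio; rfkill ships with most distributions, Kali included:
rfkill list
# "Soft blocked: yes"  ->  rfkill unblock wifi
# "Hard blocked: yes"  ->  use the physical switch / Fn key combination mentioned above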
209,381 | In setting it up so I could execute Sublime Text from bash, I discovered two methods of doing this via different tutorials: Method 1) Create a symlink from /usr/local/bin/subl to Sublime's bin dir: sudo ln -s /Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl /usr/local/bin/subl This took advantage that /usr/local/bin is already in my PATH variable. ...or... Method 2) Update your PATH to include the path to Sublime's bin folder: export PATH="/Applications/Sublime Text.app/Contents/SharedSupport/bin/":$PATH Both work, but I'm wondering if one method is better than the other, or are they equally fine? The only advantage I can potentially see is for Method 1 if is if it's beneficial to have less dirs in your PATH directory (speed/performance in looking for the executable?). | If you are looking for way to run/execute program/script as a command directly from terminal , then I think putting scripts or links to /usr/local/bin is good choice! Also the advantage is that it is already in path. Visit this related post. But if a program directory provides several executables, then I think exporting path of that directory may useful/better than creating several symlinks individually. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119257/"
]
} |
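Whichever method you choose, you can confirm what the shell will actually execute; the path shown is only what method 1 would produce:
command -v subl               # e.g. /usr/local/bin/subl
ls -l "$(command -v subl)"    # with the symlink approach this also shows where it points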
209,390 | In Bash, which sets $HOSTNAME for you, I was able to calculate the total length of prompt line simply by using something like: PS1_length=$((${#USER}+${#HOSTNAME}+${#PWDNAME}+12+${#PS1_GIT})) It was useful e.g. when creating the fill-line like this: However zsh fails to set $HOSTNAME correctly and I can't think of a way to emulate the similar behavior in it — any thoughts? | You should just set HOSTNAME=$(hostname) in your ~/.zshrc Or as Caleb pointed out there is a variable HOST set, so to keep your prompt portable you could also do: HOSTNAME=$HOST | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20096/"
]
} |
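With that in place the prompt-length arithmetic from the question works unchanged in zsh; PWDNAME and PS1_GIT are the asker's own variables, and ${#HOST} can be used directly if you prefer not to define HOSTNAME at all:
HOSTNAME=$HOST
PS1_length=$(( ${#USER} + ${#HOSTNAME} + ${#PWDNAME} + 12 + ${#PS1_GIT} ))
PS1_length=$(( ${#USER} + ${#HOST} + ${#PWDNAME} + 12 + ${#PS1_GIT} ))    # equivalent, no extra variable needed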
209,393 | I am in the process of porting my "configure openvpn server" from ubuntu 14 to debian 8. So far it works well except for this section: # Set up iptables to forward packets for vpn and do this upon startup.echo 'iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPTiptables -A FORWARD -s 10.8.0.0/24 -j ACCEPTiptables -A FORWARD -j REJECTiptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADEexit 0' > /etc/rc.local It appears that debian 8 does not use this /etc/rc.local file so my vpn server won't forward traffic correctly after a reboot. I have to manually call that script, or execute the commands. What is the "debian 8 way" for updating the iptables on boot? Update After reading that /etc/rc.local should work, I made sure the permissions were set to 755 and updated the script to as follows: /bin/echo "starting..." > /root/rc.local.log/sbin/iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT/sbin/iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT/sbin/iptables -A FORWARD -j REJECT/sbin/iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE/bin/echo "completed successfully" >> /root/rc.local.logexit 0 I then created an empty /root/rc.local.log and gave it 777 permissions before rebooting. The file remained empty, making me think that the /etc/rc.local script is not being executed at all. | The Debian way of setting up iptables on boot is by using the iptables-persistent package. Simply install the iptables-persistent package, set up the iptables rules like you want them, and then run netfilter-persistent save . (Note that the command starts with netfilter and not iptables .) See the man page for netfilter-persistent for more details. The method for saving the tables has changes since Debian 7 (Wheezy). In Wheezy one would do something like: invoke-rc.d iptables-persistent save . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64440/"
]
} |
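Putting the whole thing together on Debian 8, with the rules taken from the question; iptables-persistent stores them in /etc/iptables/rules.v4 and reloads them at boot:
sudo apt-get install iptables-persistent
sudo iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
sudo iptables -A FORWARD -j REJECT
sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
sudo netfilter-persistent save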
209,404 | Given a file L with one non-negative integer per line and text file F, what would be a fast way to keep only those lines in F, whose line number appears in file L? Example: $ cat L.txt13$ cat F.txtHello WorldHallo WeltHola mundo$ command-in-question -x L.txt F.txtHello WorldHola mundo I'm looking for a command that can handle a file L with 500 million or more entries; file L is sorted numerically. Note: I'm halfway through an implementation for a command-in-question but I just wondered, whether one might be able to use some Unix tools here as well. Update: Thank for all the answers, I learned a lot today! I would like to accept more one answer, but that's not possible. | grep -n | sort | sed | cut ( export LC_ALL=C grep -n '' | sort -t: -nmk1,1 ./L - | sed /:/d\;n | cut -sd: -f2-) <./F That should work pretty quickly (some timed tests are included below) with input of any size. Some notes on how: export LC_ALL=C Because the point of the following operation is to get the entire file of ./F stacked up inline with its ./L lineno's file, the only characters we'll really need to worry about are ASCII [0-9] digits and the : colon. For that reason it is more simple to worry about finding those 11 characters in a set of 128 possibles than it is if UTF-8 is otherwise involved. grep -n '' This inserts the string LINENO: into the head of every line in stdin - or <./F . sort -t: -nmk1,1 ./L - sort neglects to sort its input files at all, and instead (correctly) presumes they are presorted and -m erges them in -numerically sorted order, ignoring basically anything beyond any possible -k1,1 st occurring -t: colon character anyway. While this may require some temp space to do (depending on how far apart some sequences may occur) , it will not require much as compared to a proper sort, and it will be very fast because it involves zero backtracking. sort will output a single stream where any lineno's in ./L will immediately precede the corresponding lines in ./F . ./L 's lines always come first because they are shorter. sed /:/d\;n If the current line matches a /:/ colon d elete it from output. Else, auto-print the current and n ext line. And so sed prunes sort 's output to only sequential line pairs which do not match a colon and the following line - or, to only a line from ./L and then the next. cut -sd: -f2- cut -s uppresses from output those of its input lines which do not contain at least one of its -d: elimiter strings - and so ./L 's lines are pruned completely. For those lines which do, their first : colon-delimited -f ield is cut away - and so goes all of grep 's inserted lineno's. small input test seq 5 | sed -ne'2,3!w /tmp/L s/.*/a-z &\& 0-9/p' >/tmp/F ...generates 5 lines of sample input. Then... ( export LC_ALL=C; </tmp/F \ grep -n '' | sort -t: -nmk1,1 ./L - | sed /:/d\;n | cut -sd: -f2-)| head - /tmp[FL] ...prints... ==> standard input <==a-z 1& 0-9a-z 4& 0-9a-z 5& 0-9==> /tmp/F <==a-z 1& 0-9a-z 2& 0-9a-z 3& 0-9a-z 4& 0-9a-z 5& 0-9==> /tmp/L <==145 bigger timed tests I created a couple of pretty large files: seq 5000000 | tee /tmp/F |sort -R | head -n1500000 |sort -n >/tmp/L ...which put 5mil lines in /tmp/F and 1.5mil randomly selected lines of that into /tmp/L . 
I then did: time \( export LC_ALL=C grep -n '' | sort -t: -nmk1,1 ./L - | sed /:/d\;n | cut -sd: -f2-) <./F |wc - l It printed: 1500000grep -n '' \ 0.82s user 0.05s system 73% cpu 1.185 totalsort -t: -nmk1,1 /tmp/L - \ 0.92s user 0.11s system 86% cpu 1.185 totalsed /:/d\;n \ 1.02s user 0.14s system 98% cpu 1.185 totalcut -sd: -f2- \ 0.79s user 0.17s system 80% cpu 1.184 totalwc -l \ 0.05s user 0.07s system 10% cpu 1.183 total (I added the backslashes there) Among the solutions currently offered here, this is the fastest of all of them but one when pitted against the dataset generated above on my machine. Of the others only one came close to contending for second-place, and that is meuh's perl here . This is by no means the original solution offered - it has dropped a third of its execution time thanks to advice/inspiration offered by others. See the post history for slower solutions (but why?) . Also, it is worth noting that some other answers might very well contend better if it were not for the multi-cpu architecture of my system and the concurrent execution of each of the processes in that pipeline. They all work at the same time - each on its own processor core - passing around the data and doing their small part of the whole. It's pretty cool. but the fastest solution is... But it is not the fastest solution. The fastest solution offered here, hands-down, is the C program . I called it cselect . After copying it to my X clipboard, I compiled it like: xsel -bo | cc -xc - -o cselect I then did: time \ ./cselect /tmp/L /tmp/F |wc -l ...and the results were... 1500000./cselect /tmp/L /tmp/F \ 0.50s user 0.05s system 99% cpu 0.551 totalwc -l \ 0.05s user 0.05s system 19% cpu 0.551 total | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209404",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376/"
]
} |
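For moderately sized inputs a two-file awk one-liner does the same selection; note that it keeps every number from L.txt in memory, so it is not expected to cope with the 500-million-entry case from the question:
awk 'NR==FNR { want[$1]; next } FNR in want' L.txt F.txt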
209,409 | I have written an expect script that works perfect for testing passwords on our network until it has looped through my list quite a bit, then the debug output shows that there is nothing to match. Here is some debug that shows the buffer with data in it, then the next spawn with no data: ssh: connect to host 12.23.34.56 port 22: No route to hostexpect: does "ssh: connect to host 12.23.34.56 port 22: No route to host\r\r\n" (spawn_id exp69) match glob pattern "(yes/no)? "? no"assword: "? no"]# "? no"oute to host"? yesexpect: set expect_out(0,string) "oute to host"expect: set expect_out(spawn_id) "exp69"expect: set expect_out(buffer) "ssh: connect to host 12.23.34.56 port 22: No route to host"spawn ssh -o PreferredAuthentications=password -o NumberOfPasswordPrompts=3 [email protected]: waiting for sync byteparent: telling child to go aheadparent: now unsynchronized from childspawn: returns {26418}expect: does "" (spawn_id exp70) match glob pattern "(yes/no)? "? no"assword: "? no"]# "? no"oute to host"? no"onnection refused"? no"isconnected from"? noexpect: timed out Every spawn_id after exp69 has nothing in the does "" match section. I think this is related to the buffer some how, but I've tried: match_max 200000match_max 600000 And this didn't seem to make any difference. I've removed the real ips and changed it to x.x.x.x. I"m not actually testing x.x.x.x (but the 12.23.34.56 did slip into my list of servers to check) The script itself is running expect and loops over another file called "servers.txt" and tries line by line to execute a series of commands on that server. It logs what worked and what didn't. Here is what is in the script: #!/usr/bin/expect# where to log info, open "writable", this is our homegrown log# unlike the others that followset logfile [open "passcheck-results" "w"]# clobber and loglog_file -noappend passcheck-logfile# disable user viewing of process as it happens# 1=on, 0=off# disables screen output only, recommended 0log_user 1# enable verbose debugging, from expect# 1=on, 0=off# useful for debugging the script itself, recommended: 0exp_internal -f debug.log 0# default waits for 10s, and if no response, kills process# too low and machines may not respond fast enough# in particular real timeouts are not registered if too low# set to -1 for infinite, real timeouts can take 60s or more# instead of waiting 60s for EVERY failure, set this lower# recommend the 10s defaultset timeout 10# if you do not get all the response, you expect, increase buffermatch_max 600000# get by argv functions instead# set nohistory save on CLI/bashset passwords { password1 password2 password3 }# open the list of servers to process# warning, no error checking on file openset input [open "servers.txt" "r"]# loop over the list of servers with prompt logicforeach ip [split [read $input] "\n"] { # slowing it down a bit sleep 2 # had to change =password to get results I wanted # loop over line of servers spawn ssh -o PreferredAuthentications=password \ -o NumberOfPasswordPrompts=[llength $passwords] admin@$ip # verify where exactly to reset this count set try 0 # account for other possibilities expect { "(yes/no)? 
" { # new host detected sleep 1 send "yes\r" exp_continue } "assword: " { if { $try >= [llength $passwords] } { puts $logfile "Bad_Passwords $ip" #send_error ">>> wrong passwords\n" exit 1 } sleep 1 send [lindex $passwords $try]\r incr try exp_continue } "\]\# " { puts $logfile "succeeded_$try $ip" sleep 1 send "exit\r" } "oute to host" { puts $logfile "No_Route_to_Host $ip" } "onnection refused" { puts $logfile "Refused $ip" } "isconnected from" { puts $logfile "Disconnected $ip" } timeout { puts $logfile "timed_out_or_fail $ip" } }} | grep -n | sort | sed | cut ( export LC_ALL=C grep -n '' | sort -t: -nmk1,1 ./L - | sed /:/d\;n | cut -sd: -f2-) <./F That should work pretty quickly (some timed tests are included below) with input of any size. Some notes on how: export LC_ALL=C Because the point of the following operation is to get the entire file of ./F stacked up inline with its ./L lineno's file, the only characters we'll really need to worry about are ASCII [0-9] digits and the : colon. For that reason it is more simple to worry about finding those 11 characters in a set of 128 possibles than it is if UTF-8 is otherwise involved. grep -n '' This inserts the string LINENO: into the head of every line in stdin - or <./F . sort -t: -nmk1,1 ./L - sort neglects to sort its input files at all, and instead (correctly) presumes they are presorted and -m erges them in -numerically sorted order, ignoring basically anything beyond any possible -k1,1 st occurring -t: colon character anyway. While this may require some temp space to do (depending on how far apart some sequences may occur) , it will not require much as compared to a proper sort, and it will be very fast because it involves zero backtracking. sort will output a single stream where any lineno's in ./L will immediately precede the corresponding lines in ./F . ./L 's lines always come first because they are shorter. sed /:/d\;n If the current line matches a /:/ colon d elete it from output. Else, auto-print the current and n ext line. And so sed prunes sort 's output to only sequential line pairs which do not match a colon and the following line - or, to only a line from ./L and then the next. cut -sd: -f2- cut -s uppresses from output those of its input lines which do not contain at least one of its -d: elimiter strings - and so ./L 's lines are pruned completely. For those lines which do, their first : colon-delimited -f ield is cut away - and so goes all of grep 's inserted lineno's. small input test seq 5 | sed -ne'2,3!w /tmp/L s/.*/a-z &\& 0-9/p' >/tmp/F ...generates 5 lines of sample input. Then... ( export LC_ALL=C; </tmp/F \ grep -n '' | sort -t: -nmk1,1 ./L - | sed /:/d\;n | cut -sd: -f2-)| head - /tmp[FL] ...prints... ==> standard input <==a-z 1& 0-9a-z 4& 0-9a-z 5& 0-9==> /tmp/F <==a-z 1& 0-9a-z 2& 0-9a-z 3& 0-9a-z 4& 0-9a-z 5& 0-9==> /tmp/L <==145 bigger timed tests I created a couple of pretty large files: seq 5000000 | tee /tmp/F |sort -R | head -n1500000 |sort -n >/tmp/L ...which put 5mil lines in /tmp/F and 1.5mil randomly selected lines of that into /tmp/L . 
I then did: time \( export LC_ALL=C grep -n '' | sort -t: -nmk1,1 ./L - | sed /:/d\;n | cut -sd: -f2-) <./F |wc - l It printed: 1500000grep -n '' \ 0.82s user 0.05s system 73% cpu 1.185 totalsort -t: -nmk1,1 /tmp/L - \ 0.92s user 0.11s system 86% cpu 1.185 totalsed /:/d\;n \ 1.02s user 0.14s system 98% cpu 1.185 totalcut -sd: -f2- \ 0.79s user 0.17s system 80% cpu 1.184 totalwc -l \ 0.05s user 0.07s system 10% cpu 1.183 total (I added the backslashes there) Among the solutions currently offered here, this is the fastest of all of them but one when pitted against the dataset generated above on my machine. Of the others only one came close to contending for second-place, and that is meuh's perl here . This is by no means the original solution offered - it has dropped a third of its execution time thanks to advice/inspiration offered by others. See the post history for slower solutions (but why?) . Also, it is worth noting that some other answers might very well contend better if it were not for the multi-cpu architecture of my system and the concurrent execution of each of the processes in that pipeline. They all work at the same time - each on its own processor core - passing around the data and doing their small part of the whole. It's pretty cool. but the fastest solution is... But it is not the fastest solution. The fastest solution offered here, hands-down, is the C program . I called it cselect . After copying it to my X clipboard, I compiled it like: xsel -bo | cc -xc - -o cselect I then did: time \ ./cselect /tmp/L /tmp/F |wc -l ...and the results were... 1500000./cselect /tmp/L /tmp/F \ 0.50s user 0.05s system 99% cpu 0.551 totalwc -l \ 0.05s user 0.05s system 19% cpu 0.551 total | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
209,442 | If I had a log file looking like this: AABCCCABB I would like to output (remove the immediately-successive duplicates): ABCAB How do I do this? | That's the job for uniq : LC_ALL=C uniq file GNU uniq in some locales can report only the first of a sequence of lines that merely sort the same. Using LC_ALL=C forces plain byte comparison, which gives you a consistent result. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50597/"
]
} |
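With the sample from the question written one letter per line, the behaviour looks like this:
printf '%s\n' A A B C C C A B B | LC_ALL=C uniq
# prints A B C A B, one per line: adjacent duplicates are collapsed, non-adjacent repeats are kept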
209,497 | I'm trying to write a very basic cron job but it doesn't seem to be saving. Here's what I've done: 1) crontab -e This opens up a file with vim. 2) #!usr/bin/env python0 23 * * 0 ~/Desktop/SquashScraper/helpfulFunctions.py 3) :wq 4) crontab -l Nothing shows up and I get this message: crontab: no crontab for ben I've looked around and most people with similar issues had editor problems. My crontab opens correctly with vim, so that doesn't seem to be the issue. Any idea why this might not be working / saving properly? Thanks,bclayman Edit to include: | For some reason /usr/bin/vi is not working correctly on your machine as you can tell from the error message: crontab: "/usr/bin/vi" exited with status 1 What happened there is that when you leave vi it is producing an error code. When crontab sees that vi exited with an error code, it will not trust the contents of the file vi was editing and simply doesn't make any changes to your crontab. You can try to investigate further why vi is not working, or if you prefer to, you can use a completely different editor. For example if you prefer to use vim , you can type: EDITOR=/usr/bin/vim crontab -e Alternatively you can keep the "official" version of your crontab under your home directory. Then edit the version under your home directory and finally install it using: crontab filename | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119330/"
]
} |
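The file-based workflow suggested at the end of the answer sidesteps the broken editor completely; the interpreter name in the sample entry is an assumption, adjust it to whatever actually runs the script:
crontab -l > ~/mycrontab 2>/dev/null     # start from the current crontab, if any
echo '0 23 * * 0 python "$HOME/Desktop/SquashScraper/helpfulFunctions.py"' >> ~/mycrontab
crontab ~/mycrontab
crontab -l                               # verify that the entry was installed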
209,498 | When doing a search like find -type d , adding the - print0 argument right after the find command such as find -print0 -type d causes the search to return more results than without it. | If you understand the && and || operators in the shell(and also in C, C++, and derivative languages),then you understand -a and -o in find . To refresh your memory: In the shell, command1 && command2 runs command1 , and, if it ( command1 ) succeeds,it (the shell) runs command2 . command1 || command2 runs command1 , and, if it ( command1 ) fails,it (the shell) runs command2 . In the compilable languages, expr1 && expr2 evaluates expr1 . If it ( expr1 ) evaluates to false (zero),it returns that as the value of the overall expression. Otherwise (if expr1 evaluates to a true (non-zero) value),it evaluates expr2 and returns that as the value of the overall expression. expr1 || expr2 evaluates expr1 . If it ( expr1 ) evaluates to a true (non-zero) value,it returns that as the value of the overall expression. Otherwise (if expr1 evaluates to false (zero))it evaluates expr2 and returns that as the value of the overall expression. This is known as “short-circuit evaluation”, in that it allows the evaluation of a Boolean expression without evaluating terms whose values are not needed to determine the overall value of the expression. Quoting from find(1) , GNU find searches the directory tree rooted at each given file nameby evaluating the given expression from left to right,according to the rules of precedence (see section OPERATORS),until the outcome is known(the left hand side is false for and operations, true for or ),at which point find moves on to the next file name. ⋮ EXPRESSIONS The expression is made up of … tests (which return a true or false value),and actions (which have side effects and return a true or false value),all separated by operators. -and is assumed where the operator is omitted. ⋮ The subsection on ACTIONS states that -print , like most of the actions, always returns a value of true. ⋮ OPERATORS ⋮ expr1 expr2 expr1 -a expr2 expr1 -and expr2 ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ not POSIX compliant And; expr2 is not evaluated if expr1 is false. expr1 -o expr2 expr1 -or expr2 ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ not POSIX compliant Or; expr2 is not evaluated if expr1 is true. The Open Group Specification for find has similar things to say: The find utility shall recursively descend the directory hierarchy …,evaluating a Boolean expression composed of the primariesdescribed in the OPERANDS section for each file encountered. ⋮ OPERANDS ⋮ -print The primary shall always evaluate as true;it shall cause the current pathname to be written to standard output. ⋮ The primaries can be combined using the following operators(in order of decreasing precedence): ⋮ expression [-a] expression Conjunction of primaries; the AND operator is impliedby the juxtaposition of two primariesor made explicit by the optional -a operator. The second expression shall not be evaluatedif the first expression is false. expression -o expression Alternation of primaries; the OR operator. The second expression shall not be evaluatedif the first expression is true. Both documents say,“If no expression is present, -print shall be used as the expression.” ---------------- TL;DR ---------------- So, find -type d is equivalent to find -type d -print which is equivalent to find -type d -a -print which means, for each file, evaluate the -type d test. 
If it is true (i.e., if the current “file” is a directory), evaluate (perform) the -print action. Whereas, find -print -type d is equivalent to find -print -a -type d which means, for each file, evaluate (perform) the -print action (i.e., this happens for all files). If it is true (which -print always is), evaluate the -type d test. And, since that's the end of the command, the result of the -type d test is ignored. So there you have it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74995/"
]
} |
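The ordering effect is easy to see directly with the -print0 from the original question; tr only makes the NUL-separated output readable:
find . -type d -print0 | tr '\0' '\n'    # directories only
find . -print0 -type d | tr '\0' '\n'    # every name: -print0 has already run before -type d is tested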
209,529 | I am trying to use 'date' to get the time in a different time zone, and failing. All the methods I've found on google involve changing the time zone on the system, but that is not what I want. Is there a single command that will return current time in a time zone different from my own? | Timezones are listed in /usr/share/zoneinfo . If you wanted the current time in Singapore, for example, you could pass that to date : TZ=Asia/Singapore dateSun Jun 14 17:17:49 SGT 2015 To simplify this procedure, if you need to frequently establish the local time in different timezones, you could add a couple of functions to your shell rc file (eg, .bashrc ): zones() { ls /usr/share/zoneinfo/"$1" ;}zone() { TZ="$1"/"$2" date; } The first will print the correct zone list for a region, and armed with that information, you can then print the local time. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104893/"
]
} |
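Usage of the two helpers from the answer; note that zone takes the region and the city as two separate arguments:
zones Asia               # list the zone names under Asia from /usr/share/zoneinfo
zone Asia Singapore      # print the current time there
TZ=Europe/Paris date     # the one-off form, no function needed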
209,534 | Linux Mint 17.1 (MATE) running on HP G250 Laptop and older HP desktops. It's just me and the dog at home and I like to run the computer all day, but it keeps returning to the login screen after a few minutes of inactivity. Typing the long "secure password" all day gets tiring and I'd like to at least lengthen the time, or even stop the timeout alltogether. | In the terminal type mate-screensaver-preferences & , or from the Control Panel, select Screensaver - then deselect Lock screen when screensaver is active . You can find timeout settings there, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67386/"
]
} |
209,566 | I created an img file via the following command: dd if=/dev/zero bs=2M count=200 > binary.img It's just a file with zeroes, but I can use it in fdisk and create a partition table: # fdisk binary.imgDevice does not contain a recognized partition table.Created a new DOS disklabel with disk identifier 0x51707f21.Command (m for help): pDisk binary.img: 400 MiB, 419430400 bytes, 819200 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x51707f21 and, let's say, one partition: Command (m for help): nPartition type p primary (0 primary, 0 extended, 4 free) e extended (container for logical partitions)Select (default p): pPartition number (1-4, default 1): First sector (2048-819199, default 2048): Last sector, +sectors or +size{K,M,G,T,P} (2048-819199, default 819199): Created a new partition 1 of type 'Linux' and of size 399 MiB.Command (m for help): wThe partition table has been altered.Syncing disks. When I check the partition table, I get the following result: Command (m for help): pDisk binary.img: 400 MiB, 419430400 bytes, 819200 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x7f3a8a6aDevice Boot Start End Sectors Size Id Typebinary.img1 2048 819199 817152 399M 83 Linux So the partition exists. When I try to format this partition via gparted, I get the following error: I don't know why it looks for binary.img1 , and I have no idea how to format the partition from command live. Does anyone know how to format it using ext4 filesystem? | You can access the disk image and its individual partitions via the loopback feature. You have already discovered that some disk utilities will operate (reasonably) happily on disk images. However, mkfs is not one of them (but strangely mount is). Here is output from fdisk -lu binary.img : Disk binary.img: 400 MiB, 419430400 bytes, 819200 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytes...Device Boot Start End Sectors Size Id Typebinary.img1 2048 819199 817152 399M 83 Linux To access the partition you've created you have a couple of choices The explicit route losetup --offset $((512*2048)) --sizelimit $((512*817152)) --show --find binary.img/dev/loop0 The output /dev/loop0 is the name of the loop device that has been allocated. The --offset parameter is just the partition's offset ( Start ) multiplied by the sector size ( 512 ). Whereas --sizelimit is the size of the partition, and you can calculate it in the following way: End-Start+1, which is 819199-2048+1=817152 , and that number also has to be multiplied by the sector size. You can then use /dev/loop0 as your reference to the partition: mkfs -t ext4 -L img1 /dev/loop0mkdir -p /mnt/img1mount /dev/loop0 /mnt/img1...umount /mnt/img1losetup -d /dev/loop0 The implicit route losetup --partscan --show --find binary.img/dev/loop0 The output /dev/loop0 is the name of the primary loop device that has been allocated. In addition, the --partscan option tells the kernel to scan the device for a partition table and assign subsidiary loop devices automatically. 
In your case with the one partition you also get /dev/loop0p1 , which you can then use as your reference to the partition:
mkfs -t ext4 -L img1 /dev/loop0p1
mkdir -p /mnt/img1
mount /dev/loop0p1 /mnt/img1
...
umount /mnt/img1
losetup -d /dev/loop0 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52763/"
]
} |
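Either route can be sanity-checked before running mkfs; the device names are the ones used in the answer:
losetup -a          # shows each active loop device with its backing file, offset and size limit
lsblk /dev/loop0    # after --partscan the loop0p1 partition should be listed here as well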
209,574 | I am running program using runit to run at startup. I want all the output by from the program that is run by runit to be logged to a file. I have looked at svlogd but I cannot figure out how to get it running. | I cannot figure out how to get it running. In the daemontools family world, log services are just services like any other. So you run svlogd with a run program just like you would run a "main" service with a run program. The special things about "log" services are merely that: The "log" service directory is located using a symbolic link from (or a straight subdirectory beneath) the "main" service directory. Some, but not all, daemontools family toolsets tightly bind "log" and "main" services and operate upon the twain as a unit. This is to a degree the case with runit. Otherwise, they are just like everything else. So make a "log" service to run svlogd just like you would make any other service, put it into the right place relative to your "main" service, and set things off. Further reading Jonathan de Boyne Pollard (2015). "Logging" . The daemontools family . Frequently Given Answers. Gerrit Pape. "How do I create a new service directory with an appendant log service?" . runit Frequently Asked Questions. Daniel J. Bernstein. "How do I create a service directory with a log?" . daemontools Frequently Asked Questions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118306/"
]
} |
209,579 | I created a environment variable in one terminal window and tried to echo it in another terminal window. That displayed nothing. $TEST=hello After that I exported it and tried again to echo it in a different terminal window. result was same as before. export TEST but if I execute the same code at the login (appending the code to ~/.profile file) variables can be used any terminal window. What is happening here? What is the different between executing a code in a terminal and executing the same at the login? | export makes a variable something that will be included in child process environments. It does not affect other already existing environments. In general there isn't a way to set a variable in one terminal and have it automatically appear in another terminal, the environment is established for each process on its own. Adding it to your .profile makes it so that your environment will be setup to include that new variable each time you log in though. So it's not being exported from one shell to another, but instead is instructing a new shell to include it when it sets up the initial environment. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110279/"
]
} |
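The child-process behaviour is easy to watch from a single terminal; the single quotes stop the parent shell from expanding $TEST itself:
TEST=hello
sh -c 'echo "child sees: [$TEST]"'    # empty: TEST is not exported yet
export TEST
sh -c 'echo "child sees: [$TEST]"'    # hello: the new child process inherited it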
209,585 | I'm running Ubuntu 14.04 LTS and nginx on a Digital Ocean VPS and occasionally receive these emails about a failed cron job: Subject Cron test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) The body of the email is: /etc/cron.daily/logrotate: error: error running shared postrotate script for '/var/log/nginx/*.log ' run-parts: /etc/cron.daily/logrotate exited with return code 1 Any thoughts on how I can resolve this? Update: /var/log/nginx/*.log { weekly missingok rotate 52 compress delaycompress notifempty create 0640 www-data adm sharedscripts prerotate if [ -d /etc/logrotate.d/httpd-prerotate ]; then \ run-parts /etc/logrotate.d/httpd-prerotate; \ fi endscript postrotate invoke-rc.d nginx rotate >/dev/null 2>&1 endscript } Update: $ sudo invoke-rc.d nginx rotateinitctl: invalid command: rotateTry `initctl --help' for more information. | The post rotate action appears to be incorrect Try invoke-rc.d nginx reload >/dev/null 2>&1 If you look at the nginx command you will see the actions it will accept. Also the message you got says check initctl --help xtian@fujiu1404:~/tmp$ initctl helpJob commands: start Start job. stop Stop job. restart Restart job. reload Send HUP signal to job. status Query status of job. list List known jobs. so reload should work and send HUP signal to nginx to force reopen of logfiles. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119371/"
]
} |
209,604 | I plan to get a new notebook and try to find out if a quad-core processor gives me any advantages over a regular dual-core machine. I use common Linux Distributions (Ubuntu, Arch etc.) and mostly Graphics Software: Scribus, Inkscape, Gimp. I want to use this new processor for a few years. I've done a lot of research but could not find any reliable and up-to-date answers. So: The latest kernel makes use of multi core processors. But does that give me any noticeable advantages on a daily basis? I'm talking about regular multi-tasking with common Linux applications. | You haven't found any reliable answer because there is no widely applicable reliable answer. The performance gain from multiple cores is hard to predict except for well-defined tasks, and even then it can depend on many other factors such as available memory (no benefit from multiple cores if they're all waiting for some file to load). For ordinary desktop use, you can generally gain responsiveness from having two cores: one to run the application from which you want a response, one to run GUI effects. The cores are idle most of the time but both do work when you start some task. Beyond two cores, the benefits tend to trail off. And even with two cores, a very lean GUI can mean that you don't get any benefit. Parallelizing a single task is difficult; except for some very simple cases (for which the technical term is “embarrassingly parallel”) it requires significant effort from the programmer and is often plain not doable. Displaying a web page, for example, is a matter of positioning elements one by one and executing Javascript code, and it all needs to be done in order, so doesn't benefit from multiple cores. The benefit of multiple cores for web browsing is when you want to do something else while a complex web page is being rendered. Some graphics software has parallel routines for large tasks (e.g. some transformations of large images). You will gain from multiple cores there, but again only for those tasks that have been written to take advantage of multiple processors. If you're going to run image transformations as background tasks, you'll definitely benefit from at least two cores (one for the task, one for interactive use) and possibly from more if the task itself takes advantage of multiple cores. More than four cores is unlikely to give any benefit for a machine that doesn't do fancy things such as multiple simultaneous users, large compilations, large numerical calculations, etc. Two cores is likely to have some benefit over one for most tasks. Between two and four, it isn't clear-cut. A faster dual core will give more consistent benefits than going from dual to quad-core, but a faster clock speed has downsides as well, especially for a laptop, since it means the processor will use more power and require louder cooling. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119385/"
]
} |
209,633 | I'm running OpenBSD 5.7 on VirtualBox on my Windows 7, to learn more about Unix, but I can't use sudo with my password. I've set up a user called adminvpn but when I try to run any command using sudo it tells that my password is wrong! I have already tried my root password and my user password and I'm 100% sure they are correct and they are only letters... I have reinstalled OpenBSD 3 times and this is driving me crazy. When I try to use my password for "adminvpn" I get adminvpn is not in the sudoers file. This incident will be reported. I'm able to log in with root and using my root password. I'm almost giving up to running it with a user. | The error message tells you what is wrong even if it doesn't tell you exactly how to fix it. adminvpn is not in the sudoers file. sudo lets you run commands as the administrator. In order to be able to use sudo, you need to be made an administrator. Accounts are not made administrators by default. Run the command visudo (as root, of course) and add a line like the following: adminvpn ALL=(ALL) ALL Save the file and exit. You can now run any command as root with sudo . Note that visudo will run your favorite editor, as indicated by the environment variable VISUAL . If you haven't configured that, you'll get vi. If you're uncomfortable with vi, install another editor (e.g. pkg_add nano ) and add a line like export VISUAL=nano to ~/.profile ; run the command in your interactive shell as well for it to take effect in the current session. Unless configured otherwise, sudo requires you to type your own password. Knowledge of the root password is not necessary, that's the point of sudo . The point of typing your password is to make sure it isn't someone else typing while you left your keyboard unattended. By the way, OpenBSD isn't the most beginner-friendly Unix out there. If you want a gentler learning curve, try Ubuntu. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119404/"
]
} |
209,653 | When I am shell scripting, the majority of what I am doing is wrapping the I/O of other modules in python, matlab, etc. To do so, I usually use text files or something of that nature with the input/output paths. I know to read one line from one file I can use, for file in $(cat $1);do code using $filedone but what if I wanted to do something using equivalent lines from both files? something like the equivalent Java: while((line1 = file1.readLine()) != null) { line2 = file2.readLine(); //do something with both lines...} What is the standard method for doing this in bash? | exec 3<file1exec 4<file2while read line1 <&3 && read line2 <&4do echo "line1=$line1 and line2=$line2"doneexec 3<&-exec 4<&- Discussion In the above, leading and trailing white space is stripped from the input lines. If you want to preserve this whitespace, replace read … with IFS= read … In the above, backslashes in the input will be interpreted as escape characters. If you don't want that, replace read … with read -r … read line1 <&3 reads line1 from file descriptor 3. This can also be written equivalently as read -u3 line1 . Statements such as for file in $(cat $1); have some issues that you should know about it. The shell will apply both word splitting pathname expansion to the contents of the file and, unless you were expecting this, it can lead to various errors. Alternative while read line1 <&3 && read line2 <&4do echo "line1=$line1 and line2=$line2"done 3<file1 4<file2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119407/"
]
} |
209,686 | The following produces nothing in bash: while true ; do upower -d ; sleep 1 ; done | grep percentage | uniq I've discovered that it doesn't matter what the last- or even the second to last- program are in this chain. The second-to-last program always produces expected output, and the last always fails to produce anything. It also doesn't appear to matter if I wrap the while loop in a subshell (via ( while ... done ) ), or wrap everything but the last command in a subshell and pipe into that last command. I am deeply confused by this behavior. My gut tells me... ...that there's some sort of I/O deadlocking in the chain created by blocking reads and writes and the fact that the while loop isn't always producing output. But, on the other hand, I've done this kind of thing plenty of times before, with no issue. Besides, if it were that, wouldn't the problem exist between the while loop and the next command? So I'm flummoxed. Will provide bash version information if no one else can reproduce. In case it isn't immediately obvious... The point of this line of code is to print every change in battery percentage, but only print new battery levels. It uses polling. I suppose I could get the same behavior with simply running upower --monitor-detail | grep percentage | uniq But this is a one-off thing, and I didn't plan on expending more than 5 seconds of thought on this until the above started to fail. At which point it became an interesting problem. Plus, I don't know whether monitor-detail just does polling under the hood, anyway (and I'm not running an strace to check). EDIT: Apparently, the above --monitor-detail version also doesn't produce anything (or, at least, it seems to. The polling / update frequency is pretty low, so I might just not have waited long enough. Although I know I waited long enough for the original issue). Now I'm very, very confused. I guess I should run that strace after all... | You should use grep --line-buffered percentage or else it will take a very long time for the grep stdout buffer to be filled by its output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74972/"
]
} |
209,688 | I opened many Chrome tabs for webpages. Each tab has its own PID, e.g. USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMANDt 3900 1.9 6.3 5718440 508660 ? Sl Jun08 188:31 /opt/google/chrome/chrome --type=gpu-process --channel=3862.0.1604359319 --supports-dual-gpus=false --gpu-driver-bug-workarounds=1,12,42 --disable-accelerated-video-decode --gpu-vendor-id=0x8086 --gpu-device-id=0x2a42 --gpu-driver-vendor --gpu-driver-version I wonder how to find out which tab from many opened tabs corresponds to a given PID? | Press Shift + Esc to bring up Chrome's task manager. Locate the line corresponding to the PID you want (click on the “Process ID” column header to sort by PID). Double-click the line to bring the tab to the foreground. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
209,710 | There are often instances where a certain program will depend on library version x.y and another on x.z but, as far as I'm aware, no package manager will allow me to install both x.y and x.z. Sometimes they will allow both major versions (such as qt4 and qt5, which can be installed at the same time), but (seemingly) never minor versions. Why is this? As in, what is the limiting factor that prevents it? I assume there must be a good reason for not allowing this seemingly useful functionality. E.g., is there not a field to indicate what version to load when loading a shared object and thus no way for Linux to know how to decide which to load? Or is there really no reason for it? Like all minor versions are supposed to be compatible anyway or something? | Actually, you can install multiple versions of a shared library if it's done properly. Shared libraries are usually named as follows: lib<name>.so.<api-version>.<minor> Next, there are symlinks to the library under the following names: lib<name>.solib<name>.so.<api-version> When a developer links against the library to produce a binary, it is the filename that ends in .so that the linker finds. There can indeed be only one of those installed at a time for any given <name> but that only means that a developer cannot target multiple different versions of a library at the same time. With package managers, this .so symlink is part of a separate -dev package which only developers need install. When the linker finds a file with a name ending in .so and uses it, it looks inside that library for a field called soname . The soname advises the linker what filename to embed into the resulting binary and thus what filename will be sought at runtime. The soname is supposed to be set to lib<name>.so.<api-version> . Therefore, at run time, the dynamic linker will seek lib<name>.so.<api-version> and use that. The intention is that: <minor> upgrades don't change the API of the library and when the <minor> gets bumped to a higher version, it's safe to let all the binaries upgrade to the new version. Since the binaries are all seeking the library under the lib<name>.so.<api-version> name, which is a symlink to the latest installed lib<name>.so.<api-version>.<minor> , they get the upgrade. <api-version> upgrades change the API of the library, and it is not safe to let existing binary applications use the new version. In the case that the <api-version> is changed, since those applications are looking for the name lib<name>.so.<api-version> but with a different value for <api-version> , they will not pick up the new version. Package managers don't often package more than one version of the same library within the same distribution version because the whole distribution, including all binaries that make use of the library, is usually compiled to use a consistent version of every library before the distribution is released. Making sure that everything is consistent and that everything in a distribution is compatible with everything else is a big part of the workload for distributors. But you can easily end up with multiple versions of a library if you've upgraded your system from one version of your distritution to another and still have some older packages requiring older library versions. Example: libmysqlclient16 from an older Debian, contains libmysqlclient.so.16.0.0 and symlink libmysqlclient.so.16 . libmysqlclient18 from current Debian, contains libmysqlclient.so.18.0.0 and symlink libmysqlclient.so.18 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111972/"
]
} |
209,746 | I am trying to use an alternate user (non-admin) to execute graphical software on my system. This alternate user has been named and given a UID and GID to match a remote system user of the same name. The UID is 500 so I believe that makes the user a 'non-login' user. Starting from Ubuntu logged into my main account, I open a terminal and su to the alternate user. I then attempt to execute the command to start the application and receive 'No protocol specified'. Is this because of the UID<1000, because of the su or because of the non-admin of the user?How can I get this user to execute the application with a GUI? | The problem is not occurring because of the UID of the user. 500 is just fine as a UID, and that UID doesn't make it a 'non-login' user except in the eyes of the default settings of some few display managers. The error message No protocol specified sounds like an application-specific error message, and an unhelpful one at that, but I am going to guess that the error is that the application is unable to contact your X11 display because it does not have permission to do so because it's running as a different user. Applications need a "magic cookie" (secret token) in order to talk to the X11 server so that other processes on the system running under other users cannot intrude on your display, create windows, and snoop your keystrokes. The other system user does not have access to this magic cookie because the permissions are set so that it is only accessible to the user who started the desktop environment (which is as it should be). Try this, running as your original user, to copy the X11 cookie to the other account: su - <otheruser> -c "unset XAUTHORITY; xauth add $(xauth list)" then run your application. You may also need to unset XAUTHORITY in that shell too. That command extracts the magic cookie ( xauth list ) from your main user and adds it ( xauth add ) to where the other user can get it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96932/"
]
} |
209,767 | I want to match the beginning of lines where the line contains something. Command in Perl/Sed 's/^.?/\section{}/g' but the problem here is that it replaces also the first character of sentences. How can you add to the beginning of lines something for non-empty lines? | sed '/./s/^/\\section{}/' Would prepend \section{} to every line that contain at least one valid character. sed '/^$/!s/^/\\section{}/' Would prepend \section{} to every non-empty line (that is lines that contain at least one byte ). sed 's/./\\section{}&/' Would insert \section{} before the first valid character in every line (that has such a valid character). ( & is replaced by the matched portion). Those distinctions between byte and character can be meaningfull if you're in a multi-byte-per-character locale (like UTF-8 which tends to be the norm nowadays) but dealing with text encoded in an extended single-byte character set. For instance: $ locale charmapUTF-8$ echo Москва | iconv -t iso-8859-5 | sed 's/./\\section{}&/' | iconv -f iso-8859-5Москва (when encoded in iso-8859-5, none of the resuling byte values for Москва form a valid character in UTF-8, so /./ doesn't match anything). Since in your case, encoding would not be a concern since the text you're inserted is ASCII anyway, you can fix the locale to C to avoid surprises: $ echo Москва | iconv -t iso-8859-5 | LC_ALL=C sed 's/./\\section{}&/' | iconv -f iso-8859-5\section{}Москва | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
209,770 | I am trying to SFTP with Filezilla but it is not able to connect to the server and I think this is due to my firewall rules? I can SSH absolutely fine. The port for SSH is 6128. Can anyone tell me what changes I would have to make to allow an FTP connection over SSH given that SSH is already working? (Here are my IPtables rules) Chain INPUT (policy ACCEPT)target prot opt source destinationfail2ban-ssh tcp -- anywhere anywhere multiport dports sshACCEPT all -- anywhere anywhereREJECT all -- anywhere loopback/8 reject-with icmp-port-unreachableACCEPT all -- anywhere anywhere state RELATED,ESTABLISHEDACCEPT udp -- anywhere anywhere udp dpt:9987ACCEPT tcp -- anywhere anywhere tcp dpt:10011ACCEPT tcp -- anywhere anywhere tcp dpt:30033ACCEPT tcp -- anywhere anywhere tcp dpt:httpACCEPT tcp -- anywhere anywhere tcp dpt:httpsACCEPT tcp -- anywhere anywhere state NEW tcp dpt:6128ACCEPT icmp -- anywhere anywhere icmp echo-requestLOG all -- anywhere anywhere limit: avg 5/min burst 5 LOG level debug prefix "iptables denied: "DROP all -- anywhere anywhereChain FORWARD (policy ACCEPT)target prot opt source destinationDROP all -- anywhere anywhereChain OUTPUT (policy ACCEPT)target prot opt source destinationACCEPT all -- anywhere anywhereChain fail2ban-ssh (1 references)target prot opt source destinationRETURN all -- anywhere anywhere | To access your sftp from other hosts, please make sure following is installed and configured properly. Installed OpenSSH servers Configured sshd_config PubkeyAuthentication yes Subsystem sftp internal-sftp Added your public key to ~/.ssh/authorized_keys Start the ssh server with port 22/TCP open # /etc/init.d/sshd start # iptables -I INPUT -j ACCEPT -p tcp --dport 22 Finally, test $ sftp <login>@<hostname> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119470/"
]
} |
209,774 | I really miss use strict; use warnings in my Perl one-liners such as: perl -ne 'print unless /[0-5],[0-5]?|hernia|.{170,}|benz/' file` especially when I include many conditions separated by | .I have noticed that the filtering becomes more and more unstable when you have more conditions. Examples of data missed or where the command works unstably 123.Symptoms of hernia are all, except: | happy | energetic | sleeping | down regulation of CNS hello | running 5Signshello of mercedes benz car model: | Big tyres | Even bigger roller | Massive emergency of try | Hello world | All from above 1 where two entries where the regex works unstably. How can you use strict and warnings in Perl one-liners? How can you design such long conditional statements better? | To access your sftp from other hosts, please make sure following is installed and configured properly. Installed OpenSSH servers Configured sshd_config PubkeyAuthentication yes Subsystem sftp internal-sftp Added your public key to ~/.ssh/authorized_keys Start the ssh server with port 22/TCP open # /etc/init.d/sshd start # iptables -I INPUT -j ACCEPT -p tcp --dport 22 Finally, test $ sftp <login>@<hostname> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
209,781 | Pseudocommand perl -pe 'g/^[1-777])/g' data.txt where I try to match all numbers at the beginning of a line with ")" following them, e.g. 1) , 67) and 777) . Data where all question numbers should (e.g. 1) , 16) , 68) ... but not 16. and not 778) ) be found 1) hello A) Options here which should be not matched. 16) hello 16. oldA) Options here which should be not matched. 68) yes questionA) Options here which should be not matched. ...582) hurrayA) Options here which should be not matched. 778)A) Options here which should be not matched. where I want to find only numbers up to 777. Wanted output 1,16,68,582,777 which I can then feed to my sed command. Lastly, if possible, return the result as comma-separated for this command: sed -n '1,4,582' full_data.txt , discussed here , so some sort of piping from perl to sed or just in perl? How can you return all numbers at the beginning of the line? | To access your sftp from other hosts, please make sure following is installed and configured properly. Installed OpenSSH servers Configured sshd_config PubkeyAuthentication yes Subsystem sftp internal-sftp Added your public key to ~/.ssh/authorized_keys Start the ssh server with port 22/TCP open # /etc/init.d/sshd start # iptables -I INPUT -j ACCEPT -p tcp --dport 22 Finally, test $ sftp <login>@<hostname> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
209,820 | I accidentally deleted /etc/redhat-release file. How can I restore or create a new one? I have CentOS Linux release 7.0.1406 (Core). | You can use RPM to see what RPM that file belongs to: $ rpm -qf /etc/redhat-releasecentos-release-7-0.1406.el7.centos.2.5.x86_64 You can then fix it using yum : $ yum reinstall centos-release Might not work If the RPM that was used to do this install is no longer available then the above will not work: $ yum reinstall centos-release-7-0.1406.el7.centos.2.5.x86_64...Installed package centos-release-7-0.1406.el7.centos.2.5.x86_64 (from updates) not available. In this case you can look for that RPM in the CentOS Vault (I search via Google for it), for example. NOTE: The specific package you want is here . You can then download the RPM directly and do the re-install using rpm or yum . $ wget http://vault.centos.org/centos/7.0.1406/updates/x86_64/Packages/centos-release-7-0.1406.el7.centos.2.5.x86_64.rpm Using RPM $ sudo rpm -Uvh --replacepkgs centos-release-7-0.1406.el7.centos.2.5.x86_64.rpmPreparing... ################################# [100%]Updating / installing... 1:centos-release-7-0.1406.el7.cento################################# [100%] Using YUM $ sudo yum reinstall centos-release-7-0.1406.el7.centos.2.5.x86_64.rpmLoaded plugins: dellsysid, fastestmirror, langpacksExamining centos-release-7-0.1406.el7.centos.2.5.x86_64.rpm: centos-release-7-0.1406.el7.centos.2.5.x86_64Resolving Dependencies--> Running transaction check---> Package centos-release.x86_64 0:7-0.1406.el7.centos.2.5 will be reinstalled--> Finished Dependency ResolutionDependencies Resolved======================================================================================================================================================== Package Arch Version Repository Size========================================================================================================================================================Reinstalling: centos-release x86_64 7-0.1406.el7.centos.2.5 /centos-release-7-0.1406.el7.centos.2.5.x86_64 31 kTransaction Summary========================================================================================================================================================Reinstall 1 PackageTotal size: 31 kInstalled size: 31 kIs this ok [y/d/N]: yDownloading packages:Running transaction checkRunning transaction testTransaction test succeededRunning transaction Installing : centos-release-7-0.1406.el7.centos.2.5.x86_64 1/1 Verifying : centos-release-7-0.1406.el7.centos.2.5.x86_64 1/1Installed: centos-release.x86_64 0:7-0.1406.el7.centos.2.5Complete! Why didn't reinstall work? This is a snafu that was created when the individualized RPMs to specific versions of CentOS were deprecated. This directory (and version of CentOS) is deprecated. For normal users, you should use /7/ and not /7.0.1406/ in your path. Please see this FAQ concerning the CentOS release scheme: https://wiki.centos.org/FAQ/General If you know what you are doing, and absolutely want to remain at the 7.0.1406 level, go to http://vault.centos.org/ for packages. Please keep in mind that7.0.1406 no longer gets any updates, nor any security fix's. --- Source: http://mirror.centos.org/centos/7.0.1406/readme So you typically have to reach into the CentOS Vault for packages that fall into this state. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119499/"
]
} |
209,826 | File1: Dms 01.01.2015 feeder1 6kv close at 04:30 Updated Dms 01.01.2015 feeder1 6kv open at 06:20 Updated Dms 04.02.2015 feeder10 6kv close at 17:23 Updated Dms 04.02.2015 feeder12 6kv open at 23:30 Updated Dms 12.04.2015 feeder4 6kv disturb at 12:30 Updated Dms 12.04.2015 feeder7 6kv close at 11:09 Updated Dms 16.05.2015 feeder8 6kv open at 13:10 Updated Dms 01.06.2015 feeder3 6kv close at 05:07 Updated Output will be: Dms 01.01.2015 feeder1 6kv close at 04:30 Updated Dms 01.01.2015 feeder1 6kv open at 06:20 Updated Dms 04.02.2015 feeder10 6kv close at 17:23 Updated Dms 04.02.2015 feeder12 6kv open at 23:30 Updated Dms 12.04.2015 feeder4 6kv disturb at 12:30 Updated Dms 12.04.2015 feeder7 6kv close at 11:09 Updated Dms 16.05.2015 feeder8 6kv open at 13:10 Updated Dms 01.06.2015 feeder3 6kv close at 05:07 EOF I want to change only the last Updated to EOF using command. Please be noted, my line numbers in file is not fixed. It may be 100 lines or may be 500 lines but I want to change only the last word using command. | With sed : sed -e '$s/Updated/EOF/' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
209,832 | I'm trying to create a systemd service on Debian Jessie.I need it to start after network-online.target is reached. The problem is network-online.target fires at the same time as network.target and at that time my interfaces are not configured yet, just started DHCP query. It looks like this issue is specific to Debian because it uses legacy network configuration. How to bypass this problem or how to make network-online.target working? | Since you're using /etc/network/interfaces , you'll need a systemd service to monitor the status of each interface. Check to see if you have /lib/systemd/system/ifup-wait-all-auto.service (installed by the ifupdown package in Ubuntu 15.04). If not, then create /etc/systemd/system/ifup-wait-all-auto.service , and paste in the following: [Unit]Description=Wait for all "auto" /etc/network/interfaces to be up for network-online.targetDocumentation=man:interfaces(5) man:ifup(8)DefaultDependencies=noAfter=local-fs.targetBefore=network-online.target[Service]Type=oneshotRemainAfterExit=yesTimeoutStartSec=2minExecStart=/bin/sh -ec '\ for i in $(ifquery --list --exclude lo --allow auto); do INTERFACES="$INTERFACES$i "; done; \ [ -n "$INTERFACES" ] || exit 0; \ while ! ifquery --state $INTERFACES >/dev/null; do sleep 1; done; \ for i in $INTERFACES; do while [ -e /run/network/ifup-$i.pid ]; do sleep 0.2; done; done'[Install]WantedBy=network-online.target This is the service file as present on an Ubuntu 15.04 system, but with the [Install] section added in to make things a little easier. I'm hoping that the behavior of ifup in Ubuntu 15.04 is the same as the behavior of ifup in Debian Jessie. If not, some modification will be necessary (particularly with the last line). Then, run sudo systemctl enable ifup-wait-all-auto.service . After rebooting your computer, you should see that the network-online.target is reached after the interfaces are brought up (at least). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32942/"
]
} |
209,833 | It appears that $[expr] performs arithmetic expansion just like $((expr)) . But I can't find any mention of $[ in the bash manual. This command gives no results: gunzip -c /usr/share/man/man1/bash.1.gz | grep -E '\$\[' What is this operator and is its behavior standardized anywhere? My bash version: GNU bash, version 3.2.51(1)-release (x86_64-apple-darwin13) | You can find old bash source here . In particular I downloaded bash-1.14.7.tar.gz . In the documentation/bash.txt you will find: Arithmetic Expansion Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. There are two formats for arithmetic expansion: $[expression] $((expression)) The references to $[ are gone in doc/bash.html from the bash-doc-2.0.tar.gz download and the NEWS file mentions that: The $[...] arithmetic expansion syntax is no longer supported, in favor of $((...)) . $((...)) is also the standard syntax for an arithmetic expansion, but may have been added to the standard later than the original Bash implementation. However, $[...] does still seem to work in Bash 5.0, so it's not completely removed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67823/"
]
} |
209,895 | I'm trying to unset all environment variables that match _PROXY : env | grep -i _proxy | column -t -s '=' | awk '{ print $1 }' | grep -iv 'no_proxy' | xargs -0 -I variable unset variable but it's failing with xargs: unset: No such file or directory . If I try changing unset to echo , however, everything seems to work as expected: I get a list of variables that are set. env | grep -i _proxy | column -t -s '=' | awk '{ print $1 }' | grep -iv 'no_proxy' | xargs -0 -I variable echo variablehttp_proxyftp_proxyFTP_PROXYhttps_proxyHTTPS_PROXYHTTP_PROXY What seems to be going wrong? (If you have an alternate strategy for accomplishing the goal, I'm interested, but I'd most of all like to know why this is failing.) Also, I'm using OS X, in case that's relevant. | That's because unset is a shell builtin and not an external command. This means that xargs can't use it since that only runs commands that are in your $PATH . You'd get the same problem if you tried with cd : $ ls -ltotal 4drwxr-xr-x 2 terdon terdon 4096 Jun 16 02:02 foo$ echo foo | xargs cdxargs: cd: No such file or directory One way around this is to use a shell loop which, since it is part of the shell, will be able to run shell builtins. I also simplified your command a bit, there's no need for column , you can set the field separator for awk and there's no need for a second grep , just tell awk to print lines that don't match no_proxy : while read var; do unset $var; done < <(env | grep -i proxy | awk -F= '!/no_proxy/{print $1}') | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
209,898 | I tried really hard to avoid posting a new question for something so basic and already answered in a hundred places, but after spending two hours on this and trying every solution out there I'm thinking they're either outdated or don't apply to the current version of Fedora. What I tried (among other things): gnome-session-properties (doesn't exist anymore) gnome-tweak-tool (can only add existing applications to startup, ie: can't add custom commands) my working .sh script in ~/.config/autostart (chmodded executable) .desktop file in ~/.config/autostart script in rc.local (this appears to be ignored now) script in /etc/init.d (chmodded executable with sudo ) The above all fail to run my script on start up. The script I'm trying to run: #!/bin/shxcompmgr Or even simply this command: xcompmgr My exact setup: Fedora 22Kernel 4.0.4Gnome shell 3.16.2Awesome WM 3.5.6 What is the simplest, up-to-date way of running a command or script on start up on my setup? | I had the same problem. It seems that the key to really enable it is Version=1.0 that I can bet you missed.You can also disable autostart item or delay it's start by adding: X-GNOME-Autostart-enabled=falseX-GNOME-Autostart-Delay=2 You can also use great example with: ln -s /usr/share/applications/pidgin.desktop ~/.config/autostart/pidgin.desktop Full example, that should work: [Desktop Entry]Version=1.0Name=xcompmgrGenericName=xcompmgrComment=xcompmgrExec=/path/to/xcompmgrTerminal=trueType=ApplicationX-Desktop-File-Install-Version=0.22 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89986/"
]
} |
209,950 | In grep you can use --group-separator to write something in between group matches. This comes handy to make it clear what blocks do we have, especially when using -C X option to get context lines. $ cat ahellothis is meand this issomething elsehello hellobyei am done$ grep -C1 --group-separator="+++++++++" 'hello' ahellothis is me+++++++++something elsehello hellobye I learnt in Using empty line as context "group-separator" for grep how to just have an empty line, by saying --group-separator="" . However, what if I want to have two empty lines? I tried saying --group-separator="\n\n" but I get literal \n s: $ grep -C1 --group-separator="\n\n" 'hello' ahellothis is me\n\nsomething elsehello hellobye Other things like --group-separator="\nhello\n" did not work either. | Ooooh I found it, I just need to use the $'' syntax instead of $"" : $ grep -C1 --group-separator=$'\n\n' 'hello' ahellothis is mesomething elsehello hellobye From man bash : QUOTING Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard. Backslash escape sequences, if present, are decoded as follows: (...)\n new line | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40596/"
]
} |
209,971 | Consider that I've set the variable site and it needs to be printed by echo or printf , but if I use single quotes to write something and want to use the variable, then how? Example: $ site=unix.stackexchange.com$ echo "visit:$site"visit:unix.stackexchange.com But if I use single quotes: $ echo 'visit:$site'visit:$site Then we know that '' is a strong quote and will not expand the variable. I've tried something: $ echo 'visit:"$site"'visit:"$site" but did not succeed. So, I am looking for a way to print the value inside a variable while using single quotes. | You can't expand variables in single quotes. You can end single quotes and start double quotes, though: echo 'visit:"'"$site"'"' Or, you can backslash double quotes inside of double quotes: echo "visit:\"$site\"" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/209971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
209,974 | What is the difference between find * and find ~ for searching a file?In terminal when my present working directory on root ,then in terminal root@devils-ey3:~# find * -print -quit~ On same directory root@devils-ey3:~# find ~ -print -quit/root But if I change the pwd then the output of find ~ -print -quit is same as before but the other is change.What is the working purpose of * and ~ for find file ? | The basic format of find is find WHERE WHAT So, in find * , the * is taken as the WHERE . Now, * is a wildcard. It matches everything in the current directory (except, by default, files/directories starting with a . ). The Windows equivalent is *.* . This means that * is expanded to all files and directories in your current directory before it is passed to find . To illustrate, consider this directory: $ lsfile file2 If we run set -x to enable debugging info and then run your find command, we see: $ find * -print -quit+ find file file2 -print -quitfile As you can see above, the * is expanded to all files in the directory and what is actually run is find file file2 -print -quit Because of -quit , this prints the first file name of the ones you told it to look for and exits. In your case, you seem to have a file or directory called ~ so that is the one that is printed. The tilde ( ~ ), however, also has a special meaning. It is a shortcut to your $HOME directory: $ echo ~/home/terdon So, when you run find ~ , as root, the ~ is expanded to /home/root and the command you run is actually: # find ~ -print -quit+ find /root -print -quit/root Again, you are telling find to search for files or directories in a specific location and exit after printing the first one. Since the first file or directory matching /root is itself, that's what is printed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112763/"
]
} |
209,981 | When I resize a terminal window containing a tmux session, tmux doesn't detect this change, but continues to function normally within the old window boundaries. tmux ls shows no other attached clients before I attach: $ tmux lsadmin: 1 windows (created Mon Apr 27 15:12:58 2015) [272x75]apt-runs: 3 windows (created Mon Apr 27 15:17:50 2015) [272x75]lal-dev: 4 windows (created Tue Jun 9 12:24:25 2015) [238x73] This only happens with a particular host (running tmux 1.9a), and detaching/reattaching fixes the issue (until the window is resized again). What might be causing this? Before resize: After resize: | The easiest thing to do is to detach any other clients from the sessions when you attach: tmux attach -d or short tmux a -d Alternately, you can move any other clients to a different session before attaching to the session: https://stackoverflow.com/a/7819465/1069083 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/209981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2697/"
]
} |
209,990 | I have a file in the following format: $ cat /tmp/raw2015-01 5000 10002015-02 6000 20002015-03 7000 3000 Now, what I want is to get the combined value from columns 2 and 3 in each row so that results are as follows: 2015-01 60002015-02 80002015-03 9000 I tried this but it only shows last value in the file like 2015-03 value. | You can try using awk : awk '{ print $1, $2 + $3; }' /tmp/raw Result will be (I suppose value for 2015-03 should be 10000): 2015-01 60002015-02 80002015-03 10000 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/209990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119608/"
]
} |
210,019 | Microsoft Announces Its Own Linux OS , on It's F.O.S.S. , April 1, 2015 . I cannot find it on distrowatch. Was this a hoax, or is this a real, living distro that can be downloaded, tested? | Well, it was published on April 1, 2015 - April Fools' Day . They produce other jokes on April 1 st , like Linus Torvalds To Join Microsoft To Head Windows 9 Project . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112826/"
]
} |
210,093 | I need to pass a user and a directory to a script and have it spit out a list of what folders/files in that directory that the user has read access to. MS has a tool called AccessChk for Windows that does this but does something like this exist on the Unix side? I found some code that will do this for a specific folder or file but I need it to traverse a directory. | TL;DR find "$dir" ! -type l -print0 | sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -r' You need to ask the system if the user has read permission. The only reliable way is to switch the effective uid, effective gid and supplementation gids to that of the user and use the access(R_OK) system call (even that has some limitations on some systems/configurations). The longer story Let's consider what it takes for instance for a $user to have read access to /foo/file.txt (assuming none of /foo and /foo/file.txt are symlinks)? He needs: search access to / (no need for read ) search access to /foo (no need for read ) read access to /foo/file.txt You can see already that approaches that check only the permission of file.txt won't work because they could say file.txt is readable even if the user doesn't have search permission to / or /foo . And an approach like: sudo -u "$user" find / -readable Won't work either because it won't report the files in directories the user doesn't have read access (as find running as $user can't list their content) even if he can read them. If we forget about ACLs or other security measures (apparmor, SELinux...) and only focus on traditional permission and ownership attributes, to get a given (search or read) permission, that's already quite complicated and hard to express with find . You need: if the file is owned by you, you need that permission for the owner (or have uid 0) if the file is not owned by you, but the group is one of yours, then you need that permission for the group (or have uid 0). if it's not owned by you, and not in any of your groups, then the other permissions apply (unless your uid is 0). In find syntax, here as an example with a user of uid 1 and gids 1 and 2, that would be: find / -type d \ \( \ -user 1 \( -perm -u=x -o -prune \) -o \ \( -group 1 -o -group 2 \) \( -perm -g=x -o -prune \) -o \ -perm -o=x -o -prune \ \) ! -type d -o -type l -o \ -user 1 \( ! -perm -u=r -o -print \) -o \ \( -group 1 -o -group 2 \) \( ! -perm -g=r -o -print \) -o \ ! -perm -o=r -o -print That one prunes the directories that user doesn't have search right for and for other types of files (symlinks excluded as they're not relevant), checks for read access. Or for an arbitrary $user and its group membership retrieved from the user database: groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "find / -type d \ \( \ -user "$user" \( -perm -u=x -o -prune \) -o \ \( -group $groups \) \( -perm -g=x -o -prune \) -o \ -perm -o=x -o -prune \ \) ! -type d -o -type l -o \ -user "$user" \( ! -perm -u=r -o -print \) -o \ \( -group $groups \) \( ! -perm -g=r -o -print \) -o \ ! -perm -o=r -o -print The best here would be to descend the tree as root and check the permissions as the user for each file. find / ! -type l -exec sudo -u "$user" sh -c ' for file do [ -r "$file" ] && printf "%s\n" "$file" done' sh {} + Or with perl : find / ! -type l -print0 | sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -r' Or with zsh : files=(/**/*(D^@))USERNAME=$userfor f ($files) { [ -r $f ] && print -r -- $f} Those solutions rely on the access(2) system call. 
That is instead of reproducing the algorithm the system uses to check for access permission, we're asking the system to do that check with the same algorithm (which takes into account permissions, ACLs...) it would use would you try to open the file for reading, so is the closest you're going to get to a reliable solution. Now, all those solutions try to identify the paths of files that the user may open for reading, that's different from the paths where the user may be able to read the content. To answer that more generic question, there are several things to take into account: $user may not have read access to /a/b/file but if he owns file (and has search access to /a/b , and he's got shell access to the system), then he would be able to change the permissions of the file and grant himself access. Same thing if he owns /a/b but doesn't have search access to it. $user may not have access to /a/b/file because he doesn't have search access to /a or /a/b , but that file may have a hard link at /b/c/file for instance, in which case he may be able to read the content of /a/b/file by opening it via its /b/c/file path. Same thing with bind-mounts . He may not have search access to /a , but /a/b may be bind-mounted in /c , so he could open file for reading via its /c/file other path. To find the paths that $user would be able to read. To address 1 or 2, we can't rely on the access(2) system call anymore. We could adjust our find -perm approach to assume search access to directories, or read access to files as soon as you're the owner: groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "find / -type d \ \( \ -user "$user" -o \ \( -group $groups \) \( -perm -g=x -o -prune \) -o \ -perm -o=x -o -prune \ \) ! -type d -o -type l -o \ -user "$user" -print -o \ \( -group $groups \) \( ! -perm -g=r -o -print \) -o \ ! -perm -o=r -o -print We could address 3 and 4, by recording the device and inode numbers or all the files $user has read permission for and report all the file paths that have those dev+inode numbers. This time, we can use the more reliable access(2) -based approaches: Something like: find / ! -type l -print0 | sudo -u "$user" perl -Mfiletest=access -0lne 'print 0+-r,$_' | perl -l -0ne ' ($w,$p) = /(.)(.*)/; ($dev,$ino) = stat$p or next; $writable{"$dev,$ino"} = 1 if $w; push @{$p{"$dev,$ino"}}, $p; END { for $i (keys %writable) { for $p (@{$p{$i}}) { print $p; } } }' And merge both solutions with: { solution1; solution2} | perl -l -0ne 'print unless $seen{$_}++' As should be clear if you've read everything thus far, part of it at least only deals with permissions and ownership, not the other features that may grant or restrict read access (ACLs, other security features...). And as we process it in several stages, some of that information may be wrong if the files/directories are being created/deleted/renamed or their permissions/ownership modified while that script is running, like on a busy file server with millions of files. Portability notes All that code is standard (POSIX, Unix for t bit) except: -print0 is a GNU extension now also supported by a few other implementations. With find implementations that lack support for it, you can use -exec printf '%s\0' {} + instead, and replace -exec sh -c 'exec find "$@" -print0' sh {} + with -exec sh -c 'exec find "$@" -exec printf "%s\0" {\} +' sh {} + . perl is not a POSIX-specified command but is widely available. You need perl-5.6.0 or above for -Mfiletest=access . zsh is not a POSIX-specified command. 
That zsh code above should work with zsh-3 (1995) and above. sudo is not a POSIX-specified command. The code should work with any version as long as the system configuration allows running perl as the given user. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119687/"
]
} |
210,101 | I have a file like this >chr1ACGTGGCTGCCGTTATCCTTG>chr2ACTTTTACTCATAA I want to convert the seq into 1 string. This should be the output: >chr1ACGTGGCTGCCGTTATCCTTG>chr2ACTTTTACTCATAA How can i do it using awk. I know how to do it in Perl. Thanks | TL;DR find "$dir" ! -type l -print0 | sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -r' You need to ask the system if the user has read permission. The only reliable way is to switch the effective uid, effective gid and supplementation gids to that of the user and use the access(R_OK) system call (even that has some limitations on some systems/configurations). The longer story Let's consider what it takes for instance for a $user to have read access to /foo/file.txt (assuming none of /foo and /foo/file.txt are symlinks)? He needs: search access to / (no need for read ) search access to /foo (no need for read ) read access to /foo/file.txt You can see already that approaches that check only the permission of file.txt won't work because they could say file.txt is readable even if the user doesn't have search permission to / or /foo . And an approach like: sudo -u "$user" find / -readable Won't work either because it won't report the files in directories the user doesn't have read access (as find running as $user can't list their content) even if he can read them. If we forget about ACLs or other security measures (apparmor, SELinux...) and only focus on traditional permission and ownership attributes, to get a given (search or read) permission, that's already quite complicated and hard to express with find . You need: if the file is owned by you, you need that permission for the owner (or have uid 0) if the file is not owned by you, but the group is one of yours, then you need that permission for the group (or have uid 0). if it's not owned by you, and not in any of your groups, then the other permissions apply (unless your uid is 0). In find syntax, here as an example with a user of uid 1 and gids 1 and 2, that would be: find / -type d \ \( \ -user 1 \( -perm -u=x -o -prune \) -o \ \( -group 1 -o -group 2 \) \( -perm -g=x -o -prune \) -o \ -perm -o=x -o -prune \ \) ! -type d -o -type l -o \ -user 1 \( ! -perm -u=r -o -print \) -o \ \( -group 1 -o -group 2 \) \( ! -perm -g=r -o -print \) -o \ ! -perm -o=r -o -print That one prunes the directories that user doesn't have search right for and for other types of files (symlinks excluded as they're not relevant), checks for read access. Or for an arbitrary $user and its group membership retrieved from the user database: groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "find / -type d \ \( \ -user "$user" \( -perm -u=x -o -prune \) -o \ \( -group $groups \) \( -perm -g=x -o -prune \) -o \ -perm -o=x -o -prune \ \) ! -type d -o -type l -o \ -user "$user" \( ! -perm -u=r -o -print \) -o \ \( -group $groups \) \( ! -perm -g=r -o -print \) -o \ ! -perm -o=r -o -print The best here would be to descend the tree as root and check the permissions as the user for each file. find / ! -type l -exec sudo -u "$user" sh -c ' for file do [ -r "$file" ] && printf "%s\n" "$file" done' sh {} + Or with perl : find / ! -type l -print0 | sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -r' Or with zsh : files=(/**/*(D^@))USERNAME=$userfor f ($files) { [ -r $f ] && print -r -- $f} Those solutions rely on the access(2) system call. 
That is instead of reproducing the algorithm the system uses to check for access permission, we're asking the system to do that check with the same algorithm (which takes into account permissions, ACLs...) it would use would you try to open the file for reading, so is the closest you're going to get to a reliable solution. Now, all those solutions try to identify the paths of files that the user may open for reading, that's different from the paths where the user may be able to read the content. To answer that more generic question, there are several things to take into account: $user may not have read access to /a/b/file but if he owns file (and has search access to /a/b , and he's got shell access to the system), then he would be able to change the permissions of the file and grant himself access. Same thing if he owns /a/b but doesn't have search access to it. $user may not have access to /a/b/file because he doesn't have search access to /a or /a/b , but that file may have a hard link at /b/c/file for instance, in which case he may be able to read the content of /a/b/file by opening it via its /b/c/file path. Same thing with bind-mounts . He may not have search access to /a , but /a/b may be bind-mounted in /c , so he could open file for reading via its /c/file other path. To find the paths that $user would be able to read. To address 1 or 2, we can't rely on the access(2) system call anymore. We could adjust our find -perm approach to assume search access to directories, or read access to files as soon as you're the owner: groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "find / -type d \ \( \ -user "$user" -o \ \( -group $groups \) \( -perm -g=x -o -prune \) -o \ -perm -o=x -o -prune \ \) ! -type d -o -type l -o \ -user "$user" -print -o \ \( -group $groups \) \( ! -perm -g=r -o -print \) -o \ ! -perm -o=r -o -print We could address 3 and 4, by recording the device and inode numbers or all the files $user has read permission for and report all the file paths that have those dev+inode numbers. This time, we can use the more reliable access(2) -based approaches: Something like: find / ! -type l -print0 | sudo -u "$user" perl -Mfiletest=access -0lne 'print 0+-r,$_' | perl -l -0ne ' ($w,$p) = /(.)(.*)/; ($dev,$ino) = stat$p or next; $writable{"$dev,$ino"} = 1 if $w; push @{$p{"$dev,$ino"}}, $p; END { for $i (keys %writable) { for $p (@{$p{$i}}) { print $p; } } }' And merge both solutions with: { solution1; solution2} | perl -l -0ne 'print unless $seen{$_}++' As should be clear if you've read everything thus far, part of it at least only deals with permissions and ownership, not the other features that may grant or restrict read access (ACLs, other security features...). And as we process it in several stages, some of that information may be wrong if the files/directories are being created/deleted/renamed or their permissions/ownership modified while that script is running, like on a busy file server with millions of files. Portability notes All that code is standard (POSIX, Unix for t bit) except: -print0 is a GNU extension now also supported by a few other implementations. With find implementations that lack support for it, you can use -exec printf '%s\0' {} + instead, and replace -exec sh -c 'exec find "$@" -print0' sh {} + with -exec sh -c 'exec find "$@" -exec printf "%s\0" {\} +' sh {} + . perl is not a POSIX-specified command but is widely available. You need perl-5.6.0 or above for -Mfiletest=access . zsh is not a POSIX-specified command. 
That zsh code above should work with zsh-3 (1995) and above. sudo is not a POSIX-specified command. The code should work with any version as long as the system configuration allows running perl as the given user. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63486/"
]
} |
210,113 | I would like to set up the default sound volume once and for all, for all ALSA devices that will ever be connected. Of course, I could do amixer ... or even alsamixer to modify the volume of currently available soundcards . But I really want to modify the default volume even for future soundcards that will be added later . In which config file should I set this default sound volume? I've seen /var/lib/alsa/asound.state but the content is specific to currently connected soundcards. What I want is a solution that will apply to any soundcard that will be connected. Context : why do I want this? I'm providing a ready-to-use Debian image for my project SamplerBox . User #1 might use the computer's built-in-soundcard, User #2 might have a USB DAC, User #3 might have another soundcard... I would like to provide a default -3dB volume that will work for any ALSA soundcard people could have... Note: I reinstalled a fresh new system and it seems that, by default, the volume is -20dB for all devices : | I just wandered upon this post and see you are struggling with the answer to this as I was. This is what fixed it for me: Go into alsamixer and set everything the way you want it, then exit and type this: sudo alsactl store That will store the current config of alsamixer and it should keep the config. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59989/"
]
} |
210,117 | I have installed Debian 8, but I need to use just multi-user text mode, runlevel 3, instead of having my Gnome 3 appear. But I saw that /etc/inittab doesn't exist. What now? | Two things you need to know: 1) Systemd boots towards the target given by "default.target". This is typically a symbolic link to the actual target file. 2) Systemd keeps its targets in /lib/systemd/system and /etc/systemd/system. A file in /etc/systemd/system takes precedence over those shipped with the OS in /lib/systemd/system -- the intent is that /etc/systemd is used by systems administrators and /lib/systemd is used by distributions. Debian as-shipped boots towards the graphical target. You can see this yourself: $ ls -l /etc/systemd/system/default.target... No such file or directory$ ls -l /lib/systemd/system/default.target... /lib/systemd/system/default.target -> graphical.target So to boot towards the multiuser target all you need do is put in your own target: $ cd /etc/systemd/system/$ sudo ln -s /lib/systemd/system/multi-user.target default.target | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61633/"
]
} |
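The same change can also be made without touching the symlink by hand, since systemd ships a helper for it (available since systemd 205; Debian 8 ships 215):

sudo systemctl set-default multi-user.target
systemctl get-default        # should now print multi-user.target

Under the hood this manages the very same default.target symlink described in the answer above.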
210,147 | Is it possible at all to input CJK languages (Traditional Chinese in my case) in the console without having X Windows running? All the help I have found online for Chinese input assumes that X and some kind of desktop manager are running. I am in the process of planning a difficult upgrade of my outdated system, and I have Gentoo installed on a different partition. I can easily chroot into this brand new system to configure things, test applications and plan the upgrade. While chroot'ed, I'd like to use applications installed on the new Gentoo system to input Chinese. I don't know if it is possible at all. If it is, what applications would I need to install? So I guess my question includes two related but slightly different scenarios: 1) input CJK on a machine that has been booted to a terminal, without X running at all on the system. 2) input CJK on a machine running X (KDE) from which I chroot into a new system, and use the chroot environment to input Chinese. Side question: is it possible to start X (KDE or a lightweight DE) from within the chroot? (The tags: [CJK] or [Chinese] are missing.) | Two things you need to know: 1) Systemd boots towards the target given by "default.target". This is typically a symbolic link to the actual target file. 2) Systemd keeps its targets in /lib/systemd/system and /etc/systemd/system. A file in /etc/systemd/system takes precedence over those shipped with the OS in /lib/systemd/system -- the intent is that /etc/systemd is used by system administrators and /lib/systemd is used by distributions. Debian as shipped boots towards the graphical target. You can see this yourself: $ ls -l /etc/systemd/system/default.target... No such file or directory$ ls -l /lib/systemd/system/default.target... /lib/systemd/system/default.target -> graphical.target So to boot towards the multi-user target all you need do is to put in your own target: $ cd /etc/systemd/system/$ sudo ln -s /lib/systemd/system/multi-user.target default.target | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5687/"
]
} |
210,158 | The Bash interpreter itself has options. For example, those mentioned on lines 22-23 of Bash's man page : OPTIONS All of the single-character shell options documented in the description of the set builtin command can be used as options when the shell is invoked. In addition, bash interprets the following options when it is invoked: -c ... -i ... -l ... -r ... I've used a few search patterns in Bash's man page like: /^\s*set /list Is it possible to print a list of these settings that are applied to the current shell? | printf %s\\n "$-"

Will list the single letter options in a single string. That parameter can also be used like:

set -f -- ${-:+"-$-"}
echo *don\'t* *glob* *this*
set +f "$@"

To first disable shell -f ilename expansion while simultaneously saving a value for $- - if any - in $1 . Next, no globs occur, and last +f ilename expansion is once again enabled, and possibly also disabled. For example, if -f ilename expansion was already disabled when the value for $- was first saved, then its saved value would be (at least) : f

And so when set is run again, it works out to:

set +f -f

Which just puts you right back where you started.

set +o

Will list all set table shell options (see Jason's answer for the shopt able - is that a word? - options) in a form that is safe for shell reentry. In that way, you can also do:

state=$(set +o)
set -some -crazy -options
eval "$state"

To save, change, and restore the shell options' state respectively. To handle shopt ions and set table options in one go:

state=$(set +o; shopt -p)
#do what you want with options here
eval "$state"

You can also call set without any arguments to add a list of all of the shell's currently set variables - also quoted for reentry to the shell. And you can - in bash - additionally add the command typeset -fp to also include all currently declared shell functions. You can lump it all together and eval when ready. You can even call alias without arguments for more of the same. That... might cover it, though. I guess there is "$@" - which you'd have to put in a bash array first, I suppose, before doing set . Nope, there's also trap . This one's a little funny. Usually:

trap 'echo this is my trap' 0
(echo this is my subshell; trap)

...will just print this is my subshell because the subshell is a new process and gets its own set of trap s - and so doesn't inherit any trap s but those which its parent has explicitly ignored - (like trap '' INT ) . However:

trap 'echo this is my trap' 0
save_traps=$(trap)

trap behaves specially when it is the first and only command run in a command substitution subshell in that it will reproduce a list of the parent shell's currently set traps in a format which is quoted for safe reentry to the shell. And so you can do the save_traps , then set without arguments - and all of the rest already mentioned - to pretty much get a lock on all shell state. You might want to explicitly add export -p and readonly -p to restore original shell var attributes, though. Anyway, that's enough. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
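Putting the pieces of that answer together, a sketch of a save/tweak/restore cycle around a block of work might look like this in bash; the particular options set in the middle are only placeholders.

state=$(set +o; shopt -p)      # set-table and shopt-able options, quoted for reentry
traps=$(trap)                  # bash prints the calling shell's traps here
set -euf; shopt -s nullglob    # temporary changes, just as an example
# ... work under the temporary options ...
eval "$state"                  # put the options back the way they were
eval "$traps"                  # put the traps back (only matters if you changed any)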
210,171 | File1:

.tid.setnr := 1123
.tid.setnr := 3345
.tid.setnr := 5431
.tid.setnr := 89323

File2:

.tid.info := 12
.tid.info := 3
.tid.info := 44
.tid.info := 60

Output file:

.tid.info := 12
.tid.setnr := 1123
.tid.info := 3
.tid.setnr := 3345
.tid.info := 44
.tid.setnr := 5431
.tid.info := 60
.tid.setnr := 89323 | Using paste :

paste -d \\n file2 file1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
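If paste happens to be unavailable, the same interleaving can be sketched in awk; this assumes both files have the same number of lines, as in the example.

awk 'NR==FNR {a[FNR]=$0; next} {print a[FNR]; print}' file2 file1

The first pass stores file2 in an array; the second pass prints the stored line followed by the matching line of file1.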
210,174 | Is it possible to change the background of the active (current) tmux tab? I'm using tmux 1.9 on Ubuntu 15.04.

$ tmux -V
tmux 1.9

I tried to do:

set-option -g pane-active-border-fg red

But the result did not change: I expected the 3-bash* tab to have a red background. | You haven't set the active window's background colour; you only set the active pane's border colour. Try:

set-window-option -g window-status-current-bg red | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45370/"
]
} |
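For reference, tmux 2.9 and later replaced the separate -bg/-fg/-attr options with combined -style options, so on a modern tmux the equivalent of the command above would be something like:

set-window-option -g window-status-current-style bg=red,fg=white

The fg=white part is only an example; any style fields accepted by tmux can be combined in the same option.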
210,185 | Recently, because of some problem, I lost all my server files and requested the hosting team to provide my files from a backup. They will provide me a link from which I have to download a compressed file and upload it again to the server. Is there any way to download that file directly onto the server? I have full access to the server. | You may use the wget utility. It has a really simple syntax; all you need to do is:

wget http://link.to.file

and it will be stored in the directory where you run wget . If you'd like to store the downloaded file somewhere else, you may use the -P option, e.g.

wget -P /path/to/store http://link.to.file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119750/"
]
} |
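If wget isn't installed on the server, curl does the same job; -L follows redirects and -o names the output file (backup.tar.gz below is only a placeholder name):

curl -L -o /path/to/store/backup.tar.gz 'http://link.to.file'
# or keep the remote file name:
curl -LO 'http://link.to.file'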
210,197 | If I encrypt (dm-crypt, LUKS) my whole system, how much RAM should I provide? I've understood that the LUKS volume is mounted in RAM... If my system is 10 GB, should I have something like 12 GB of RAM? | You have misunderstood. The LUKS data is stored on disc and encrypted/decrypted a block at a time as necessary (of course there is some caching going on). I don't know the minimum size, but I operated a 32 GB LUKS-encrypted ReiserFS partition on a PC with 1 GB of memory. Encrypting a whole disc shouldn't make any difference compared with using LUKS on a partition. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116392/"
]
} |
210,201 | The duplicates are the same text in different letter cases. I need to count the number of duplicates (case-insensitively) and then remove the duplicates, keeping the case variant with the highest count. Below is an example:

hot chocolate
hot chocolate
hot chocolate
Hot Chocolate
Hot Chocolate
Hot Chocolate
Hot Chocolate
Hot Chocolate
Xicolatada
Xicolatada
Xicolatada
Xicolatada
XICOLATADA
XICOLATADA

Should become:

Hot Chocolate, 8
Xicolatada, 6

This question is similar to this one, but I need to pick the case variant with the highest count and count case-insensitively. | And there's uniq --ignore-case --count | sort --numeric --reverse :

sort /tmp/foo.txt | uniq -ic | sort -nr
      8 hot chocolate
      6 Xicolatada

And to switch around the order, putting a comma in there, add this pipe onto the end:

... | sed -e 's/^ *\([0-9]*\) \(.*\)/\2, \1/'

See the first comment below as to why we have the leading sort. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17948/"
]
} |
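Note that the uniq pipeline above keeps the first line of each sorted group, which is not necessarily the most frequent casing that the question asks for. A hedged awk sketch that reports the variant with the highest count is shown below; it is plain POSIX awk, and the output order of the END loop is unspecified, so append another sort if the order matters.

awk '{
  k = tolower($0); tot[k]++; cnt[k "\t" $0]++
  if (cnt[k "\t" $0] > best[k]) { best[k] = cnt[k "\t" $0]; rep[k] = $0 }
} END { for (k in tot) print rep[k] ", " tot[k] }' /tmp/foo.txt

For the sample input this prints "Hot Chocolate, 8" and "Xicolatada, 6".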
210,202 | I accidentally created a file with the name - (e.g., seq 10 > - ). Then I tried to use less to view it, but it just hangs. I understand that this is happening because less - expects input from stdin, so it does not interpret the - as a file name. I tried less \- but it does not work either. So, is there any way to indicate to less that - is a file and not stdin? The best I could come up with is: find -name '-' -exec less {} + | Just prefix it with ./ :

less ./-

Or use redirection:

less < -

Note that since - (as opposed to -x or --foo-- for instance) is considered a special filename rather than an option, the following doesn't work:

less -- - # THIS DOES NOT WORK | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/210202",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40596/"
]
} |
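The same ./ trick works for cleaning the stray file up afterwards:

rm ./-        # or: rm -- -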
210,228 | I want to add a user to Red Hat Linux that will not use a password for logging in, but will instead use a public key for ssh. This would be on the command line. | Start by creating the user:

useradd -m -d /home/username -s /bin/bash username

Create a key pair on the client you will be sshing from:

ssh-keygen -t rsa

Copy the resulting public key (~/.ssh/id_rsa.pub on the client) onto the RedHat host into /home/username/.ssh/authorized_keys . Set correct permissions on the files on the RedHat host:

chown -R username:username /home/username/.ssh
chmod 700 /home/username/.ssh
chmod 600 /home/username/.ssh/authorized_keys

Ensure that public key authentication is enabled on the RedHat host:

grep PubkeyAuthentication /etc/ssh/sshd_config
# should output:
PubkeyAuthentication yes

If not, change that directive to yes and restart the sshd service on the RedHat host. From the client, start an ssh connection:

ssh username@redhathost

It should automatically look for the key id_rsa in ~/.ssh/ . You can also specify an identity file using:

ssh -i ~/.ssh/id_rsa username@redhathost | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119776/"
]
} |
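The copy-the-key and fix-the-permissions steps above can usually be collapsed into one command, provided you can authenticate once by some other means (for example a temporary password on the new account):

ssh-copy-id -i ~/.ssh/id_rsa.pub username@redhathost

Once key login works, you can forbid password logins entirely by setting the following in /etc/ssh/sshd_config on the RedHat host and restarting sshd:

PasswordAuthentication no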
210,245 | I am having problems with this and I don't know why. There are many related questions but none of them helped me. I have two VMs: CentOS 7 with GNOME, 192.168.1.53, and Mint 17.1 Rebecca with XFCE, 192.168.1.54. I know that by default exporting the display should be straightforward, like:

#While I am logged in on the desktop on the Mint:
user@mint:~$ xhost +
#I am SSHing to the CentOS from the Mint
user@mint:~$ ssh -XY [email protected]
#At the CentOS I export the display
[root@cent ~]$ export DISPLAY=192.168.1.54:0.0
[root@cent ~]$ echo $DISPLAY
192.168.1.54:0.0
#Trying to start a simple program but I get an error message instead:
[root@cent ~]$ xclock
Error: Can't open display: 192.168.1.54:0.0

What am I doing wrong? I tried the suggestions on a number of forums but I still get the error message. I also tried to export the display from the Mint to the CentOS (the opposite way) and I still get the same error, but this time on the Mint. Could it be that the error is because one system has XFCE and the other GNOME? I am thinking that there may be some default security settings in effect on one or both of the distros that I am not aware of. I also tried to edit /etc/gdm/custom.conf on the CentOS as explained here: http://www.softpanorama.org/Xwindows/Troubleshooting/can_not_open_display.shtml | You're trying to create an X tunnel through SSH and then overriding it by specifying an IP address, which bypasses the SSH tunnel. This doesn't work. When SSH tunnelling, SSH deals with transferring data between the local and remote IP addresses by opening a port on localhost on each machine it speaks to. You don't get to specify the IP address of either computer. You need to export the display that is tunnelled through SSH, and that means export DISPLAY=localhost:x.y , which should have been done for you automatically when you connected using ssh -X. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15222/"
]
} |
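In other words, once the ssh -X (or -Y) connection is up, don't touch DISPLAY at all; just check what SSH set for you and run the client:

user@mint:~$ ssh -X [email protected]
[root@cent ~]$ echo $DISPLAY      # typically something like localhost:10.0
[root@cent ~]$ xclock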
210,260 | Say I am logged in as root, and some commands won't let me run them as root. I tried logging in to different accounts, but I'm not able to do that. How can I execute such a command (while logged in as root) without root privileges? | su -c "command and args" username | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119794/"
]
} |
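With sudo available, the equivalents of that su invocation are:

sudo -u username command args     # run one command as that user
su - username                     # or start a full login shell as that user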
210,286 | For example I have a file named 5.jpg . How can I rename it to aaaaa.jpg , i.e. with the character a repeated 5 times? I tried rename -v 's/(\d{1,})/a{$1}/g' * but this renames 5.jpg to a{5}.jpg , which is not what I want. I understand that the second part of the substitution isn't a regexp; this was just an attempt. | At least three different utilities imaginatively named rename(1) are floating around in the Linux waters: (1) the one that came with util-linux , (2) an older Perl script by Larry Wall further munged by Tom Christiansen, and (3) a newer Perl script evolved from the former and included with Unicode::Tussle . As far as I can tell, what you want can't be done with the util-linux version of rename(1) . It can be done with either of the Perl scripts though:

rename -n 's/(\d+)/"a" x $1/e' 5.jpg

Output:

rename 5.jpg aaaaa.jpg

(drop the -n to actually rename the file). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109975/"
]
} |
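Without the Perl rename, a small bash-only sketch for a single file is shown below. It is hypothetical and assumes the part of the name before the dot is purely numeric, as in the question's 5.jpg.

f=5.jpg
n=${f%%.*}                        # 5
printf -v pad '%*s' "$n" ''       # a string of n spaces
mv -- "$f" "${pad// /a}.${f#*.}"  # moves 5.jpg to aaaaa.jpg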