source_id | question | response | metadata
---|---|---|---|
397,402 | I have to find a file with only a given number and a string. Ito find 856 enclosed with wildcards *856* . e.g. I have this fileA.txt with thousands of numbers: 4542354235423542354254235435423532542353454352452434354353454523453454523543455454 I have this script: #!/bin/shfindFile() { while read -r LINE do printf "$LINE\n" grep -il "$LINE" *856* printf "\n" done < /path/to/fileA.txt > /path/to/result.txt}findFile EDIT: The file names doesn't contain the pattern from fileA.txt . Desired output: 45423542354235423542found_file_856_COMPANY_FFW3443E.dat5423543542353254235found_file_856_COMPANY_43R43F.dat345435245243found_file_856_COMPANY_77Y85HHH.dat435435345found_file_856_COMPANY_64Y76H6.dat452345345found_file_856_COMPANY_9630GGTFVF.dat4523543455454found_file_856_COMPANY_2R98JD925.dat | The -p flag is not useful when creating an archive (with -c ), only when extracting (with -x ). From the GNU tar manual: -p , --preserve-permissions , --same-permissions extract information about file permissions (default for superuser) That's a horrible way of saying "preserve permissions and ownerships". From the OpenBSD manual : -p Preserve user and group ID as well as file mode regardless of the current umask(2) . The setuid and setgid bits are only preserved if the user and group ID could be preserved. Only meaningful in conjunction with the -x flag. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231571/"
]
} |
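The accepted approach above can be sketched as a self-contained loop. The paths and file names below are hypothetical stand-ins (throwaway files in a temp directory) so the behaviour can be checked without the real data:

```shell
#!/bin/sh
# A sketch of the loop from the question, restructured so it can be tried
# with throwaway files (real paths and file names would differ).
workdir=$(mktemp -d) && cd "$workdir" || exit 1

printf '4542354235\n' > fileA.txt                    # the numbers to search for
printf '4542354235\n' > found_856_COMPANY_X.dat      # a file whose name matches *856*

while IFS= read -r line; do
    [ -n "$line" ] || continue              # skip blank lines
    printf '%s\n' "$line"                   # print the number being looked up
    grep -l -- "$line" ./*856* 2>/dev/null  # list the *856* files containing it
done < fileA.txt > result.txt

cat result.txt
```

The `--` guards against patterns that begin with a dash, and `grep -l` prints only the matching file names, which is the pairing the desired output asks for.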
397,445 | I cannot find anything in Settings > Mouse and Touchpad for the change. My X11 configs do not work here because I am using Wayland. OS: Linux Debian Stretch 9.1 Gnome: 3.22 Wayland: 1.12 | Install gnome-control-center then open it. Select Devices from the side navigation bar, then select Mouse & Touchpad . There you will find the option to toggle Natural Scrolling | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
397,457 | I'm writing a bash script, and I have a list of dpkg package names and I want my script to write a text file with the following format: name of package / description / name of package / description / ... Here is the simple command that should do it but it does not... cat /media/sdcard/liste_des_paquets_sans_dependances_inverse | while read ligne ; do dpkg-query --showformat='${Package}\n\n${Description}\n\n\n\n' --show $ligne >> descriptions.txt done Maybe you'll see where my fault is? | Install gnome-control-center then open it. Select Devices from the side navigation bar, then select Mouse & Touchpad . There you will find the option to toggle Natural Scrolling | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255124/"
]
} |
397,459 | I have a web application that access a remote storage running Linux to get some files, the problem is that the remote storage have currently 3 million files , so accessing the normal way is a bit tricky. So I needed to work on a script that is going to make it a little bit more easy to use , this script is going to reorganize the files into multiple folders depending on their creation date and specially their names,i made the script and it worked just fine, it intended to do what it meant to do, but it was too slow, 12 hours to perform the work completely (12:13:48 to be precise) . I think that the slowness is coming from the multiple cut and rev calls I make. example : I get the file names with an ls command that I loop into with for, and for each file I get the parent directory and, depending on the parent directory, I can get the correct year: case "$parent" in ( "Type1" ) year=$(echo "$fichier" | rev | cut -d '_' -f 2 | rev );; ( "Type2" ) year=$(echo "$fichier" | rev | cut -d '_' -f 2 | rev);; ( "Type3" ) year=$(echo "$fichier" | rev | cut -d '_' -f 1 | rev | cut -c 1-4);; ( "Type4" ) year=$(echo "$fichier" | rev | cut -d '_' -f 1 | rev | cut -c 1-4);; ( "Type5" ) year=$(echo "$fichier" | rev | cut -d '_' -f 1 | rev | cut -c 1-4);; esac for type1 of files : the file==>MY_AMAZING_FILE_THAT_IMADEIN_YEAR_TY.pdf I need to get the year so I perform a reverse cut: year=$(echo "$file" | rev | cut -d '_' -f 2 | rev );; for type2 of files : the file==>MY_AMAZING_FILE_THAT_IMADE_IN_YEAR_WITH_TY.pdf etc... and then I can mv the file freely : mv $file /some/path/destination/$year/$parent and yet this is the simplest example, there are some files that are much more complex, so to get 1 information I need to do 4 operations, 1 echo , 2rev and 1echo . While the script is running I am getting speeds of 50 files/sec to 100 files\s , I got this info by doing a wc-l output.txt of the script. Is there anything I can do to make it faster? 
or another way to cut the files name? I know that I can use sed or awk or string operations but I did not really understand how. | To get the YEAR portion of the filename MY_AMAZING_FILE_THAT_IMADEIN_YEAR_TY.pdf without using external utilities: name='MY_AMAZING_FILE_THAT_IMADEIN_YEAR_TY.pdf'year=${name%_*} # remove everything after the last '_'year=${year##*_} # remove everything up to the last (remaining) '_' After update to the question: Moving PDF files from under topdir to a directory /some/path/destination/<year>/<parent> where <year> is the year found in the filename of the file, and <parent> is the basename of the original directory that the file was found in: find topdir -type f -name '*.pdf' -exec bash ./movefiles.sh {} + movefiles.sh is a shell script in the current directory: #!/bin/bashdestdir='/some/path/destination'for name; do # get basename of directory parent=${name%/*} parent=${parent##*/} # get the year from the filename: # - Pattern: _YYYY_ (in the middle somewhere) # - Pattern: _YYYYMMDD.pdf (at end) if [[ "$name" =~ _([0-9]{4})_ ]] || [[ "$name" =~ _([0-9]{4})[0-9]{4}\.pdf$ ]]; then year="${BASH_REMATCH[1]}" else printf 'No year in filename "%s"\n' "$name" >&2 continue fi # make destination directory if needed # (remove echo when you have tested this at least once) if [ ! -d "$destdir/$year/$parent" ]; then echo mkdir -p "$destdir/$year/$parent" fi # move file # (remove echo when you have tested this at least once) echo mv "$name" "$destdir/$year/$parent"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217253/"
]
} |
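The parameter-expansion trick from the answer can be checked in isolation. The file name below is a hypothetical stand-in with a real year in the position the answer's `YEAR` placeholder occupies:

```shell
#!/bin/sh
# Pure-shell year extraction, as in the answer; no rev/cut processes needed.
name='MY_AMAZING_FILE_THAT_IMADEIN_2013_TY.pdf'   # hypothetical example name
year=${name%_*}      # strip the last '_' and everything after it
year=${year##*_}     # strip everything up to the last remaining '_'
echo "$year"
```

Because both expansions happen inside the shell, this avoids the four process spawns per file that made the original loop slow.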
397,462 | I do not get any printed result for that, but I don't understand why? read -e -i "no" -p "Install? " resultif [ '$result' == 'yes' ]; then declare -a subs=('one' 'two') for sub in "${subs[@]}" do echo "$sub" donefi | You need to double quote $result rather than single quote, otherwise it won't expand. [ '$result' == 'yes' ] will never evaluate to true because it is trying to compare the literal $result to the literal yes . Additionally, as Kusalananda points out; the == operator is for bash test constructs [[ while the = operator is for standard (POSIX) test constructs [ . Therefore, you should change that line to: [ "$result" = 'yes' ] Another good tool to know about is the set builtin and its -x switch which can be used to trace your scripts. If you add set -x to the top of your original script and run it you will see the following printed out: + read -e -i no -p 'Install? ' resultInstall? yes+ '[' '$result' == yes ']' As you can see, it is trying to compare '$result' to 'yes'. When quoted correctly, you wouldn't see the variable $result but rather it's expansion like so: + read -e -i no -p 'Install? ' resultInstall? yes+ '[' yes == yes ']'+ subs=('one' 'two')+ declare -a subs+ for sub in '"${subs[@]}"'+ echo oneone+ for sub in '"${subs[@]}"'+ echo twotwo Whenever you are banging your head against the wall with a script you should turn on set -x and trace what it's doing and where it's going wrong. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210969/"
]
} |
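The difference between the two quoting styles is easy to see in a two-line experiment:

```shell
#!/bin/sh
result=yes
# Single quotes: compares the literal 7-character string $result with yes -- never true.
[ '$result' = yes ] || echo "single quotes: no match"
# Double quotes: $result expands to its value before the comparison.
[ "$result" = yes ] && echo "double quotes: match"
```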
397,481 | I have a linux machine (turnkey core 14.2) with two network cards. eth0 is a public ip, WAN (let's call it 123.123.123.123 ). eth1 is my network, LAN. I would like to block SSH from the WAN with iptables . I use the command sudo iptables -A INPUT -p tcp -s 123.123.123.123 --dport 22 -j DROP If I then write sudo iptables -L I get the answer Chain INPUT (policy ACCEPT)target prot opt source destinationDROP tcp -- 123.123.123.123 anywhere tcp dpt:ssh Problem is that I'm not blocked if I use PuTTY to connect to 123.123.123.123. Any idea what I'm doing wrong? | You are matching traffic by source address ( -s option), instead of destination address ( -d option), which is why your rule doesn't drop any traffic from other hosts. You can also match by input interface (instead of address) with -i option. For example to drop all incoming traffic to port 22 for eth0 : iptables -A INPUT -i eth0 -p tcp --dport 22 -j DROP | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255136/"
]
} |
397,498 | I have a matrix that looks like following: Input : A B C D E F G H I 0 0 0 0 1 0 0 0 10 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 1 1 0 0 And I would like to extract for each row the list of letter corresponding to the value 1. Output : E,I DDAA,C,G A,D,H A,E,F,G I have tried to split the header and to match the words with numbers but I failed. | In awk : NR == 1 { for(column=1; column <= NF; column++) values[column]=$column; }NR > 1 { output="" for(column=1; column <= NF; column++) if($column) output=output ? output "," values[column] : values[column] print output } | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136099/"
]
} |
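Here is the answer's awk program run against the first rows of the sample matrix, as a self-contained demo:

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
cat > matrix.txt <<'EOF'
A B C D E F G H I
0 0 0 0 1 0 0 0 1
0 0 0 1 0 0 0 0 0
EOF
# Row 1 caches the header letters; later rows emit the letters whose column is 1.
awk 'NR == 1 { for (column = 1; column <= NF; column++) values[column] = $column }
     NR > 1  { output = ""
               for (column = 1; column <= NF; column++)
                   if ($column) output = output ? output "," values[column] : values[column]
               print output }' matrix.txt
```

For the two data rows above this prints `E,I` and `D`, matching the expected output in the question.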
397,516 | Using Debian 9 stable, I want to start a custom shell script before starting NGINX processes and shorewall firewall: Do some init work Mount a directory (overlayfs) to overlay /etc with NGINX configuration, shorewall configuration and /etc/hosts The script also ends with sync , not sure if it's a good idea systemctl list-dependencies default.target ● ├─display-manager.service ● ├─systemd-update-utmp-runlevel.service ● └─multi-user.target ● ├─console-setup.service ● ├─cron.service ● ├─dbus.service ● ├─dropbear.service ● ├─myservice.service <-- My service (link created with systemctl enable) ● ├─networking.service ● ├─nginx.service <-- To be executed after myservice [...] ● ├─basic.target ● │ ├─-.mount ● │ ├─myservice.service <-- My service (link created with systemctl enable) ● │ ├─shorewall.service <-- To be executed after myservice myservice.service ATTEMPT 1 [Unit] Description=My startup service Requires=shorewall.service nginx.service Before=shorewall.service nginx.service [Service] RemainAfterExit=yes ExecStart=/usr/local/bin/myservice start ExecStop=/usr/local/bin/myservice stop [Install] WantedBy=multi-user.target WantedBy=basic.target The logs: journalctl [...] Oct 12 11:31:43 server-dev nginx[448]: nginx: [emerg] host not found in upstream "server-dev.com" in /etc/nginx/sites-enabled/default:33 Oct 12 11:31:43 server-dev nginx[448]: nginx: configuration file /etc/nginx/nginx.conf test failed <== NGINX: BAD Oct 12 11:31:43 server-dev systemd[1]: nginx.service: Control process exited, code=exited status=1 Oct 12 11:31:43 server-dev systemd[1]: Failed to start A high performance web server and a reverse proxy server. Oct 12 11:31:43 server-dev systemd[1]: nginx.service: Unit entered failed state. Oct 12 11:31:43 server-dev systemd[1]: nginx.service: Failed with result 'exit-code'. Oct 12 11:31:43 server-dev systemd[1]: Reached target Multi-User System. Oct 12 11:31:43 server-dev systemd[1]: Reached target Graphical Interface. 
Oct 12 11:31:43 server-dev systemd[1]: Starting Update UTMP about System Runlevel Changes... Oct 12 11:31:43 server-dev systemd[1]: Started Update UTMP about System Runlevel Changes. Oct 12 11:31:43 server-dev server[423]: DO: server start DONE <== END OF SCRIPT myservice Oct 12 11:31:43 server-dev shorewall[449]: Compiling using Shorewall 5.0.15.6... <== SHOREWALL: GOOD Oct 12 11:31:44 server-dev shorewall[449]: Processing /etc/shorewall/shorewall.conf... Oct 12 11:31:44 server-dev shorewall[449]: Loading Modules... Shorewall is systematically started correctly, after the execution of myservice.Nginx is most of the time started during the execution of myservice,before /etc is correctly overlayed (overlaid?),and therefore it fails to initialize properly. myservice.service ATTEMPT 2 I also tried to change the [Install] WantedBy=default.target And change [Unit] Before=multi-user.target It also does not work. How can I ensurethat nginx and shorewall start after the execution of myservice? | If you don't specify the type of your systemd service, it defaults to Type=simple . This means the service is considered started at the moment its main service process has been forked off (at which point it isn't even executing the ExecStart command yet). You probably want to use Type=oneshot instead, which waits for the ExecStart command to exit before considering the service started. See man systemd.service for further details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255158/"
]
} |
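Merging the answer's `Type=oneshot` advice into the asker's unit gives roughly the sketch below. This is untested and keeps the asker's own paths and unit names; with `oneshot`, the `ExecStart` command must finish before the unit counts as started, so units ordered after it wait for the overlay mounts:

```ini
[Unit]
Description=My startup service
Before=shorewall.service nginx.service

[Service]
# oneshot: systemd waits for ExecStart to exit before considering the
# service "started", so nginx and shorewall start only after the overlays exist.
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/myservice start
ExecStop=/usr/local/bin/myservice stop

[Install]
WantedBy=multi-user.target
```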
397,524 | For example, I can do the following touch a or touch ./a Then when I do ls I can view both, so what exactly is the ./ for? | The dot-slash, ./ , is a relative path to something in the current directory. The dot is the current directory and the slash is a path delimiter. When you give the command touch ./a you say "run the touch utility with the argument ./a ", and touch will create (or update the timestamp for) the file a in the current directory. There is no difference between touch a and touch ./a as both commands will act on the thing called a in the current directory. In a similar way, touch ../a will act on the a in the directory above the current directory as .. refers to "one directory further up in the hierarchy". . and .. are two special directory names that are present in every directory on Unix systems. It's useful to be able to put ./ in front of a filename sometimes, as when you're trying to create or delete, or just work with, a file with a dash as the first character in its filename. For example, touch -a file will not create a file called -a file , and neither would touch '-a file' But, touch ./'-a file' would. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230247/"
]
} |
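The dash-in-filename case from the answer can be demonstrated safely in a throwaway directory:

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
touch ./'-a file'   # the ./ keeps touch from parsing the leading dash as an option
ls
```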
397,554 | In order to run a script, I currently have to do a two step process: ssh remote_machine./run_script Is it possible to setup an alias on my host machine such that I can execute an alias, for example: run_script and it will automatically log me into the remote_machine and run the script? | Sure, I do this all the time: alias run_script="ssh remote_machine ./run_script" Note that if the ./run_script script is interactive, you'll need to allocate a TTY using the -t flag to ssh : alias run_script="ssh -t remote_machine ./run_script" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216934/"
]
} |
397,574 | I'm here trying to figure out a way to get this small script working properly. The cat command should give me either 1 or 0: if [cat /sys/block/sda/queue/rotational = 0 ]; then echo "SSD"; else echo "HDD"; fi I googled around and I found that cat command in someone's post. I am trying to make this script print whether drive sda is an SSD or HDD. I can test if it works by just changing the value after the equal sign from 0 to 1 and it should read it the other way. I also want it to just print "SSD or HDD", nothing else to be shown. | The cat command exits 0 on success, non-zero on failure. You don't want the exit code of cat ; you want a value in a file. Use command substitution $(...) , which captures command output. if [ "$(cat /sys/block/sda/queue/rotational)" = 0 ]; then echo "SSD"; else echo "HDD"; fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255177/"
]
} |
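Since `/sys/block/sda/queue/rotational` may not exist on every machine, the fixed test can be tried against a stand-in file; flip the 0 to a 1 to exercise the other branch, exactly as the asker describes:

```shell
#!/bin/sh
# Stand-in for /sys/block/sda/queue/rotational (hypothetical temp file).
rotational_file=$(mktemp)
echo 0 > "$rotational_file"

if [ "$(cat "$rotational_file")" = 0 ]; then
    echo "SSD"
else
    echo "HDD"
fi
```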
397,586 | Is there any way I can print the variable name along with its value? j=jjj; k=kkk; l=lll; for i in j k l; do ...; done Expected output (each variable on a separate line): j = jjj / k = kkk / l = lll. Can anyone suggest a way to get the above result? | A simple way in Bash: j="jjj"; k="kkk"; l="lll"; for i in j k l; do echo "$i = ${!i}"; done The output: j = jjj / k = kkk / l = lll ${!i} - Bash variable expansion/indirection (gets the value of the variable name held by $i ) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255194/"
]
} |
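An equivalent alternative to `${!i}` is a nameref, available in bash 4.3 and later; `ref` below becomes an alias for whichever variable is named in `$i`:

```shell
#!/bin/bash
# Same output as the ${!i} version, via a nameref (bash 4.3+).
j=jjj k=kkk l=lll
for i in j k l; do
    unset -n ref        # drop any previous binding before re-pointing the nameref
    declare -n ref=$i   # ref now refers to the variable named in $i
    echo "$i = $ref"
done
```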
397,589 | Installing it via Packages -> Alt-F -> ownlcloud -> install worked, but I got an empty page accessing https://mynas:8443/owncloud (only for testing: http://mynas:8080/owncloud ) (which forwarded to .../index.php ). | A simple way in Bash: j="jjj"k="kkk"l="lll"for i in j k l; do echo "$i = ${!i}"; done The output: j = jjjk = kkkl = lll ${!i} - Bash variable expansion/indirection (gets the value of the variable name held by $i ) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141555/"
]
} |
397,655 | How to find two files matched data in shell script and duplicate data store in another file in shell? #!/bin/bashfile1="/home/vekomy/santhosh/bigfiles.txt"file2="/home/vekomy/santhosh/bigfile2.txt"while read -r $file1; do while read -r $file2 ;do if [$file1==$file2] ; then echo "two files are same" else echo "two files content different" fi donedone I written code but it didn't work. How to write it? | To just test whether two files are the same, use cmp -s : #!/bin/bashfile1="/home/vekomy/santhosh/bigfiles.txt"file2="/home/vekomy/santhosh/bigfile2.txt"if cmp -s "$file1" "$file2"; then printf 'The file "%s" is the same as "%s"\n' "$file1" "$file2"else printf 'The file "%s" is different from "%s"\n' "$file1" "$file2"fi The -s flag to cmp will make the utility "silent". The exit status of cmp will be zero when comparing two files that are identical. This is used in the code above to print out a message about whether the two files are identical or not. If your two input files contains list of pathnames of files that you wish to compare, then use a double loop like so: #!/bin/bashfilelist1="/home/vekomy/santhosh/bigfiles.txt"filelist2="/home/vekomy/santhosh/bigfile2.txt"mapfile -t files1 <"$filelist1"while IFS= read -r file2; do for file1 in "${files1[@]}"; do if cmp -s "$file1" "$file2"; then printf 'The file "%s" is the same as "%s"\n' "$file1" "$file2" fi donedone <"$filelist2" | tee file-comparison.out Here, the result is produced on both the terminal and in the file file-comparison.out . It is assumed that no pathname in the two input files contain any embedded newlines. The code first reads all pathnames from one of the files into an array, files1 , using mapfile . I do this to avoid having to read that file more than once, as we will have to go through all those pathnames for each pathname in the other file. You will notice that instead of reading from $filelist1 in the inner loop, I just iterate over the names in the files1 array. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255262/"
]
} |
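The exit-status behaviour of `cmp -s` that both scripts rely on, shown in miniature with two temporary files:

```shell
#!/bin/sh
a=$(mktemp) b=$(mktemp)
printf 'hello\n' > "$a"
printf 'hello\n' > "$b"
cmp -s "$a" "$b" && echo "identical"     # exit status 0: contents match

printf 'extra\n' >> "$b"
cmp -s "$a" "$b" || echo "different"     # non-zero exit status: contents differ
```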
397,656 | I was wondering what is the fastest way to run a script , I've been reading that there is a difference in speed between showing the output of the script on the terminal, redirecting it to a file or perhaps /dev/null . So if the output is not important , what is the fastest way to get the script to work faster , even if it's minim . bash ./myscript.sh -or-bash ./myscript.sh > myfile.log-or-bash ./myscript.sh > /dev/null | Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved. Consequently ./script.sh is slower than ./script.sh >script.log , which in turn is slower than /script.sh >/dev/null , because the latter involve less work. However whether this makes enough of a difference for any practical purpose depends on how much output your script produces and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections. Edit: Some quick (and completely broken) benchmarks: In a Linux console, 240x75: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)real 3m52.053suser 0m0.617ssys 3m51.442s In an xterm , 260x78: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)real 0m1.367suser 0m0.507ssys 0m0.104s Redirect to a file, on a Samsung SSD 850 PRO 512GB disk: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file) real 0m0.532s user 0m0.464s sys 0m0.068s Redirect to /dev/null : $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null) real 0m0.448s user 0m0.432s sys 0m0.016s | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217253/"
]
} |
397,668 | I've installed RHEL v5 on my PC. The installation was successful, but after that I came across 2 lines on boot-up, the first one was setting clock with OK message and the second one was starting udev with OK message . After that a black screen showed up. I searched the internet and came to know that the systems which do have integrated graphics card will not load during boot-up, so the solution I found was to do the nomodeset option on GRUB, but I am very new to Linux so I don't know how to do this. | Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved. Consequently ./script.sh is slower than ./script.sh >script.log , which in turn is slower than /script.sh >/dev/null , because the latter involve less work. However whether this makes enough of a difference for any practical purpose depends on how much output your script produces and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections. Edit: Some quick (and completely broken) benchmarks: In a Linux console, 240x75: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)real 3m52.053suser 0m0.617ssys 3m51.442s In an xterm , 260x78: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)real 0m1.367suser 0m0.507ssys 0m0.104s Redirect to a file, on a Samsung SSD 850 PRO 512GB disk: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file) real 0m0.532s user 0m0.464s sys 0m0.068s Redirect to /dev/null : $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null) real 0m0.448s user 0m0.432s sys 0m0.016s | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255271/"
]
} |
397,677 | Let's suppose Mary is a directory. Is the following path ~/Mary relative? | No, it's not relative. It's a full path, with ~ being an alias. Relative paths describe a path in relation to your current directory location. However, ~/Mary is exactly the same, no matter which directory you're currently in. Assuming you were currently logged in as Bob and also in the directory /home/Bob , then ../Mary would be an example of a relative path to /home/Mary . If you were currently in /etc/something then ~/Mary would still be /home/Bob/Mary but ../Mary would now be /etc/Mary . Note that Bash handles ~ in particular ways, and that it doesn't always translate to $HOME . For further reading, see Why doesn't the tilde (~) expand inside double quotes? The POSIX standard on tilde expansion The Bash manual on tilde expansion | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255278/"
]
} |
397,747 | I got two files: file1 with about 10 000 lines and file2 with a few hundred lines. I want to check whether all lines of file2 occur in file1. That is: ∀ line ℓ ∈ file2 : ℓ ∈ file1 Should anyone not know what these symbols mean or what "check whether all lines of file2 occur in file1" means: Several equivalent lines in either files don't influence whether the check returns that the files meet the requirement or don't. How do I do this? | comm -13 <(sort -u file_1) <(sort -u file_2) This command will output lines unique to file_2 . So, if output is empty, then all file_2 lines are contained in the file_1 . From comm's man: With no options, produce three-column output. Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files. -1 suppress column 1 (lines unique to FILE1) -2 suppress column 2 (lines unique to FILE2) -3 suppress column 3 (lines that appear in both files) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/397747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147785/"
]
} |
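The same check can be written with temporary sorted files instead of bash's `<( )` process substitution, so it also runs under plain sh; the file names here are illustrative:

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
printf 'a\nb\nc\n' > file_1
printf 'c\na\n'    > file_2      # every line of file_2 occurs in file_1

sort -u file_1 > file_1.sorted
sort -u file_2 > file_2.sorted

# comm -13 prints lines unique to the second file; empty output means containment.
missing=$(comm -13 file_1.sorted file_2.sorted)
if [ -z "$missing" ]; then
    echo "all lines of file_2 occur in file_1"
else
    printf 'lines only in file_2:\n%s\n' "$missing"
fi
```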
397,759 | For a quick benchmarking test, how can nice and ionice be combined to maximum effect, ie for a command to use as little resource as possible (without idling altogether)? (I think it's something like `nice -n 19 ionice -c 2 [command], but not sure about ionice's "-n" (classdata param), the man page is cryptic about its relevance.) | The full command you want is: chrt -b 0 nice -n 19 ionice -c 2 -n 7 [command] The chrt command at the beginning will switch things to the batch scheduling class, which is equivalent to adding 0.5 to the nice value. The -n option for ionice is a simple priority for the realtime ( -c 1 ) and best-effort ( -c 2 ) options, with lower values being higher priority just like nice values (but in the range 0-7). However, the ionice command is not strictly necessary, since I/O scheduling class and priority are by default derived from the CPU scheduling parameters, and nice -n 19 implies ionice -c 2 -n 7 . However, you can get the absolute minimal resource usage by setting both the CPU and I/O scheduling classes to idle. In both cases, the 'idle' schedulers aren't actually idle schedulers, and you will still be able to use resources, it's just that everything will have higher priority. For the CPU scheduling class, this also uses the chrt command, albeit without needing nice (priority must be set to 0 in the idle scheduling class), and looks like this: chrt -i 0 {command or PID} The nice command on Linux mirrors the SVR4 version, which means that it can't change scheduling class, only nice value (which also behaves differently on Linux than classical UNIX, but that's a bit OT). As the original alternative scheduling classes were the POSIX.1E realtime SCHED_RR and SCHED_FIFO , the command to set scheduling classes ended up being called chrt . The -i option specifies to use the SCHED_IDLE scheduling class For the I/O scheduling class, you use ionice . 
The exact command looks like this: ionice -c 3 {command or PID} The -c option specifies what scheduling class to use, and 3 is the number for the idle class. Note that depending on which block I/O scheduler is being used, this may not actually impact anything. In particular, the noop I/O scheduler doesn't support priorities or scheduling classes at all, and I'm pretty sure the deadline schedulers (both the legacy one, and the blk-mq one) don't either. If you want to do this programmatically, either for your own program, or to adjust things for other processes, check out the man pages for the sched_setscheduler and ioprio_set system calls (although both are worth reading if you just want more background too). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36161/"
]
} |
397,806 | I'm trying to save clipboard content into an sqlite database. Created database and tables. I don't want it to create journal file in every clipboard change, so I tried to pass PRAGMA journal_mode = OFF; flag. But it's tricky to pass those commands in a one liner commands because sqlite only accepts two command like sqlite3 clipboard_archive.db "insert into cb (cb_context) values ('clipboard');" it works. I looked for Q&A sites, some advises echoing the commands in the following way. echo "PRAGMA journal_mode = OFF;" | sqlite3 clipboard_archive.db "insert into cb (cb_context) values ('clipboard');" But PRAGMA journal_mode = OFF; doesn't take effect in that way though it works within the sqlite3 command prompt. What's wrong with my one liner script? | Not sure why you want to use SQLite if you don't want the journal (have you considered the much faster WAL mode if speed is a concern?) but you can give multiple commands separated by semicolons: sqlite3 clipboard_archive.db "PRAGMA journal_mode = OFF; insert into cb (cb_context) values ('clipboard');" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29606/"
]
} |
397,845 | I'm writing an alias that takes input for a file name and then after making sure the file is regular and readable copies the file using cp -i to the back up folder $HOME/Backup I am very new to UNIX so I am experiencing some difficulties accomplishing this. Here is my code: alias getname='read filename'alias vfile='getname; if [ ! -f $filename ]; then echo "Irregular file"; (exit 1); elif [ ! -r $file ]; then echo "Not readable"; (exit 2); fi;'alias backup='vfile; if [ vfile ]; then cp -i $filename $home/Backup; fi;' I have tested the vfile alias and it works, the errors I am getting are: -bash: syntax error near unexpected token ';' I started getting this error as soon as I included vfile as the first operation; if I don't run vfile before the if statement, it will use the filename from the last time it ran. I have to make sure that vfile did not produce an error before I can copy it. Before I added vfile as the first command I got this error: cp: cannot create regular file '/Backup/*': No such file or directory but there is indeed a directory named Backup in my home folder so I don't know what's causing this either. | Not sure why you want to use SQLite if you don't want the journal (have you considered the much faster WAL mode if speed is a concern?) but you can give multiple commands separated by semicolons: sqlite3 clipboard_archive.db "PRAGMA journal_mode = OFF; insert into cb (cb_context) values ('clipboard');" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255371/"
]
} |
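For the question above, folding the three aliases into one function avoids both the subshell (exit 1) problem (a subshell's exit cannot stop the caller) and the $home / $HOME case mix-up. A minimal sketch along the lines of the question's logic:

```shell
backup() {
    printf 'File name: '
    read -r filename
    if [ ! -f "$filename" ]; then
        echo "Irregular file" >&2
        return 1
    elif [ ! -r "$filename" ]; then
        echo "Not readable" >&2
        return 2
    fi
    mkdir -p "$HOME/Backup"
    cp -i "$filename" "$HOME/Backup/"
}
```

Here return stops the function with a status the caller can test ( backup || echo failed ), which is what the (exit 1) subshells inside the aliases could not do.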
397,853 | I created systemd unit for x0vncserver like this

[Unit]
Description=Remote desktop service (VNC)
After=graphical.target

[Service]
Type=forking
User=user
ExecStart=/usr/bin/sh -c '/usr/bin/x0vncserver -display :0 -rfbport 5900 -passwordfile /home/user/.vnc/passwd &'

[Install]
WantedBy=multi-user.target

and enabled it to run but it fails. Then I realized as I am trying to load the original desktop using x0vncserver, I can only do that after loading the desktop itself completely. So I have to set the system unit to run after loading the desktop but how? Or any timed way to set it up? Though it may be possible by using desktop session tools but any systemd way solution? and my default.target is
# systemctl get-default
graphical.target
 | The first suggestion didn't work for me. So I tried a workaround instead. I set my x0vncserver systemd unit as follows

[Unit]
Description=Remote desktop service (VNC)
After=multi-user.target

[Service]
Type=forking
User=user
ExecStart=/bin/sh -c '/usr/bin/x0vncserver -display :0 -rfbport 5900 -passwordfile /home/user/.vnc/passwd &'

[Install]
WantedBy=default.target

And then as the above service fails because it tries to load before the desktop:0 loads, I set a systemd timer unit as x0vncserver.timer to run the x0vncserver.service unit after a defined time considering the desktop loading time for my machine (with poor old config) like below

[Unit]
Description=x0vncserver timer

[Timer]
# Time to wait after booting before it run for first time
OnBootSec=2m
Unit=x0vncserver.service

[Install]
WantedBy=default.target

And then I activated the timer unit by systemctl enable x0vncserver.timer and rebooted. This time it worked as my goal was to start the server without my manual intervention :). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179857/"
]
} |
397,919 | I have a file something like this: H|ACCT|XEC|1|TEMP|20130215035845849002|48|1208004|100|||1849007|28|1208004|100|||1T|2|3 Note that there are extra empty lines at the end of the file. I want to replace the value of column 5 with column 4's value in all the lines except first and last non-blank line. I cannot rely on the number of fields as the last line may have as many fields as the other ones, nor on the lines to modify always starting with a number. I tried the code below: awk 'BEGIN{FS="|"; OFS="|"} {$5=$4; print}' in.txt Output is: H|ACCT|XEC|1|1|20130215035845||||849002|48|1208004|100|100||1||||849007|28|1208004|100|100||1||||T|2|3|||||||||||||| Expected output: H|ACCT|XEC|1|TEMP|20130215035845|849002|48|1208004|100|100||1849007|28|1208004|100|100||1T|2|3 How can I skip the first and last non-blank lines from getting changed? I also want to skip blank lines. | Here you go with awk, processing the file only once. awk -F'|' 'NR==1{print;next} m && NF{print m} NF{l="\n"$0; $5=$4; m="\n"$0; c=0}; !NF{c++}END{ print l; for (; i++<c;)print }' OFS='|' infile Explanation: We skip the first line, so its 5th field is never replaced with the 4th field's value: we just print it and do next . For every later line that is not empty (that is, has at least one field, NF ), we first save the whole line, with a newline prepended, in l="\n"$0 ; then set the 5th field to the 4th field's value with $5=$4 ; and finally save the modified line, again with a newline prepended, in m="\n"$0 . The variable c is a counter for empty lines: !NF{c++} increments it on every line with no fields, and c=0 resets it whenever a non-empty line is seen. The block m && NF{print m} then prints the saved modified line, but only once m has been set and only on non-empty lines (the NF test guards against printing it again on empty lines). At the end we print the last line untouched, from the backup l taken before each replacement, in END{ print l; ... } , followed by as many empty lines as were counted after it, with the loop for (; i++<c;)print }' . It's much shorter if you don't need the trailing empty lines. awk -F'|' 'NR==1{print;next} m && NF{print m} NF{l=$0; $5=$4; m=$0} END{ print l}' OFS='|' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102866/"
]
} |
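The shorter variant can be exercised directly on the sample data from the question, fed through a pipe instead of infile :

```shell
printf '%s\n' 'H|ACCT|XEC|1|TEMP|20130215035845' \
              '849002|48|1208004|100|||1' \
              '849007|28|1208004|100|||1' \
              'T|2|3' '' '' |
awk -F'|' 'NR==1{print;next} m && NF{print m} NF{l=$0; $5=$4; m=$0} END{ print l}' OFS='|'
```

which prints the header and last line untouched and the middle rows with field 5 copied from field 4:

H|ACCT|XEC|1|TEMP|20130215035845
849002|48|1208004|100|100||1
849007|28|1208004|100|100||1
T|2|3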
397,925 | I know this question has been asked multiple times but the solution doesn't work for me. I have 2 files: 1_all and 2_ovo. They both contain a list of items. 1_all contains items from 2_ovo plus other items. I need to delete all the items from 1_all which are the same as in 2_ovo. This is what I've got: for i in 2_ovo dosed -i "/$i/d" 1_alldone So take the value from 2_ovo and delete this value from 1_all. I know that variables in sed should be handled with double quotes, yet the command does nothing at all. If I substitute $i with a real value, the value is deleted from the file as expected. Any ideas? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247441/"
]
} |
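For the task in the question above (removing from 1_all every line that also appears in 2_ovo ), a single grep needs no loop at all; note that for i in 2_ovo iterates over the literal word 2_ovo , not over the file's lines. A sketch with invented sample items, assuming whole-line literal matches:

```shell
printf '%s\n' apple banana cherry > 1_all
printf '%s\n' banana              > 2_ovo
# -v invert, -x whole line, -F literal strings, -f patterns from file
grep -vxFf 2_ovo 1_all
```

This prints apple and cherry, i.e. 1_all with every 2_ovo item removed; redirect to a temporary file and move it back to update 1_all in place.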
397,951 | If you have both .bashrc or .zshrc how to know which one is preferred by the system? I am sure there is a chain of command or preference but do not know how to figure that out. | They’re not used together: .bashrc is read by Bash, .zshrc by Zsh, so which one is used depends on which shell you’re using. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
397,953 | Is there a nice way to get the output of pstree in some machine readable machine output without a bunch of code or horrible parsing? I just really want a list of all descendant processes. Edit: specific usecase: get all descendants > useful_pstree $PID10101012101011013 a more general usecase might give me beautiful JSON # json_pstree $PID { 'pid': 1010, children: { ... Although... I don't really know a nice way of easily doing recursion of json structures from the command line (à la jq ) | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36185/"
]
} |
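For the flat "get all descendants" list in the question, one option is a small recursive shell function over ps output. A sketch, assuming a ps that supports --ppid (e.g. procps on Linux); useful_pstree is the question's invented name for exactly this:

```shell
descendants() {
    # print each child PID, then recurse into it
    for child in $(ps -o pid= --ppid "$1"); do
        printf '%s\n' "$child"
        descendants "$child"
    done
}
# descendants "$PID" prints one descendant PID per line
```

The output is one PID per line with no tree decoration, which pipes cleanly into xargs, while, or further filtering.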
398,033 | On Windows you can see the actual size of a file/directory and the size on disk; the size on disk depends on block size. How can I see this on Linux (Mint)? I have a disk with thousands of pictures with varied sizes. I want to see their actual size and the space they take up on disk, especially directories. On Windows the difference can be in gigabytes of wasted space. | I think what you are looking for is du . Executing du -s <directory> shows you how much disk space a directory's contents use up. du can also count the sizes of each file in the directory individually and tell you how big their total content is: du -s --apparent-size <directory> If you want to know the amount of "wasted" space resulting from allocation in blocks, just subtract the second command's result from the first's. Note: --apparent-size reports the size in kilobytes. You can use the -b flag instead, if you want to know the exact number of bytes a file contains. This is useful if you want to know how many bytes (without headers and such) you would need to send over the network to deliver the file, for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255558/"
]
} |
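The difference is easy to see on a file whose content size you know exactly (GNU du assumed; the demo file names are invented):

```shell
mkdir -p du_demo
printf 'x' > du_demo/tiny    # a file holding exactly 1 byte
du -b  du_demo/tiny          # apparent size in bytes: 1
du -B1 du_demo/tiny          # space allocated on disk, typically one whole block (e.g. 4096)
```

The gap between the two numbers, summed over thousands of small files, is exactly the "wasted space" the question describes.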
398,106 | I'm setting up a docker container which requires a cronjob to do a backup using awscli . I'm having a problem with the cron job being able to access the environment variables of the docker container. As I work around on startup I print all environment variables to a file printenv > /env . When I try to use source from the cron job (I have tried both directly in crontab and in a script called by crontab) it doesn't seem to work. I made a simplified version of my project to demonstrate the issue (including rsyslog for logging) : Dockerfile:

FROM debian:jessie

# Install aws and cron
RUN apt-get -yqq update
RUN apt-get install -yqq awscli cron rsyslog

# Create cron job
ADD crontab /etc/cron.d/hello-cron
RUN chmod 0644 /etc/cron.d/hello-cron

# Output environment variables to file
# Then start cron and watch log
CMD printenv > /env && cron && service rsyslog start && tail -F /var/log/*

crontab:

# Every 3 minutes try to source /env and run `aws s3 ls`.
*/3 * * * * root /usr/bin/env bash & source /env & aws s3 ls >> /test 2>&1

When I start the container I can see /env was created with my variables but it never gets sourced. | First of all, the command's (well, shell builtin's) name is source . Unless you have written a script called source and put it in / , you want source and not /source . The next issue is that cron usually uses whatever you have as /bin/sh and source is a bashism (or other such more complex shells). The portable, POSIX-compliant command for sourcing a file is . . So, try that instead of source :

*/3 * * * * root /usr/bin/env bash & . /env & aws s3 ls >> /test 2>&1

Also, I don't quite understand what that is supposed to be doing. What's the point of starting a bash session and sending it to the background? If you want to use bash to run the subsequent commands, you'd need:

*/3 * * * * root /usr/bin/env bash -c '. /env && aws s3 ls' >> /test 2>&1

I also changed the & to && since sourcing in the background is pointless as far as I can see. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
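The bashism-vs-POSIX point is easy to check outside cron; the variable FOO and the file name env.demo are made up for the demo:

```shell
echo 'FOO=bar' > env.demo
# "source" only exists in bash and similar shells; "." is the portable spelling:
sh -c '. ./env.demo && printf "%s\n" "$FOO"'    # prints: bar
```

Under a strictly POSIX /bin/sh (such as dash, which Debian's cron uses), replacing the dot with source here fails with "source: not found".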
398,118 | I can append to the start of a file fine with: sed -i '1s/^/word\n/' file I'm reading that if I use double quotes I can expand variables, so I try: sed -i "1s/^/$(printenv)\n/" file I end up getting back: sed: -e expression #1, char 15: unterminated `s' command What is happening here. Is it related to the contents of the variable or something else? | I think the following would work: sed -i '1 e printenv' file From the GNU sed manual: 'e COMMAND' Executes COMMAND and sends its output to the output stream. The command can run across multiple lines, all but the last ending with a back-slash. Alternatively, you can use cat , but this requires creating a temporary file: cat <(printenv) file > temporary_file; mv temporary_file file If the package moreutils is installed on your machine, you can avoid creating a temporary file manually by using sponge : cat <(printenv) file | sponge file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
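Where process substitution or sponge is unavailable, the same prepend can be done with plain POSIX shell and an explicit temporary file ( echo stands in for printenv here only to keep the demo deterministic):

```shell
printf 'old first line\n' > file.demo
{ echo 'new header'; cat file.demo; } > file.demo.tmp &&
mv file.demo.tmp file.demo
```

The output must go to a temporary file first; redirecting straight onto file.demo would truncate it before cat gets to read it.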
398,142 | I have the following code that I run on my Terminal. LC_ALL=C && grep -F -f genename2.txt hg38.hgnc.bed > hg38.hgnc.goi.bed This doesn't give me the common lines between the two files. What am I missing there? | Use comm -12 file1 file2 to get the lines common to both files. You may also need your files to be sorted for comm to work as expected. comm -12 <(sort file1) <(sort file2) From man comm :
-1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2)
Or using the grep command you need to add the -x option to match the whole line as the pattern. The -F option tells grep to treat the pattern as a fixed string, not a regex. grep -Fxf file1 file2 Or using awk . awk 'NR==FNR{seen[$0]=1; next} seen[$0]' file1 file2 This reads each whole line of file1 into an array called seen , keyed by the line itself (in awk, $0 represents the whole current line). We used NR==FNR as the condition to run the following block only for the first input file1 , not file2 : FNR is the line number within the current input file, while NR is the cumulative line number across all inputs, so the two are equal only while the first file is being read. The next tells awk to skip the rest of the program and move on to the next line; once all lines of file1 have been read, NR is no longer equal to FNR . The bare seen[$0] therefore runs only for the second file2 , and for each line of file2 it looks into the array and prints the line if it exists there. Another simple option is using sort and uniq : sort file1 file2|uniq -d This prints both files sorted, then uniq -d prints only the duplicated lines. BUT this is only guaranteed when there are NO duplicated lines within either file itself; otherwise the following is always correct, even if lines are duplicated within the files: uniq -d <(sort <(sort -u file1) <(sort -u file2)) | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/398142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199494/"
]
} |
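A tiny worked example of the comm approach (contents invented; both files are created already sorted):

```shell
printf '%s\n' apple banana cherry > f1.demo
printf '%s\n' banana cherry date  > f2.demo
comm -12 f1.demo f2.demo    # prints the two common lines: banana, cherry
```

Dropping -12 shows all three columns (unique to file 1, unique to file 2, common), which makes it obvious what the two suppression flags remove.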
398,148 | On OSX, I am building a function to validate date formats and then convert them to epoch times. The function should validate that the date is in one of the following formats, if not error: 01/01/1970 10:00PM or 10:00PM ( %m/%d/%Y %I:%M%p or %I:%M%p ) FUNCTION checkTIME () { local CONVERT_CHK_TIME="$1" if [[ "$CONVERT_CHK_TIME" =~ ^(0[0-9]|1[0-2]):[0-9][0-9](AM|PM)$ ]]; then CONVERT_TIME="$(date -j -f "%I:%M%p" "$CONVERT_CHK_TIME" "+%s")" elif [[ "$CONVERT_CHK_TIME" =~ (0[0-9]|1[0-2])\/([0-2][0-9]|3[0-1])\/\d{4}\s[0-9][0-9]:[0-9][0-9](AM|PM) ]]; then CONVERT_TIME="$(date -j -f "%m/%d/%Y %I:%M%p" "$CONVERT_CHK_TIME" "+%s")" else echo "ERROR!" exit 1 fi} It currently works fine for 10:00PM but is failing to match when I try 01/10/2017 10:00PM I'm calling it as follows: ./convert '01/10/2017 10:00PM'......+ [[ -n 01/10/2017 10:00PM ]]+ checkTIME '01/10/2017 10:00PM'+ local 'CONVERT_CHK_TIME=01/10/2017 10:00PM'+ [[ 01/10/2017 10:00PM =~ ^(0[0-9]|1[0-2]):[0-9][0-9](AM|PM)$ ]]+ [[ 01/10/2017 10:00PM =~ (0[0-9]|1[0-2])/([0-2][0-9]|3[0-1])/d{4}s[0-9][0-9]:[0-9][0-9](AM|PM) ]]+ echo 'ERROR!'ERROR!+ exit 1 Thanks! I've also tried the following regex: (0[0-9]|1[0-2])\/([0-2][0-9]|3[0-1])\/\d{4}\ [0-9][0-9]:[0-9][0-9](AM|PM) | \d matches a decimal digit in some versions of regex (perl), but does not in the Extended Regular Expressions used for the =~ operator of the [[ command in bash . Therefore, change the \d to [0-9] for a pattern that will match 4 decimal digits. Similarly for \s . To match one literal space character, replace the \s with an escaped space ( \ ). If you want to match 1 or more blanks (spaces or tabs) then replace the \s with [[:blank:]]+ . More importantly, to avoid these regex mix-ups: man bash says that =~ regular expressions match according to the extended regular expression syntax, as documented in regex(3) . man 3 regex (POSIX regex functions) says SEE ALSO regex(7) . 
man 7 regex gives a description of the regular expression syntax, and says SEE ALSO POSIX.2, section 2.8 (Regular Expression Notation) . You can find the complete POSIX Extended Regular Expressions syntax described in The Open Group's Posix Regular Expressions documentation . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237982/"
]
} |
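Applying those two substitutions to the failing branch ( \d{4} becomes [0-9]{4} , \s becomes a literal space) yields a pattern that matches the input from the question. A quick check, run through bash -c since [[ ... =~ ... ]] is bash syntax:

```shell
bash -c '
  re="^(0[0-9]|1[0-2])/([0-2][0-9]|3[0-1])/[0-9]{4} (0[0-9]|1[0-2]):[0-9][0-9](AM|PM)$"
  if [[ "01/10/2017 10:00PM" =~ $re ]]; then echo match; else echo no match; fi
'    # prints: match
```

Keeping the pattern in an unquoted variable ( $re ) also sidesteps bash's quoting pitfalls around =~ , where quoted parts are matched literally.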
398,204 | This works perfectly:
$ inotifywait --event create ~/foo
Setting up watches.
Watches established.
/home/ron/foo/ CREATE bar
However, this just sits there when directory tun0 is created under /sys/devices/virtual/net.
$ inotifywait --event create /sys/devices/virtual/net
Setting up watches.
Watches established.
Since those folders are world readable, I'd expect inotifywait to work. So, what am I doing wrong? Thanks | Although the inotify FAQ implies partial support: Q: Can I watch sysfs (procfs, nfs...)? Simply spoken: yes, but with some limitations. These limitations vary between kernel versions and tend to get smaller. Please read information about particular filesystems. it does not actually say what might be supported (or in which kernel version, since that's mostly down to the inotify support in the filesystem itself rather than the library/utilities). A simple explanation is that it doesn't really make sense to support inotify for everything in /sys (or /proc ) since they don't get modified in the conventional sense. Most of these files/directories represent a snapshot of kernel state at the time you view them . Think of /proc/uptime as a simple example: it contains the uptime accurate to the centisecond. Should inotify notify you 100 times a second that it was "written" to? Apart from not being very useful, it would be both a performance issue and a tricky problem to solve since nothing is generating inotify events on behalf of these fictional "writes". Within the kernel inotify works at the filesystem API level . The situation then is that some things in sysfs and procfs do generate inotify events, /proc/uptime for example will tell you when it has been accessed (access, open, close), but on my kernel /proc/mounts shows no events at all when file systems are mounted and unmounted.
Here's Greg Kroah-Hartman's take on it: http://linux-fsdevel.vger.kernel.narkive.com/u0qmXPFK/inotify-sysfs and Linus: http://www.spinics.net/lists/linux-fsdevel/msg73955.html (both threads from 2014 however) To solve your immediate problem you may be able to use dbus, e.g. dbus-monitor --monitor --system (no need to be root) will trigger on tun devices being created and removed (though mine doesn't show the tun device name, only the HAL string with the PtP IP); udevadm monitor (no need to be root); or fall back to polling the directory (try: script to monitor for new files in a shared folder (windows host, linux guest) ). (With udev you could also use inotifywait -m -r /dev/.udev and watch out for files starting with "n", but that's quite an ugly hack.)
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39069/"
]
} |
398,212 | How do I sort a file containing the following? (s=second, h=hour, d=day, m=minute)
1s
2s
1h
2h
1m
2m
2s
1d
1m
 | awk '{ unitvalue=$1; }; /s/ { m=1 }; /m/ { m=60 }; /h/ { m=3600 }; /d/ { m=86400 }; { sub("[smhd]","",unitvalue); unitvalue=unitvalue*m; print unitvalue " " $1; }' input | sort -n | awk '{ print $2 }'
1s
2s
2s
1m
1m
2m
1h
2h
1d
 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10126/"
]
} |
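The whole pipeline from the answer can be run as-is by feeding the sample list through a pipe instead of the input file:

```shell
printf '%s\n' 1s 2s 1h 2h 1m 2m 2s 1d 1m |
awk '{ unitvalue=$1; }; /s/ { m=1 }; /m/ { m=60 }; /h/ { m=3600 }; /d/ { m=86400 }; { sub("[smhd]","",unitvalue); unitvalue=unitvalue*m; print unitvalue " " $1; }' |
sort -n | awk '{ print $2 }'
```

which prints 1s 2s 2s 1m 1m 2m 1h 2h 1d, one per line: the first awk turns every entry into "seconds original", sort -n orders by the numeric key, and the last awk strips the key off again.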
398,219 | Is a device driver a program that runs on its own, or is it just a library (a group of functions) that is loaded in memory and programs can call one of its functions (so it is not running on its own). And if it is a program, does it have a process ID, so can I for example terminate a device driver the same way I can terminate any other process? | On Linux, many device drivers are part of the kernel, not libraries or processes. Programs interact with these using device files (typically in /dev ) and various system calls such as open , read , write , ioctl ... There are exceptions however. Some device drivers use a mixture of kernel driver stubs and user-space libraries ( e.g. using UIO). Others are implemented entirely in user-space, usually on top of some bit-banging interface (UART or GPIO). In both cases, they’re generally in-process, so you won’t see a separate process, just the process that’s using the device. To “terminate” a device driver, you’d have to stop all the processes using it, then remove its kernel modules (assuming it’s built as modules), and optionally any other modules it uses and which are no longer necessary. You can list the modules on your system using lsmod , and unload them using rmmod or modprobe -r , both of which will only work if lsmod indicates they have no users. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255688/"
]
} |
398,223 | I am removing stop words from a text, roughly using this code I have the following $ cat filefiletypesextensions$ cat stopwordsifiletypes grep -vwFf stopwords file I am expecting the result: extensions but I get the ( I think incorrect) fileextensions It is as if the word file has been skipped in the stopwords file.Now here's the cool bit: if I modify the stopwords file, by changing the single word/letter i on the first line, to any other ascii letter apart from f , i , l , e , then the same grep command gives me a different and correct result of extensions . What is going on here and how do I fix it? I'm using grep (BSD grep) 2.5.1-FreeBSD on a Mac OSX GNU bash, version 4.4.12(1) | This was a bug in bsdgrep , relating to a variable that tracks the part of the current line still to scan that is overwritten with successive calls to the regular expression matching engine when multiple patterns are involved. local fix You can work around this to an extent by not using the -w option, which relies upon this variable for correct operation and thus is failing, but instead using the regular expression extensions that match the beginning and endings of words, making your stopwords file look like: \<i\>\<file\>\<types\> This workaround will also require that you do not use the -F option. Note that the documented regular expression components [[:<:]] and [[:>:]] that the re_format manual tells you about will not work here. This is because the regular expression library that is compiled into bsdgrep has GNU regular expression compatibility support turned on. This is another bug, which is reportedly fixed. service fix This bug was fixed earlier this year. The fix has not yet made it into the STABLE or RELEASE flavours of FreeBSD, but is reportedly in CURRENT. For getting this into the MacOS version of grep , that is derived from FreeBSD's bsdgrep , please consult Apple. ☺ Further reading Jonathan de Boyne Pollard (2017-10-15). 
bsdgrep behaves incorrectly when given multiple patterns . Bug #223031. FreeBSD Bugzilla. Kyle Evans (2017-04-03). bsdgrep: fix matching behaviour . Revision 316477. FreeBSD source. Kyle Evans (2017-05-02). bsdgrep: fix -w -v matching improperly with certain patterns . Revision 317665. FreeBSD source. Nathan Weeks (2014-06-16). grep(1) and bsdgrep(1) do not recognize [[:<:]] and [[:>:]] . Bug #191086. FreeBSD Bugzilla. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231830/"
]
} |
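The word-anchor workaround can be tried with any grep whose regex engine supports \< and \> (GNU grep does as well); the file contents are taken from the question:

```shell
printf '%s\n' file types extensions > file.demo2
printf '%s\n' '\<i\>' '\<file\>' '\<types\>' > stopwords.demo
grep -vf stopwords.demo file.demo2    # note: no -w and no -F; prints: extensions
```

The \<i\> pattern no longer matches the i inside "extensions" because it requires a word boundary on each side, which is exactly what -w was supposed to provide.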
398,280 | I've searched for an answer for the differences and usage of these two configuration parameters in the openssl config file. certs = ... # Where the issued certs are and new_certs_dir = ... # default place for new certs The Network Security with OpenSSL O'Reilly book also shows these two parameters in the default openssl config file, but certs is never used and never described. By my tests with openssl , all certificates are stored in the folder defined by new_certs_dir . What is the difference between these two parameters? And is the parameter certs used somewhere? | As shown in the documentation https://www.openssl.org/docs/man1.1.0/apps/ca.html , new_certs_dir is used by the CA to output newly generated certs; certs is not used there. However, it is referenced in the demoCA layout: "./demoCA/certs - certificate output file". certs is ALSO not used for certificate chains, as shown here: https://www.openssl.org/docs/man1.1.0/apps/pkcs12.html or https://www.openssl.org/docs/man1.1.0/apps/verify.html Note that /etc/ssl/certs is the default location for issued certs, but the certs variable is $dir/certs , so it would be ./demoCA/certs . I think we all agree it's for issued certs specific to the CA. This makes sense because the CA might be signing certs that are chained to certs not yet issued by any public cert authority. But where is the documentation for this? I believe it's an artifact of the configuration file. It used to be used for options like certificate , which would hold the ca.pem within certs , so certificate=$certs/ca.pem . I vaguely recall having this exact same question until I realized it was used later in the config file, but now it's not. Edit: It gets weirder. The current version of ca.c here: https://github.com/openssl/openssl/blob/master/apps/ca.c does not reference certs . But much older versions, such as this: https://github.com/openssl/openssl/blob/d02b48c63a58ea4367a0e905979f140b7d090f86/apps/ca.c reference it but do nothing with it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255719/"
]
} |
398,283 | For some reason I'm looking into this xkb symbol file, and I'm seeing some group of keys designated as FOUR_LEVEL_SEMIALPHABETIC . Now, the definition of it in types/caps is: type "FOUR_LEVEL_SEMIALPHABETIC" { modifiers = Shift+Lock+LevelThree; map[None] = Level1; map[Shift] = Level2; preserve[Lock] = Lock; map[LevelThree] = Level3; map[Shift+LevelThree] = Level4; map[Lock+LevelThree] = Level3; map[Lock+Shift+LevelThree] = Level4; preserve[Lock+LevelThree] = Lock; preserve[Lock+Shift+LevelThree] = Lock; level_name[Level1] = "Base"; level_name[Level2] = "Shift"; level_name[Level3] = "Alt Base"; level_name[Level4] = "Shift Alt";}; }; Lock means Caps Lock , right? Ok, fine. But What's with this LevelThree business? From the text I thought maybe it's the Alt key, but that doesn't seem to work, i.e. using one of these supposed layouts ( il ), I don't get Level3 or Level4 characters typed using the Alt key - and I tried all combinations - with/out Shift, Caps Lock on/off. What am I doing wrong? | What's with this LevelThree business? In the model expounded by ISO/IEC 9995, which is the international standard (set) that covers computer keyboards, keys can have one or more levels . What you may think of as "modifier keys" select amongst the available levels, sometimes in a complex fashion. (Think of the operation of a mechanical typewriter, where the shift key actually moved part of the mechanism to a different level, and where the shift key often mechanically unlocked a "shift lock". Then mix in the ideas of other kinds of locks, such as ones that only apply to subsets of the keyboard such as the main block or the calculator pad.) Sometimes the levels are what you see physically engraved upon the keys, sometimes (especially in the case of U.S. and European engravings and alphabetic keys) one or more of the levels are implied but not explicitly engraved. 
Level 1 is unshifted; level 2 is the result of a ⇧ Shift modifier, a shift latch, a ⇫ Shift Lock , a Num Lock , or a ⇬ Caps Lock ; and level 3 is the result of a "level three modifier" of some kind. As you can see from this configuration file, a "level 4" convention exists (an extension to ISO/IEC 9995 proper) that is the result of applying both level 2 and level 3 shifts at the same time. (This convention presupposes that this combination is even available in the first place. In some keyboard layouts, there is no ⇨ Group 2 key, and the keys that would otherwise select "level 4" instead select the second group , which is an entire alternative layout complete with its own set of 3 shift levels. On actual keytops, group 2 is a second column of one to three engraved symbols on the right. A lot of complexity in some systems results from trying to pretend that group 2 does not exist, whereas level 4 does.) A level three modifier is generally the ⇮ AltGr key, to the right of the spacebar. On some keyboards the key that generates the relevant HID code, down the wire from the keyboard to the main unit, is labelled ⌥ Option and its physical position (still on the right) is slightly different. Software sees it as the same key, whichever the engraving and physical position. Do not, by the way, confuse it with the similar key that is to the left of the spacebar. That is a different key. Not all software keyboard layouts make this key into a level modifier, however. What keys are modifier keys is (with one exception) entirely determined by the software keyboard layout. In some software keyboard layouts the key in that position is treated as another ⎇ Alt . If this is the case, one has no way to type a level 3 shift, absent using another keyboard layout or patching one's current keyboard layout so that some key or combination of keys produces a level 3 shift. In a SI 1452 layout, ⇮ AltGr is indeed the level 3 shift, and with it you should be able to type all of the Niqqud. 
I suspect that you have conflated ⎇ Alt and ⇮ AltGr . Further reading https://unix.stackexchange.com/a/391968/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
398,331 | Input: IN22FDX_S1P_WWWWBBBBMMMCCCC_ffg_pre_0.59_0_.72_-40_nominal Output: IN22FDX_S1P_WWWWBBBBMMMCCCC ffg_pre_0.59_0_.72_-40_nominal# ^# space instead of underscore How to convert the above line to two columns (remove one underscore between them)? The first word is of fixed length IN22FDX_S1P_WWWWBBBBMMMCCCC . | If the first column has a fixed size, sed 's/./ /28' replaces the 28th char with a space. Assuming the data is stored in a file, you can edit it in place with sed. sed -i.bak 's/./ /28' input_file Note the .bak I added to the -i option; it instructs sed to make a backup file input_file.bak with the provided suffix. The suffix is optional. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98346/"
]
} |
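Run against the sample line from the question; the fixed-width name is 27 characters, so the separator to blank out is character 28:

```shell
echo 'IN22FDX_S1P_WWWWBBBBMMMCCCC_ffg_pre_0.59_0_.72_-40_nominal' |
sed 's/./ /28'
```

which prints IN22FDX_S1P_WWWWBBBBMMMCCCC ffg_pre_0.59_0_.72_-40_nominal , i.e. the 28th underscore replaced by a space while every other underscore is left alone.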
398,413 | I have a text file containing tweets and I'm required to count the number of times a word is mentioned in the tweet. For example, the file contains: Apple iPhone X is going to worth a fortuneThe iPhone X is Apple's latest flagship iPhone. How will it pit against it's competitors? And let's say I want to count how many times the word iPhone is mentioned in the file. So here's what I've tried. cut -f 1 Tweet_Data | grep -i "iPhone" | wc -l it certainly works but I'm confused about the 'wc' command in unix. What is the difference if I try something like: cut -f 1 Tweet_Data | grep -c "iPhone" where -c is used instead? Both of these yield different results in a large file full of tweets and I'm confused on how it works. Which method is the correct way of counting the occurrence? | Given such a requirement, I would use a GNU grep (for the -o option ), then pass it through wc to count the total number of occurrences: $ grep -o -i iphone Tweet_Data | wc -l3 Plain grep -c on the data will count the number of lines that match, not the total number of words that match. Using the -o option tells grep to output each match on its own line, no matter how many times the match was found in the original line. wc -l tells the wc utility to count the number of lines. After grep puts each match in its own line, this is the total number of occurrences of the word in the input. If GNU grep is not available (or desired), you could transform the input with tr so that each word is on its own line, then use grep -c to count: $ tr '[:space:]' '[\n*]' < Tweet_Data | grep -i -c iphone3 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/398413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255849/"
]
} |
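The difference between the two counting approaches can be reproduced with the sample tweets from the question:

```shell
# grep -c counts matching *lines*; grep -o | wc -l counts *matches*.
tmp=$(mktemp)
printf '%s\n' \
    'Apple iPhone X is going to worth a fortune' \
    "The iPhone X is Apple's latest flagship iPhone. How will it pit against it's competitors?" > "$tmp"
grep -i -c iphone "$tmp"           # 2 -- both lines contain the word
grep -o -i iphone "$tmp" | wc -l   # 3 -- three occurrences in total
rm -f "$tmp"
```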
398,437 | How can I permanently block any IP address that accesses known vulnerable pages such as /phpMyadmin/ ? I am running a Debian server and I often see bots or hackers scanning my server trying to find vulnerabilities. 73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpMyadmin/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpMyAdmin/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyAdmin/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyadmin2/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyadmin3/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyadmin4/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee" I have followed this Stack Overflow question already: How to secure phpMyAdmin . I am looking to start blocking bots from taking up bandwidth. | This may be heavier-weight than you're looking for, but you might consider using fail2ban ( https://www.fail2ban.org ). That's a tool that can monitor your log files and automatically ban addresses that generate logs that match a set of customizable patterns. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120184/"
]
} |
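A minimal sketch of what a fail2ban jail for this could look like — the jail name (a stock fail2ban filter for script probes), the log path, and the thresholds are all assumptions that would need adjusting to the actual web server setup:

```ini
# /etc/fail2ban/jail.local (hypothetical)
[apache-noscript]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/access.log
maxretry = 3
bantime  = 86400
```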
398,439 | Not sure why I'm not getting this. I've been searching and testing my command for a couple hours and I'm not getting anywhere. The text is: <?xml version="1.0" encoding="UTF-8" standalone="yes"?><result expand="changes,testResults,metadata,logEntries,plan,vcsRevisions,artifacts,comments,labels,jiraIssues" key="EP-ED-JOB1-174" state="Failed" lifeCycleState="Finished" number="174" .... And I just want to pull out the state="Failed" part, it could also be state="Successful" . I've tried a million variations of this: sed '/state=".*"/p' htmlResponse.txt But paren's, escape slashes etc seem to match the entire chunk of text. What's wrong with my regex? | Putting aside the obligatory "you should really be using a proper XML parser because regexes aren't powerful enough to parse XML" comment, I see two problems in your sed line: ".*" will match from the first " to the last, since . matches " The sed command /.../p prints the whole line if it matches the regex. Here's two things I'd suggest for quick-and-dirty HTML-scraping shell scripts: Use "[^"]*" to match "quote, any number of non-quote characters, end quote" It's lots easier to use grep -o to pull out bits of a file that match a regex So that would make your command more like: grep -o 'state="[^"]*"' Or, if you really must use sed: sed -n 's/.*\(state="[^"]*"\).*/\1/p' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398439",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255866/"
]
} |
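Both commands can be tried on a line like the one in the question — note that [^"]* stops at the first closing quote, and the lowercase pattern does not match the capital S in lifeCycleState:

```shell
line='<result key="EP-ED-JOB1-174" state="Failed" lifeCycleState="Finished" number="174">'
printf '%s\n' "$line" | grep -o 'state="[^"]*"'                # state="Failed"
printf '%s\n' "$line" | sed -n 's/.*\(state="[^"]*"\).*/\1/p'  # state="Failed"
```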
398,459 | Do anchors only work with grep or can they be used with other commands? For example: ls -l ^cat | Regular expression anchors such as ^ and $ are only parsed by tools which implement regular expressions. ls is not such a tool, and so no, it cannot use them. However, any binary invoked from the shell can use shell globbing, which is a simpler, albeit less powerful, wildcard-based search mechanism. For example, for a list of all files whose names start with cat: $ ls cat* # lists all files with names which start with 'cat'$ ls *dog # lists all files with names which end with 'dog'$ ls d*y # Lists all files whose names start with 'd' and end with 'y', e. g. 'donkey'$ ls p?g # Lists all files which start with 'p', have one additional character, and end with 'g', e. g. 'pig' and 'pug' For globbing purposes, * means 'zero or more characters'; while ? means 'precisely one character'. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255882/"
]
} |
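A scratch-directory demo makes the point that the shell, not ls, expands the patterns:

```shell
dir=$(mktemp -d)
cd "$dir"
touch catalog cats dog hotdog pig pug
echo cat*    # catalog cats
echo *dog    # dog hotdog
echo p?g     # pig pug
cd - >/dev/null
rm -r "$dir"
```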
398,466 | The flag GCC -fstack-protector flag enables the usage of stack canaries for stack overflow protection. The usage of this flag by default has been more prominent in recent years. If a package is compiled with -fstack-protector, and we overflow a buffer in the program, we are likely to get an error such as: *** buffer overflow detected ***: /xxx/xxx terminated However, "who" is in charge of these error messages? Where do these messages get logged? Does the syslog daemon pick these messages? | Stack smashing is detected by libssp , which is part of gcc . It tries very hard to output the message to a terminal, and only if that fails does it log to the system log — so in practice you’ll see buffer overflow messages in the logs for daemons and perhaps GUI applications. Once it’s output its message, libssp tries a variety of ways to exit, including crashing the application; this might be caught by one of the abnormal exit loggers, but that’s not guaranteed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255885/"
]
} |
398,481 | How is it possible, from bash or standard linux command-line tools, to XOR a file against a key? Something like: cat my1GBfile | xor my1MB.key > my1GBfile.encrypted Off-topic: I know the encryption is quite weak with this example, but I was just wondering if this is available from bash or standard linux command-line tools (or even better: from bash and cygwin, because I use both Linux and Windows). | bash can't deal with ASCII NUL characters, so you won't be doing this with shell functions, you need a small program for it. This can be done in just about any language, but it seems easiest to do it in C, perhaps like this: #include <stdio.h> #include <stdlib.h>intmain(int argc, char *argv[]){ FILE *kf; size_t ks, n, i; long pos; unsigned char *key, *buf; if (argc != 2) { fprintf (stderr, "Usage: %s <key>\a\n", argv[0]); exit(1); } if ((kf = fopen(argv[1], "rb")) == NULL) { perror("fopen"); exit(1); } if (fseek(kf, 0L, SEEK_END)) { perror("fseek"); exit(1); } if ((pos = ftell(kf)) < 0) { perror("ftell"); exit(1); } ks = (size_t) pos; if (fseek(kf, 0L, SEEK_SET)) { perror("fseek"); exit(1); } if ((key = (unsigned char *) malloc(ks)) == NULL) { fputs("out of memory", stderr); exit(1); } if ((buf = (unsigned char *) malloc(ks)) == NULL) { fputs("out of memory", stderr); exit(1); } if (fread(key, 1, ks, kf) != ks) { perror("fread"); exit(1); } if (fclose(kf)) { perror("fclose"); exit(1); } freopen(NULL, "rb", stdin); freopen(NULL, "wb", stdout); while ((n = fread(buf, 1, ks, stdin)) != 0L) { for (i = 0; i < n; i++) buf[i] ^= key[i]; if (fwrite(buf, 1, n, stdout) != n) { perror("fwrite"); exit(1); } } free(buf); free(key); exit(0);} (this needs some more error checking, but oh well). Compile the above with: cc -o xor xor.c then run it like this: ./xor my1MB.key <my1GBfile >my1GBfile.encrypted | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59989/"
]
} |
398,483 | I want to right align the 3rd column using awk or any other UNIX tool such that all floating point numbers are centered with respect to decimal point. Al 11.134 15.250 2.393Al 11.134 5.825 2.393Al 12.888 10.537 2.393 Please let me know if you have any suggestions. I tried to use formatting methods, but for floating point it seems that they are not working. So the expected output is Al 11.134 15.250 2.393Al 11.134 5.825 2.393Al 12.888 10.537 2.393 | bash can't deal with ASCII NUL characters, so you won't be doing this with shell functions, you need a small program for it. This can be done in just about any language, but it seems easiest to do it in C, perhaps like this: #include <stdio.h> #include <stdlib.h>intmain(int argc, char *argv[]){ FILE *kf; size_t ks, n, i; long pos; unsigned char *key, *buf; if (argc != 2) { fprintf (stderr, "Usage: %s <key>\a\n", argv[0]); exit(1); } if ((kf = fopen(argv[1], "rb")) == NULL) { perror("fopen"); exit(1); } if (fseek(kf, 0L, SEEK_END)) { perror("fseek"); exit(1); } if ((pos = ftell(kf)) < 0) { perror("ftell"); exit(1); } ks = (size_t) pos; if (fseek(kf, 0L, SEEK_SET)) { perror("fseek"); exit(1); } if ((key = (unsigned char *) malloc(ks)) == NULL) { fputs("out of memory", stderr); exit(1); } if ((buf = (unsigned char *) malloc(ks)) == NULL) { fputs("out of memory", stderr); exit(1); } if (fread(key, 1, ks, kf) != ks) { perror("fread"); exit(1); } if (fclose(kf)) { perror("fclose"); exit(1); } freopen(NULL, "rb", stdin); freopen(NULL, "wb", stdout); while ((n = fread(buf, 1, ks, stdin)) != 0L) { for (i = 0; i < n; i++) buf[i] ^= key[i]; if (fwrite(buf, 1, n, stdout) != n) { perror("fwrite"); exit(1); } } free(buf); free(key); exit(0);} (this needs some more error checking, but oh well). Compile the above with: cc -o xor xor.c then run it like this: ./xor my1MB.key <my1GBfile >my1GBfile.encrypted | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255898/"
]
} |
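For the decimal-point alignment asked about in the question, fixed-precision printf fields in awk are one way — a hedged sketch (the %8.3f widths are assumptions chosen to fit the sample data, which all carry three decimals):

```shell
printf '%s\n' \
    'Al 11.134 15.250 2.393' \
    'Al 11.134 5.825 2.393' \
    'Al 12.888 10.537 2.393' |
    LC_ALL=C awk '{ printf "%-2s %8.3f %8.3f %8.3f\n", $1, $2, $3, $4 }'
```

Because every column is printed with the same width and precision, the decimal points line up down the page.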
398,520 | kworker process consumes 75% of one CPU. The offending kworker thread is related with ACPI stuff: sudo cat /proc/THE_PID_OF_KWORKER_PROCESS/stack[<ffffffff85c0c705>] acpi_ns_evaluate+0x1bc/0x23a[<ffffffff85bffe09>] acpi_ev_asynch_execute_gpe_method+0x98/0xff[<ffffffff85be4e30>] acpi_os_execute_deferred+0x10/0x20[<ffffffff8588dc21>] process_one_work+0x181/0x370[<ffffffff8588de5d>] worker_thread+0x4d/0x3a0[<ffffffff85893f1c>] kthread+0xfc/0x130[<ffffffff8588de10>] process_one_work+0x370/0x370[<ffffffff85893e20>] kthread_create_on_node+0x70/0x70[<ffffffff858791ba>] do_group_exit+0x3a/0xa0[<ffffffff85e6a2b5>] ret_from_fork+0x25/0x30[<ffffffffffffffff>] 0xffffffffffffffff so I started debugging by rebooting with some acpi related kernel parameters , such as: acpi=off : Completely solves the high cpu usage, but computer no longer suspends. acpi=ht : no effect, still high cpu usagepci=noacpi : not booting at allpnpacpi=off : no effect, still high cpu usagenoapic : worse, 100% cpu usage nolapic : worse, 100% cpu usage uname -a : Linux 4.13.0-1-amd64 #1 SMP Debian 4.13.4-1 (2017-10-01) x86_64 GNU/Linux My root folder disk layout is: BTRFS over LVM over LUKS . How can I find the root of the problem? Update I wasn't using my external hard drive which uses a DVD enclosure to be attached to the laptop. Today I reconnected the drive and kworker consumed that excessive amount of CPU again. Note that I didn't mount partition from the external drive, only attaching caused that cpu usage. | When I checked ACPI interrupts, I noticed that gpe6F had a very high trigger count: root@HOST:~# grep . 
-r /sys/firmware/acpi/interrupts//sys/firmware/acpi/interrupts/ff_gbl_lock: 0 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe15: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe4F: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe43: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe7D: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe71: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe05: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe3F: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe33: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe6D: 0 disabled unmasked/sys/firmware/acpi/interrupts/gpe61: 0 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe2F: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe23: 0 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe5D: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe51: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe1F: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe13: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe4D: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe41: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe7B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe0F: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe03: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe3D: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe31: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe6B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe2D: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe21: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe5B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe1D: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe78: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe11: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe4B: 0 invalid unmasked/sys/firmware/acpi/interrupts/ff_pwr_btn: 0 EN enabled unmasked/sys/firmware/acpi/interrupts/ff_slp_btn: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe0D: 0 invalid 
unmasked/sys/firmware/acpi/interrupts/gpe68: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe01: 0 invalid unmasked/sys/firmware/acpi/interrupts/ff_pmtimer: 0 STS invalid unmasked/sys/firmware/acpi/interrupts/gpe3B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe58: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe2B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe48: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe1B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe76: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe38: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe0B: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe66: 4 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe28: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe56: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe18: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe46: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe74: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe08: 0 invalid unmasked/sys/firmware/acpi/interrupts/sci: 819678/sys/firmware/acpi/interrupts/gpe36: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe64: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe26: 0 invalid unmasked/sys/firmware/acpi/interrupts/error: 0/sys/firmware/acpi/interrupts/gpe54: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe16: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe44: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe7E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe72: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe06: 0 invalid unmasked/sys/firmware/acpi/interrupts/ff_rt_clk: disabled unmasked/sys/firmware/acpi/interrupts/gpe34: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe6E: 262969 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe62: 0 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe24: 0 EN enabled unmasked/sys/firmware/acpi/interrupts/gpe5E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe52: 0 invalid 
unmasked/sys/firmware/acpi/interrupts/gpe14: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe4E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe42: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe7C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe70: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe04: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe3E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe32: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe6C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe60: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe2E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe22: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe5C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe50: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe1E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe79: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe12: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe4C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe40: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe7A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe0E: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe69: 0 disabled unmasked/sys/firmware/acpi/interrupts/gpe02: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe3C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe30: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe6A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe59: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe2C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe20: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe5A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe49: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe1C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe77: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe10: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe4A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe39: 0 invalid 
unmasked/sys/firmware/acpi/interrupts/gpe0C: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe67: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe00: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe3A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe29: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe57: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe2A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe19: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe47: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe1A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe75: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe09: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe37: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe0A: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe65: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe27: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe55: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe17: 0 STS invalid unmasked/sys/firmware/acpi/interrupts/gpe45: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe7F: 0 invalid unmasked/sys/firmware/acpi/interrupts/sci_not: 101/sys/firmware/acpi/interrupts/gpe73: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe07: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe35: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe6F: 560719 STS enabled unmasked/sys/firmware/acpi/interrupts/gpe63: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe25: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe5F: 0 invalid unmasked/sys/firmware/acpi/interrupts/gpe_all: 823692/sys/firmware/acpi/interrupts/gpe53: 0 invalid unmasked I disabled it: root@HOST:~# echo "disable" > /sys/firmware/acpi/interrupts/gpe6F and everything went back to normal: Linux 4.9.0-6-amd64 (HOST) 05/01/2018 _x86_64_ (4 CPU)12:30:27 PM CPU %user %nice %system %iowait %steal %idle12:30:30 PM all 6.88 0.00 1.26 0.17 0.00 91.6912:30:33 PM all 6.45 0.00 1.17 0.17 0.00 92.2012:30:36 PM all 7.15 0.00 1.01 
0.34 0.00 91.51Average: all 6.83 0.00 1.15 0.22 0.00 91.80 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65781/"
]
} |
398,540 | I have installed MySQL on my Arch Linux server. I moved the data directory to a place under /home, where my RAID volume is mounted. I noticed that mysqld will not start in this configuration by default since the systemd unit contains the setting ProtectHome=true . I want to override just this setting. I don't want to re-specify the ExecStart or similar commands, in case they change when the package is upgraded. I tried making a simple file at /etc/systemd/system called mysqld.service and added only these lines: [Service]ProtectHome=false This doesn't work as it looks like the service in /etc replaces , not overrides, the system service. Is there a way to override settings in systemd unit files this way without directly modifying the files in /usr/lib/systemd/system? (which is what I have done for now as a temporary fix, although that will end up reverted if the package is updated) | systemctl edit will create a drop-in file where you can override most of the settings, but these files have some specifics worth mentioning: Note that for drop-in files, if one wants to remove entries from a setting that is parsed as a list (and is not a dependency), such as AssertPathExists= (or e.g. ExecStart= in service units), one needs to first clear the list before re-adding all entries except the one that is to be removed. #/etc/systemd/system/httpd.service.d/local.conf[Unit]AssertPathExists=AssertPathExists=/srv/www Dependencies ( After= , etc.) cannot be reset to an empty list, so dependencies can only be added in drop-ins. If you want to remove dependencies, you have to override the entire unit. To override the entire unit, use systemctl edit --full , this will make a copy in /etc if there is none yet and let you edit it. See also Systemd delete overrides | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/398540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65884/"
]
} |
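For the mysqld case in the question, the drop-in that systemctl edit mysqld creates would contain only the one overridden setting, leaving ExecStart and the rest of the packaged unit untouched:

```ini
# /etc/systemd/system/mysqld.service.d/override.conf
[Service]
ProtectHome=false
```

After saving, reload the unit files with systemctl daemon-reload (systemctl edit does this for you) and restart the service.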
398,543 | I accidentally overwrote the /bin/bash file with a dumb script that I intended to put inside the /bin folder. How do I get the contents of that file back? Is there a way I can find the contents on the web, and just copy them back in? What are my options here, considering that terminal gives an error talking about "Too many Symbolic Links?" I'm still a newcomer to this kind of thing, and I appreciate all the help I can get. Edit: I forgot to mention I'm on Kali 2.2 Rolling, which is pretty much Debian with some added features. Edit 2: I also restarted the machine, as I didn't realize my mistake until a few days ago. That makes this quite a bit harder. | bash is a shell, probably your system shell, so now weird things happen, while parts of the shell are still in memory. Once you log out or reboot, you'll be in deeper trouble. So the first thing should be to change your shell to something safe. See what shells you have installed cat /etc/shells Then change your shell to one of the other shells listed there, for example chsh -s /bin/dash Update, because you already rebooted: You are lucky that nowadays the boot process doesn't rely on bash , so your system boots, you just can't get a command line. But you can start an editor to edit /etc/passwd and change the shell in the root line from /bin/bash to /bin/dash . Log out and log in again. Just don't make any other change in that file, or you may mess up your system completely. Then try to reinstall bash with apt-get --reinstall install bash If everything succeeded you can chsh back to bash . Finally: I think Kali is a highly specialized distribution, probably not suited for people who accidentally overwrite their shell. As this sentence was called rude and harsh, I should add that I wrote it out of my own experience. When I was younger, I did ruin my system because nobody told me to avoid messing around as root. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/398543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245476/"
]
} |
398,646 | I always thought that bash treats backslashes the same with or without double quotes, but I was wrong: [user@linux ~]$ echo "foo \ "foo \[user@linux ~]$ echo foo \ # Space after \foo So I thought backslashes are always printed when using double quotes, but: [user@linux ~]$ echo "foo \" "foo "[user@linux ~]$ echo "foo \\ "foo \ Why is the backslash in the first code line shown? | Section 3.1.2.3 Double Quotes of the GNU Bash manual says: The backslash retains its special meaning only when followed by one of the following characters: ‘ $ ’, ‘ ` ’, ‘ " ’, ‘ \ ’, or newline . Within double quotes, backslashes that are followed by one of these characters are removed. Backslashes preceding characters without a special meaning are left unmodified. A double quote may be quoted within double quotes by preceding it with a backslash. If enabled, history expansion will be performed unless an ‘ ! ’ appearing in double quotes is escaped using a backslash. The backslash preceding the ‘ ! ’ is not removed. Thus \ in double quotes is treated differently both from \ in single quotes and \ outside quotes. It is treated literally except when it is in a position to cause a character to be treated literally that could otherwise have special meaning in double quotes. Note that sequences like \' , \? , and \* are treated literally and the backslash is not removed, because ' , ? and * already have no special meaning when enclosed in double quotes. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398646",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166118/"
]
} |
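The manual's rules condense into four cases — each comment notes whether the backslash survives:

```shell
echo "foo \ "    # foo \  -- kept: space is not special inside double quotes
echo "foo \*"    # foo \* -- kept: * is not special inside double quotes
echo "foo \" "   # foo "  -- removed: it escapes the double quote
echo "foo \\ "   # foo \  -- one removed: \\ collapses to a single backslash
```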
398,748 | In Linux I can easily add a service or disable it from startup using the update-rc.d command. I am trying to create a toggle-able service and I don't want to go into sed to manually edit the /etc/rc.conf file and add/edit a line service_enable=YES/NO | There is a nice tool for the job named sysrc so you can avoid manually editing /etc/rc.conf Example: sysrc nginx_enable=YESsysrc sendmail=NONE Furthermore, you have the service command to control your service: service nginx startservice nginx reload | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256084/"
]
} |
398,766 | When I boot CentOS, the characters are too small. I tried to configure the grub2 files following this solution, but it does not seem to work. My /etc/default/grub file is: So, how can I increase the font size? | Personally I would not touch your GRUB configuration files. Instead, I would add a setfont line to your shell initialization file. For example, if you are using Bash, you could add the following line to .bash_profile: if [ $TERM = linux ]then setfont sun12x22fi There are lots of different fonts available; sun12x22 is just one example. See the setfont man page for more information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256098/"
]
} |
398,810 | If no link on enp0s18 I have root@route:~# ip rdefault via a.a.a.1 dev enp0s18 metric 10 linkdown default via a.a.b.1 dev enp0s10 metric 20 onlink linkdown default via x.x.x.49 dev wwx001e101f0000 metric 30 Expected that default switches to x.x.x.49, but it tries linkdown route root@route:~# ping -n ya.ruPING ya.ru (87.250.250.242) 56(84) bytes of data.From a.a.a.231 icmp_seq=1 Destination Host UnreachableFrom a.a.a.231 icmp_seq=2 Destination Host UnreachableFrom a.a.a.231 icmp_seq=3 Destination Host Unreachable^C Link state 4: enp0s10: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 1c:af:f7:08:27:e2 brd ff:ff:ff:ff:ff:ff5: enp0s18: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 00:13:d3:14:83:f1 brd ff:ff:ff:ff:ff:ff Result: Traffic black-holed | Newer kernels have new defaults. The solution is echo 1 > /proc/sys/net/ipv4/conf/enp0s10/ignore_routes_with_linkdownecho 1 > /proc/sys/net/ipv4/conf/enp0s18/ignore_routes_with_linkdown And to make it the new default: echo net.ipv4.conf.all.ignore_routes_with_linkdown=1 > /etc/sysctl.d/10-linkdown.confsysctl -p /etc/sysctl.d/10-linkdown.conf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43898/"
]
} |
398,819 | I'd like to output hello world over 20 characters. printf "%-20s :\n\n" 'hello world!!'# Actual outputhello world!! :# Wanted outputhello world!!========: However, I don't want to complete with spaces but with " = " instead.How do I do that? | filler='===================='string='foo'printf '%s\n' "$string${filler:${#string}}" Gives foo================= ${#string} is the length of the value $string , and ${filler:${#string}} is the substring of $filler from offset ${#string} onwards. The total width of the output will be that of the maximum width of $filler or $string . The filler string can, on systems that has jot , be created dynamically using filler=$( jot -s '' -c 16 '=' '=' ) (for 16 = in a line). GNU systems may use seq : filler=$( seq -s '=' 1 16 | tr -dc '=' ) Other systems may use Perl or some other faster way of creating the string dynamically. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142331/"
]
} |
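The substring trick packaged as a tiny function — bash/ksh/zsh only, since ${filler:offset} is not POSIX; the 20-character filler is an assumption and sets the total field width:

```shell
bash -c '
    pad() {
        filler="===================="    # 20 chars: the target width
        printf "%s\n" "$1${filler:${#1}}"
    }
    pad "foo"              # foo=================
    pad "hello world!!"    # hello world!!=======
'
```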
398,840 | I have a two files, file1 and file2. In file1 contents are: ABC_DEC_EDC=ONWER_QSD_RCS=ON file2 contents are: TRD_OIY_REC=ONYUH_PON_UYT=ONWER_QSD_RCS=OFF I have to check line by line in file2. First if ABC_DEC_EDC=ON is not present in file2 then add to file2. Second In first file SAX_IUY_TRE=OFF is there with OFF but in file2 SAX_IUY_TRE=ON with ON ; In this case I just want to update as with the file only OFF . Example: SAX_IUY_TRE=OFF All updates new updates happen in file2 only. Output should be: ABC_DEC_EDC=ONWER_QSD_RCS=ON WER_RTC_YTC=ONWER_QSD_RCS=OFF | filler='===================='string='foo'printf '%s\n' "$string${filler:${#string}}" Gives foo================= ${#string} is the length of the value $string , and ${filler:${#string}} is the substring of $filler from offset ${#string} onwards. The total width of the output will be that of the maximum width of $filler or $string . The filler string can, on systems that has jot , be created dynamically using filler=$( jot -s '' -c 16 '=' '=' ) (for 16 = in a line). GNU systems may use seq : filler=$( seq -s '=' 1 16 | tr -dc '=' ) Other systems may use Perl or some other faster way of creating the string dynamically. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/398840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256159/"
]
} |
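The update-and-append merge described in the question can be sketched with awk — keys present in file1 replace matching lines in file2, and keys missing from file2 are appended (the appended order is unspecified); sample data is taken from the question:

```shell
printf '%s\n' 'ABC_DEC_EDC=ON' 'WER_QSD_RCS=ON' > file1
printf '%s\n' 'TRD_OIY_REC=ON' 'YUH_PON_UYT=ON' 'WER_QSD_RCS=OFF' > file2
awk -F= '
    NR == FNR { want[$1] = $0; next }                        # pass 1: remember file1 lines
    { if ($1 in want) { print want[$1]; used[$1] = 1 } else print }
    END { for (k in want) if (!(k in used)) print want[k] }  # append keys not in file2
' file1 file2 > file2.new && mv file2.new file2
cat file2
```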
398,891 | I am looking for a sed command to change the line: userA:$6$lhkjhl$sdlfhlmLMHQSDFM374FGSDFkjfh/7mD/354dshkKHQSkljhsd.sdmfjlk57HJ/:95170:::::: to userA:$6$sLdkjf$576sdKUKJGKmlk565oiuljkljpi/9Fg/rst3587zet324etze.dsfgLIMLmdf/:34650:::::: | Use chpasswd instead: chpasswd -e <<< 'userA:yourencryptedpassword' If you were going to use sed - despite the risks: To set a password - no matter what it was before: sed -i.sedbackup 's/^\(userA:\)[^:]*\(:.*\)$/\1yournewpassword\2/' /etc/shadow To replace a specific password string: sed -i.sedbackup 's/^\(userA:\)youroldpassword\(:.*\)$/\1yournewpassword\2/' /etc/shadow | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/398891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236887/"
]
} |
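The capture-group substitution can be rehearsed on a scratch file before pointing it at the real /etc/shadow — OLDHASH/NEWHASH below are placeholders, not real crypt strings:

```shell
tmp=$(mktemp)
printf '%s\n' 'userA:OLDHASH:95170::::::' 'userB:KEEP:95170::::::' > "$tmp"
sed -i.sedbackup 's/^\(userA:\)[^:]*\(:.*\)$/\1NEWHASH\2/' "$tmp"
cat "$tmp"    # only the userA line changes
rm -f "$tmp" "$tmp.sedbackup"
```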
398,905 | I have installed TightVNCServer on Raspbian (the September 2.017 version) for my Raspberry Pi 2 B+ : luis@Frambuesio:~$ vncserver -name Frambuesio -geometry 1280x1024 -depth 16New 'Frambuesio' desktop at :1 on machine FrambuesioStarting applications specified in /etc/X11/Xvnc-sessionLog file is /home/luis/.vnc/Frambuesio:1.logUse xtigervncviewer -SecurityTypes VncAuth -passwd /home/luis/.vnc/passwd :1 to connect to the VNC server.luis@Frambuesio:~$ netstat -ano | grep "5901"tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN off (0.00/0/0)tcp6 0 0 ::1:5901 :::* LISTEN off (0.00/0/0) But my VNC Viewer (from RealVNC on a remote Windows machine) receives the message " Connection refused " when trying to connect, and the port doesn't seem to be listening: luis@Hipatio:~$ sudo nmap Frambuesio- -p 5900,5901,5902[sudo] password for luis:Starting Nmap 7.01 ( https://nmap.org ) at 2017-10-18 16:58 CESTNmap scan report for Frambuesio- (192.168.11.142)Host is up (0.00050s latency).PORT STATE SERVICE5900/tcp closed vnc5901/tcp closed vnc-15902/tcp closed vnc-2MAC Address: B8:27:EB:7D:7C:B0 (Raspberry Pi Foundation)Nmap done: 1 IP address (1 host up) scanned in 0.67 seconds If I try from Ubuntu 16.04.3 on another Raspberry Pi everything goes all right (note the different netstat results): luis@Zarzaparrillo:~$ vncserver -name Zarzaparrillo -geometry 1280x1024 -depth 16New 'Zarzaparrillo' desktop is Zarzaparrillo:1Starting applications specified in /home/luis/.vnc/xstartupLog file is /home/luis/.vnc/Zarzaparrillo:1.logluis@Zarzaparrillo:~$ netstat -ano | grep 5901tcp6 0 0 :::5901 :::* LISTEN off (0.00/0/0) Same results with VNC4Server . I have read the official Raspberry papers , consisting on installing the realvnc-vnc-server package. But the RealVNC program installs a ton of extra packages and is not open source , even when it is free for educative purposes. 
I would prefer some GNU's more open policies for my VNC, as long as it could be used in an enterprise production environment. My workaround for now consists on using X11vnc to serve the display on another port: luis@Frambuesio:~$ vncserver -name Frambuesio -geometry 1280x1024 -depth 16[... on another terminal: ]luis@Frambuesio:~$ sudo x11vnc -display :1 -passwd anypassword -auth guess -forever ... and now the X11vnc program makes display :1 available. Note that, as long as the port 5901 TCP is occupied, X11VNC uses the 5900 TCP (aka :0 port ): The VNC desktop is: Frambuesio:0PORT=5900 Note the netstat output, now in a working condition: luis@Frambuesio:~$ netstat -ano | grep 5900tcp 0 0 0.0.0.0:5900 0.0.0.0:* LISTEN off (0.00/0/0)tcp6 0 0 :::5900 :::* LISTEN off (0.00/0/0)luis@Frambuesio:~$ netstat -ano | grep 5901tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN off (0.00/0/0)tcp6 0 0 ::1:5901 :::* LISTEN off (0.00/0/0) Why are my VNC servers failing and how could I solve this? | The problem seems to be just a default argument on VNCServer with the improper (for your case) option. From vncserver command line help: [-localhost yes|no] Only accept VNC connections from localhost This should solve your problem: vncserver -localhost no Interpreting the same last example in the original question, note the 0.0.0.0:5900 meaning "listening connections from anywhere at 5900 TCP": luis@Frambuesio:~$ netstat -ano | grep 5900tcp 0 0 0.0.0.0:5900 0.0.0.0:* LISTEN off (0.00/0/0)tcp6 0 0 :::5900 :::* LISTEN off (0.00/0/0) Meanwhile, note the 127.0.0.1:5901 meaning "listening connections from localhost at 5901 TCP" luis@Frambuesio:~$ netstat -ano | grep 5901tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN off (0.00/0/0)tcp6 0 0 ::1:5901 :::* LISTEN off (0.00/0/0) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/398905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
398,921 | I have a directory containing files with names rho_0.txtrho_5000.txtrho_10000.txtrho_150000.txtrho_200000.txt and so on. I would like to delete all those that are a multiple of 5000. I tried the following: printf 'rho_%d.txt\n' $(seq 5000 10000 25000) | rm , but that gave me the response rm: missing operand . Is there another way to do this? | You don't need a loop or extra commands when you have Bash Shell Brace Expansion. rm -f rho_{0..200000..5000}.txt Explanation : {start..end..step} . The -f makes rm ignore non-existent files instead of erroring out. P.S. To be safe, check which files would be deleted by doing a test first with: ls -1 rho_{0..200000..5000}.txt | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/398921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45002/"
]
} |
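The brace-expansion answer above can be sanity-checked without touching any files: printf shows exactly the argument list the shell would hand to rm. A smaller, hypothetical range (0 to 20000, step 5000) is used here for illustration, and bash is invoked explicitly since brace expansion is a bash/zsh feature, not POSIX sh.

```shell
# Print the names {0..20000..5000} would expand to; nothing is deleted.
out=$(bash -c 'printf "%s\n" rho_{0..20000..5000}.txt')
printf '%s\n' "$out"
```

Once the listed names look right, the same pattern can be handed to rm.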
398,944 | I'm learning about fork() and exec() commands. It seems like fork() and exec() are usually called together. ( fork() creates a new child process, and exec() replaces the current process image with a new one.) However, in what scenarios might you call each function on its own? Are there scenarios like these? | Sure! A common pattern in "wrapper" programs is to do various things and then replace itself with some other program with only an exec call (no fork) #!/bin/shexport BLAH_API_KEY=blub...exec /the/thus/wrapped/program "$@" A real-life example of this is GIT_SSH (though git(1) does also offer GIT_SSH_COMMAND if you do not want to do the above wrapper program method). Fork-only is used when spawning a bunch of typically worker processes (e.g. Apache httpd in fork mode (though fork-only better suits processes that need to burn up the CPU and not those that twiddle their thumbs waiting for network I/O to happen)) or for privilege separation used by sshd and other programs on OpenBSD (no exec) $ doas pkg_add pstree...$ pstree | grep sshd |-+= 70995 root /usr/sbin/sshd | \-+= 28571 root sshd: jhqdoe [priv] (sshd) | \-+- 14625 jhqdoe sshd: jhqdoe@ttyp6 (sshd) The root sshd has on client connect forked off a copy of itself (28571) and then another copy (14625) for the privilege separation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/398944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256229/"
]
} |
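Both patterns from the answer above can be sketched in plain shell: `( ... ) &` is essentially fork without exec (each background subshell is a forked copy of the script's shell), while the `exec` builtin is exec without fork, replacing the current process image. The exec is run inside a subshell here only so that the sketch itself survives.

```shell
# Fork-only: three background "workers", no new program image loaded.
workers=$(
  for i in 1 2 3; do
    ( printf 'worker %s done\n' "$i" ) &
  done
  wait   # collect all forked children
)
printf '%s\n' "$workers"

# Exec-only: the subshell's image is replaced by echo; nothing after an
# exec in that subshell would ever run.
replaced=$( (exec echo 'now the wrapped program') )
printf '%s\n' "$replaced"
```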
399,027 | This error has arise when I add gns repository and try to use this command: #sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F88F6D313016330404F710FC9A2FD067A2E3EF7B the error is: gpg: keyserver receive failed: Server indicated a failure | Behind a firewall you should use the port 80 instead of the default port 11371 : sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9A2FD067A2E3EF7B Sample output: Executing: /tmp/apt-key-gpghome.mTGQWBR2AG/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv 9A2FD067A2E3EF7Bgpg: key 9A2FD067A2E3EF7B: "Launchpad PPA for GNS3" not changedgpg: Total number processed: 1gpg: unchanged: 1 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/399027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254922/"
]
} |
399,107 | I'm on Linux Mint 18.2 with GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu). I'd like to re-define sha256sum with my function defined in .bash_aliases for it to show progress because I use it for 100+GB files often. The function follows: function sha256sum { if [ -z "$1" ] then { \sha256sum --help } else { pv $1 | \sha256sum -b } fi} But there are some culprits, which I can't explain. For one it behaves unexpectedly, I somehow managed to force it to "eat" the parameter. Specifically, the following file: -rw-r--r-- 1 root root 2.0K Jul 24 12:29 testdisk.log Now it outputs the file's size, never-ending: vlastimil@vb-nb-mint ~ $ sha256sum testdisk.log 1.92KiB 0:00:00 [40.8MiB/s] [====================================================>] 100% 1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB1.92KiB.........^C[1]+ Stopped pv $1 | \sha256sum -b What am I doing wrong? I tried different structure, with and without braces, with and without semicolon, and like for an hour no better result than this one. EDIT1: Removing the \ sign for the function to look like: function sha256sum { if [ -z "$1" ] then { sha256sum --help } else { pv "$1" | sha256sum -b } fi} Results in: 1.92KiB 0:00:00 [56.8MiB/s] [====================================================>] 100% 1.92KiB1.92KiB1.92KiB1.92KiB.........^C[2]+ Stopped pv "$1" | sha256sum -b | Each of the occurrences of \sha256sum in your function's body is a recursive call to that function. Prefixing the name with a backslash prevents it from being interpreted as an alias, but does not prevent interpreting it as a function. You want to write command sha256sum instead of \sha256sum ; for example, keeping the layout of your original function: function sha256sum { if [ -z "$1" ] then { command sha256sum --help } else { pv "$1" | command sha256sum -b } fi} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
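The effect of `command` in the answer above can be demonstrated with any shadowing function; here a throwaway wrapper around `date` is used as a hypothetical stand-in for sha256sum, chosen because it is cheap to run.

```shell
# The function shadows `date`; `command date` skips functions and
# aliases and reaches the real utility, so no recursion occurs.
date() { command date '+wrapped:%Y'; }
y=$(date)
printf '%s\n' "$y"
unset -f date   # drop the wrapper again
```

Without `command` (or `\date`, for functions), the call inside the body would recurse into the function itself.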
399,197 | I'm trying to parse the results of nmcli dev wifi which produces a result like this: * SSID MODE CHAN RATE SIGNAL BARS SECURITY Prk Infra 11 54 Mbit/s 99 ▂▄▆█ VIDEOTRON2255 Infra 11 54 Mbit/s 67 ▂▄▆_ WPA1 WPA2 a space Infra 6 54 Mbit/s 65 ▂▄▆_ WPA2 * TNCAP4D0B18 Infra 11 54 Mbit/s 52 ▂▄__ Initially I was just parsing using awk -F" " which worked for almost all cases. I'm finding any wifi network with a space in it throws this off completely. So I instead tried using two spaces instead of one, but this didn't produce the results I expected. How can I consistently parse the columns in the above output? Current script is something like this: nmcli dev wifi | sed 's/\*//g' > /tmp/scannetworks=$(cat /tmp/scan | awk -F" " '{print $1}' | sed '1d')# ...bars=$(cat /tmp/scan | awk -F" " '{print $6}' | sed '1d') | awk -F' {2,}' # means - two or more spaces. or awk -F'  +' # means - one space, then one or more spaces. Both commands mean the same thing - use two or more spaces as the field delimiter. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
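A quick check of the two-or-more-spaces separator against a hypothetical line with embedded single spaces ('a space' as SSID, '54 Mbit/s' as rate):

```shell
# -F'  +' splits on runs of two or more spaces, so single spaces
# inside a field survive intact.
line='a space  Infra  6  54 Mbit/s  65'
out=$(printf '%s\n' "$line" | awk -F'  +' '{print $1 "|" $4}')
printf '%s\n' "$out"
```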
399,222 | I've written a script that converts the output of nmcli --mode multiline dev wifi into JSON,but I'm finding it's inconsistent (breaks when results have a space) ,long, and hard to read. I wonder if it is possible to pipe the results directly into jq . The nmcli output (input to my script) looks like this: *: SSID: VIDEOTRON2255MODE: InfraCHAN: 11RATE: 54 Mbit/sSIGNAL: 69BARS: ▂▄▆_SECURITY: WPA1 WPA2*: * SSID: VIDEOTRON2947MODE: InfraCHAN: 6RATE: 54 Mbit/sSIGNAL: 49BARS: ▂▄__SECURITY: WPA1 WPA2 I'm looking to generate something like this: [{ "network": "VIDEOTRON2255", "mode": "Infra", "chan": "11", "rate": "54 Mbit/s", "signal": "69", "bars": "▂▄▆_", "security": "WPA1 WPA2"},{ "network": "VIDEOTRON2947", "mode": "Infra", "chan": "6", "rate": "54 Mbit/s", "signal": "49", "bars": "▂▄__", "security": "WPA1 WPA2"}] I asked a related question earlier. This is the first script I wrote based on Gilles's answer . It worked for some of the values but not security or rate , which have spaces. | The script that you linked to is extremely inefficient - you're doing a lot of useless pre-processing... Use nmcli in --terse mode since, per the manual, "this mode is designed and suitable for computer (script) processing" , specify the desired fields and pipe the output to jq -sR e.g. printf '%s' "$(nmcli -f ssid,mode,chan,rate,signal,bars,security -t dev wifi)" | \jq -sR 'split("\n") | map(split(":")) | map({"network": .[0], "mode": .[1], "channel": .[2], "rate": .[3], "signal": .[4], "bars": .[5], "security": .[6]})' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
399,359 | From the console, I've created 2 empty files, and I tried to read them simultaneously. $ echo -n '' | tee f1 > f2$ cat f1 f2$ cat <(tail -f f1) <(tail -f ./f2) On the other console I've run my tests. $ echo 'tee test' | tee -a f1 >> f2$ echo 'f1 test' >> f1$ echo 'f2 test' >> f2$ cat f1 f2tee testf1 testtee testf2 test However, cat on the first console only read outputs from the first fd . $ cat <(tail -F ./f1) <(tail -F ./f2)tee testf1 test Why? How do I then read simultaneously from two or more file descriptors? | cat processes its arguments sequentially; tail -f f1 continues running, so cat keeps waiting for input on <(tail -f f1) , and doesn’t move on to processing <(tail -f f2) . You’ll see the output from tail -f f2 if you kill the first tail . A better tool to track multiple files simultaneously is tail itself (at least, GNU tail ): tail -f f1 f2 If you don’t want to see file headers, use -q : tail -qf f1 f2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111103/"
]
} |
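The header suppression `-q` gives in the answer above is easier to see without `-f`, on two throwaway files (GNU tail; `-q` is not available in all BSD implementations):

```shell
# Without -q, GNU tail prints "==> file <==" headers between files.
d=$(mktemp -d)
printf 'from f1\n' > "$d/f1"
printf 'from f2\n' > "$d/f2"
out=$(tail -q -n +1 "$d/f1" "$d/f2")
printf '%s\n' "$out"
rm -r "$d"
```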
399,378 | I have Ubuntu 16.04 running a samba server and another 16.04 box that mounts it without issue using the fstab line //192.168.0.102/share /mnt/raid cifs user=myuser,pass=mypass . When I mount the share, the files all show the proper user/group and when coping files to the share, the mode (ie 0444) is preserved. With another machine running Ubuntu 17.10, the same fstab line causes the mount to list everything on the share as user/group root:root instead of myuser:myuser. I can force the user/group to be correct by adding uid=1000,gid=1000 to the fstab line but when copying files to the share, the permissions are no longer preserved (they all show up as 0755). Any ideas on what has changed that might be causing this issue and how I can fix it would be appreciated. This samba share has worked correctly for me across multiple versions of Linux so I'm fairly certain the issue is on the new Ubuntu 17.10 side but I not certain if it's a change in the security policies or something in the cifs library itself. | They've changed the default dialect to SMB3 in mount.cifs. Originally it defaulted to SMB1. To get get the same behavior as Ubuntu 16.04 you add vers=1.0 to the mount options. With this option present, I now get the correct user/group and permissions are preserved when copying. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256545/"
]
} |
399,392 | I have a variable set with var='type_cardio_10-11-2017' . I need to remove the last 10 letters from the variable, and append the remaining value to var2 . I tried with the following script, but it doesn't work as expected. var=type_cardio_10-11-2017var2=${var} | cut -f1 -d "_"echo ${var2} The output I want is type_cardio . | To remove everything from after the last _ in var and assign the result to var2 : var2=${var%_*} The parameter expansion ${parameter%word} removes the pattern word ( _* in this case) from the end of the value of the given variable. The POSIX standard calls this a "Remove Smallest Suffix Pattern" parameter expansion. To remove the last underscore and the 10 characters after it, use var2=${var%_??????????} To remove characters corresponding to a date string such as the one in your example, use var2=${var%_[0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]} Which pattern you use depends on how carefully you want to perform the match. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/399392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222859/"
]
} |
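Both removal patterns from the answer above, applied to the example value (POSIX sh, no bashisms):

```shell
var='type_cardio_10-11-2017'
v1=${var%_*}            # shortest suffix starting at the last _
v2=${var%_??????????}   # _ followed by exactly ten characters
printf '%s\n%s\n' "$v1" "$v2"
```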
399,407 | When I open up a bash prompt and type: $ set -o xtrace$ x='~/someDirectory'+ x='~/someDirectory'$ echo $x+ echo '~/someDirectory'~/someDirectory I was hoping that the 5th line above would have went + echo /home/myUsername/someDirectory . Is there a way to do this? In my original Bash script, the variable x is actually being populated from data from an input file, via a loop like this: while IFS= read linedo params=($line) echo ${params[0]}done <"./someInputFile.txt" Still, I'm getting a similar result, with the echo '~/someDirectory' instead of echo /home/myUsername/someDirectory . | The POSIX standard imposes word expansion to be done in the following order (emphasize is mine): Tilde expansion (see Tilde Expansion), parameter expansion (see Parameter Expansion), command substitution (see Command Substitution), and arithmetic expansion (see Arithmetic Expansion) shall be performed, beginning to end. See item 5 in Token Recognition. Field splitting (see Field Splitting) shall be performed on the portions of the fields generated by step 1, unless IFS is null. Pathname expansion (see Pathname Expansion) shall be performed, unless set -f is in effect. Quote removal (see Quote Removal) shall always be performed last. The only point which interests us here is the first one: as you can see tilde expansion is processed before parameter expansion: The shell attempts a tilde expansion on echo $x , there is no tilde to be found, so it proceeds. The shell attempts a parameter expansion on echo $x , $x is found and expanded and the command-line becomes echo ~/someDirectory . Processing continues, tilde expansion having already been processed the ~ character remains as-is. By using the quotes while assigning the $x , you were explicitly requesting to not expand the tilde and treat it like a normal character. 
A thing often missed is that in shell commands you don't have to quote the whole string, so you can make the expansion happen right during the variable assignment: user@host:~$ set -o xtraceuser@host:~$ x=~/'someDirectory'+ x=/home/user/someDirectoryuser@host:~$ echo $x+ echo /home/user/someDirectory/home/user/someDirectoryuser@host:~$ And you can also make the expansion occur on the echo command-line as long as it can happen before parameter expansion: user@host:~$ x='someDirectory'+ x=someDirectoryuser@host:~$ echo ~/$x+ echo /home/user/someDirectory/home/user/someDirectoryuser@host:~$ If for some reason you really need to assign the tilde to the $x variable without expansion, and be able to expand it at the echo command, you must proceed in two steps to force two expansions of the $x variable to occur: user@host:~$ x='~/someDirectory'+ x='~/someDirectory'user@host:~$ echo "$( eval echo $x )"++ eval echo '~/someDirectory'+++ echo /home/user/someDirectory+ echo /home/user/someDirectory/home/user/someDirectoryuser@host:~$ However, be aware that depending on the context where you use such a structure it may have unwanted side-effects. As a rule of thumb, prefer to avoid anything requiring eval when you have another way. If you want to specifically address the tilde issue as opposed to any other kind of expansion, a structure such as this would be safer and more portable: user@host:~$ x='~/someDirectory'+ x='~/someDirectory'user@host:~$ case "$x" in "~/"*)> x="${HOME}/${x#"~/"}"> esac+ case "$x" in+ x=/home/user/someDirectoryuser@host:~$ echo $x+ echo /home/user/someDirectory/home/user/someDirectoryuser@host:~$ This structure explicitly checks for the presence of a leading ~ and replaces it with the user's home dir if it is found. Following your comment, the x="${HOME}/${x#"~/"}" may indeed be surprising for someone not used to shell programming, but is in fact linked to the same POSIX rule I quoted above.
As imposed by the POSIX standard, quote removal happens last and parameter expansion happens very early. Thus, ${x#"~/"} is evaluated and expanded far before the evaluation of the outer quotes. In turn, as defined in the Parameter expansion rules: In each case that a value of word is needed (based on the state of parameter, as described below), word shall be subjected to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. Thus, the right side of the # operator must be properly quoted or escaped to avoid tilde expansion. So, to state it differently, when the shell interpreter looks at x="${HOME}/${x#"~/"}" , it sees: ${HOME} and ${x#"~/"} must be expanded. ${HOME} is expanded to the content of the $HOME variable. ${x#"~/"} triggers a nested expansion: "~/" is parsed but, being quoted, is treated as a literal 1 . You could have used single quotes here with the same result. The ${x#"~/"} expression itself is now expanded, resulting in the prefix ~/ being removed from the value of $x . The result of the above is now concatenated: the expansion of ${HOME} , the literal / , the expansion ${x#"~/"} . The end-result is enclosed in double-quotes, functionally preventing word splitting. I say functionally here because these double quotes are not technically required (see here and there for instance), but as a personal style, as soon as an assignment gets anything beyond a=$b I usually find it clearer to add double-quotes. By the way, if you look more closely at the case syntax, you will see the "~/"* construction, which relies on the same concept as the x=~/'someDirectory' I explained above (here again, double and single quotes could be used interchangeably). Don't worry if these things seem obscure at first sight (maybe even at the second or later sights!). In my opinion, parameter expansions are, with subshells, among the most complex concepts to grasp when programming in shell language.
I know that some people may vigorously disagree, but if you would like to learn shell programming more in depth I encourage you to read the Advanced Bash Scripting Guide : it teaches Bash scripting, so with a lot of extensions and bells-and-whistles compared to POSIX shell scripting, but I found it well written with loads of practical examples. Once you manage this, it is easy to restrict yourself to POSIX features when you need to; I personally think that entering directly into the POSIX realm is an unnecessarily steep learning curve for beginners (compare my POSIX tilde replacement with @m0dular's regex-like Bash equivalent to get an idea of what I mean ;) !). 1 : Which leads me into finding a bug in Dash which doesn't implement tilde expansion here correctly (verifiable using x='~/foo'; echo "${x#~/}" ). Parameter expansion is a complex field both for the user and the shell developers themselves! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/399407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95620/"
]
} |
399,480 | I've used gpg2 to generate some keys on my Ubuntu 16.04 server. Now I have to move the machine. I need to transfer all generated keys to a Mac. I think I just have to copy the ~/.gnupg files. But where do I have to store it to get them called via gpg --list-secret-keys --keyid-format LONG [email protected] ? Same place? And how do I install gpg2 on my Mac? homebrew gpg2 is not existing. | On the machine that initially has the keys (the Ubuntu machine): Export the public keys: gpg --export --armor --output=key_public.asc Export the private keys: gpg --export-secret-keys --armor --output=key_secret.asc Copy the exported files to the second machine (the Mac). Import the keys: gpg --import --armor key_public.asc and gpg --import --armor key_secret.asc The above commands will export all the keys in your keyring. If you just want a specific key/s, you need to specify it/them by uid. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210969/"
]
} |
399,492 | I want to do stuff like this: echo myserver:~/dir/2/ | rsync -r HERE /local/path/ I want the output redirected to a specified location in the command. The echo stuff goes "HERE". What's the easiest way to do this? | You can use xargs or exactly this requirement. You can use the -I as place-holder for the input received from the pipeline, do echo "myserver:${HOME}/dir/2/" | xargs -I {} rsync -r "{}" /local/path/ (or) use ~ without double-quotes under which it does not expand to the HOME directory path. echo myserver:~/dir/2/ | xargs -I {} rsync -r "{}" /local/path/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/399492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256301/"
]
} |
399,527 | zsh history includes a timestamp, with the option to disable it. Beyond knowing when a command was executed, what's the reason for this? What features might I lose if I disable this? | zsh history includes a timestamp, if the EXTENDED_HISTORY feature is enabled, or if the SHARE_HISTORY feature is enabled; in the latter case, timestamps are used so multiple shells can read each others’ history accurately — they only need to read any new commands added since the last time they read the history file, and they can do so even when the history file is rewritten. These features are enabled or disabled using setopt : setopt extendedhistorysetopt sharehistory will enable them, while setopt noextendedhistorysetopt nosharehistory will disable them. If you want to keep shared histories, you’ll need to keep the extended timestamps (but you don’t need to explicitly enable the EXTENDED_HISTORY feature; the SHARE_HISTORY feature is self-sufficient). Note that extended history tells you not only when a command was executed, but also how long it took to run, which can be useful in some cases. The History section of the documentation has all the details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/399527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63928/"
]
} |
399,540 | I'm trying to configure i3 so that xzoom is always launched in floating mode. The problem is that xzoom's window does not have a WM_CLASS and its WM_NAME is not set at window creation, but after a small delay. Here is what the properties look like for the first few ms: $ xzoom & sleep .01; xprop -id 0x2200001_NET_WM_DESKTOP(CARDINAL) = 0WM_STATE(WM_STATE): window state: Normal icon window: 0x0WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOWWM_ICON_NAME(STRING) = "xzoom" As you can see, the only thing that tells it apart is the WM_ICON_NAME .After a few ms the title is added: $ xpropWM_NAME(STRING) = "xzoom x4"_NET_WM_DESKTOP(CARDINAL) = 0WM_STATE(WM_STATE): window state: Normal icon window: 0x0WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOWWM_ICON_NAME(STRING) = "xzoom" If I match the window using WM_NAME , the screen flashes horribly, as the other windows are re-arranged before falling back to their positions: for_window [title="xzoom*"] floating enable I tried looking for a command criterion that would look at the WM_ICON_NAME , but I couldn't find any. Is there an alternative way to start this program in floating mode? | zsh history includes a timestamp, if the EXTENDED_HISTORY feature is enabled, or if the SHARE_HISTORY feature is enabled; in the latter case, timestamps are used so multiple shells can read each others’ history accurately — they only need to read any new commands added since the last time they read the history file, and they can do so even when the history file is rewritten. These features are enabled or disabled using setopt : setopt extendedhistorysetopt sharehistory will enable them, while setopt noextendedhistorysetopt nosharehistory will disable them. If you want to keep shared histories, you’ll need to keep the extended timestamps (but you don’t need to explicitly enable the EXTENDED_HISTORY feature; the SHARE_HISTORY feature is self-sufficient). 
Note that extended history tells you not only when a command was executed, but also how long it took to run, which can be useful in some cases. The History section of the documentation has all the details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/399540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34892/"
]
} |
399,545 | Why does awk 's FILENAME variable return nothing when the input is an empty file? Does this mean awk doesn't open that file? If it doesn't open it, how does it know it's empty; and if it does open it, why doesn't it return the filename? I read this post , but it is not explained there why the below should work: awk 'BEGINFILE{print FILENAME}1' filename while the below doesn't: awk '{print FILENAME}' filename # or awk 'BEGIN{print FILENAME}' filename |
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
399,560 | I have a big csv file, which looks like this: 1,2,3,4,5,6,-991,2,3,4,5,6,-991,2,3,4,5,6,-991,2,3,4,5,6,251781,2,3,4,5,6,279861,2,3,4,5,6,-99 I want to select only the lines in which the 7th columns is equal to -99, so my output be: 1,2,3,4,5,6,-991,2,3,4,5,6,-991,2,3,4,5,6,-991,2,3,4,5,6,-99 I tried the following: awk -F, '$7 == -99' input.txt > output.txtawk -F, '{ if ($7 == -99) print $1,$2,$3,$4,$5,$6,$7 }' input.txt > output.txt But both of them returned an empty output.txt. Can anyone tell me what I'm doing wrong?Thanks. | The file that you run the script on has DOS line-endings. It may be that it was created on a Windows machine. Use dos2unix to convert it to a Unix text file. Alternatively, run it through tr : tr -d '\r' <input.txt >input-unix.txt Then use input-unix.txt with your otherwise correct awk code. To modify the awk code instead of the input file: awk -F, '$7 == "-99\r"' input.txt >output.txt This takes the carriage-return at the end of the line into account. Or, awk -F, '$7 + 0 == -99' input.txt >output.txt This forces the 7th column to be interpreted as a number, which "removes" the carriage-return. Similarly, awk -F, 'int($7) == -99' input.txt >output.txt would also remove the \r . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/399560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227902/"
]
} |
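The carriage-return effect described in the answer above can be reproduced with a small hypothetical two-row sample; tr strips the \r so the numeric comparison matches again:

```shell
# CRLF-terminated rows, as a Windows-created CSV would have.
data=$(printf '1,2,3,4,5,6,-99\r\n1,2,3,4,5,6,25178\r\n')
out=$(printf '%s\n' "$data" | tr -d '\r' | awk -F, '$7 == -99')
printf '%s\n' "$out"
```

Without the tr stage, the seventh field would be "-99" followed by a carriage return, and the string comparison `$7 == "-99"` would fail.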
399,600 | I updated to the newest, 3.26.1, version of GNOME several hours ago, and I don't see the list of background applications that used to be located in the bottom-left corner of the screen, on a hidden sliding panel. The icons belonging to Audacious, VLC, Dropbox, Redshift and other applications I run in the background don't see anywhere in the screen. I opened the Tweaks app (or equivalently the gnome-tweak-tool command) looking for relevant configurations with no results. What happened to this feature, and is there a way to have one similar to it if it is gone? Source Status Icons and GNOME , Form and Function Allan Day's blog | The legacy tray was removed in 3.26 (it was a stop-gap measure, destined to be removed at some point, as explained in the corresponding bug ). This is also mentioned in the release notes . To see your indicator icons, you can use an extension such as TopIcons Plus . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144096/"
]
} |
399,610 | I have ubuntu 16.04 installed on my local pc and I'm trying to make my work environment as similar to the way it is configured at my job to achieve that I wanted to use the same tcshrc file (yes, we use tcsh, not sure why...) anyway, when I try to open a tcsh terminal (or to source ~/.tcshrc for the matter) I'm getting an error: set: Variable name must begin with a letter. trying to isolate the cause I found out that the next lines are enough to cause it #!/bin/tcshecho 0set history = 2000 # this line is not the cause, verified by echoingecho 1set savehist = (2000 merge)echo 2 output: 01set: Variable name must begin with a letter. when i try to run set savehist = (2000 merge) as a regular shell command the terminal doesn't show any error. ofcourse that at my job the tcshrc is working fine. in both I have tcsh 6.18.01 installed any help is welcome thanks | One possible reason: you actually have an invalid character in there, accidentally added when creating the file. It would have to be something that's normally not (very) visible, for example, a non-breaking space or some such. $ cat test.cshset foo = bar$ tcsh test.csh set: Variable name must begin with a letter.$ od -c test.csh0000000 s e t 342 201 240 f o o = b a r0000020 \n0000021 That was a zero-width word joiner, it actually doesn't print on my system. Something like that od above should show you what there actually is. The easiest way to get rid of characters like that is probably to just find the offending line(s) from the error messages, and re-type them from scratch. (or debugging prints instead of error messages, since tcsh doesn't seem to give line numbers in them. oh well.) But if you want to, you could pipe the whole file through something like tr -d '\200-\377' , which would remove all bytes with the high bit set. 
(That is, assuming the file is UTF-8 encoded, and that tr works as it seems to work on my system, on bytes, not actual characters, and that there aren't any other non-ASCII characters you'd like to keep. Make backups.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256693/"
]
} |
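A quick way to flag such invisible bytes without reading od output, sketched under the assumption of the C locale (note that a literal tab would also be flagged by this range):

```shell
# First line contains a U+2060 word joiner (bytes \342\201\240);
# in the C locale, [^ -~] matches any byte outside printable ASCII.
bad=$(printf 'set \342\201\240x = y\n' | LC_ALL=C grep -c '[^ -~]' || true)
good=$(printf 'set x = y\n' | LC_ALL=C grep -c '[^ -~]' || true)
printf 'bad=%s good=%s\n' "$bad" "$good"
```

Run with grep -n instead of -c against the real .tcshrc to get the offending line numbers.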
399,614 | Recently, I was looking at the performance impact of tools such as strace . For instance, this blog post uses the default metrics given by dd. I wanted to do some measurements myself but with other programs. Is there a tool that measures the speed of execution of an arbitrary program? | Read also time(7) (assuming a Linux system). You can not only use time(1) but also some time functions (e.g. clock(3) , clock_gettime(2) , etc...) inside the program. See also this . Look also into gprof(1) , perf(1) , oprofile(1) . You may want to invoke the GCC compiler specifically (e.g. gcc -pg for gprof ) for profiling and/or benchmarking , which has some overhead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255885/"
]
} |
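For quick wall-clock comparisons like the ones in that blog post, even a tiny shell helper will do. This sketch uses GNU date's `%N` nanosecond format, so it is Linux/coreutils-specific:

```shell
# time_ms CMD [ARGS...]: run a command once and print its wall-clock
# duration in milliseconds.
time_ms() {
    _start=$(date +%s%N)
    "$@" > /dev/null 2>&1
    _end=$(date +%s%N)
    echo $(( (_end - _start) / 1000000 ))
}

time_ms sleep 0.2   # roughly 200, plus fork/exec overhead
```

To estimate a tracer's overhead you would compare `time_ms some-cmd` against `time_ms strace -f some-cmd` over several runs and look at the difference; for finer-grained numbers the in-process clocks mentioned above (`clock_gettime` etc.) avoid measuring the shell's own startup cost.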
399,619 | When booting a kernel in an embedded device, you need to supply a device tree to the Linux kernel, while booting a kernel on a regular x86 pc doesn't require a device tree -- why? As I understand, on an x86 pc the kernel "probes" for hardware (correct me if I'm wrong), so why can't the kernel probe for hardware in an embedded system? | Peripherals are connected to the main processor via a bus . Some bus protocols support enumeration (also called discovery), i.e. the main processor can ask “what devices are connected to this bus?” and the devices reply with some information about their type, manufacturer, model and configuration in a standardized format. With that information, the operating system can report the list of available devices and decide which device driver to use for each of them. Some bus protocols don't support enumeration, and then the main processor has no way to find out what devices are connected other than guessing. All modern PC buses support enumeration, in particular PCI (the original as well as its extensions and successors such as AGP and PCIe), over which most internal peripherals are connected, USB (all versions), over which most external peripherals are connected, as well as Firewire , SCSI , all modern versions of ATA/SATA , etc. Modern monitor connections also support discovery of the connected monitor ( HDMI , DisplayPort , DVI , VGA with EDID ). So on a PC, the operating system can discover the connected peripherals by enumerating the PCI bus, and enumerating the USB bus when it finds a USB controller on the PCI bus, etc. Note that the OS has to assume the existence of the PCI bus and the way to probe it; this is standardized on the PC architecture (“PC architecture” doesn't just mean an x86 processor: to be a (modern) PC, a computer also has to have a PCI bus and has to boot in a certain way). Many embedded systems use less fancy buses that don't support enumeration.
This was true on PC up to the mid-1990s, before PCI overtook ISA . Most ARM systems, in particular, have buses that don't support enumeration. This is also the case with some embedded x86 systems that don't follow the PC architecture. Without enumeration, the operating system has to be told what devices are present and how to access them. The device tree is a standard format to represent this information. The main reason PC buses support discovery is that they're designed to allow a modular architecture where devices can be added and removed, e.g. adding an extension card into a PC or connecting a cable on an external port. Embedded systems typically have a fixed set of devices¹, and an operating system that's pre-loaded by the manufacturer and doesn't get replaced, so enumeration is not necessary. ¹ If there's an external bus such as USB, USB peripherals are auto-discovered, they wouldn't be mentioned in the device tree. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/399619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153502/"
]
} |
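The difference is observable from userspace on Linux: enumerable buses appear under /sys/bus with their discovered devices, while a firmware-supplied device tree (if any) is exposed at /proc/device-tree. A rough probe, assuming sysfs and procfs are mounted:

```shell
# probe_hw: report what the kernel enumerated vs. what firmware described.
probe_hw() {
    for bus in pci usb; do
        if [ -d "/sys/bus/$bus/devices" ]; then
            echo "$bus: $(ls /sys/bus/$bus/devices | wc -l) enumerated device(s)"
        else
            echo "$bus: bus not present"
        fi
    done
    if [ -d /proc/device-tree ]; then
        echo "device tree: present (non-discoverable hardware described by firmware)"
    else
        echo "device tree: none (hardware is expected to be discoverable)"
    fi
}

probe_hw
```

On a PC you would typically see populated pci/usb directories and no device tree; on most ARM boards the device tree directory is present as well.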
399,690 | I am wondering whether there is a general way of passing multiple options to an executable via the shebang line ( #! ). I use NixOS, and the first part of the shebang in any script I write is usually /usr/bin/env . The problem I encounter then is that everything that comes after is interpreted as a single file or directory by the system. Suppose, for example, that I want to write a script to be executed by bash in posix mode. The naive way of writing the shebang would be: #!/usr/bin/env bash --posix but trying to execute the resulting script produces the following error: /usr/bin/env: ‘bash --posix’: No such file or directory I am aware of this post , but I was wondering whether there was a more general and cleaner solution. EDIT : I know that for Guile scripts, there is a way to achieve what I want, documented in Section 4.3.4 of the manual: #!/usr/bin/env sh exec guile -l fact -e '(@ (fac) main)' -s "$0" "$@" !# The trick, here, is that the second line (starting with exec ) is interpreted as code by sh but, being in the #! ... !# block, as a comment, and thus ignored, by the Guile interpreter. Would it not be possible to generalize this method to any interpreter? Second EDIT : After playing around a little bit, it seems that, for interpreters that can read their input from stdin , the following method would work: #!/usr/bin/env shsed '1,2d' "$0" | bash --verbose --posix /dev/stdin; exit; It's probably not optimal, though, as the sh process lives until the interpreter has finished its job. Any feedback or suggestion would be appreciated. | There is no general solution, at least not if you need to support Linux, because the Linux kernel treats everything following the first “word” in the shebang line as a single argument . 
I’m not sure what NixOS’s constraints are, but typically I would just write your shebang as #!/bin/bash --posix or, where possible, set options in the script : set -o posix Alternatively, you can have the script restart itself with the appropriate shell invocation: #!/bin/sh -if [ "$1" != "--really" ]; then exec bash --posix -- "$0" --really "$@"; fishift# Processing continues This approach can be generalised to other languages, as long as you find a way for the first couple of lines (which are interpreted by the shell) to be ignored by the target language. GNU coreutils ’ env provides a workaround since version 8.30, see unode ’s answer for details. (This is available in Debian 10 and later, RHEL 8 and later, Ubuntu 19.04 and later, etc.) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/399690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212582/"
]
} |
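The restart trick is easy to try end to end. This sketch writes a throwaway script that re-executes itself under `bash --posix` (any interpreter and flags could be substituted) and then proves it is running in POSIX mode:

```shell
cat > /tmp/posix-demo.sh <<'EOF'
#!/bin/sh -
if [ "$1" != "--really" ]; then exec bash --posix -- "$0" --really "$@"; fi
shift
# From here on we are running under bash with POSIX mode enabled:
shopt -qo posix && echo "posix mode on, args: $*"
EOF
chmod +x /tmp/posix-demo.sh
/tmp/posix-demo.sh hello world   # -> posix mode on, args: hello world
```

The `--really` marker is what prevents an infinite re-exec loop; `shift` then removes it so the script sees its original arguments.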
399,739 | I have an old laptop with a faulty screen (the screen works at random intervals). Recently I bought a screen and I thought I could use my old laptop as a media centre. Can I completely disable my laptop's screen? I have done this for my user, but I don't know how to do this system-wide. I have installed debian stretch with the default desktop environment. | Run the command xrandr -q to show the exact names. xrandr -q | grep 'VGA\|HDMI\|DP\|LVDS' This is a sample command to turn off LVDS-1 and enable VGA-1 : xrandr --output LVDS-1 --off --output VGA-1 --auto To switch back: xrandr --output VGA-1 --off --output LVDS-1 --auto | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222807/"
]
} |
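Since the output names vary per machine, a tiny parser helps scripts pick them automatically. xrandr needs a live X server, so this sketch works on captured `xrandr -q` text (the sample lines below are fabricated):

```shell
# connected_outputs: read `xrandr -q` output on stdin and print the
# names of connected outputs, one per line.
connected_outputs() {
    awk '$2 == "connected" { print $1 }'
}

sample='LVDS-1 connected 1366x768+0+0 (normal left inverted right) 309mm x 174mm
VGA-1 connected 1920x1080+1366+0 (normal left inverted right) 510mm x 287mm
HDMI-1 disconnected (normal left inverted right x axis y axis)'

printf '%s\n' "$sample" | connected_outputs   # prints LVDS-1 and VGA-1, one per line
```

On a real session you would run `xrandr -q | connected_outputs` and feed the resulting names to `--output NAME --off` / `--auto` as in the answer.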
399,746 | I am completely new to *NIX based OSes. One of the things that baffles me is that a process or program may execute setuid(0) and then perform privileged operations and revert back to its normal uid. My question is what is the mechanism in *NIX to prevent any arbitrary process from possessing root ? If I write a simple C program that calls setuid(0) under what conditions will that call succeed and under what conditions will it fail ? | The basic idea is that a process may only reduce its privileges. A process may not gain any privileges. There is one exception: a process that executes a program from a file that has a setuid or setgid flag set gains the privileges expressed by this flag. Note how this mechanism does not allow a program to run arbitrary code with elevated privileges. The only code that can be run with elevated privileges is setuid/setgid executables. The root user, i.e. the user with id 0, is more privileged than anything else. A process with user 0 is allowed to do anything. (Group 0 is not special.) Most processes keep running with the same privileges. Programs that log a user in or start a daemon start as root, then drop all privileges and execute the desired program as the user (e.g. the user's login shell or session manager, or the daemon). Setuid (or setgid) programs can operate as the target user and group, but many switch between the caller's privileges and their own additional privileges depending on what they're doing, using the mechanisms I am going to describe now. Every process has three user IDs : the real user ID (RUID), the effective user ID (EUID), and the saved user ID (SUID). The idea is that a process can temporarily gain privileges, then abandon them when it doesn't need them anymore, and gain them back when it needs them again. There's a similar mechanism for groups , with a real group ID (RGID), an effective group ID (EGID), a saved group ID (SGID) and supplementary groups. 
The way they work is: Most programs keep the same real UID and GID throughout. The main exception is login programs (and daemon launchers), which switch their RUID and RGID from root to the target user and group. File access, and operations that require root privileges, look at the effective UID and GID. Privileged programs often switch their effective IDs depending on whether they're executing a privileged operation. The saved IDs allow switching the effective IDs back and forth. A program may switch its effective ID between the saved ID and the real ID. A program that needs to perform certain actions with root privileges normally runs with its EUID set to the RUID, but calls seteuid to set its EUID to 0 before running the action that requires privileges and calls seteuid again to change the EUID back to the RUID afterwards. In order to perform the call to seteuid(0) even though the EUID at the time is not 0, the SUID must be 0. The same mechanism can be used to gain group privileges. A typical example is a game that saves high scores of local users. The game executable is setgid games . When the game starts, its EGID is set to games , but it changes back to the RGID so as not to risk performing any action that the user isn't normally allowed to do. When the game is about to save a high score, it changes its EGID temporarily to games . This way: Because the high score file requires privileges that ordinary users don't have, the only way to add an entry to the high score file is to play the game. If there's a security vulnerability in the game, the worst that it can do is grant a user permission to the games group, allowing them to cheat on high scores. If there's a bug in the game that doesn't result in the program calling the setegid function, e.g.
a bug that only causes the game to write to an unintended file, then that bug doesn't allow cheating on high scores, because the game doesn't have the permission to write to the high score file without calling setegid . What I wrote above describes a basic traditional Unix system. Some modern systems have other features that complement the traditional Unix privilege model. These features come in addition to the basic user/group effective/real system and sometimes interact with it. I won't go into any detail about these additional features, but I'll just mention three features of the Linux security model. The permission to perform many actions is granted via a capability rather than to user ID 0. For example, changing user IDs requires the capability CAP_SETUID , rather than having user ID 0. Programs running as user ID 0 receive all capabilities unless they go out of their way, and programs running with CAP_SETUID can acquire root privileges, so in practice running as root and having CAP_SETUID are equivalent. Linux has several security frameworks that can restrict what a process can do, even if that process is running as user ID 0. With some security frameworks, unlike with the traditional Unix model and capabilities, a process may gain privileges upon execve due to the security framework's configuration rather than due to flags in the executable file's metadata. Linux has user namespaces . A process running as root in a namespace only has privileges inside that namespace. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230582/"
]
} |
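On Linux the kernel-tracked user IDs of a process (real, effective, saved, filesystem) can be inspected without writing any C -- they are the four numbers on the Uid: line of /proc/<pid>/status. A quick sketch:

```shell
# show_uids: print the real, effective, saved and filesystem UIDs of the
# current process as the kernel sees them (Linux-specific: /proc).
show_uids() {
    awk '/^Uid:/ { printf "ruid=%s euid=%s suid=%s fsuid=%s\n", $2, $3, $4, $5 }' \
        /proc/self/status
}

show_uids
# For an ordinary (non-setuid) process all four are equal; running a
# setuid executable is what makes them diverge, per the answer above.
echo "id agrees: real=$(id -ru) effective=$(id -u)"
```

Running the same check from inside a setuid binary (e.g. `passwd` reading its own /proc entry) would show the EUID and SUID differing from the RUID.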
399,781 | I am trying to find number of runs of a script but its always 2 even though only one running. sh 11.sh1 11.sh already running,exiting.. here is code. ps -ef | grep -v grep | grep -c "$0"if [[ `ps -ef | grep -v grep | grep -c "$0"` -gt "1" ]]; then `echo " $0 already running,exiting.."`fi | The basic idea is that a process may only reduce its privileges. A process may not gain any privileges. There is one exception: a process that executes a program from a file that has a setuid or setgid flag set gains the privileges expressed by this flag. Note how this mechanism does not allow a program to run arbitrary code with elevated privileges. The only code that can be run with elevated privileges is setuid/setgid executables. The root user, i.e. the user with id 0, is more privileged than anything else. A process with user 0 is allowed to do anything. (Group 0 is not special.) Most processes keep running with the same privileges. Programs that log a user in or start a daemon start as root, then drop all privileges and execute the desired program as the user (e.g. the user's login shell or session manager, or the daemon). Setuid (or setgid) programs can operate as the target user and group, but many switch between the caller's privileges and their own additional privileges depending on what they're doing, using the mechanisms I am going to describe now. Every process has three user IDs : the real user ID (RUID), the effective user ID (EUID), and the saved user ID (SUID). The idea is that a process can temporarily gain privileges, then abandon them when it doesn't need them anymore, and gain them back when it needs them again. There's a similar mechanism for groups , with a real group ID (RGID), an effective group ID (EGID), a saved group ID (SGID) and supplementary groups. The way they work is: Most programs keep the same real UID and GID throughout. 
The main exception is login programs (and daemon launchers), which switch their RUID and RGID from root to the target user and group. File access, and operations that require root privileges, look at the effective UID and GID. Privileged programs often switch their effective IDs depending on whether they're executing a privileged operation. The saved IDs allow switching the effective IDs back and forth. A program may switch its effective ID between the saved ID and the real ID. A program that needs to perform certain actions with root privileges normally runs with its EUID set to the RUID, but calls seteuid to set its EUID to 0 before running the action that requires privileges and calls seteuid again to change the EUID back to the RUID afterwards. In order to perform the call to seteuid(0) even though the EUID at the time is not 0, the SUID must be 0. The same mechanism can be used to gain group privileges. A typical example is a game that saves high scores of local users. The game executable is setgid games . When the game starts, its EGID is set to games , but it changes back to the RGID so as not to risk performing any action that the user isn't normally allowed to do. When the game is about to save a high score, it changes its EGID temporarily to games . This way: Because the high score file requires privileges that ordinary users don't have, the only way to add an entry to the high score file is to play the game. If there's a security vulnerability in the game, the worst that it can do is grant a user permission to the games group, allowing them to cheat on high scores. If there's a bug in the game that doesn't result in the program calling the setegid function, e.g. a bug that only causes the game to write to an unintended file, then that bug doesn't allow cheating on high scores, because the game doesn't have the permission to write to the high score file without calling setegid . What I wrote above describes a basic traditional Unix system.
Some modern systems have other features that complement the traditional Unix privilege model. These features come in addition to the basic user/group effective/real system and sometimes interact with it. I won't go into any detail about these additional features, but I'll just mention three features of the Linux security model. The permission to perform many actions is granted via a capability rather than to user ID 0. For example, changing user IDs requires the capability CAP_SETUID , rather than having user ID 0. Programs running as user ID 0 receive all capabilities unless they go out of their way, and programs running with CAP_SETUID can acquire root privileges, so in practice running as root and having CAP_SETUID are equivalent. Linux has several security frameworks that can restrict what a process can do, even if that process is running as user ID 0. With some security frameworks, unlike with the traditional Unix model and capabilities, a process may gain privileges upon execve due to the security framework's configuration rather than due to flags in the executable file's metadata. Linux has user namespaces . A process running as root in a namespace only has privileges inside that namespace. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170161/"
]
} |
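On the question's actual symptom: the count is always 2 because the backtick command substitution forks a copy of the running script, so `ps` sees two processes whose arguments match `$0`. Rather than tuning the `ps | grep -c` pipeline, a sturdier single-instance guard holds a lock on a fixed file; this sketch uses `flock` (util-linux, Linux-specific; the lock path is just an example):

```shell
#!/bin/sh
# Single-instance guard: hold an exclusive non-blocking lock on a fixed
# file for the lifetime of the script, instead of counting ps output.
lockfile=/tmp/single-instance-demo.lock   # example path

exec 9> "$lockfile"
if ! flock -n 9; then
    echo "already running, exiting.." >&2
    exit 1
fi
echo "got the lock, doing work"
# ...real work; the lock is released when fd 9 closes at exit.
```

Unlike process counting, this cannot miscount: the kernel releases the lock automatically even if the script is killed, so there is no stale-pidfile problem either.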
399,808 | I'm running Debian 8.9 Jessie on a Linux 2.6.32-openvz-042stab120.11-amd64 OpenVZ container. I'm trying to use curlftpfs 0.9.1 as this version has a functionality that was removed in later versions - namely, open(read+write) and open(write). The current version is 0.9.2-9~deb8u1 : apt-cache policy curlftpfscurlftpfs: Installed: (none) Candidate: 0.9.2-9~deb8u1 Version table: 0.9.2-9~deb8u1 0 500 http://ftp.debian.org/debian/ jessie/main amd64 Packages I was able to find both binaries and sources on Debian Snapshot . However, if I try to install the .deb binary, I get unmet dependencies: # dpkg -i ./curlftpfs_0.9.1-3_amd64.debSelecting previously unselected package curlftpfs.(Reading database ... 44948 files and directories currently installed.)Preparing to unpack ./curlftpfs_0.9.1-3_amd64.deb ...Unpacking curlftpfs (0.9.1-3) ...dpkg: dependency problems prevent configuration of curlftpfs: curlftpfs depends on fuse-utils; however: Package fuse-utils is not installed. curlftpfs depends on libgnutls13 (>= 2.0.4-0); however: Package libgnutls13 is not installed. curlftpfs depends on libkrb53 (>= 1.6.dfsg.2); however: Package libkrb53 is not installed. curlftpfs depends on libldap2 (>= 2.1.17-1); however: Package libldap2 is not installed.dpkg: error processing package curlftpfs (--install): dependency problems - leaving unconfiguredProcessing triggers for man-db (2.7.0.2-5) ...Errors were encountered while processing: curlftpfs And apt-get tells me these dependencies are not installable: #apt-get install Reading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt-get -f install' to correct these.The following packages have unmet dependencies: curlftpfs : Depends: fuse-utils but it is not installable Depends: libgnutls13 (>= 2.0.4-0) but it is not installable Depends: libkrb53 (>= 1.6.dfsg.2) but it is not installable Depends: libldap2 (>= 2.1.17-1) but it is not installableE: Unmet dependencies. 
Try using -f. But running apt-get -f install installs the current version of curlftpfs . Trying gdebi isn't any better: # gdebi curlftpfs_0.9.1-3_amd64.deb Reading package lists... DoneBuilding dependency tree Reading state information... DoneBuilding data structures... Done Building data structures... Done Este pacote n\xe3o pode ser desinstaladoDependency is not satisfiable: fuse-utils If I add a debian-snapshot to my sources list, I can get the specific package version I want, but then I get lost in dependency hell: apt-get install -f curlftpfs=0.9.1-3+b2Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: curlftpfs : Depends: libkrb53 (>= 1.6.dfsg.2) but it is not going to be installed Depends: fuse-utils but it is not going to be installedE: Unable to correct problems, you have held broken packages.root@tunnelserver:~/temp# apt-get install fuse-utilsReading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: fuse-utils : Depends: libfuse2 (= 2.7.3-4) but 2.9.3-15+deb8u2 is to be installedE: Unable to correct problems, you have held broken packages.root@tunnelserver:~/temp# apt-get install libfuse2=2.7.3-4Reading package lists... DoneBuilding dependency tree Reading state information... 
DoneThe following packages were automatically installed and are no longer required: libselinux1-dev libsepol1-devUse 'apt-get autoremove' to remove them.Suggested packages: fuse-utilsThe following packages will be REMOVED: fuse gvfs-fuse libfuse-dev ntfs-3g sshfs testdiskThe following packages will be DOWNGRADED: libfuse20 upgraded, 0 newly installed, 1 downgraded, 6 to remove and 0 not upgraded.Need to get 128 kB of archives.After this operation, 4059 kB disk space will be freed.Do you want to continue? [Y/n] n So, I decided to build the binaries. I downloaded the source from Debian Snapshot , applied the diff patch, ran ./configure , and got this error: debian configure: error: "libcurl not found" : # ./configurechecking for a BSD-compatible install... /usr/bin/install -cchecking whether build environment is sane... yeschecking for gawk... gawkchecking whether make sets $(MAKE)... yeschecking for gcc... gccchecking for C compiler default output file name... a.outchecking whether the C compiler works... yeschecking whether we are cross compiling... nochecking for suffix of executables... checking for suffix of object files... ochecking whether we are using the GNU C compiler... yeschecking whether gcc accepts -g... yeschecking for gcc option to accept ISO C89... none neededchecking for style of include used by make... GNUchecking dependency style of gcc... gcc3checking how to run the C preprocessor... gcc -Echecking for a BSD-compatible install... /usr/bin/install -cchecking whether ln -s works... yeschecking whether make sets $(MAKE)... (cached) yeschecking build system type... x86_64-unknown-linux-gnuchecking host system type... x86_64-unknown-linux-gnuchecking for a sed that does not truncate output... /bin/sedchecking for grep that handles long lines and -e... /bin/grepchecking for egrep... /bin/grep -Echecking for ld used by gcc... /usr/bin/ldchecking if the linker (/usr/bin/ld) is GNU ld... yeschecking for /usr/bin/ld option to reload object files... 
-rchecking for BSD-compatible nm... /usr/bin/nm -Bchecking how to recognise dependent libraries... pass_allchecking for ANSI C header files... yeschecking for sys/types.h... yeschecking for sys/stat.h... yeschecking for stdlib.h... yeschecking for string.h... yeschecking for memory.h... yeschecking for strings.h... yeschecking for inttypes.h... yeschecking for stdint.h... yeschecking for unistd.h... yeschecking dlfcn.h usability... yeschecking dlfcn.h presence... yeschecking for dlfcn.h... yeschecking for g++... nochecking for c++... nochecking for gpp... nochecking for aCC... nochecking for CC... nochecking for cxx... nochecking for cc++... nochecking for cl.exe... nochecking for FCC... nochecking for KCC... nochecking for RCC... nochecking for xlC_r... nochecking for xlC... nochecking whether we are using the GNU C++ compiler... nochecking whether g++ accepts -g... nochecking dependency style of g++... nonechecking for g77... nochecking for xlf... nochecking for f77... nochecking for frt... nochecking for pgf77... nochecking for cf77... nochecking for fort77... nochecking for fl32... nochecking for af77... nochecking for xlf90... nochecking for f90... nochecking for pgf90... nochecking for pghpf... nochecking for epcf90... nochecking for gfortran... nochecking for g95... nochecking for xlf95... nochecking for f95... nochecking for fort... nochecking for ifort... nochecking for ifc... nochecking for efc... nochecking for pgf95... nochecking for lf95... nochecking for ftn... nochecking whether we are using the GNU Fortran 77 compiler... nochecking whether accepts -g... nochecking the maximum length of command line arguments... 32768checking command to parse /usr/bin/nm -B output from gcc object... okchecking for objdir... .libschecking for ar... archecking for ranlib... ranlibchecking for strip... stripchecking for correct ltmain.sh version... yeschecking if gcc supports -fno-rtti -fno-exceptions... nochecking for gcc option to produce PIC... 
-fPICchecking if gcc PIC flag -fPIC works... yeschecking if gcc static flag -static works... yeschecking if gcc supports -c -o file.o... yeschecking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yeschecking whether -lc should be explicitly linked in... nochecking dynamic linker characteristics... GNU/Linux ld.sochecking how to hardcode library paths into programs... immediatechecking whether stripping libraries is possible... yeschecking if libtool supports shared libraries... yeschecking whether to build shared libraries... yeschecking whether to build static libraries... yesconfigure: creating libtoolappending configuration tag "CXX" to libtoolappending configuration tag "F77" to libtoolchecking for pkg-config... /usr/bin/pkg-configchecking pkg-config is at least version 0.9.0... yeschecking for GLIB... yeschecking for FUSE... yeschecking for gawk... (cached) gawkchecking for curl-config... nochecking whether libcurl is usable... noconfigure: error: "libcurl not found" I can't find a libcurl package that I can install. How can I proceed? | I found out that if you install libcurl4-openssl-dev , then make won't complain about the absence of libcurl anymore: apt-get install libcurl4-openssl-dev Unfortunately, I'm unable to provide an explanation on why or how this happens (other than the package install this elusive libcurl ). But I have tested and confirmed myself, and it does work. So I'm leaving this answer here. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28160/"
]
} |
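The failing check at the end of that configure run ("checking for curl-config... no") can be reproduced stand-alone. A sketch that probes for libcurl development files the same two ways configure scripts usually do (the package name in the hint is the Debian one from the answer):

```shell
# have_libcurl: report how (or whether) libcurl development files are found.
have_libcurl() {
    if command -v curl-config >/dev/null 2>&1; then
        echo "found via curl-config: $(curl-config --version)"
        return 0
    fi
    if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists libcurl 2>/dev/null; then
        echo "found via pkg-config: libcurl $(pkg-config --modversion libcurl)"
        return 0
    fi
    echo "libcurl dev files not found; try: apt-get install libcurl4-openssl-dev"
    return 1
}

have_libcurl || true
```

This also explains why installing libcurl4-openssl-dev fixed the build: that package ships curl-config and the headers that the configure test links against.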
399,812 | Before this problem I was stuck in a grub rescue screen after trying to install puppy linux, and to fix it I used a boot rescue disc, but after using the boot rescue disc I restarted my computer and it says no operating system found, then this screen appears https://ibb.co/iAHe86 and I have no idea what to do from here. I thought the boot repair disc erased puppy linux after I installed it, so I put in a lubuntu disc and tried installing that, but while I was trying to install it, it said puppy linux was still on my computer, but I don't know how because it's not booting. Does anyone know how I can get puppy linux to boot from here? | I found out that if you install libcurl4-openssl-dev , then make won't complain about the absence of libcurl anymore: apt-get install libcurl4-openssl-dev Unfortunately, I'm unable to provide an explanation on why or how this happens (other than the package installs this elusive libcurl ). But I have tested and confirmed myself, and it does work. So I'm leaving this answer here. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249687/"
]
} |
399,930 | How can I achieve the effects of read -e (which turns on line editing via readline and is available in bash ) from a general (POSIX) shell? I don’t want to lose POSIX compatibility just because of this one command. | readline is a GNU project (developed alongside bash ). There are other alternatives like BSD libedit, and all POSIX shells have their own line editor, either specific to the shell or based on either of those libraries that implement at least a vi editing mode (the only one specified by POSIX, though most also support an emacs mode (not specified by POSIX because RMS objected to it)). POSIX however only specifies that line editing mode for the shell command line, not for read . ksh93 does support it with read though (provided stdin and stderr are on a terminal device). There, you can do: set -o emacsIFS= read -r 'line?prompt: ' The zsh equivalent is with the vared (variable editor) builtin: line=; vared -p 'prompt: ' line That's the most feature rich with its history handling and the complete customisation of key binding and completion. read -e is bash specific. IFS= read -rep 'prompt: ' variable There is no POSIX equivalent. POSIXly, you could start vi (specified by POSIX) to edit a temporary file and read the content of that file into a variable. Alternatively, you could look for the availability of one of zsh / bash / ksh93 or rlwrap or other wrappers around libreadline, or socat (provided it has been built with readline support) and use any of those if available to read the line or revert to plain read or vi if not. Or use that LE line-editor function seen in this similar Q&A that implements a limited emacs -like line editor. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/399930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9114/"
]
} |
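The "use an editing shell if available, fall back to plain read otherwise" idea from the end of the answer can be packaged as one function. This is only a sketch: it delegates to bash's readline-backed `read -e` when stdin is a terminal and bash exists, and degrades to a plain POSIX `read` (no editing) otherwise:

```shell
# edit_read PROMPT: print one line read from the user on stdout.
edit_read() {
    if [ -t 0 ] && command -v bash >/dev/null 2>&1; then
        # readline editing; the prompt is passed as $0 of the bash -c string
        bash -c 'IFS= read -rep "$0" line && printf "%s\n" "$line"' "$1"
    else
        # plain POSIX read: no editing, prompt goes to stderr
        printf '%s' "$1" >&2
        IFS= read -r line && printf '%s\n' "$line"
    fi
}

# Non-interactive demo (stdin is a pipe, so the fallback branch runs):
printf 'hello\n' | edit_read "prompt: " 2>/dev/null   # prints: hello
```

The same dispatch could try zsh's `vared` or ksh93's `read 'var?prompt'` before falling back, following the probing order the answer suggests.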
399,980 | Suppose I have the following trivial example in my history : ...76 cd ~77 ./generator.sh out.file78 cp out.file ~/out/79 ./out/cleaner.sh .80 ls -alnh /out... If I wanted to execute commands 77 , 78 , and 79 in one command, does there exist a shortcut for this? I've tried !77 !78 !79 , which will simply place them all on a single line to execute. | EDIT: You can do this in POSIX-compliant fashion with the fix command tool fc : fc 77 79 This will open your editor (probably vi ) with commands 77 through 79 in the buffer. When you save and exit ( :x ), the commands will be run. If you don't want to edit them and you're VERY SURE you know which commands you're calling, you can use: fc -e true 77 79 This uses true as an "editor" to edit the commands with, so it just exits without making any changes and the commands are run as-is. ORIGINAL ANSWER: You can use: history -p \!{77..79} | bash This assumes that you're not using any aliases or functions or any variables that are only present in the current execution environment, as of course those won't be available in the new shell being started. A better solution (thanks to Michael Hoffman for reminding me in the comments) is: eval "$(history -p \!{77..79})" One of the very, very few cases where eval is actually appropriate! Also see: Is there any way to execute commands from history? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/399980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79419/"
]
} |
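The `eval "$(history -p ...)"` form can be exercised outside an interactive session too; the only wrinkle is that scripts keep no history unless `set -o history` is turned on. Since that and `history -p` are bash features, the demo invokes bash explicitly:

```shell
# Build a throwaway script whose entries 1-3 stand in for 77-79.
cat > /tmp/hist-demo.sh <<'EOF'
set -o history
echo cmd-77
echo cmd-78
echo cmd-79
eval "$(history -p '!1' '!2' '!3')"
EOF
bash /tmp/hist-demo.sh
```

The three echo lines should run twice -- once directly, once via the expansion -- mirroring what `eval "$(history -p '!77' '!78' '!79')"` does in an interactive shell.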
399,986 | I'm running xfce on Arch without a DM, using xorg-xinit to startx. By default, after startup, I get a login prompt on tty1 and all is good. However, I'd like to change the default behavior so I am dropped at a login prompt on tty6 (or whatever) without having to manually Ctrl+Alt+F6. I've spent a bunch of time reading various sources, Arch wiki, man pages, http://0pointer.de/blog/projects/systemd-docs.html , etc. However, I'm still not getting it. I've tried both manually adding and deleting the files /etc/systemd/system/getty.target.wants/[email protected] and [email protected]. Alternatively, I also used systemctl to enable and disable them. As a test, I also edited the last line of /usr/lib/systemd/system/[email protected] from DefaultInstance=tty1 to DefaultInstance=tty7, and combinations of all the above. (I would have created it in /etc/systemd/system if it worked.) I asked on the Arch forums and got one very general reply, mostly crickets chirping. Is what I'm trying to do frowned upon for some reason? I ended up just creating a service file in /etc/systemd/system that calls a bash one-liner with chvt in it. This gives me what I wanted, but now I can't scroll the boot messages I have set up to not clear on tty1. This solution also seems like a bad add-on hack. What would be the proper way to do this? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/399986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134593/"
]
} |
400,101 | Hi there, I am having a problem connecting my Raspberry Pi to a wifi dongle. I have followed a lot of tutorials from the internet but no success so far. My WIFI dongle can scan the networks but it's not connecting to any network. Here is what my configuration file looks like: ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev country=GB update_config=1 network={ ssid="noname" #psk="zong4gisbest" psk=ead5b0b7e82e1a68f01e9a17a2a7719ec24575c89bb5b5805e4ae49c80daa983 } Here are the results of my commands on Raspbian: iwconfig wlan0 unassociated Nickname:"<WIFI@REALTEK>" Mode:Auto Frequency=2.412 GHz Access Point: Not-Associated Sensitivity:0/0 Retry:off RTS thr:off Fragment thr:off Power Management:off Link Quality:0 Signal level:0 Noise level:0 Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 eth0 no wireless extensions. lo no wireless extensions. lsusb Bus 001 Device 004: ID 0bda:0179 Realtek Semiconductor Corp. RTL8188ETV Wireless LAN 802.11n Network Adapter Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Can you please help me resolve the issue? Thanks. | Edit your wpa_supplicant.conf , change the following lines: network={ ssid "noname" psk"zong4gisbest" to: network={ ssid="noname" #psk="zong4gisbest" psk=ead5b0b7e82e1a68f01e9a17a2a7719ec24575c89bb5b5805e4ae49c80daa983 } save, then run: wpa_supplicant -iwlan0 -D wext -c/etc/wpa_supplicant/wpa_supplicant.conf -B dhclient wlan0 The error nl80211: Driver does not support authentication/association or connect commands means that the standard nl80211 driver doesn't support your device; you should use the old driver wext .
To correctly set up your wpa_supplicant.conf , it is better to use the wpa_passphrase command: wpa_passphrase YOUR-SSID YOUR-PASSWORD >> /etc/wpa_supplicant/wpa_supplicant.conf To automatically connect to your AP after a restart, edit the wlan0 interface in your /etc/network/interfaces as follows: allow-hotplug wlan0 iface wlan0 inet dhcp wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/400101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230162/"
]
} |
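One detail worth checking in configs like the one above: the unquoted `psk=` form (what `wpa_passphrase` writes alongside the commented passphrase) must be a 64-digit lowercase-hex string. A small sanity check on the key from this record:

```shell
# A raw (unquoted) psk= value is a 256-bit pre-shared key, i.e. exactly
# 64 lowercase hex digits; anything shorter must use the quoted-passphrase form.
psk=ead5b0b7e82e1a68f01e9a17a2a7719ec24575c89bb5b5805e4ae49c80daa983
if printf '%s' "$psk" | grep -Eq '^[0-9a-f]{64}$'; then
  verdict=valid
else
  verdict=invalid
fi
echo "$verdict"
```

A key that fails this check (wrong length, stray quotes, uppercase hex) is a common reason wpa_supplicant rejects the network block outright.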
400,123 | I have a command below but it does not output anything. If I retain the print $0 or any column (e.g. print $2 , print $3 ) it will print as it should. It seems that it does not read the "if" statement. awk -F"," '{if ($2 >= 09170000000 && $2 <= 09179999999) print $0 }' filename Sample file: "dummy","09171234567","","dummy","dummy","dummy","dummy" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/400123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257133/"
]
} |
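A hedged illustration (not taken from the original thread) of how a numeric range test can silently fail on this kind of quoted CSV: with `-F","` the literal double quotes stay in `$2`, so awk falls back to a string comparison that never matches; splitting on `","` instead leaves a clean numeric field:

```shell
# Sample record from the question above.
line='"dummy","09171234567","","dummy","dummy","dummy","dummy"'

# Separator ",": $2 is "09171234567" WITH the quote characters, so awk
# compares strings and the range test fails -> no output.
miss=$(printf '%s\n' "$line" | awk -F',' '$2 >= 9170000000 && $2 <= 9179999999')

# Separator "\",\"" strips the quotes from the inner fields, so $2 looks
# numeric and the comparison is done on numbers -> the line is printed.
hit=$(printf '%s\n' "$line" | awk -F'","' '$2 >= 9170000000 && $2 <= 9179999999')

printf 'miss=[%s]\nhit=[%s]\n' "$miss" "$hit"
```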
400,142 | What does this command do? I know that CSI n ; m H moves the cursor to row n, column m, but what does the command from the title, ^[[H^[[2J , do? | That's a visual representation (where ^[ represents the ESC character) of the sequence to clear the screen and bring the cursor to the top in xterm-like terminals at least: $ TERM=xterm tput clear | cat -v ^[[H^[[2J To find out about those escape sequences, look at the ctlseqs.txt document shipped with xterm . There, you'll find: ESC [ Control Sequence Introducer (CSI is 0x9b). and: CSI Ps ; Ps H Cursor Position [row;column] (default = [1,1]) (CUP). and: CSI Ps J Erase in Display (ED). Ps = 0 -> Erase Below (default). Ps = 1 -> Erase Above. Ps = 2 -> Erase All. Ps = 3 -> Erase Saved Lines (xterm). (note that ^[[2J doesn't clear the saved lines or alternate screen). tput clear (or clear ) on xterm -like terminals does the same as printf '\e[H\e[2J' . For that it queries the terminfo database to know what the sequence of characters is for the clear capability for the terminal whose name is stored in the $TERM environment variable. If you dump the entry for the xterm terminal in the terminfo database with infocmp xterm , you'll see in it: $ infocmp -1 xterm | grep clear clear=\E[H\E[2J, Which is another way to find out about a given escape sequence: $ infocmp -L -1 | grep J clear_screen=\E[H\E[2J, clr_eos=\E[J, (here using the L ong name for the capabilities). Then, you can do man 5 terminfo for a description of those capabilities. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133532/"
]
} |
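The same bytes can be produced without consulting terminfo at all: ESC is octal 033, and `cat -v` renders it as `^[`, which is exactly how the sequence appears in the question's title:

```shell
# CUP with no parameters ("home" the cursor) followed by ED 2 ("erase all"),
# made visible with cat -v.
shown=$(printf '\033[H\033[2J' | cat -v)
printf '%s\n' "$shown"
```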
400,225 | If you open the defragment section of btrfs-filesystem(8) , you will see the following ominous inscription left by the developers: Warning: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 will break up the ref-links of COW data (for example files copied with cp --reflink , snapshots or de-duplicated data). This may cause considerable increase of space usage depending on the broken up ref-links. That sounds terrible. A selling point of btrfs is its ability to create snapshots without copying everything. I mostly create readonly snapshots. Do the files of readonly snapshots also count as “COW-data” or will parent subvolume deduplication survive without making disk space bloat? | Yes, files in a readonly snapshot count as COW-data and will contribute to disk space bloat caused by defragmenting. When defragmentation happens, data is copied from the old extents into fewer new extents. The new extents are distinct from the old extents. All other copies of the file (in snapshots, for instance) still point to the old extents. Therefore, you have data bloat. There's a long thread about defragmenting on the mailing list starting here that has some interesting points. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157435/"
]
} |
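The ref-links discussed above are what `cp --reflink` creates. A small sketch using `--reflink=auto`, which shares extents on COW filesystems such as btrfs and quietly falls back to an ordinary copy elsewhere (the temp file names are arbitrary):

```shell
# Clone a small file; on btrfs/XFS both names share the same extents until
# one copy is rewritten (or the sharing is broken up by defragmentation).
src=$(mktemp)
head -c 4096 /dev/zero > "$src"
cp --reflink=auto "$src" "$src.copy"
same=$(cmp -s "$src" "$src.copy" && echo yes)
echo "$same"
rm -f "$src" "$src.copy"
```

On btrfs, defragmenting `$src` afterwards would rewrite its data into new extents while `$src.copy` kept pointing at the old ones, which is exactly the space bloat the warning describes.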
400,231 | I'm just wondering what is the equivalent of apt-get upgrade , apt upgrade or yum update with OpenWRT or LEDE? | There is no single command or argument, but you can easily do it. To upgrade all of the packages, LEDE recommends: opkg list-upgradable | cut -f 1 -d ' ' | xargs -r opkg upgrade There are other less efficient ways where people use AWK and such. An important caveat often follows with extensive use of LEDE / OpenWRT's opkg : Since OpenWrt firmware stores the base system in a compressed read-only partition, any update to base system packages will be written in the read-write partition and therefore use more space than it would if it was just overwriting the older version in the compressed base system partition. It's recommended to check the available space in internal flash memory and the space requirements for updates of base system packages. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/400231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
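What the recommended pipeline actually hands to `opkg upgrade` can be shown with simulated `opkg list-upgradable` output; the package names and versions below are invented:

```shell
# `opkg list-upgradable` prints "name - old-version - new-version" lines;
# cut keeps the first space-separated field and xargs batches the names
# onto a single upgrade command (echo stands in for the real opkg here).
fake_list='busybox - 1.28.4-4 - 1.30.1-1
dnsmasq - 2.80-1 - 2.80-2'
cmd=$(printf '%s\n' "$fake_list" | cut -f 1 -d ' ' | xargs -r echo opkg upgrade)
printf '%s\n' "$cmd"
```

The `-r` flag matters: with nothing upgradable, xargs runs nothing instead of invoking `opkg upgrade` with no arguments.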
400,235 | Elsewhere I have seen a cd function as below: cd(){ builtin cd "$@"} why is it recommended to use $@ instead of $1 ? I created a test directory "r st" and called the script containing this function and it worked either way $ . cdtest.sh "r st" but $ . cdtest.sh r st failed whether I used "$@" or "$1" | Because, according to bash(1) , cd takes arguments cd [-L|[-P [-e]] [-@]] [dir] Change the current directory to dir. if dir is not supplied, ... so therefore the directory actually may not be in $1 as that could instead be an option such as -L or another flag. How bad is this? $ cd -L /var/tmp $ pwd /var/tmp $ cd() { builtin cd "$1"; } $ cd -L /var/tmp $ pwd /home/jhqdoe $ Things could go very awry if you end up not where you expect using cd "$1" … | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/400235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194688/"
]
} |
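The failure mode from the answer's transcript can be reproduced with two throwaway wrappers; `command cd` is used in place of `builtin cd` so the sketch also runs in plain POSIX sh, and `/tmp` is just a convenient starting directory:

```shell
cd_one() { command cd "$1"; }   # forwards only the first argument
cd_all() { command cd "$@"; }   # forwards options and directory alike

cd /tmp && cd_all -L / && p_all=$(pwd)
cd /tmp && cd_one -L / && p_one=$(pwd)   # the "/" is dropped; bare `cd -L` falls back to $HOME
printf 'cd_all: %s\ncd_one: %s\n' "$p_all" "$p_one"
```

With `"$1"`, the option `-L` is taken as the only argument and the intended directory never reaches `cd`, so you silently land in `$HOME` instead of `/`.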
400,313 | I have bunch of lines in findmydomain.txt and would like to extract only those domain names which don't have a path after the domain and save them in a new file. I couldn't figured out how to do this because some domain names finish in .gov , some in .vn , some don't have www etc.. I am expecting to extract only domain names without a filepath to a new file. For example: http://www.drexel.edu/http://trianglewordpress.com/http://www.nasa.gov/http://www.mexico.com.mx/ findmydomain.txt http://www.safmls.org/2009/2009%20Presentations/Hemostasis%20-%20Stop%20Doing%20Bleeding%20Times.dochttp://debsabo.com/Luke_2.dochttp://lessons.ctaponline.org/~ferson/worksheets/Factoring%20Trinomials.dochttp://shalegasconsortium.com/down12547http://www.auburnschools.org/drake/jwilliams/New%20Stuff/Review%20Questions%20for%20Evolution%20and%20Changing%20Populations.docxhttp://static.schoolrack.com/files/28065/668560/APHUG_Chapter_4.ppsxhttp://medicine.missouri.edu/docs/financial/Student%20Loan%20Forgiveness.docxhttp://personales.upv.es/jpgarcia/LinkedDocuments/P&LLayout.dochttp://www.drexel.edu/http://www.cardiotimes.com/documents/powerpoints/SLI022.dochttp://www.dot.state.oh.us/projects/pdp/Related%20Documents/Project%20Development%20Process%20Scoping%20Training.docxhttp://www.fhs.d211.org/departments/english/gdawson/short%20story%20terms.docxhttp://www.gregorydoublewing.com/Lonesome_Polecat.dochttp://www.cs.iit.edu/~cs549/cs549s07/CryptographyNetSecurity-2008.dochttp://cmsachesapeake.org/wp-content/uploads/2014/02/Current-Concepts-in-Concussion-Care.docxhttp://liffeysoundfm.ie/cheap-alli-tablets-uk.docxhttp://mepag.nasa.gov/meeting/2008-02/MEPAG_Feb_2008_McCuistion21.dochttp://trianglewordpress.com/http://www.kvsangathanectlt.com/topic_sys/767639292The%20Mirror.dochttp://people.eku.edu/sumithrans/Zoo/labs/Pseudocoelomates.dochttp://www.examvault.net/uploads/3/0/1/5/30157763/f211_cell_membrane.docxhttp://www.dgelman.com/powerpoints/statistics/spitz/5.2%20The%20standard%20Norma
l%20Distribution.dochttp://www.usna.edu/Users/physics/jwathen/Chapter%20S26.4%20through%206.dochttp://www.npti.in/Download/Renewable/POWERGEN%20PRSTN_Renewable%20April2012/Centralized%20Remote%20Monitoring%20of%20Renewable%20Power%20Plants.ppsxhttps://mail.alquds.edu/~f2095/Communication%20Systems/Introduction.dochttp://cdffa.org/Documents/CPR_AED_presentation.dochttp://www.nasa.gov/http://faculty.caldwell.edu/kreeve/Chap%2021%20-%20Critical%20Thinking%202013.docxhttp://www.agriseta.co.za/downloads/Agriseta_Sep_18_2014_Ngomane.docxhttp://nttc.columbiabasin.edu/automotive/CBC_doc/AC/AC%20Case%20and%20Duct%20System.docxhttp://www.aoa.gov/AoA_Programs/HPW/Alz_Grants/docs/Caregiving-Feb14-2013-SKeller_Caregiving_for_people_with_dementia_and_ID-Down-syndrome.dochttp://www.hetdijkmagazijn.nl/fosamax-dosage-for-dogs.docxhttps://partner.microsoft.com/download/portugal/40096973http://www.ohio.edu/people/shriver/308/TCOM%20308-6-Radio%20Frequencies.dochttp://www.advocatehealth.com/documents/clinicalevents/Cervical_Insufficiency-McCulloch.dochttp://www.unesco.org/bsp/eng/UNESCOMDG.dochttp://iris.nyit.edu/~kkhoo/Summer1_2008/715-OOAD/Larman_doc/LarmanChap12.dochttp://www.shs.d211.org/socialstudies/faculty/AJP/The%20significance%20of%20ancient%20Persia%203.dochttp://trojan.troy.edu/studentsupportservices/assets/documents/presentations/english_reading/BasicBusinessWriting.dochttp://www.drradloff.com/documents/the-alchemist-introduction.dochttp://tc3.hccs.edu/itse1402/Shows/COBOL%20Unit1%20slides.dochttp://www.outreach.mcb.harvard.edu/teachers/Summer04/Barbara%20Gould/Handwashing_Activity_1.dochttp://images.pcmac.org/Uploads/WestCarroll/WestCarroll/Divisions/Presentations/2013-14%20TCAP%20Writing%20Assessment.docxhttp://www.ltisdschools.org/cms/lib09/TX21000349/Centricity/Domain/552/REPRODUCTIONandGENETICS.docxhttp://www.green-eu.net/system/files/documents/Green%20Training%20Audit_Romano.docxhttp://www.panbc.net/files/multimodal_analgesia.docxhttp://www.unco.edu/nhs/physics/facult
y/adams/Phys%20221/MRI_2013.dochttp://docs.lib.purdue.edu/context/roadschool/article/1075/type/native/viewcontenthttp://bealertbealive.com/Road%20Safety%20Powerpoint.docxhttps://www.sde.idaho.gov/site/superintendentMeeting/2014pres/annual/Tiered%20Licensure.docxhttp://students.salisbury.edu/~ab67028/EDUC318/Technology%20Standards.dochttp://www.ic.ucsc.edu/~rlipsch/EE80S/Global%20Sustainability.dochttp://www.earth4567.com/talks/evolution/evolution.dochttp://www.dra.ca.gov/uploadedFiles/Content/Energy/Procurement/PGE_Oakley/EMC%20presentation%20may%2016%202012.dochttp://studentaffairs.com/vcs/2011entries/NorthCarolinaStateUniversity-lupica-ewsuk.docxhttps://intermec.custhelp.com/ci/fattach/get/161443/0/filename/Intermec+Firmware+Management+Tool1.docxhttp://www.karunadu.gov.in/spb/SeminarsFinancing/Sridharan.dochttps://www.homeworkmarket.com/sites/default/files/qx/15/01/20/01/pain_theories_and_treatment_presentation.docxhttp://faculty.ksu.edu.sa/eltamaly/Documents/student%20forum/Future%20Student%20Projects/PV1/pv.ppsxhttp://www.cuhmmc2015.org/wp-content/uploads/2013/12/National-Domestic-Preparedness-Consortium-Recovered1.dochttp://www.uky.edu/~clthyn2/PS671/Concepts_and_Theory_in_Political_Science.docxhttp://glearning.tju.edu.cn/pluginfile.php/60967/mod_data/content/51160/%E8%8B%B1%E6%96%87%E6%8A%A5%E5%91%8A.docxhttp://www.biblestudies-online.com/Sermons/PowerPoint_Sermons/Genesis/1%20Genesis%20Foundation%201st%20Sermon.ppsxhttp://graphics.ucsd.edu/courses/cse191_s03/CSE191_04.dochttp://www.me.uprm.edu/sundaram/inme%204007/INME4007-3.dochttp://www.csb.uncw.edu/people/rosenl/classes/OPS372/Topic%204%20New%20Service%20Developmnt.dochttps://testing.byui.edu/info/powerpoint/Test%20Center%20Finals.ppshttp://bizconst.org/jam.php?how-to-make-cialis-effective.docxhttp://www.co.nueces.tx.us/risk/training/Safe%20Driving%20Practices.ppshttp://apalachee.elearning4u.net/pluginfile.php/9320/mod_resource/content/0/English%20Constitutionalism.dochttp://nygma.hu/dl/Golive_timer_1.doch
ttp://www.ccis313.blog.com/files/2013/02/is313-Lecture4.docxhttp://impact.asu.edu/Presentations/Mobicom-Talk.dochttp://www.radiographyonline.com/article/S1078-8174(09)00066-2/dochttp://www.fao.org/fileadmin/user_upload/animalwelfare/Animal%20Welfare%20-%20Global%20Summary%20of%20Standard%20+%20Programs.dochttp://ebooks-kings.com/doc/chapter-16-the-reproductive-system-answer-keyhttp://www.cs.sjsu.edu/faculty/lee/cs157b/29SpCS157BL13BCNF&Lossless.dochttp://faculty.ksu.edu.sa/Al-Fakey/Medical%20Student%20Lictures/Anatomy%20and%20Physiology.dochttp://connect.issaquah.wednet.edu/high/liberty/staff/ms_andersons_site/chemistry/m/chemistry_files_2015-16/235721/download.aspxhttp://www.radford.edu/~cshing/340/lectures/Mannino/doc/CHAP006%20-%20Problems.dochttps://www.uco.edu/academic-affairs/cqi/files/docs/facilitator_tools/06.06.21-dashboardshandout.dochttp://www.llakes.org/wp-content/uploads/2012/10/Laura-James.docxhttp://www.agrisk.umn.edu/conference/uploads/WMcGee1678_01.docxhttp://kith.episerverhotell.net/upload/2585/sandefjord2005_ote.dochttp://faweb.loyolablakefield.org/ClassDocuments/4977/TYPES%20OF%20STATES.docxhttp://committees.comsoc.org/tccc/ccw/2010/slides/20-Cynthia.docxhttp://www2.sunysuffolk.edu/mancuse/nursing/NUR%20246%20Spring%2011/Workshop%20presentations%202011%20L&M%20newest.dochttp://www.anslab.iastate.edu/Class/AnS320/ToBeRemoved/Fall%202012/Lecture/AnS%20320%20-%202012%20Fall%20-%20Swine%20Lecture%20%231.docxhttp://psych.stanford.edu/~jlm/Presentations/SymSys100_04-07-09-AreHumansRational.dochttp://www.ksums.net/files/1st/Archive/01.Foundation%20Block/Male/Pathology/Neoplasia%20Lecture%201%20&%202.docxhttp://www.curriculumsupport.education.nsw.gov.au/secondary/hsie/assets/aust_curriculum/aust_curriculum.docxhttp://www.labour.gov.za/DOL/downloads/documents/useful-documents/occupational-health-and-safety/Construction%20Workers%20(SASOM-DoL%20Feb2011).dochttp://www.clemson.edu/cafls/departments/fnps/undergraduate/packaging_science_bs_degree/emphasis_area
s.dochttp://www.erh.noaa.gov/lwx/em/Monday_EM_Briefing_current.docxhttp://hr.colorado.edu/hrits/docs/Documents/SharePoint%202010%20Training.docxhttp://www.k12lessonplans.com/uploads/downloads/1114.dochttps://www.hss.edu/files/Muscle-Rupture-Involving-Gastrocnemius-Soleus-Muscles-Hematoma.dochttp://ethics.calbar.ca.gov/Portals/9/documents/Civility/Civility-and-Ethics.dochttp://leadfree.ipc.org/files/RoHSLessons0707.dochttp://www.healthychildcare.org/Building%20Bridges%20resources/PowerPoints/PRESENTATION_Pediatricians.docxhttp://www.joms.org/article/S0278-2391(13)00295-4/dochttp://ingilizceslayt.com/slides/WILL%20%E2%80%93%20BE%20GOING%20TO.dochttp://www.facstaff.bucknell.edu/ttoole/Kansas%20City%20DfCS%20presentation.dochttp://www.alexa.com/http://isites.harvard.edu/fs/docs/icb.topic1322481.files/S005%20Reliability%20checks%20v2.dochttp://rustin.pbworks.com/f/The+Hero%E2%80%99s+Journey+%232.docxhttp://eshare.stust.edu.tw/EshareFile/2013_5/2013_5_b4c4905a.docxhttp://sbs.wsu.edu/biol352/ControlofGeneExpression.dochttp://www.ohio.edu/plantbio/staff/showalte/PBIO%20450%20&%20550/Emch.docxhttp://www.mcet.org/mining/environment/Toolkit/Training%20Presentations/Developing%20SWPPP%20Presentation/Dev_SWPPP%20Presentation.dochttp://www.uic.edu/classes/phyb/phyb402dbh/Hypothalamus%20and%20Pituitary.dochttp://www.cancer.org/acs/groups/content/@research/documents/document/acspc-041647.docxhttp://adem.arkansas.gov/adem/Documents/Final_AR_FPC_Slide_Show_%20V2_4_17_11.ppsxhttp://web.mit.edu/14.160/www/mit-lecture%2011%20(limited%20rationality&strategic%20interaction)2.dochttp://ieee-icsc.org/icsc2013/ICSC2013_Luciano_Tutorial.docxhttps://www.niams.nih.gov///News_and_Events/Meetings_and_Events/Reports/2006/RR14_FDA_Schrager.dochttp://static.schoolrack.com/files/13968/580817/Light_-_electromagnetic_Spectrum%2C_reflection%2C_refraction_2013.docxhttp://www1.eere.energy.gov/hydrogenandfuelcells/docs/doe_presentation.dochttp://mrsdooley.webs.com/AP%20Lang%20&%20Comp/Figures%20of%20Speech
.docxhttp://research.che.tamu.edu/groups/Seminario/CHEN320_Fall_2013_files/num-g05-massflow.docxhttp://planningservices.partners.extranet.microsoft.com/en/SDPS/SAPSDocuments/SDPS%202014%20SharePoint%20Office%20Web%20Apps.docxhttp://www.biogazrhonealpes.org/doc/methanisation_agricole/techniques/WP4_1_biochimie_et_suivi_digesteur.dochttp://www.cdc.gov/tb/education/skillscourse/Day1/Interview%20Question%20Types/Day%201-%20Interview%20Question%20Types_Final.docxhttp://www.waseantourism.com/ft/Toolbox%20Development%20II:%2098%20toolboxes%20for%20Front%20Office,%20F&%20B%20Services%20and%20Food%20Production/Submission%20to%20ASEC/3rd%20submission%20of%2025%20draft%20TBs_200413/Apply%20std%20safety%20procedures%20for%20handling%20foodstuff/doc_Apply%20safety%20proc%20handling%20food_Final.docxhttp://www.usda.gov/egov/egov_redesign/intranet/usability/Readiness_Assessment_v7.dochttp://www.almaden.ibm.com/institute/resources/2008/presentations/Brenda.dochttp://biologyonline.us/Online%20A&P/AP%202/Northland/AP2PowerPoint/A&P%202%20Urinary%20System%202005.dochttp://www.topdownbook.com/chapters/Chapter09.dochttp://www.castonline.ilstu.edu/smith/164/grp_3_go2.docxhttp://mapit.gov.in/doc_of_traning/MP%20Online%20-%20AeGM%20Training.dochttp://www.ap.gatech.edu/Burkholder/4600/slides/21-CompensatoryHypertrophy.dochttp://www.agrisk.umn.edu/conference/uploads/BOelke0257_03.dochttp://faculty.kfupm.edu.sa/ARE/sabeer/Site_Analysis_Example.dochttp://seed.ucsd.edu/Onlinecrs/rsrc/Onlinecrs/AnupDoshiLesson/Mindreader2.dochttp://ccba.jsu.edu/ccba/faculty/facultyFiles/jzanzig_Acc%20512%20-%20Chapter%207.dochttp://kehsscience.org/Intro%20to%20Ecology.docxhttp://www.la-ptac.org/events/downloadFile.jsp?path=/siteSpecific/1118/Files/Articles/&file=Small_Business_Summit.dochttp://ace.arkansas.gov/cte/Documents/Microsoft%20IT%20Academy/Getting%20Started%20Virtual%20training%202012.docxhttp://crninet.com/2013/2b.%20Kiewiet-Presentation.docxhttp://portal.unesco.org/geography/es/files/11957/12599501175
05_Seamus_Hegarty.doc/05%2BSeamus%2BHegarty.dochttp://edcmail.mui.ac.ir/images/stories/powerpoint/azar91/91.9.9/Dr.norozi,Liver%20-%20Incidentalomas2.docxhttp://www.eurostemcell.org/files/CSI_PowerPoint_slides_FINAL_17July2012.dochttp://classes.engr.oregonstate.edu/mime/winter2010/ie337-001/Lectures/IE%20337%20W10%20Lecture%206.machining.operations&machinability.dochttp://maine.gov/msl/libs/tech/diglit/present2013.docxhttp://campuses.fortbendisd.com/campuses/documents/Teacher/2009/teacher_20090209_1031_2.dochttp://bama.ua.edu/~st497/doc/consensusbaseddecision.dochttp://www.aiha.org/get-involved/outreach/Documents/IAQIntroTR.dochttp://chotenmien.vn/http://www.ise.ncsu.edu/wysk/courses/ISE316/ISE316-Course-presentation/Chapter%2020.docxhttp://www.belgianbraincouncil.be/files/2010.09.16_EXPECTATIONS_AND_RIGHTS_OF_PATIENTS_SUFFERING_FROM_A_BRAIN_DISORDER.dochttp://imp.uwe.ac.uk/imp_public/displayentry.asp?URN=6109&pid=16http://classes.mst.edu/edtech/TLT2012/presentations/TLT-2012-mhays-BbCollaborate-01.docxhttp://asja-eg.com/admin/cmes/files/Pediatric%20equipmentfinal.docxhttp://www.outreach.mcb.harvard.edu/teachers/Summer05/ElizabethMick/TheNervousSystem.dochttp://nttc.columbiabasin.edu/automotive/doc_BTC-auto/fluidcouplers.dochttp://www.karentimberlake.com/doc%20Energy/Heating%20Curves.dochttp://sde.ok.gov/sde/sites/ok.gov.sde/files/documents/files/Document%20%2323--%20Don't%20PrintCharter%20School%20REVISED%203-3-2015.docxhttp://geology.uprm.edu/Classes/GEOL3055/cmor.dochttp://www.techcoachcorner2.org/Curriculum%20Links/Writing/hook.dochttp://www.esiee.fr/%7Epoulichp/Magnetique/CoursCapteurMag+ActionMag.dochttp://www.doa.la.gov/orm/doc/supervisory_responsibility.dochttp://sharepoint.mvla.net/teachers/SophiaC/Backup/Tanks.dochttp://joshua-cox.webs.com/How%20to%20Play%20Checkers.dochttp://www.nyswysa.org/downloads/markedwards.dochttp://www.health.nsw.gov.au/mhdao/workforcedev/Documents/forum-pres/neami-nat-abor-link-prog.docxhttp://aps.anl.gov/epics/irmis/2005-03/A07-K
eitel-TRIUMF.dochttp://instructional1.calstatela.edu/prosent/CIS%20581/chapter8.docxhttp://cominkamotors.com/bienvenidos.dochttp://www.sjsu.edu/people/steven.lee/courses/c4/s1/Trace_lecture_1.dochttp://englishexchange.pbworks.com/f/Advanced+Reading+and+Discussion.dochttp://www.techdata.com/(S(rnlsv045xrvvep55z05wfa45))/business/emc/files/VNXecollateral/VNXe_DD160%20Bundle%20-%20Overview%20(customer%20presentation).dochttp://www.eastportmpa.com/Powerpoint%20Presentations/Eastport%20MPA%20Version%201.dochttp://www2.warwick.ac.uk/fac/med/study/ugr/mbchb/societies/slime/products/handouts/cushings_addisons_and_acromegaly_ed.docxhttp://www.poac-nova.org/news_uploads/3545/process_tips_writing_goals_2012_ieps.dochttps://vle.york.ac.uk/bbcswebdav/xid-289133_4https://www.vdh.virginia.gov/LHD/ThomasJefferson/documents/2011/doc/drkavanaughOBmortalitypresentation.docxhttp://usian.org/table.php?long-term-psychosis-of-metoprolol-succinate.docxhttp://jaymetracy.pbworks.com/w/file/fetch/57894206/Chapter%201.1%20What%20is%20Fashion.docxhttp://www.fhwa.dot.gov/resourcecenter/teams/civilrights/cr_ppp8.dochttp://www.nihpromis.org/Documents/Standards%20Slides_5-22-12-508compliant.docxhttp://www.rkent.myweb.cs.uwindsor.ca/cs367/Chapter_4_V6.0.docxhttp://www.shrm.org/TemplatesTools/Samples/PowerPoints/Documents/09-doc-Employee%20Recognition_FINAL.docxhttp://static.schoolrack.com/files/223037/707743/Globe_Theatre.dochttp://lwthspn.pbworks.com/w/file/fetch/87919174/A%26P%20Unit%204%20Skeletal%20student%20Ch8.docxhttp://www.dgelman.com/powerpoints/geometry/spitz/10.6%20Equations%20of%20Circles.dochttp://www.falmouthschools.org/File/Population_Ecology_Chapter_52.dochttp://medicine.missouri.edu/financial/uploads/BYOB-2012.docxhttp://www.ltisdschools.org/cms/lib09/TX21000349/Centricity/Domain/1328/8%20adverbs.dochttp://ncheney.com/official.php?doxycycline-dosage-for-malaria-prophylaxis.docxhttp://moodle.penyrheol-comp.swansea.sch.uk/pluginfile.php/21074/mod_label/intro/Tourism.docxhttp://blog.st
ikom.edu/anjik/files/2012/08/PT5_0_Overview_14Jul08.docxhttp://tulane.edu/som/departments/medicine/gastroenterology/resident-portal/upload/HepCC022415.docxttp://www.sciencedirect.gr/http://me.uprm.edu/sundaram/inme%204007/INME4007-6.dochttp://cbesio.cox.smu.edu/mktg3344/course%20files/class_notes/Market%20Segmentation.dochttp://teachers.srsd.net/mstrada/wp-content/uploads/2011/10/Parallel-and-Perpendicular-Lines2.docxhttps://www.icsi.edu/docs/40nc/4TechSesonGopalkrishna.ppshttp://www.pedstudent.com/wp-content/uploads/2011/02/pals_11.dochttp://webscripts.esd101.net/safety/PowerPoint%20Training/R_01.dochttp://images.pcmac.org/SiSFiles/Schools/MS/DeSotoCounty/HernandoHigh/Uploads/Presentations/Chapter%2012%20Industrial%20Rev.%20section%201.docxhttp://www.mocs.gov.np/uploads/Role%20of%20Customs%20in.docxhttp://www.wainwright.army.mil/nwtc/Classes/Slides/Planning%20Considerations%20for%20Oversnow%20Movement.dochttp://www.hydrologicalusa.com/images/uploads/Hydrological_Services_America.docxhttp://web.iitd.ac.in/~ravi1/++++LEVI/doc/ch01_final.dochttp://www.seminarscolonrectalsurgery.com/article/S1043-1489(05)00066-7/dochttp://www.esc14.net/users/0076/docs/spp7/Ind%20%207%20Request%20TEASE%20Access%202012-13.docxhttp://www.andrews.edu/shp/speech/resources/anatomyandphysiology/physiologyofarticulation2.docxhttp://www.clubs.psu.edu/up/actsci/resume%20info%204-7-14.docxhttp://www.dgelman.com/powerpoints/algebra/alg2/spitz/1.3%20Solving%20Linear%20Equations.dochttp://www.multistatepartnership.org/docs/Wednesday/18-Deacon-Advantages-of-Regional-Partnerships.docxhttps://faculty.elgin.edu/jputz/CIS%20230%20Chapter%201%2002.docxhttp://www.cfoa.org.uk/download/12887http://www.jvascsurg.org/article/S0741-5214(11)00608-2/dochttp://feti.lsu.edu/municipal/NFA/TRADE/materials/TRADE%20CD%201/POWER%20POINT%20PROGRAMS/Acountability.dochttps://www.roadsafetyworkshop.com/doc/day2/6.%20Samir%20Raval-SCDP%20(12.8.16)%20(1).docx%20%2062%20slides.docxhttp://www.uprb.edu/profesor/mvelez/cursos/cco
m3033/docsgaddis/C05.dochttp://www.centralcancernetwork.org.nz/file/fileid/47896http://medicine.missouri.edu/childhealth/uploads/congheartdisease.docxhttp://rpids.csc.tntech.edu/_resources/Introduction/2140883132/2140883132.dochttp://faculty.uoh.edu.sa/b.hijah/documents/Chronic%20renal%20failure.dochttp://www.calvin.edu/admin/physicalplant/departments/ehs/policies/biosafety/Process%20Flow%20Chart%20for%20IBC.docxhttp://instructional1.calstatela.edu/prosent/CIS%20581/chapter11.docxhttp://tandtmidwives.com/source/resources/FUTURE_STRATEGIES_MIDWIFERY.ppsxhttp://advantagegolfcars.com/bathroom.php?metronidazole-dose-of-acne-treatment.docxhttp://www.bauer.uh.edu/pgalvani/files/MARK6361/Kotler14e_12.1_idoc.dochttp://kvsangathanectlt.com/topic_sys/Role%20of%20the%20%20goernment%20in%20health.dochttp://www.fgse.nova.edu/itde/faculty/simonson/doc/de_www.dochttp://www.scgcorp.com/OLD/MW/2008/7_Cantor%20status%20for%20Mgr%20Mtg%202008-11-06%20FINAL.dochttp://www.ars.usda.gov/SP2UserFiles/Place/19320000/Turner-Small%20Ruminant%20Steering%20Committee%20Meeting%20April%202006.dochttp://careers.michelin-us.com/reltech/reliability-docs/IMT/06-Lubricants.ppsxhttps://schoolhistory.co.uk/year8links/natives/bighorn.dochttp://reporting.msue.msu.edu/miprs/online/extensionprogramevaluation.dochttp://linuxyw.com/width.php?dog-colitis-prednisone.docxhttp://agriscience.msu.edu/2000/2010-2020/2023/2023seedanatomy.dochttp://kcooperict.wikispaces.com/file/view/Rudolf+Steiner.dochttp://www.vghks.gov.tw/cs/docfiles/OVERVI~1.dochttp://www.psycholosphere.com/Vessels%20on%20Learning%20&%20Memory.dochttp://www.cias.wisc.edu/curriculum/modII/secc/TCB_SoilQual_distrib.dochttp://nttc.columbiabasin.edu/automotive/WWCC/gears_ch_29.dochttp://cattlespring.org/sadness.php?over-weight-diet-pill.docxhttp://think.stedwards.edu/campusrecreation/sites/think.stedwards.edu.campusrecreation/files/Club%20Sport%20Officer%20Meeting%20Fall%202015.docxhttp://jcsites.juniata.edu/faculty/kruse/it110/CH02_NET+.dochttps://ww
w.cpcc.edu/learning/campus-updates/stem-2016.dochttp://headandnecksurgery.ucla.edu/workfiles/Academics/Lectures/4-25-12_Gopen_Mastoid_Surgery_review.dochttp://static.schoolrack.com/files/19286/507009/Macronutrients.dochttp://iris.nyit.edu/~facevedo/ClinicalYearPresentations/OTITIS%20MEDIA.dochttp://parasitology.xjtu.edu.cn/powerpoint/Eng/T.g.dochttp://elearn.azpost.gov/FileContent/filesupload/AZDPS/AZDPS_2013_GasMaskFitTest/FRM40%20Gas%20Mask%20Training%20and%20Inspection%20Elearn%203_Reduced%20Version.ppsxhttp://www.mysbdteam.com/wp-content/uploads/2010/12/Brook-Kirwin-How-to-Bring-Business-in-the-Club.dochttp://www.fmdrl.org/index.cfm?event=c.getAttachment&riid=3140https://library.e.abb.com/public/ef11d44d832d5e9ac125793c00343ce2/20111102%20PS%20Consulting%20Presentation.dochttp://www.ars.usda.gov/GARA/presentations/GARA%20Presentation%20South%20Africa,%20October%202014.docxhttp://alhefzi.com/G34/Family/Seminars/Obesity%20&%20Dyslipidemia.docxhttp://www.eucosh.org.cn/Documents/Activity%20outputs/1.5.2%20Non%20coal%20screening/The%20Hidden%20Hazards%20in%20Non-Coal%20Mining_Nick_rev.docxhttp://www.hccfl.edu/media/518678/basic%20sentence%20patterns%20&%20punctuation.docxhttp://www.gautehallansteiwer.com/term-papers-help.docxhttp://www.aerbvi.org/2012international/documents/Flener-ResponsetoInterventionandIpad2Handout1_000.docxhttp://csrri.iit.edu/~howard/biochem/lecdoc/sug3+lip1f10_401.dochttp://mbbsclub.com/download/3/Microbiology/Corynebacterium%20&%20Listeria.docxhttp://www.henley.ac.uk/web/FILES/management/Alain_Verbeke.dochttp://www.psclg.org.sa/web/doc/Balance-Score-Card/BSC-Presentation.ppshttp://www.ecu.edu/cs-cas/anth/nuevosouth/upload/ushispanicmarketoverview-finalcv5-5-09-090505155110-phpapp02.dochttp://healthcare.utah.edu/miners_hospital/outreach/Work-relatedAsthmaPowerPoint.dochttp://www.waynecc.edu/sherryg/wp-content/uploads/sites/10/EDU-251-Piaget-Theory.docxhttp://www.castonline.ilstu.edu/henninger/Powerpoints/341/Percentile%20Rank,%20Percentile,%20C
orrelation.docxhttp://www.californiaeducatorsnetwork.com/wp-content/uploads/2010/10/California-Health-Science-Capacity-Building-Project-2010.dochttp://ww2.nmh.org/oweb/MagnetDoc/04_ep_exemplary_professional_practice/ep30-c_-_safety_representatives_meeting__presentation.dochttp://dhmh.maryland.gov/mbpme/documents/lecture09.dochttp://www.unh.edu/writing/cwc/presentations/media/effectivepresentations.dochttp://homepages.umflint.edu/~christsw/Classes/5%20Spring%202013/PTP%20641%20Med.Surgery/6.17.13%20General%20Medical%20Conditions%20and%20Surgeries%20Med%20Surg%20II.docxhttp://www.safmls.org/2010/2010%20Presentations/W%201/The%20Beta-Lactamase%20Family.dochttp://www.fldoe.org/core/fileparse.php/7531/urlt/decision-making-process.dochttp://www.lhsc.on.ca/lab/qmanage/present/qms.dochttp://www.site.uottawa.ca/~rhabash/ELG4139L1PE.dochttp://www.agriscience.msu.edu/2000/2040/2041/2041.dochttp://instructional1.calstatela.edu/hye2/HNRS330/Chapter%20One/HNRS330_Chapter%20One.docxhttp://kheseminar.com/doc/01DB74D1-2818-32BA-39B4-1AE702B5CEEF/Backup/a.dochttp://www.solonschools.org/accounts/NBarnes/23201092816_GravityandFreeFall.dochttp://static.schoolrack.com/files/45730/142314/Magnetic_Effects_of_Electric_Currents.dochttp://cancer.dartmouth.edu/lung_thoracic/documents/Nalepinski_CTOP_Retreat_COG_Update.docxhttp://www.phscof.org/docs/PresentationsFinal/Tuesday/Pharmacy/SchupbachCOA_2011_DiabetesManagementGuidelinesUpdateSchupbach.docxhttp://www.mexico.com.mx/http://www.teachmebusiness.co.uk/resources/Factors_affecting_promotion.dochttp://www.unisa.edu.au/Documents/EASS/HRI/CPCM/faulks-best-interests.dochttp://delthabot.altervista.org/Lezioni/IV%20Anno/I%20Semestre/Gastroenterologia/20%20LEZIONE%20IPERTENSIONE%20PORTALE.dochttp://www.archbalt.org/youth-young-adult/upload/New-Evangelization.dochttp://www.canadianjournalofdiabetes.com/article/S1499-2671(10)44010-1/dochttp://pdic.tamu.edu/farmpolicy/josling.dochttp://www.waseantourism.com/ft/Approved%20Toolboxes%20&%20Competency%20s
tandards/Prepare%20tenders%20for%20catering%20contracts/doc_Prepare_tenders_for_catering_contracts_FN_030214.docxhttp://www.studentaffairs.colostate.edu/Data/Sites/1/programreviews2015/lsc-dining-services-program-review---dsa-directors---october-7-2015.docxhttp://www2.dsu.nodak.edu/users/rbutz/International%20Business/PowerPoint/F09/Honda_S1_F09.docxhttp://www.imbanaco.com/webfm_send/1043 | This gets the result you show in your example. grep '^[^/]*/[^/]*/[^/]*/$' findmydomain.txt >new These are not properly "domain names", they are URLs possibly with one or more subdomains. For example, in www.google.com , the domain name is google.com and www is just an individual node name. In the general case, resolving the TLD out of a hostname is a much more complex problem which requires knowledge of each individual TLD. The final slash is optional, strictly speaking; @terdon's answer uses a more complex regex which solves this. As a quick and dirty fix, you could add a * after the final slash here (which would however then also match http://example.com/// with an arbitrary amount of redundant trailing slashes). The regex looks for lines with exactly three slashes in them, with optional non-slash characters before and between them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254061/"
]
} |
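As a quick, self-contained check of the regex in the answer above (the file name and sample URLs here are made up for the sketch):

```shell
# Hypothetical input: only the first URL has exactly three slashes
# (the two in "://" plus the trailing one).
cat > findmydomain.txt <<'EOF'
http://example.com/
http://example.com/page.html
https://sub.example.org/dir/
EOF

# Keep lines of the form scheme://host/ — exactly three slashes,
# with optional non-slash characters before and between them.
grep '^[^/]*/[^/]*/[^/]*/$' findmydomain.txt > new
cat new
# -> http://example.com/
```

The second line is dropped because it has no trailing slash, and the third because the extra path component adds a fourth slash.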
400,351 | I'm using a Debian 9 image on a virtual machine. The ping command is not installed. When I run: sudo apt-get install ping It asks me: Package ping is a virtual package provided by: iputils-ping 3:20161105-1 inetutils-ping 2:1.9.4-2+b1 You should explicitly select one to install. Why are there two ping utilities? What are the differences between them? Are there some guidelines to choose one version over the other? What are the implications of this choice? Will all scripts and programs be compatible with both versions? | iputils’ ping supports quite a few more features than inetutils’ ping, e.g. IPv6 (which inetutils implements in a separate binary, ping6), broadcast pings, quality of service bits... The linked manpages provide details. iputils’ ping supports all the options available on inetutils’ ping, so scripts written for the latter will work fine with the former. The reverse is not true: scripts using iputils-specific options won’t work with inetutils. As far as why both exist, inetutils is the GNU networking utilities, targeting a variety of operating systems and providing lots of different networking tools; iputils is Linux-specific and includes fewer utilities. So typically you’d combine both to obtain complete coverage and support for Linux-specific features, on Linux, and only use inetutils on non-Linux systems. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/400351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74926/"
]
} |
400,382 | My wifi card keeps connecting to a wifi network which is on a channel that prevents me from making a hotspot. I'm trying to turn the connection off from the command line. I have tried a few things: nmcli radio wifi off and ifconfig wlo1 down The problem with these is that they turn the wifi interface off, which also prevents me from creating a hotspot. What command can I run to keep my wifi interface on but not connected to anything? | LANG=C nmcli d DEVICE TYPE STATE CONNECTION wlan0 wifi connected connectionname eth0 ethernet unmanaged -- lo loopback unmanaged -- Here you can see the name of the connection as connectionname. To disconnect, run nmcli con down id connectionname. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/400382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
400,395 | I wanted to use a non-default DNS resolver for one specific domain, and the first idea was to simply use local dnsmasq. While looking for a MacOS version, I found out that I can achieve the same simply by creating a file named after the domain in /etc/resolver/example.com, with a single line: nameserver 8.8.8.8 All was good and worked as expected, the resolution works, and scutil --dns confirms: resolver #8 domain : example.com nameserver[0] : 8.8.8.8 flags : Request A records reach : Reachable The next thing, I wanted to share this with a friend, by creating a simple one-liner that he could run in his terminal: sudo mkdir -p /etc/resolver/ && echo "nameserver 8.8.8.9" | sudo tee /etc/resolver/example.net Again, scutil --dns confirms: resolver #10 domain : example.net nameserver[0] : 8.8.8.9 flags : Request A records reach : Reachable Then I noticed a typo, so I corrected the address to 8.8.8.8 and ran the line again: sudo mkdir -p /etc/resolver/ && echo "nameserver 8.8.8.8" | sudo tee /etc/resolver/example.net But that did not seem to have any effect: resolver #10 domain : example.net nameserver[0] : 8.8.8.9 flags : Request A records reach : Reachable I checked the file content, and all seemed fine: $ cat /etc/resolver/example.net nameserver 8.8.8.8 And then I opened the file in vim, changed it to 8.8.4.4, and: resolver #10 domain : example.net nameserver[0] : 8.8.4.4 flags : Request A records reach : Reachable I have checked back and forth several times: when I echo the address to the file, the change has no effect, but it is enough to only open it in vim and not even change anything (just exit), and previously echoed changes will be applied. What is the mechanism behind this? | [It] is enough to only open it in vim and not even change anything (just exit), previously echoed changes will be applied. I had to use sudo vim for that to work. Running it with my regular user had no effect.
My theory is that whatever is watching /etc/resolver watches for changes to the directory, and not for changes to files within it, and then reloads everything in it when it sees a change to the directory. Vim creates a swap file, by default in the same directory as the file being edited. This is a change to the directory, and is picked up by the watcher. So, when I did: sudo vim -n /etc/resolver/example.net where -n disables swap file creation, changes to the file were no longer picked up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110578/"
]
} |
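The directory-watch theory can be sanity-checked without any macOS-specific tools. This sketch (paths made up for the demo) shows that rewriting a file's contents leaves the containing directory's mtime alone, while creating a new entry — which is what vim's swap file does — updates it:

```shell
mkdir -p watchdemo
cd watchdemo
echo "nameserver 8.8.8.9" > example.net
sleep 1
before=$(stat -c %Y . 2>/dev/null || stat -f %m .)

# Overwriting the file's contents changes the file, not the set of
# directory entries, so the directory's mtime stays the same.
echo "nameserver 8.8.8.8" > example.net
unchanged=$(stat -c %Y . 2>/dev/null || stat -f %m .)

# Creating a new file (like vim's swap file) modifies the directory
# itself, so its mtime moves forward.
sleep 1
touch .example.net.swp
changed=$(stat -c %Y . 2>/dev/null || stat -f %m .)

echo "$before $unchanged $changed"
```

A watcher keyed on the directory's modification would therefore fire for the vim case and stay silent for the echo case.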
400,447 | I don't do enough scripting to remember, without looking up, whether double or single quotes result in a Unix variable being substituted. I definitely understand what is going on. My question is does anyone have a memory trick for making the correct quoting rule stick in my head? | Single quotes are simple quotes, with a single standard: every character is literal. Double quotes have a double standard: some characters are literal, others are still interpreted unless there's a backslash before them. Single quotes work alone: backslash inside single quotes is not special. Double quotes pair up with backslash: backslash inside double quotes makes the next character non-special. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/400447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57899/"
]
} |
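Both mnemonics can be seen in three lines (the variable name is arbitrary; printf is used so the shell's quoting, not echo's escape handling, is what's on display):

```shell
var=world

printf '%s\n' 'Hello $var'    # single quotes: everything literal   -> Hello $var
printf '%s\n' "Hello $var"    # double quotes: $var is expanded     -> Hello world
printf '%s\n' "Hello \$var"   # backslash pairs with double quotes  -> Hello $var
```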
400,467 | There are variables in the shell like $0 , $1 , $2 , $? , etc. I tried to print the shell and environment variables using the following command: set But these variables were not in the list. So basically these variables are not considered to be shell/environment variables, right? (even though to output them, you have to precede them with a $ , like you do with shell/environment variables) | Variables are one of three distinct varieties of parameters in shell. A variable is a parameter whose name is a valid shell identifier; starts with _ or a letter, followed by zero or more letters, numbers or _ . The positional parameters are the numbered parameters $1 , $2 , ... The special parameters all have single-character names, and aside from $0 , they are all various punctuation characters. set only displays the shell's variables. A subset of the shell variables are the environment variables, whose values are either inherited from the environment when the shell starts up, or are created by setting the export attribute on a valid name. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/400467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227869/"
]
} |
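The three varieties of parameter can be seen side by side in a short sketch (the names here are chosen for the example):

```shell
set -- alpha beta          # positional parameters: $1 and $2
myvar=hello                # a variable: its name is a valid identifier
export myvar               # ...and now it is also an environment variable

count=$#                   # special parameter: number of positionals
false || status=$?         # special parameter: last exit status
                           # (captured with || so this is safe under set -e)

printf '%s %s / %s / %s\n' "$1" "$2" "$count" "$status"
# -> alpha beta / 2 / 1
```

Only myvar would show up in set's output; $1, $# and $? are parameters but not variables.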
400,549 | Background: One of my colleagues who doesn't come from a Linux background asked me about using ./ before some commands and not others, so I explained to him how PATH works and how binaries are chosen to be run. His response was that it was dumb and he just wanted to not need to type ./ before commands. Question: Is there a way to easily modify the behavior of the shell such that $PWD is always the first item on PATH ? | If you really want to, you can do this by prepending . to your path: export PATH=".:$PATH" However, that’s a bad idea, because it means your shell will pick any command in the current directory in preference to others. If someone (or some program) drops a malicious ls command in a directory you use frequently, you’re in for trouble... | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/400549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167515/"
]
} |
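A sandboxed illustration of why the answer calls this a bad idea (all paths and command names below are made up for the demo; env is used so each lookup honours the PATH it is given):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/cwd" "$demo/bin"

# The legitimate command somewhere on PATH:
printf '#!/bin/sh\necho real hello\n' > "$demo/bin/hello"
# A look-alike dropped into the directory we happen to be in:
printf '#!/bin/sh\necho impostor hello\n' > "$demo/cwd/hello"
chmod +x "$demo/bin/hello" "$demo/cwd/hello"

cd "$demo/cwd"
env PATH="$demo/bin" hello        # -> real hello
env PATH=".:$demo/bin" hello      # -> impostor hello
```

With . first on PATH, whatever sits in the current directory silently wins over the real command.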
400,589 | I am looking for a bash command that will multiply only numbers from a text file. Below is the content of my text file: 0 4530000 sil4530000 11100000 ow11100000 6320000 p6320000 7600000 ah7600000 8410000 n8410000 12100000 sil I need to multiply all numbers by 100. For a single line with only a number, I am using something like this: for file in *.txt; do y=`sed -n '1 p' "$file"`; z=$(bc<<<"$y*100"); sed $file -i -e 's/'"$y"'/'"$z"'/'; done But I don't know how to do it for multiple lines, with alphabets in them. The number of lines in my file is not fixed; each file has a different number of lines, with the max being 8 | You can use perl: perl -pe 's/\b(\d+\.)?\d+\b/$&*100/ge' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231811/"
]
} |
400,598 | It is useful to be able to call system commands from awk. However, if you try to use shell extended regex, you'll find it doesn't work. That is because awk calls /bin/sh instead of /bin/bash as you'd expect in Linux these days. How is it possible to get extended regex to work, when calling the system from awk? | awk's system() hands its argument to /bin/sh -c, so bash-only syntax never reaches bash. Invoke bash explicitly inside the command you pass to system(), e.g. system("bash -c 'your command'") (escaping the quotes as needed inside the awk string); everything inside the bash -c string is then parsed by bash rather than sh. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/400598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146339/"
]
} |
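One way to get bash-specific syntax past /bin/sh is to have awk's system() start bash explicitly. Extglob is used here purely as the example of a bash-only feature, and the file names are made up; the quoting is the fiddly part:

```shell
work=$(mktemp -d); cd "$work"
touch keep.log skip.txt

# system() passes its argument to /bin/sh -c, so wrap the real command
# in an explicit bash invocation; -O extglob turns on the extended glob
# before bash parses the command string.
awk 'BEGIN { system("bash -O extglob -c \"ls !(*.txt)\"") }'
# -> keep.log
```

Under plain /bin/sh the !(*.txt) pattern would be a syntax error; routed through bash -O extglob it expands as intended.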