source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
363,644 | I am trying to do a record count on a 7.6 GB gzip file. I found few approaches using the zcat command. $ zcat T.csv.gz | wc -l423668947 This works but it takes too much time (more than 10 minutes to get the count). I tried a few more approaches like $ sed -n '$=' T.csv.gz28173811$ perl -lne 'END { print $. }' < T.csv.gz28173811$ awk 'END {print NR}' T.csv.gz28173811 All three of these commands are executing pretty fast but giving an incorrect count of 28173811. How can I perform a record count in a minimal amount of time? | The sed , perl and awk commands that you mention may be correct, but they all read the compressed data and counts newline characters in that. These newline characters have nothing to do with the newline characters in the uncompressed data. To count the number of lines in the uncompressed data, there is no way around uncompressing it. Your approach with zcat is the correct approach and since the data is so large, it will take time to uncompress it. Most utilities that deals with gzip compression and decompression will most likely use the same shared library routines to do so. The only way to speed it up would be to find an implementation of the zlib routines that are somehow faster than the default ones, and rebuild e.g. zcat to use those. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222538/"
]
} |
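One way to pursue the "faster implementation" idea from the answer above is pigz, whose decompressor keeps the inflation itself single-threaded but moves reading, writing and checksum calculation into separate threads. A sketch, assuming the pigz package is installed and T.csv.gz is the file from the question:

```sh
# Often somewhat faster than zcat on large files, though still bounded
# by the fact that gzip decompression itself cannot be parallelised.
unpigz -c T.csv.gz | wc -l

# Equivalent spelling using the main binary:
pigz -dc T.csv.gz | wc -l
```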
363,658 | I'm trying to implement the quota warning to users over quota with Dovecot following this tip . I've added the following to my /etc/dovecot/conf.d/30-overquota.conf (any modication in the original /etc/dovecot/dovecot.conf is said would be removed in case of update). plugin { quota = dict:user::file:/var/vmail/%d/%n/.quotausage sieve=/var/vmail/%d/%n/.sievequota_warning = storage=50%% quota-warning 50 %uquota_warning2 = storage=80%% quota-warning 80 %uquota_warning3 = -storage=100%% quota-warning below %u # user is no longer over quota}service quota-warning { executable = script /opt/extra-script/quota-warning.sh user = root unix_listener quota-warning { user = root mode = 0600 }} then I created the /opt/extra-script/quota-warning.sh, chmodding it to 755 #!/bin/shPERCENT=$1USER=$2cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=maildir:User quota:noenforcing"From: [email protected]: Mailbox pienaLa tua casella è piena al $PERCENT%. Cancellare i messaggi vecchi.EOF Unfortunately, this is not working as it should, since I'm not receiving any message on a test mail box of 1MB full at 95% (and the limit in my 30-overquota.conf was set at 50% on line 5). Can anyone help me to configure properly the service? here follows the dovecot -n output which, as far as I can understand, confirms my extra config is included in the running service (but without any desired effect) root@centos1670:~# dovecot -n# 2.2.18: /etc/dovecot/dovecot.conf# Pigeonhole version 0.4.8 (0c4ae064f307+)# OS: Linux 2.6.32-642.15.1.el6.x86_64 x86_64 CentOS release 6.8 (Final) ext3auth_mechanisms = plain login digest-md5 cram-md5 apopauth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890&.-_@'disable_plaintext_auth = nofirst_valid_uid = 30imap_client_workarounds = delay-newmailimap_logout_format = rcvd=%i, sent=%omail_home = /var/qmail/mailnames/%Ld/%Lnmail_location = maildir:/var/qmail/mailnames/%Ld/%Ln/Maildirmail_log_prefix = "service=%s, user=%u, ip=[%r]. "mail_plugins = " quota"managesieve_logout_format = rcvd=%i, sent=%omanagesieve_notify_capability = mailtomanagesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate imapflags notifynamespace inbox { inbox = yes location = prefix = INBOX. 
separator = .}passdb { driver = plesk}plugin { quota = dict:user::file:/var/vmail/%d/%n/.quotausage quota_grace = 0 quota_warning = storage=50%% quota-warning 50 %u quota_warning2 = storage=80%% quota-warning 80 %u quota_warning3 = -storage=100%% quota-warning below %u sieve = ~/.dovecot.sieve sieve_dir = ~/sieve sieve_extensions = +notify +imapflags}pop3_client_workarounds = outlook-no-nuls oe-ns-eohpop3_logout_format = rcvd=%i, sent=%o, top=%t/%p, retr=%r/%b, del=%d/%m, size=%sprotocols = imap pop3 sieveservice auth-worker { group = user = }service auth { group = unix_listener auth-userdb { group = popuser mode = 0600 user = popuser } user = }service quota-warning { executable = script /opt/extra-script/quota-warning.sh unix_listener quota-warning { mode = 0600 user = root } user = root}ssl_cert = </etc/dovecot/private/ssl-cert-and-key.pemssl_key = </etc/dovecot/private/ssl-cert-and-key.pemuserdb { args = uid=popuser gid=popuser driver = static}protocol imap { mail_plugins = " quota imap_quota"}protocol pop3 { pop3_uidl_format = UID%u-%v}protocol lda { mail_plugins = " quota sieve"} edit: as suggested by Jens Erat, root@centos1670:~# doveadm quota get actually produces the following output: Quota name Type Value Limit %user STORAGE 0 - 0user MESSAGE 0 - 0 This seems to denote a defective configuration which might be fixed adding something like quota_rule = *:storage=1GB and enforcing quota recalculation. The problem in doing this is that the Dovecot setup I'm asking about is running under Plesk in which is possible to set different mailbox sizes per user, so the possibility to define the value for quota_rule in a parametric way would be appreciated. | The sed , perl and awk commands that you mention may be correct, but they all read the compressed data and counts newline characters in that. These newline characters have nothing to do with the newline characters in the uncompressed data. To count the number of lines in the uncompressed data, there is no way around uncompressing it. Your approach with zcat is the correct approach and since the data is so large, it will take time to uncompress it. Most utilities that deals with gzip compression and decompression will most likely use the same shared library routines to do so. The only way to speed it up would be to find an implementation of the zlib routines that are somehow faster than the default ones, and rebuild e.g. zcat to use those. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143511/"
]
} |
363,660 | How do I send a mail thanks to swaks with BCC recipients? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
363,679 | Opening a flat file and giving a pattern by pressing "/", we can search for our pattern by using "n".What is the keyboard shortcut to travel up by 1 step in that? That is, the immediate previous pattern match. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115488/"
]
} |
363,785 | I have a file with 2 columns of data. I need to find the lines that have a common string from each column. I'm only interested in the matches line by line, not a matching string from say column 1 line 10 and column 2 line 3. my file: 023q 023q023q0adc 0adc0adc123456 123456abcde abcdefg08tgdf 90alkhg So, in this example, each line except the last line shares a common string, either a portion of the line or the lines are identical, and that's what I need to find. I've seen tons of questions and threads on common strings from 2 files, but nothing so far on my exact use case. UPDATE: at least 4 characters need to match, in order, on each line. | Short gawk approach: awk '(index($1, $2) !=0 && length($2) >= 4) || (index($2, $1) !=0 && length($1) >= 4)' file The output: 023q 023q023q0adc 0adc0adc123456 123456abcde abcdefg index(in, find) Search the string in for the first occurrence of the string find , and return the position in characters where that occurrence begins in the string in . For the more complex case when we need to find the longest common substring with at least 4 characters length on 2 input strings - I would suggest Python approach: Let's say the input file was slightly "sophisticated" and had the following lines: 1023q 023q023qv0adc 20adc0adcs123456 123456eabcde cabcdefg08tgdf 90alkhg To find the longest common substring we'll use SequenceMatcher class from difflib module. find_common_lines.py script: import refrom difflib import SequenceMatcherwith open('filename', 'r') as fh: for l in fh.read().splitlines(): items = re.findall(r'\S+', l.strip()) # getting 2 comparable strings m = SequenceMatcher(None, items[0], items[1]).find_longest_match(0, len(items[0]), 0, len(items[1])) if m.size >= 4: print(l) Usage (you may have another python 3.x version, the current case has been tested on python 3.5): python3.5 find_common_lines.py The output: 1023q 023q023qv0adc 20adc0adcs123456 123456eabcde cabcdefg | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81926/"
]
} |
363,814 | Using Raspbian and Ubunntu 16.04 LTS so need a generic Linux solution. Requirement is simple: I need a way to send one-line email messages from the command line. I have set up a gmail account just for this particular Rpi3, with the address of [email protected] - with no 2FA So now I need to be able to send one-line mail messages from anywhere (including cron) without user intervention. I also would like it to be able to send text files; basically, anything from stdin . | The simplest answer to sending one-line messages via gmail is to use ssmtp Install it with the following commands: sudo apt-get updatesudo apt-get install ssmtp Edit /etc/ssmtp/ssmtp.conf to look like this: [email protected]=smtp.gmail.com:[email protected]=testing123UseTLS=YES Send a one-liner like so: echo "Testing...1...2...3" | ssmtp [email protected] or printf "Subject: Test\n\nTesting...1...2...3" | ssmtp [email protected] Then, true to *nix, you just get the prompt back in a few seconds. Check your [email protected] account, and voila, it is there! This also works well when sending a file, as so: cat program.py | ssmtp [email protected] And the program will show up in the mailbox If the file is a text file, it can have a first line that says Subject: xxxxxx This can be used with various cron jobs can send me data with subject lines indicating the content. This will work with anything that prepares a message that is piped into ssmtp via stdin. For more details such as securing these files against other users and such, visit this article: Send Email from Raspberry Pi Command Line Be sure to also look down below to the answer posted by Rui about locking down the FROM: address that might be changed in formatted message files, if necessary. Now if only I could figure out how to send SMS the same way. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/363814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197427/"
]
} |
363,819 | OS: Linux (Debian 8)Environment details: We use LXDE and uxterm . Virtual console form till ctrl +Alt + F6 is disabled. Total users available user1, user2, user3 the default system is set with auto login with user1 and we trigger user terminal via script ( uxterm ). On user pressing special combination in LXDE it invoked a Uxterminal in screen with User2 as login. on login success, user2 console can be used. We have 3rd user, where his role is very limited and needed for our current scenario of our operation. To create a BASH profile for the user, we have added info in sudoers with NOPASSWD option. etc/sudoers User_Alias PRIVILEGEDUSER = user1,user2Runas_Alias TARGETUSER = user2,user3PRIVILEGEDUSER ALL=(TARGETUSER) NOPASSWD: /bin/bash also we have added env_keep in Defaults Defaults env_resetDefaults env_keep += "JRE_HOME"Defaults mail_badpassDefaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" With this option when we try switching user, JRE_HOME is not getting set. Also added JRE_HOME in etc/.profile manually for all available users in system for testing purposed, even that did not help JRE_HOME=/usr/lib/jvm/java-7-openjdk-i386/jreexport JRE_HOME Need some guidance in setting JRE_HOME during sudo -u switching. | The simplest answer to sending one-line messages via gmail is to use ssmtp Install it with the following commands: sudo apt-get updatesudo apt-get install ssmtp Edit /etc/ssmtp/ssmtp.conf to look like this: [email protected]=smtp.gmail.com:[email protected]=testing123UseTLS=YES Send a one-liner like so: echo "Testing...1...2...3" | ssmtp [email protected] or printf "Subject: Test\n\nTesting...1...2...3" | ssmtp [email protected] Then, true to *nix, you just get the prompt back in a few seconds. Check your [email protected] account, and voila, it is there! This also works well when sending a file, as so: cat program.py | ssmtp [email protected] And the program will show up in the mailbox If the file is a text file, it can have a first line that says Subject: xxxxxx This can be used with various cron jobs can send me data with subject lines indicating the content. This will work with anything that prepares a message that is piped into ssmtp via stdin. For more details such as securing these files against other users and such, visit this article: Send Email from Raspberry Pi Command Line Be sure to also look down below to the answer posted by Rui about locking down the FROM: address that might be changed in formatted message files, if necessary. Now if only I could figure out how to send SMS the same way. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/363819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52764/"
]
} |
363,878 | I just bumped into another SELinux related problem. It would seem my haproxy wasn't allowed to open TCP connections to the backend and I was able to fix it quickly using Google. Now, I would like to know how one would fix this problem if one actually knew how to work with SELinux? The Problem I want to use haproxy to forward my publicly accessible port 5000 to 127.0.0.1:5601. I configured haproxy accordingly, and did systemctl restart haproxy and immediately got this beauty in the syslog: May 9 09:38:45 localhost haproxy[2900]: Server kibana/app1 is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. permission denied --> SELinux has become the dominant short circuit evaluation in my Linux problem solving. Hasn't failed me in years! The Solution The solution to this problem according to our friends over at Serverfault is semanage port --add --type http_port_t --proto tcp 5601 because SELinux only allow[s] the web server to make outbound connections to a limited set of ports and apparently the semanage command adds port 5601 to that list. This works and now my haproxy serves me Kibana. Great! What I don't get/know When I ps fauxZ I see the context of haproxy is system_u:system_r:haproxy_t:s0 . With the available commands on my Linux, how can I find out that haproxy is restricted by the ports associated with http_port_t ? | SELinux has a bit of a reputation for being arcane - and one that I think is well deserved. The way I understand it, the context of a program that is running defines what the current policy will permit it to access or do. As such, as you've surmised, the haproxy_t type is allowed some permissions based on the http_port_t type. Now let's try to actually figure out how to find this relationship. Get SELinux label As you know, ps -eZ will list the SELinux labels of running processes - in the case of haproxy, that user:role:type:sensitivity is system_u:system_r:haproxy_t:s0 . The important bit is the type, in this case haproxy_t. Permissions Now to find out which permissions this type has, we can use sesearch[1]: sesearch -d -A -s haproxy_t -d shows only direct results - if you omit this, this will also show all objects from seinfo --type=haproxy_t -x ! -A searches for allow rules -s haproxy_t defines the source type as haproxy_t Now we get quite a lot of results (106 on my CentOS 7 VM), because, as it happens, there are a lot of different, granular permissions defined.At this point you need to to know something more about what you're searching for - either the class of what the permission applies to, the target type or the permission name itself. Search by class Let's go with class first: so we know that our source type is haproxy_t , and we think that our class might be something to do with the internet, so it's a good bet that it might be tcp_socket. sesearch -d -A -s haproxy_t -c tcp_socket This will list all allow rules for the source type haproxy_t that have anything to do with tcp_socket. That's narrowed it down quite a bit, but it's still anyone's guess which one of these could be the ones we're looking for. Search by specific permissions Next, let's try with permissions - we know that we need our haproxy to bind and connect on specific ports, right? so we try the name_bind and the name_connect permissions. 
sesearch -d -A -s haproxy_t -p "name_bind, name_connect"Found 4 semantic av rules: allow haproxy_t http_cache_port_t : tcp_socket { name_bind name_connect } ; allow haproxy_t commplex_main_port_t : tcp_socket { name_bind name_connect } ; allow haproxy_t http_port_t : tcp_socket { name_bind name_connect } ; allow haproxy_t port_type : tcp_socket name_bind ; This shows only 4 results, only 3 of which could be our culprit! These results are, as far as SELinux is concerned, the only ports any process with the context of haproxy_t is allowed to bind to. Search by target type This one is a bit of a cheat, because in this case we're actually looking for the target type! But, for the sake of completeness - for instance, if we wanted to find out what permissions haproxy_t gets from http_port_t, we can use the following: sesearch -d -A -s haproxy_t -t http_port_tFound 1 semantic av rules: allow haproxy_t http_port_t : tcp_socket { name_bind name_connect } ; and that, of course, gives us only one result, and the permissions that apply. Ports Now that we know the target objects, and that they are definitions of ports, we can find out exactly which ports they encompass. Well, let's find out: semanage port -l | grep -E 'http_cache_port_t|commplex_main_port_t|http_port_t'commplex_main_port_t tcp 5000commplex_main_port_t udp 5000http_cache_port_t tcp 8080, 8118, 8123, 10001-10010http_cache_port_t upd 3130http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000pegasus_http_port_t tcp 5988 And so, we see that tcp port 5601 is nowhere on that list. Now, as far as SELinux cares, you can add that port to any of those types with the semanage port --add --type XXX --proto tcp 5601 command, and it'll work out. But since this is serving http, http_port_t seems the most applicable type. Hopefully that demystifies it just a little bit. [1] Available in the setools-console package. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19575/"
]
} |
363,896 | I have a file whose columns contain simple arithmetic equations that I would like to merge to the arithmetic result. Input sample (tab-separated columns): +104-1+12 6 +3 I would like to compute the arithmetic sum within each column. If one column contains no arithmetic sign, I treat it as it contained a + before the item. Although it would be easy through sed to add a + sign if a column starts with no sign ( sed -E 's/(\t)([0-9]*)/\1\t+\2/g' would work, assuming that a row never begins with a digit, as in the example) The output I would expect is the following: 115 6 3 How can I achieve this in unix? awk/sed solutions are preferred. | You could use perl : perl -pe 's/[\d+-]+/eval$&/ge' your-file Or even: perl -pe 's/[\d+-]+/$&/gee' your-file (thanks Rakesh) Same with zsh : set -o extendedglob # for the ## operator (same as ERE +)while IFS= read -r line; do printf '%s\n' ${line//(#m)[0-9+-]##/$((MATCH))}done < your-file Or: zmodload zsh/mapfileset -o extendedglobprintf %s ${mapfile[your-file]//(#m)[0-9+-]##/$((MATCH))} In all four, we're looking for sequences of digits, - and + characters and passing them to the interpreter's arithmetic processor ( eval in perl (or the ee flag that causes the expansion of the replacement to be evaluated as perl code), $((...)) in zsh ). We're not validating the expressions before passing to the interpreter, so it may cause failures (for instance on sequences like -+- or 3++ ) but at least, because we're only considering digits and - / + characters, it shouldn't do much more harm than reporting an error message and aborting the command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
363,899 | Under Linux I often use /proc/<pid>/fd/[0,1,2] to access std[in,out,err] of any running process. Is there a way to achieve the same result under FreeBSD and/or macOS ? | See this StackOverflow link for a dtrace based answer to this. I've tested it on FreeBSD and it works perfectly: capture() { sudo dtrace -p "$1" -qn ' syscall::write*:entry /pid == $target && arg0 == 1/ { printf("%s", copyinstr(arg1, arg2)); } ' } | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197721/"
]
} |
363,940 | I have the below lines in a file: SUT_INST_PIT=trueSUT_INST_TICS=trueSUT_INST_EXAMPLES=falseSUT_INST_PING=false How can i create a sed line to match pattern SUT_INST_EXAMPLES & SUT_INST_PING and set false to true ? I can't simply replace false with true because I don't want to change SUT_INST_PIT & SUT_INST_TICS even if they are false!!! I have at the moment two sed commands that are working, but i would like one line only! sed -i "s/SUT_INST_EXAMPLES=false/SUT_INST_EXAMPLES=true/g" <file>sed -i "s/SUT_INST_PING=false/SUT_INST_PING=true/g" <file> One more thing the sed line should be able to be parametrized to set false -> true or true -> false , but only for SUT_INST_EXAMPLES & SUT_INST_PING . Solution (according to @RomanPerekhrest) and how to use it in send (expect script): send "sed -i 's\/^\\(SUT_INST_EXAMPLES\\|SUT_INST_PING\\)=false\/\\1=true\/' file\r" | sed approach: sed -i 's/^\(SUT_INST_EXAMPLES\|SUT_INST_PING\)=false/\1=true/' file file contents: SUT_INST_PIT=trueSUT_INST_TICS=trueSUT_INST_EXAMPLES=trueSUT_INST_PING=true \(SUT_INST_EXAMPLES\|SUT_INST_PING\) - alternation group, matches either SUT_INST_EXAMPLES OR SUT_INST_PING at the start of the string Alternative gawk ( GNU awk ) approach: gawk -i inplace -F'=' -v OFS='=' '$1~/^SUT_INST_(EXAMPLES|PING)/{$2=($2=="false")? "true":"false"}1' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230578/"
]
} |
363,946 | I understand that e.g. catfish and gnome-search-utils both can search inside file contents that are UTF-8 encoded. To be able to search for words or numbers within text files one would have to convert them via iconv into UTF-8 first. If the file is known, text editors like gedit or mousepad have no trouble with UTF-16. Why is there no search tool (GUI or command-line) with any of the Linux distributions that can handle UTF-16 encoded txt files? I'm on Xubuntu. | UTF-16 (or UCS-2) is highly unfriendly for the null-terminated strings used by the C standard library and the POSIX ABI. For example, command line arguments are terminated by NULs (bytes with value zero), and any UTF-16 character with numerical value < 256 contains a zero byte, so any strings of the usual English letters would be impossible to represent in UTF-16 on a command line argument. That in turn means that either the utilities would need to take input in some other format (say UTF-8) and convert to UTF-16; or they would need to take their input in some other way. The first option would require all such utilities to contain (or link to) code for the conversion, and the second would make interfacing those programs to other utilities somewhat difficult. Given those difficulties, and the fact that UTF-8 has better backwards-compatibility properties, I'd just guess that few care to use UTF-16 enough to be motivated to create tools for that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230573/"
]
} |
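In the spirit of the iconv workaround already mentioned in the question, a small shell loop can do the conversion on the fly instead of converting the files permanently. A sketch, assuming the files are UTF-16 with a byte-order mark and the search term itself is plain ASCII:

```sh
# Print the names of UTF-16 text files whose decoded content matches.
for f in *.txt; do
    if iconv -f UTF-16 -t UTF-8 "$f" | grep -q 'word'; then
        printf '%s\n' "$f"
    fi
done
```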
363,962 | I know that a directory is a file contained rows kind of “name = inode number”. When I request a path like /home/my_file.txt, next steps take place: Go to inode number 2 (root directory default inode) Get file to which inode #2 is pointing. Search through this file and find “home” entry. Get its inode number, for example 135. Get file to which inode #135 is pointing. Search through this file and find “my_file.txt” entry. Get its inode number, for example 245. Get file to which inode #245 is pointing. The question: how this process is different in case the home directory is the mount point of another filesystem, residing on another block device? When system understand, that this directory is the mount point and how it do that? Where this information is stored - in the inode, in the directory file or somewhere else? For example, part of my root directory listing with inode numbers displayed: ls -d1i /*/inode # name656641 /bin/2 /boot/530217 /cdrom/2 /dev/525313 /etc/2 /home/393985 /lib/ Here, home and boot directories are mount points and resided on own filesystems. Run my pseudocode algorithm (written above) and stuck on the step number 3 - in this case, home inode number is 2 and it is located in another filesystem and in another block device. | Your description of the process isn't quite right. The kernel keeps track of which paths are mount points. Exactly how it does that varies between kernel, but typically the information is stored in terms of paths. For example the kernel remembers “ / is this filesystem, /media/cdrom is this filesystem, /proc is this filesystem”, etc. Typically, rather than a table mapping path strings to data structures representing mounted filesystems, the kernel stores tables per directory. The data associated with a directory entry is classically called a dentry . There's a dentry for the root, and in each directory there's a dentry for each file in that directory that the kernel remembers. The dentry contains a pointer to an inode structure, and the inode contains a pointer to the filesystem data structure for the filesystem that the file is on. At a mount point, the associated filesystem is different from the parent dentry's associated filesystem, and there's additional metadata to keep track of the mount point. So in a typical unix kernel architecture, the dentry for / contains a pointer to information about the root filesystem, in addition to a pointer to the inode containing the root directory; the dentry for /proc (assuming that it's a mount point) contains a pointer to information about the proc filesystem, etc. If /media/cdrom is a mount point but not /media , the kernel remembers in the dentry for /media that it isn't allowed to forget about it: remembering about /media isn't just a matter of caching for performance, it's necessary to remember the existence of the mount point /media/cdrom . For Linux, you can find documentation in the kernel documentation , on this site and elsewhere on the web. Bruce Fields has a good presentation of the topic. When the kernel is told to access a file, it processes the file name one slash-separated component at a time and looks up the component each time. If it finds a symbolic link, it follows it. If it finds a mount point, no special processing is actually necessary: it's just that the inodes are attached to a different directory. The process does not use inode numbers, it follows pointers . 
Inode numbers are a way to give a unique identity to each file on a given filesystem outside of the kernel: on disk, and for applications. There are filesystems that don't have unique inode numbers; filesystem drivers normally try to make up one but that doesn't always work out, especially with network filesystems (e.g. if the server exports a directory tree which contains a mount point, there may be overlap between the set of inodes above and below that mount point). Rows that map name to inode number are the way a typical on-disk filesystem works if it supports hard links; filesystems that don't support hard links don't really need the concept of inode number. Note that information about mount points is stored only in memory. When you mount a filesystem, this does not modify the directory on top of which the filesystem is mounted. That directory is merely hidden by the root of the mounted filesystem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109397/"
]
} |
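The in-memory mount table described above can be inspected from user space with standard tools; a sketch using util-linux's findmnt:

```sh
findmnt /home                        # shows the filesystem mounted on /home, if it is a mount point
findmnt --target /home/my_file.txt   # shows the filesystem a given path actually lives on
cat /proc/self/mountinfo             # the raw per-process view of the kernel's mount table
```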
363,967 | A few years ago I recall using the terminal and reading a tutorial in the Linux manual (using man ) on how a computer worked after it was turned on. It walked you through the whole process explaining the role of the BIOS, ROM, RAM and OS on this process. Which page was this, if any? How can I read it again? | You're thinking of the boot(7) manual ( man 7 boot ) and/or the bootup(7) manual ( man 7 bootup ). Those are the manuals I can think of on (Ubuntu) Linux that best fits your description. These manuals are available on the web (see links above), but the definite text is what's available on the system that you are using. If a web-based manual says one thing but the manual on your system says another thing, then the manual on your system is the more correct one for you. This goes for all manuals. See also the "See also" section in those manuals. This other question may also be of interest: How does the Linux or Unix " / " get mounted during bootup? For a non-Linux take on the boot process, the OpenBSD first-stage system bootstrap ( biosboot(8) ) and second-stage bootstrap ( boot(8) ) manuals, followed by rc(8) , may be interesting. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/363967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134848/"
]
} |
363,978 | I am having trouble with a configuration line in common-account-pc and common-auth-pc that denies also root access: account required pam_tally2.so deny=10 onerr=fail unlock_time=600 even_deny_root root_unlock_time=5 file=/home/log/faillog It seems that this line causes some problem when trying to access multiple times the SUT and i assume that it things that it is an attack via ssh.But it is actually a test tool that tries to send several times commands via ssh root@ to the SUT (100.100.100.100) from server (10.10.10.13). Apr 25 05:51:56 SUT sshd[31570]: pam_tally2(sshd:auth): user root (0) tally 83, deny 10Apr 25 05:52:16 SUT sshd[31598]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.10.10.13 user=rootApr 25 05:52:21 SUT sshd[31568]: error: PAM: Authentication failure for root from 10.10.10.13Apr 25 05:52:21 SUT sshd[31568]: Connection closed by 10.10.10.13 [preauth] Since the password is always correct, but still after some time it starts to through exception (pexpect) Account locked. version: 2.3 ($Revision: 399 $)command: /usr/bin/sshargs: ['/usr/bin/ssh', '[email protected]']searcher: searcher_re: 0: re.compile(".*:~ #")buffer (last 100 chars): :Account locked due to 757 failed loginsPassword:before (last 100 chars): :Account locked due to 757 failed loginsPassword:after: <class 'pexpect.TIMEOUT'>... But according to passwd root is not LK labeled: SUT:~ # passwd -S rootroot P 04/24/2017 -1 -1 -1 -1 Manually it is always possible to access the SUT via ssh root@!!! So, for the moment the only that can cause this is the pam configuration. But how do i restart or activate the changes? Does someone else have any other idea? Thanks in adv. | There is no PAM daemon. You do not need to reload anything for the changes to take effect. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230578/"
]
} |
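Since, as the answer says, there is no daemon involved, the only state that persists is the tally file named by the file= option. That counter can be inspected and cleared with the pam_tally2 command-line tool; a sketch, assuming the same file= path as in the configuration above:

```sh
pam_tally2 --file /home/log/faillog --user root           # show the current failure count
pam_tally2 --file /home/log/faillog --user root --reset   # clear it, which unlocks the account
```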
364,067 | After copying the last version of Stretch distro on a 4GB USB drive with sudo dd bs=4M if=debian-testing-amd64-netinst.iso of=/dev/sdb1 && sync on a Lubuntu 16.04 system, the file system of sdb1 shows as 'unknown'. This is the case after creating a ms-dos or gpt partition table, a single partition, and formatting it with ext4 or FAT32 file system, which shows correctly after formatting, but goes to unknown after the dd command completes. Actually, the drive does not boot the computer, it simply gets stuck (although this might be caused by other reasons). I've also tried, with the same result, after executing isohybrid on the ISO file. | You need to overwrite the USB key including its partition table: sudo dd bs=4M if=debian-testing-amd64-netinst.iso of=/dev/sdb ( sdb instead of sdb1 ). The download contains its own partition table. See the official instructions , which suggest sudo cp debian-testing-amd64-netinst.iso /dev/sdbsync instead. As pointed out by Mioriin , you should verify the downloaded image before copying it to your key, to make sure it was downloaded correctly and more importantly that it hasn’t been tampered with. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91304/"
]
} |
364,105 | btrfs ( often pronounced "better fs" ) has quite a few features that ext4 lacks. However, comparing the functionality of btrfs vs ext4, what is lacking in btrfs? 1 In other words, what can I do with ext4 that I can't with btrfs? 1 Ignoring the lesser battle-ground testing of btrfs given ext4 is so widely used | Disadvantages of btrfs compared to ext4: btrfs doesn't support badblocks This means that if you've run out of spare non-addressable sectors that the HDD firmware keeps to cover for a limited number of failures, there is no way to mark blocks bad and avoid them at the filesystem level. Swap files are only supported via a loopback device , which complicates things because it seems impossible to resume from suspend using this method It's quite tricky to calculate free space , so much so that... You can get "No space left on device" errors even though btrfs' own tools say there is space | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/364105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
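The free-space ambiguity mentioned in the last point can be seen directly by comparing what df and btrfs' own accounting report; a sketch, with /mnt standing in for wherever the filesystem is mounted:

```sh
df -h /mnt
btrfs filesystem df /mnt       # per-block-group view (data / metadata / system)
btrfs filesystem usage /mnt    # newer, more detailed summary of allocated vs. free space
```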
364,112 | How can I wrap paragraphs in plain text with paragraph tags {p} before and {/p} after each paragraph using sed ? Each paragraph is separated by blank lines. I can use sed -e 's/^\s*$/<r>/ somefile.txt to find every blank line in the text file, but this will always insert {p} everywhere and I don't quite understand, how to vary them. Also, there's no empty line after the very last paragraph, so it won't do anything for the last one. Input text: Section 5. General Information About Project Gutenberg-tm electronicworks.DescriptionProfessor Michael S. Hart is the originator of the Project Gutenberg-tmconcept of a library of electronic works that could be freely sharedwith anyone.Project Gutenberg-tm eBooks are often created from several printededitions, all of which are confirmed as Public Domain in the U.S. unlessa copyright notice is included. Required Output: Section 5. General Information About Project Gutenberg-tm electronicworks.{p}Description{/p}{p}Professor Michael S. Hart is the originator of the Project Gutenberg-tmconcept of a library of electronic works that could be freely sharedwith anyone.{/p}{p}Project Gutenberg-tm eBooks are often created from several printededitions, all of which are confirmed as Public Domain in the U.S. unlessa copyright notice is included.{/p} | Disadvantages of btrfs compared to ext4: btrfs doesn't support badblocks This means that if you've run out of spare non-addressable sectors that the HDD firmware keeps to cover for a limited number of failures, there is no way to mark blocks bad and avoid them at the filesystem level. Swap files are only supported via a loopback device , which complicates things because it seems impossible to resume from suspend using this method It's quite tricky to calculate free space , so much so that... You can get "No space left on device" errors even though btrfs' own tools say there is space | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/364112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230685/"
]
} |
364,156 | I want to time a command which consists of two separate commands with one piping output to another. For example, consider the two scripts below: $ cat foo.sh#!/bin/shsleep 4$ cat bar.sh#!/bin/shsleep 2 Now, how can I get time to report the time taken by foo.sh | bar.sh (and yes, I know the pipe makes no sense here, but this is just an example)? It does work as expected if I run them sequentially in a subshell without piping: $ time ( foo.sh; bar.sh )real 0m6.020suser 0m0.010ssys 0m0.003s But I can't get it to work when piping: $ time ( foo.sh | bar.sh )real 0m4.009suser 0m0.007ssys 0m0.003s$ time ( { foo.sh | bar.sh; } )real 0m4.008suser 0m0.007ssys 0m0.000s$ time sh -c "foo.sh | bar.sh "real 0m4.006suser 0m0.000ssys 0m0.000s I've read through a similar question ( How to run time on multiple commands AND write the time output to file? ) and also tried the standalone time executable: $ /usr/bin/time -p sh -c "foo.sh | bar.sh"real 4.01user 0.00sys 0.00 It doesn't even work if I create a third script which only runs the pipe: $ cat baz.sh#!/bin/shfoo.sh | bar.sh And then time that: $ time baz.shreal 0m4.009suser 0m0.003ssys 0m0.000s Interestingly, it doesn't appear as though time exits as soon as the first command is done. If I change bar.sh to: #!/bin/shsleep 2seq 1 5 And then time again, I was expecting the time output to be printed before the seq but it isn't: $ time ( { foo.sh | bar.sh; } )12345real 0m4.005suser 0m0.003ssys 0m0.000s Looks like time doesn't count the time it took to execute bar.sh despite waiting for it to finish before printing its report 1 . All tests were run on an Arch system and using bash 4.4.12(1)-release. I can only use bash for the project this is a part of so even if zsh or some other powerful shell can get around it, that won't be a viable solution for me. So, how can I get the time a set of piped commands took to run? And, while we're at it, why doesn't it work? It looks like time immediately exits as soon as the first command has finished. Why? I know I can get the individual times with something like this: ( time foo.sh ) 2>foo.time | ( time bar.sh ) 2> bar.time But I still would like to know if it's possible to time the whole thing as a single operation. 1 This doesn't seem to be a buffer issue, I tried running the scripts with unbuffered and stdbuf -i0 -o0 -e0 and the numbers were still printed before the time output. | It is working. The different parts of a pipeline are executed concurrently. The only thing that synchronises/serialises the processes in the pipeline is IO, i.e. one process writing to the next process in the pipeline and the next process reading what the first one writes. Apart from that, they are executing independently of each other. Since there is no reading or writing happening between the processes in your pipeline, the time take to execute the pipeline is that of the longest sleep call. You might as well have written time ( foo.sh & bar.sh &; wait ) Terdon posted a couple of slightly modified example scripts in the chat : #!/bin/sh# This is "foo.sh"echo 1; sleep 1echo 2; sleep 1echo 3; sleep 1echo 4 and #!/bin/sh# This is "bar.sh"sleep 2while read line; do echo "LL $line"donesleep 1 The query was "why does time ( sh foo.sh | sh bar.sh ) return 4 seconds rather than 3+3 = 6 seconds?" 
To see what's happening, including the approximate time each command is executed, one may do this (the output contains my annotations): $ time ( env PS4='$SECONDS foo: ' sh -x foo.sh | PS4='$SECONDS bar: ' sh -x bar.sh )0 bar: sleep 20 foo: echo 1 ; The output is buffered0 foo: sleep 11 foo: echo 2 ; The output is buffered1 foo: sleep 12 bar: read line ; "bar" wakes up and reads the two first echoes2 bar: echo LL 1LL 12 bar: read line2 bar: echo LL 2LL 22 bar: read line ; "bar" waits for more2 foo: echo 3 ; "foo" wakes up from its second sleep2 bar: echo LL 3LL 32 bar: read line2 foo: sleep 13 foo: echo 4 ; "foo" does the last echo and exits3 bar: echo LL 4LL 43 bar: read line ; "bar" fails to read more3 bar: sleep 1 ; ... and goes to sleep for one secondreal 0m4.14suser 0m0.00ssys 0m0.10s So, to conclude, the pipeline takes 4 seconds, not 6, due to the buffering of the output of the first two calls to echo in foo.sh . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/364156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
364,195 | I have several processes that are producing output on STDOUT and STDERR , which I have redirected to numbered file descriptors, and I want to collate all the output together into a single file. I have naively tried [input processes] | cat <3 <4 <5 2>&1 >[output file] but of course, this does not work as cat will wait until it's STDIN pipe is closed before reading data from any of the subsequent ones, causing my process to hang when the other pipes' buffers become full. Any suggestions? | Collating output together is not really the dual of tee . tee makes multiple copies of its input, whereas collating output does not involve any merging of data. To merge output sources, just redirect them all to the same file descriptor. The interleaving of the sources is somewhat unpredictable in general, but sufficiently small writes to a pipe are guaranteed to be atomic . ( Being able to tell the boundaries from the read side is another story .) { data_source_1 & data_source_2 & wait; } >merged_output If you're getting input from multiple file descriptors and you want to merge them, pass each of them through. { cat <&3 & cat <&4 & wait; } >merged_ouput But usually you'd be able to redirect all the file descriptors to the same destination. … 3>merged_ouput 4>&3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133056/"
]
} |
364,201 | I am using tail -f (in Terminal on Mac OS X el capitan) to view the live changes to my file (the results of pulling data from a database using a PERL script). However, sometimes, the Perl script will truncate the file and add new data to it. Sometimes, when this happens, it gives me this message: tail: test.txt: file truncated And then does not show any contents of the file afterwards. This seems to only happen when I'm replacing a file with LESS rows than before. When the new rows are longer than before running the script, I do not get this error and the tail -f continues to work. I have confirmed that there are, in fact, data in the file that tail -f is not showing after getting this (error?) message. I've seen this similar question: Suppress 'file truncated' messages when using tail tail -f test.txt 2> /dev/null But that just suppresses the message and still breaks, it doesn't continue to show me the shorter, truncated file contents. Is there a better command to use to live view changes to the file? Or a flag for tail -f to not care when the file is truncated? | As others have pointed out, the tail command that ships with OS X does not have the --retry option. However, you could simply install the GNU version of tail which has that option; it is part of the GNU coreutils . For example, if you use MacPorts you could install them by running sudo port install coreutils . An alternative to live watching a file is the watch command, which unfortunately also doesn't ship with OS X. However, you can use this simple workaround . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220220/"
]
} |
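Following the MacPorts route from the answer, the GNU tools are normally installed with a "g" prefix, so the retry-capable tail would be invoked roughly like this (a sketch, assuming the default MacPorts naming):

```sh
sudo port install coreutils
gtail -F test.txt    # -F is shorthand for --follow=name --retry: reopen the file after truncation
```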
364,270 | On my search for a command to list all shell variables, I somehow realized, that there is a command to list all environment variables, but somehow there is no one to list all shell variables, for reasons unknown to me. However, someone here gave an answer on how to display all variables, shell and environment ones. ( set -o posix ; set ) | less He actually did not explain for the layman what this expression does, and my fragmentary understanding is not enough to grasp the idea behind it. This is what I know: ( command1; command2) this causes the commands to be executed inside a child process of the shell. set is some way to declare variables, though do not know what the -o posix means and why a second set is executed in succession command | less This one is not the problem, even I understand it, it is a pager for more control about the output. | set shows all shell variables (exported or not). In Bash, set -o posix sets the shell in POSIX compatibility mode . (I don't know if other shells have similar syntax for the similar feature, but I'll assume Bash here.) The difference in this case is that usually Bash's set shows also shell functions, but in POSIX mode set only shows variables, and changes the output format slightly: When the set builtin is invoked without options, it does not display shell function names and definitions. When the set builtin is invoked without options, it displays variable values without quotes, unless they contain shell metacharacters, even if the result contains nonprinting characters. In Bash, there's additionally the declare builtin that can be used to show all the otherwise hidden or Bash-specific flags of variables: declare -p xx shows variable xx in a format that Bash can take as input. declare -p shows all variables and declare -f can be used to show functions. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
364,276 | wqdq wqdqgrhehr cnkzjncicoajc hello space oejwfoiwejfow wqodojw more spaces more This is my file and I would like to make this with sed : -wqdq-wqdqgrhehr-cnkzjncicoajc-hello space----oejwfoiwejfow----wqodojw----more spaces----more------- Do I have to use loop to make it or does it exist any different approach?I tried this: user:~$ sed -n ': loop s/^ /-/ s/[^-] /-/pt loop' spaces | With sed , you'd need either a loop like: sed -e :1 -e 's/^\( *\) /\1-/; t1' < file Or do something like: sed 's/ */&\/; # add a newline after the leading spacesh; # save a copy on the hold spacey/ /-/; # replace *every* space with -G; # append our saved copys/\n.*\n//; # remove the superflous part' < file With perl , you can do things like: perl -pe 's{^ *}{$& =~ y/ /-/r}e' < file or perl -pe 's/(^|\G) /-/g' < file \G in PCRE matches (with zero-width) at the end of the previous match (in //g context). So here, we're replacing a space that follows either the beginning of the line ^ or the end of the previous match (that is, the previously substituted space). (that one would also work with sed implementations that support PCREs like ssed -R ). With awk , you can do something like: awk ' match($0, /^ +/) { space = substr($0, 1, RLENGTH) gsub(" ", "-", space) $0 = space substr($0, RLENGTH+1) } {print}' < file If you want to convert tabs as well (where for instance <space><tab>foo would be converted to --------foo ), you can preprocess the input with expand . With GNU expand , you can make it expand -i so that only the tabs among the leading blanks in the line are converted. You can specify how far apart the tab-stops are (every 8 columns be default) with the -t option. To generalise that to all horizontal spacing characters, or at least those that are in the [:blank:] category in your locale, that becomes more complicated. If it weren't for the TAB character, it would just be a matter of: perl -Mopen=locale -MText::CharWidth=mbswidth -pe 's/^\h+/"-" x mbswidth($&)/e' But the TAB character being a control character has a width of -1 with that mbswidth() , while in reality it has a variable width from 1 to 8 columns depending on where it's found on the line. The expand command takes care of expanding it to the right number of spaces, but several implementations, including GNU expand don't get it right when there are multi-byte characters (like all the blank characters except tab, space in UTF-8 locales), and even some of those that support multi-byte characters can be fooled by zero-width or double-width characters (like U+3000 which is in the [:blank:] class in typical GNU locales at least). So one would have to do the TAB expansion by hand like: perl -Mopen=locale -MText::CharWidth=mbswidth -pe 's{^\h+}{ $s = $&; while ($s =~ /(.*?)\t(.*)/) { $s = $1 . (" " x ((7-mbswidth($1)) % 8 + 1)) . $2; } "-" x mbswidth($s)}e' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220233/"
]
} |
364,313 | If I have a Bash script like: function repeat { while :; do echo repeating; sleep 1 done}repeat &echo running once running once is printed once but repeat 's fork lives forever, printing endlessly. How should I prevent repeat from continuing to run after the script which created it has exited? I thought maybe explicitly instantiating a new bash -c interpreter would force it to exit as its parent has disappeared, but I guess orphaned processes are adopted by init or PID 1. Testing this using another file: # repeat.bashwhile :; do echo repeating; sleep 1; done# fork.bashbash -c "./repeat.bash & echo an exiting command" Running ./fork.bash still causes repeat.bash to continue to run in the background forever. The simple and lazy solution is to add the line to fork.bash : pkill repeat.bash But you had better not have another important process with that name, or it will also be obliterated. I wonder, if there is a better or accepted way to handle background jobs in forked shells that should exit when the script (or process) that created them has exited? If there is no better way than blindly pkilling all processes with the same name, how should a repeating job that runs alongside something like a webserver be handled to exit? I want to avoid a cron job because the script is in a git repository, and the code should be self-contained without changing system files in /etc/ . | This kills the background process before the script exits: trap '[ "$pid" ] && kill "$pid"' EXITfunction repeat { while :; do echo repeating; sleep 1 done}repeat &pid=$!echo running once How it works trap '[ "$pid" ] && kill "$pid"' EXIT This creates a trap . Whenever the script is about to exit, the commands in single-quotes will be run. That command checks to see if the shell variable pid has been assigned a non-empty value. If it has, then the process associated with pid is killed. pid=$! This saves the process id of the preceding background command ( repeat & ) in the shell variable pid . Improvement As Patrick points out in the comments, there is a chance that the script could be killed after the background process starts but before the pid variable is set. We can handle that case with this code: my_exit() { [ "$racing" ] && pid=$! [ "$pid" ] && kill "$pid"}trap my_exit EXITfunction repeat { while :; do echo repeating; sleep 1 done}racing=Yrepeat &pid=$!racing=echo running once | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136107/"
]
} |
364,401 | I typed help suspend and got this short explanation: suspend: suspend [-f] Suspend shell execution. Suspend the execution of this shell until it receives a SIGCONT signal. Unless forced, login shells cannot be suspended. Options: -f force the suspend, even if the shell is a login shell Exit Status: Returns success unless job control is not enabled or an error occurs. How I understand this is: I type suspend and the terminal freezes, not even strg + c can unfreeze it. But when I open another terminal and search for the PID for the frozen one and type kill -SIGCONT PID-NR a SIGCONT signal is send to the frozen terminal and thaws it up, so that it gets unfrozen. But, what is the actual purpose of suspending a terminal? Which every day applications are typical for it? What did the people who made it a shell builtin have in mind? | If you start a shell from another shell, you can suspend the inner one. Say when using su , and wanting to switch back to the regular user for a moment: user$ suPassword: ...root# do somethingroot# suspenduser$ do something as the ordinary user againuser$ fgroot# ... (If you do that, don't forget the privileged shell open in the background...) Similarly, if you escape to a shell from some other program (the ! command in e.g. less ), you can still suspend the shell. But I wouldn't expect many other programs to handle it nicely when they launch a subprocess, which then suspends itself. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/364401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
364,412 | I just got a debian 8 VPS and I'm trying to run a python script as a service, I wrote this (everithing was done via ssh logged as root) : [Unit]Description=My Script ServiceAfter=multi-user.target[Service]Type=idleExecStart=/usr/bin/python /web/cmcreader/test.py > /web/cmcreader/test.log 2>&1[Install]WantedBy=multi-user.target I've put it in /lib/systemd/system I then chmoded it : chmod 644 /lib/systemd/system/cmcreader.service I then tried to activate it using : systemctl daemon-reloadsystemctl enable cmcreader.service however the last command returns ( enable ) : Failed to execute operation: Invalid argument What did I do wrong?Thanks. | Starting at man systemd.directives you can find the docs for any systemd directive. Here you can find that ExecStart= is documented in man systemd.service . The docs there say: redirection using "<", "<<", ">", and ">>", pipes using "|", running programs in the background using "&", and other elements of shell syntax are not supported. They are also not usually needed. systemd already runs apps in the background by default, so you don't need & . It also automatically captures output to STDOUT and STDERR and logs it for you, so you don't need to redirect the output to a log file either. Just use journalctl -u cmcreader to view the logs for your service, or journalctl to view all the logs. If you aren't sure about the syntax of a systemd file, you can use: systemd-analyze verify ./path/to/your.service Also, service files you create go in /etc/systemd/system . The /lib directory is for service file installed by packages, not humans. Finally, enable doesn't start the service, it just runs the [Install] section, setting your app to start at boot. To start the service use systemctl start your.service . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160341/"
]
} |
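As a hedged illustration of the points above, a cleaned-up unit (reusing the paths and names from the question) would drop the redirection and rely on the journal, roughly:

# /etc/systemd/system/cmcreader.service
[Unit]
Description=My Script Service
After=multi-user.target

[Service]
Type=idle
# No '>' redirection and no '&': systemd captures stdout/stderr into the journal
# and runs the process in the background by itself.
ExecStart=/usr/bin/python /web/cmcreader/test.py

[Install]
WantedBy=multi-user.target

# Validate, reload, start, then follow the log output:
systemd-analyze verify /etc/systemd/system/cmcreader.service
sudo systemctl daemon-reload
sudo systemctl enable --now cmcreader.service
journalctl -u cmcreader -f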
364,454 | I'm running Debian Stretch with MATE desktop 1.16.1. Since I have update the package mate-settings-daemon I can't change my background, it shows the wallpaper used for the login screen. I've tried to change it in the usual way through Control Center -> Appearance -> Background but when I select a new wallpaper nothing happens. I've also tried to change some configurations with dconf editor but again with no success. Here are the packages about MATE desktop that I have in my system: ii atril 1.16.1-2 amd64 MATE document viewerii atril-common 1.16.1-2 all MATE document viewer (common files)ii caja 1.16.2-2 amd64 file manager for the MATE desktopii caja-common 1.16.2-2 all file manager for the MATE desktop (common files)ii compiz-mate 1:0.9.13.0+16.10.20160818.2-5 amd64 OpenGL window and compositing manager - MATE integrationii debian-mate-default-settings 1.16.1-1 all Default settings for MATE on Debianii engrampa 1.16.0-2 amd64 archive manager for MATEii engrampa-common 1.16.0-2 all archive manager for MATE (common files)ii eom 1.16.0-2 amd64 Eye of MATE graphics viewer programii eom-common 1.16.0-2 all Eye of MATE graphics viewer program (common files)ii gir1.2-mate-panel 1.16.2-1 amd64 GObject introspection data for MATE panelii libamd2:amd64 1:4.5.4-1 amd64 approximate minimum degree ordering library for sparse matricesii libatrildocument3 1.16.1-2 amd64 MATE document rendering libraryii libatrilview3 1.16.1-2 amd64 MATE document viewing libraryii libcamd2:amd64 1:4.5.4-1 amd64 symmetric approximate minimum degree library for sparse matricesii libccolamd2:amd64 1:4.5.4-1 amd64 constrained column approximate library for sparse matricesii libcolamd2:amd64 1:4.5.4-1 amd64 column approximate minimum degree ordering library for sparse matricesii libmate-desktop-2-17:amd64 1.16.1-1 amd64 Library with common API for various MATE modules (library)ii libmate-menu2:amd64 1.16.0-2 amd64 implementation of the freedesktop menu specification for MATE (library)ii libmate-panel-applet-4-1 1.16.2-1 amd64 library for MATE Panel appletsii libmate-sensors-applet-plugin0 1.16.1-1 amd64 Library for plugins for the mate-sensors-applet packageii libmate-slab0:amd64 1.16.1-1 amd64 beautification app libraryii libmate-window-settings1:amd64 1.16.1-1 amd64 utilities to configure the MATE desktop (window settings library)ii libmatedict6 1.16.0-1 amd64 MATE desktop utilities (matedict library)ii libmatekbd-common 1.16.0-2 all MATE library to manage keyboard configuration (common files)ii libmatekbd4:amd64 1.16.0-2 amd64 MATE library to manage keyboard configurationii libmatemixer-common 1.16.0-2 all Mixer library for MATE Desktop (common files)ii libmatemixer0:amd64 1.16.0-2 amd64 Mixer library for MATE Desktopii libmateweather-common 1.16.1-2 all MateWeather shared library (common files)ii libmateweather1:amd64 1.16.1-2 amd64 MateWeather shared libraryii marco 1.16.0-1 amd64 lightweight GTK+ window manager for MATEii marco-common 1.16.0-1 all lightweight GTK+ window manager for MATE (common files)ii mate-applet-brisk-menu 0.3.5-0ubuntu1 amd64 Solus Project's Brisk Menu MATE Panel Appletii mate-applet-topmenu 0.3-1 amd64 Topmenu applet for the MATE panelii mate-applets 1.16.0-1 amd64 Various applets for the MATE panelii mate-applets-common 1.16.0-1 all Various applets for the MATE panel (common files)ii mate-backgrounds 1.16.0-1 all Set of backgrounds packaged with the MATE Desktop Environmentii mate-control-center 1.16.1-1 amd64 utilities to configure the MATE desktopii mate-control-center-common 1.16.1-1 
all utilities to configure the MATE desktop (common files)ii mate-desktop 1.16.1-1 amd64 Library with common API for various MATE modulesii mate-desktop-common 1.16.1-1 all Library with common API for various MATE modules (common files)ii mate-desktop-environment 1.16.0+1 all MATE Desktop Environment (metapackage)ii mate-desktop-environment-core 1.16.0+1 all MATE Desktop Environment (essential components, metapackage)ii mate-desktop-environment-extras 1.16.0+1 all MATE Desktop Environment (extra components, metapackage)ii mate-icon-theme 1.16.2-1 all MATE Desktop icon themeii mate-icon-theme-faenza 1.16.0+dfsg1-2 all MATE Faenza Desktop icon themeii mate-indicator-applet 1.16.0-1 amd64 MATE panel indicator appletii mate-indicator-applet-common 1.16.0-1 all MATE panel indicator applet (common files)ii mate-media 1.16.0-1 amd64 MATE media utilitiesii mate-media-common 1.16.0-1 all MATE media utilities (common files)ii mate-menu 16.10.1-2 all Advanced MATE menuii mate-menus 1.16.0-2 amd64 implementation of the freedesktop menu specification for MATEii mate-notification-daemon 1.16.1-1 amd64 daemon to display passive popup notificationsii mate-notification-daemon-common 1.16.1-1 all daemon to display passive popup notifications (common files)ii mate-panel 1.16.2-1 amd64 launcher and docking facility for MATEii mate-panel-common 1.16.2-1 all launcher and docking facility for MATE (common files)ii mate-polkit:amd64 1.16.0-2 amd64 MATE authentication agent for PolicyKit-1ii mate-polkit-common 1.16.0-2 amd64 MATE authentication agent for PolicyKit-1 (common files)ii mate-power-manager 1.16.2-1 amd64 power management tool for the MATE desktopii mate-power-manager-common 1.16.2-1 all power management tool for the MATE desktop (common files)ii mate-screensaver 1.16.1-1 amd64 MATE screen saver and lockerii mate-screensaver-common 1.16.1-1 all MATE screen saver and locker (common files)ii mate-sensors-applet 1.16.1-1 amd64 Display readings from hardware sensors in your MATE panelii mate-sensors-applet-common 1.16.1-1 all Display readings from hardware sensors in your MATE panel (common files)ii mate-session-manager 1.16.1-1 amd64 Session manager of the MATE desktop environmentii mate-settings-daemon 1.16.2-1 amd64 daemon handling the MATE session settingsii mate-settings-daemon-common 1.16.2-1 all daemon handling the MATE session settings (common files)ii mate-system-monitor 1.16.0-2 amd64 Process viewer and system resource monitor for MATEii mate-system-monitor-common 1.16.0-2 all Process viewer and system resource monitor for MATE (common files)ii mate-terminal 1.16.2-1 amd64 MATE terminal emulator applicationii mate-terminal-common 1.16.2-1 all MATE terminal emulator application (common files)ii mate-themes 3.22.6-1 all Official themes for the MATE desktopii mate-tweak 16.10.5-1 all MATE desktop tweak toolii mate-user-guide 1.16.0-1 all User documentation for MATE Desktop Environmentii mate-utils 1.16.0-1 amd64 MATE desktop utilitiesii mate-utils-common 1.16.0-1 all MATE desktop utilities (common files)ii mozo 1.16.0-1 all easy MATE menu editing toolii pluma 1.16.1-1 amd64 official text editor of the MATE desktop environmentii pluma-common 1.16.1-1 all official text editor of the MATE desktop environment (common files)ii python-mate-menu 1.16.0-2 amd64 implementation of the freedesktop menu specification for MATE (Python bindings)ii task-mate-desktop 3.39 all MATE | I have the same problem. A quick workaround could be to use feh: feh --bg-scale <imagefile.jpg> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221775/"
]
} |
364,456 | WWN = world wide name Seagate Constellation ES, model ST3500514NS, a 500 GB 3.5" SATA drive It has "serial number", 9WJxxxxx which is eight characters. It has WWN 5000C5002E47xxxx which is a 16 characters. Both are printed on the label on the hard disk drive. WD model WD4001FFSX, a 4 TB SATA drive It has "serial number" WMC5D0Dxxxxx which is 12 characters. It has WWN 50014EE003Fxxxxx which is 16 characters. HGST, model HUC109060CSS600, a 300 GB 2.5" SAS drive It has "serial number" KWJTxxx, also eight characters. It has WWN... I don't know; it is not printed on the label and not plugged into the system to find out. For inventory, we generally write down and track the following, which can always be gathered from the label on the drive: Manufacturer Model number Serial number Size in GB or TB, and connection type which is either SATA or SAS Location where in use, or in storage when not in use The problem arises obviously 1, 2, 3 years later when an inventory sheet shows whatever hard drive. You are pretty sure it's in a running server, but you don't want to shut down the server to pull the hard drive to read the label. How do you get the serial number of the drive that corresponds to what is on the label? udevadm info --query=all --name=/dev/sda has ID_SERIAL , but that is the WWN. We don't want another field to track the 16 characters of the WWN as an identifier... And I already hate writing down the long serial numbers of WD drives. Is there a way in Linux to extract the serial number of the drive? I believe it is possible because years ago the RAID storage manager GUI we had been using nicely reported the eight-character serial numbers of Seagate drives that were in use. And that RAID hardware has listed a bunch of Seagate-specific hard disk drives that were "officially supported", and if memory serves, really no other make/model of drives. Is it possible this is hard disk drive firmware related, meaning it can be done on certain make drives and not others? | Assuming the disk supports SMART, you should be able to retrieve the disk serial number using smartctl -i /dev/sdX | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154426/"
]
} |
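Two quick ways to pull the label-style serial number from a live system, assuming smartmontools and a reasonably recent util-linux are installed (the exact field wording can differ between ATA and SAS drives):

# Serial number as reported by the drive firmware:
smartctl -i /dev/sda | grep -i 'serial number'

# Serial and WWN for every disk at once:
lsblk -d -o NAME,MODEL,SERIAL,WWN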
364,468 | I bought a disk with some bad sectors, planning to fix them and then use it as part of RAID 6 cluster. I can do bad sector fixing under Windows, there are very good bad block fixing tools, but under Windows, the process is very slow, one sector fix takes 15 minutes. In my experience, Linux is better at dealing with devices that don't respond in time and this results in a far faster process under Linux. However, I checked the fsck manual, but did not find any useful option for surface & bad block scanning or bad block reallocation. How can I scan the surface of my hard disk and fix/reallocate bad sectors in Linux from the command line? | This answer is about magnetic disks. SSDs are different. Also, this is disk with no data (or no data you care to preserve) on it; see my answer to “Can I fix bad blocks on my hard disk with a single command” for what to do if you have important data on the disk. Disks made since at least the late 90s manage bad blocks themselves. In brief, a disk will handle a bad block by transparently replacing it with a spare sector. It will do so if (a) while reading, it discovers the block is "weak", but ECC is enough to recover the data; (b) while writing, it discovers the sector header is bad; (c) while writing, if a read previously detected the sector as bad, but the data was not recoverable. The disk firmware typically lets you monitor this process (the counts at least) via SMART attributes. Typically there will be at least a count of reallocated sectors and two counts of pending (discovered bad on read, ECC failed, has not yet been written to). There are two ways to get the disk to notice bad sectors: Use smartctl -t offline /dev/sdX to tell the disk firmware to do an offline surface scan. You then just leave the disk alone (completely idle will be fastest) until it's done (check the "Offline data collection status" in smartctl -c /dev/sdX ). This will typically update the "offline uncorrectable" count in SMART. (Note: drives can be configured to automatically run an offline check routinely.) Have Linux read the entire disk, e.g., badblocks -b 4096 -c 1024 -s /dev/sdX . This will typically update the "current pending sector" count in SMART. Either of the above may also increase the reallocated sector count—this is case (b), the ECC recovered the data. Now, to recover the sectors you just need to write to them. Normally, that'd be a simple pv -pterba /dev/zero > /dev/sdX (or just plain cat , or dd ) but you plan to make these part of a RAID array. The RAID init will write to the entire disk anyway, so that's pointless. The only exception the beginning and end of the disk—it's possible a few tens of megabytes will be missed (due to alignment, headers, etc.). So: disk=/dev/sdXend=$(echo "$(/sbin/blockdev --getsize64 "$disk")/4096-32768" | bc)dd if=/dev/zero bs=4096 count=32768 of="$disk" # first 128 MiBdd if=/dev/zero bs=4096 seek="$end" count=32768 of="$disk" # last 128 MiB I think I managed to avoid the all-to-easy fencepost error 1 above, so that should blank the first and last 128MiB of the disk. Then let mdadm raid init write the rest. It's harmless (except for trivial wear, and wasting hours of time) to zero the whole disk if you'd like to, though. Another thing to do, if your disks support it: smartctl -l scterc,40,100 (or whatever numbers) to tell the disk that you want it to give up on correcting read errors quicker—40 would be 4 seconds. 
The two numbers are read errors and write errors; mdraid will easily correct read errors via parity (and write the failed sector back to the disk to let it reallocate). Write errors, though, will fail the disk out of the array. PS: Make sure to keep an eye on the reallocated sectors count. That attribute going to failed is bad news. And if its continuously increasing, that's bad news too. PPS: Make sure your RAID arrays are scrubbed (every sector read and all the parity verified) routinely. Many distros already ship a script that does this monthly. This will detect & repair any new bad blocks as otherwise seldom-read bad blocks can linger and ultimately cause rebuild failure. 1 Fencepost error—a type of off-by-one error from failing to count one of the ends. Named from if you have a fence post every 3ft, how many fence posts in a 9ft freestanding fence? The correct answer is 4; the fencepost error is 3 and is from not counting the post at the beginning or at the end. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208148/"
]
} |
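To keep an eye on the counters mentioned in the answer above while scanning and zeroing, something along these lines works on most drives (attribute names differ slightly between vendors):

# Reallocated, pending and offline-uncorrectable counters:
smartctl -A /dev/sdX | egrep -i 'Reallocated|Pending|Uncorrectable'

# Progress of the offline surface scan started with `smartctl -t offline`:
smartctl -c /dev/sdX | grep -A1 'Offline data collection status'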
364,502 | Say that a process made a system call to open a file, when the Linux kernel executes this system call, the Linux kernel should add the fd for the opened file to the process fd table that made the system call. How does the Linux kernel knows which process made this system call when the arguments passed to the system call do not include the PID? | A kernel system call executes within the context of the calling process, just at a different privilege level and with different support infrastructure. The Linux kernel has a per-CPU variable which tracks the current process, current_task ; it uses that whenever it needs to know what the current process is. On a given CPU, the current task only changes when the scheduler decides, and the context switch takes care to save all the necessary information so that the kernel can keep track of what’s happening where. LWN has a couple of useful articles on syscalls, Anatomy of a system call part 1 and part 2 . They explain how system calls are defined and how they are executed, albeit perhaps not with enough detail to actually answer your question since they don’t cover the transition from user-space to kernel-space in detail; but that’s “just” whatever trap-based transition support is available on the CPU. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230989/"
]
} |
364,517 | I understand chmod and chown and how the permission bits work, but there is another permission system inside Linux, ACL with setfacl and getfacl , so this makes me wonder. What's the difference between those two permission control systems? Do they interfere with each other? | One is not better than the other, they are just different methods and ways of thinking. You can use both permission systems on the same path without problems. They interfere with each other when modifying owner, owning group and other permissions: when setting the current value for these from setfacl, it will actually set the POSIX permission, not the ACL one. POSIX permissions only allow an owner, an owning group and an "everyone" permission, while ACLs allow multiple "owning" users and groups. ACLs also allow setting default permissions for new files in a folder. You can add more permission management on top of both with AppArmor or SELinux for stricter control. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226061/"
]
} |
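A small sketch showing the two systems side by side (user, group and directory names are placeholders):

# Classic permission bits: exactly one owner, one group, and "other"
chown alice:devs project/
chmod 750 project/

# ACLs layered on top: a second user plus a default ACL for files created later
setfacl -m u:bob:rwx project/
setfacl -d -m g:devs:rwX project/
getfacl project/          # shows both the classic bits and the ACL entries

Once a path carries an ACL, ls -l marks it with a trailing + after the permission bits.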
364,649 | My goal make i3 start one browser per monitor on a dual monitor setup. I can't find how to start a browser and move it to the target monitor. I've dig through the doc and tried in ~/.i3/config exec --no-startup-id i3-msg 'workspace 1 ; move workspace to output HDMI1 ; exec chromium --new-window "http://url/1" ; workspace 2 ; move workspace to output HDMI2 ; exec chromium --new-window "http://url/2"' But both windows appear on 1st monitor leaving the second one blank. What did I miss ? Xorg is configured as follow: Section "Monitor" Identifier "HDMI1" Option "Primary" "true"EndSectionSection "Monitor" Identifier "HDMI2" Option "LeftOf" "HDMI1"EndSection EDIT: I've added to ~/.i3/config workspace 1 output HDMI1workspace 2 output HDMI2 I've tried exec --no-startup-id i3-msg 'workspace 1; exec xeyes'exec --no-startup-id i3-msg 'workspace 2; exec xclock' or exec --no-startup-id i3-msg 'workspace 1; exec xeyes; workspace 2; exec xeyes' Always the same result, both apps start on last selected workspace. | You could assign specific class names to your Chromium instances and tie them to workspaces. So with 2 monitors config: workspace 1 output HDMI1workspace 2 output HDMI2for_window [class="^chromium-no-1$"] move workspace number 1for_window [class="^chromium-no-2$"] move workspace number 2 You'll need to start 2 browser instances with specific class values: $ chromium-browser --class=chromium-no-1$ chromium-browser --class=chromium-no-2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10406/"
]
} |
364,655 | Typing the following in Bash: env | grep USER and set | grep USER gives both times the same username. How do I know, for instance when typing echo $USER if the shell or the environment variable has been displayed? | For POSIX-compatible shells (including Bash), the standard says: 2.5.3 Shell Variables Variables shall be initialized from the environment [...] If a variable is initialized from the environment, it shall be marked for export immediately; see the export special built-in. New variables can be defined and initialized with variable assignments, [etc.] And about export : export name[=word]... The shell shall give the export attribute to the variables corresponding to the specified names, which shall cause them to be in the environment of subsequently executed commands. So from the shell's point of view, there are only variables. Some of them may have come from the environment when the shell was started, and some of them may be exported to the environment of the processes the shell starts. (The "environment" is really just a bunch of strings passed to the process when it starts. When the process is running, it can do whatever it likes with that, use it, ignore it, overwrite it. And what a process passes on when starting other processes can be yet another thing, though of course it's usual to just pass all of the environment variables along again.) If you were using some non-POSIX shell, such as csh , things might be different: $ csh% echo $foofoo: Undefined variable.% setenv foo bar% echo $foobar% set foo=asdf% echo $fooasdf% env |grep foofoo=bar% exit | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
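A short demonstration of the distinction; in the USER case both commands match simply because the login process exported USER into the shell's starting environment:

var=hello              # plain shell variable
set | grep '^var='     # the shell sees it ...
env | grep '^var='     # ... but it is not in the environment yet

export var             # mark it for export
env | grep '^var='     # now child processes will inherit it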
364,660 | I'm going through this book , Advanced Linux Programming by Mark Mitchell, Jeffrey Oldham, and Alex Samuel. It's from 2001, so a bit old. But I find it quite good anyhow. However, I got to a point when it diverges from what my Linux produces in the shell output. On page 92 (116 in the viewer), the chapter 4.5 GNU/Linux Thread Implementation begins with the paragraph containing this statement: The implementation of POSIX threads on GNU/Linux differs from the thread implementation on many other UNIX-like systems in an important way: on GNU/Linux, threads are implemented as processes. This seems like a key point and is later illustrated with a C code. The output in the book is: main thread pid is 14608child thread pid is 14610 And in my Ubuntu 16.04 it is: main thread pid is 3615child thread pid is 3615 ps output supports this. I guess something must have changed between 2001 and now. The next subchapter on the next page, 4.5.1 Signal Handling, builds up on the previous statement: The behavior of the interaction between signals and threads varies from one UNIX-like system to another. In GNU/Linux, the behavior is dictated by the fact that threads are implemented as processes. And it looks like this will be even more important later on in the book. Could someone explain what's going on here? I've seen this one Are Linux kernel threads really kernel processes? , but it doesn't help much. I'm confused. This is the C code: #include <pthread.h>#include <stdio.h>#include <unistd.h>void* thread_function (void* arg){ fprintf (stderr, "child thread pid is %d\n", (int) getpid ()); /* Spin forever. */ while (1); return NULL;}int main (){ pthread_t thread; fprintf (stderr, "main thread pid is %d\n", (int) getpid ()); pthread_create (&thread, NULL, &thread_function, NULL); /* Spin forever. */ while (1); return 0;} | I think this part of the clone(2) man page may clear up the difference re. the PID: CLONE_THREAD (since Linux 2.4.0-test8) If CLONE_THREAD is set, the child is placed in the same thread group as the calling process. Thread groups were a feature added in Linux 2.4 to support the POSIX threads notion of a set of threads that share a single PID. Internally, this shared PID is the so-called thread group identifier (TGID) for the thread group. Since Linux 2.4, calls to getpid(2) return the TGID of the caller. The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID (*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes. Signal handling was another problematic area with the old implementation, this is described in more detail in the paper @FooF refers to in their answer . As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/364660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
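You can see the shared PID (TGID) versus per-thread TIDs from the shell; <pid> and your_program are placeholders for the process in question:

# One line per thread: the PID column is the TGID, the LWP column the thread id
ps -eLf | grep your_program

# Equivalent view through procfs: one task directory per thread
ls /proc/<pid>/task/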
364,669 | Working on Linux Mint 18.1, VirtualBox 5.0.40_Ubuntu. I have a VDI file from a VirtualBox VM: ~/VirtualBox\ VMs/Win10x64/Win10x64.vdi I've taken a Snapshot: ~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi I want to mount the guest's HDD from the snapshot . I can successfully mount the base VDI using qemu-nbd : qemu-nbd -c /dev/nbd0 ~/VirtualBox\ VMs/Win10x64/Win10x64.vdi But if I try with the Snapshot file: qemu-nbd -c /dev/nbd0 ~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi it fails with: unsupported VDI image (non-NULL link UUID) I did notice the --snapshot parameter for qemu-nbd but this doesn't seem to be the right thing. How can I mount the HDD as it is in the snapshot? Edit #1 I've also tried vdfuse , but again, doesn't seem to be any way of "applying" the differencing disk. | I think this part of the clone(2) man page may clear up the difference re. the PID: CLONE_THREAD (since Linux 2.4.0-test8) If CLONE_THREAD is set, the child is placed in the same thread group as the calling process. Thread groups were a feature added in Linux 2.4 to support the POSIX threads notion of a set of threads that share a single PID. Internally, this shared PID is the so-called thread group identifier (TGID) for the thread group. Since Linux 2.4, calls to getpid(2) return the TGID of the caller. The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID (*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes. Signal handling was another problematic area with the old implementation, this is described in more detail in the paper @FooF refers to in their answer . As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/364669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119822/"
]
} |
364,671 | I did the following in BASH: while true;do bash;done I wrote this one liner, but I was not sure at first whether it: stays in the main shell and fathers as many subshells as it runs dry of memory or some other stuff. the main shell fathers a subshell, than this subshells fathers a subshell until this lineage runs dry of memory or some other stuff. But, I suppose it is the second case, because once I run the one liner, I got shortly after my prompt back and I began to type exit and another exit and exit, exit, exit...and I still was not back in the main shell. Now, since so many subshells have been opened and each one is a program, I thought each one should have its own PID. So I did: ps aux | grep bash expecting to see a lot of processes with bash in their names. However, nothing like this, there were only two bash shells. How is it possible, I guess I have somewhere a very wrong idea of processes, shells, subshells and PIDs, but do not know where. | I think this part of the clone(2) man page may clear up the difference re. the PID: CLONE_THREAD (since Linux 2.4.0-test8) If CLONE_THREAD is set, the child is placed in the same thread group as the calling process. Thread groups were a feature added in Linux 2.4 to support the POSIX threads notion of a set of threads that share a single PID. Internally, this shared PID is the so-called thread group identifier (TGID) for the thread group. Since Linux 2.4, calls to getpid(2) return the TGID of the caller. The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID (*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes. Signal handling was another problematic area with the old implementation, this is described in more detail in the paper @FooF refers to in their answer . As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/364671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
364,733 | Is it possible to remove rather than adding substring to a filename using bash brace expansion? Considering the following scenario, one can add a suffix to a filename by using the below technique: mv offlineimap.conf{,.minimal} What it does is renaming offlineimap.conf to offlineimap.conf.minimal which is very handy esp. for making backup files (eg. swith .bak extension). But is it also possible to subtract substrings from given filenames, like so: mv offlineimap.conf.minimal{,-minimal} Here I use - as a hypothetical special character to substract the substring. I expect the second technique to result in offlineimap.conf removing the .minimal suffix from the name of an existing file. | To use a brace command to remove a suffix, such as .minimal from the file offlineimap.conf.minimal , use: mv offlineimap.conf{.minimal,} More on brace expansion The idea here is that brace expansion creates a series of strings using the comma-separated list of strings between the braces: $ echo a{b,c}ab ac In your first use, the first of the two strings is empty: $ echo a{,c}a ac In the desired solution, we switch it so that the second of the two strings is empty: $ echo a{b,}ab a Or: $ echo offlineimap.conf{.minimal,}offlineimap.conf.minimal offlineimap.conf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
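Brace expansion only generates the two literal names, so for stripping the same suffix from many files a loop with parameter expansion is the usual companion (a sketch, assuming the files all end in .minimal):

for f in *.minimal; do
    mv -- "$f" "${f%.minimal}"    # "${f%.minimal}" is $f with the suffix removed
done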
364,782 | I have a service that stopped suddenly. I tried to restart that service but failed and was asked to run: systemctl daemon-reload . What does it exactly do? What is a daemon-reload ? | man systemctl says: daemon-reload Reload systemd manager configuration. This will rerun all generators (see systemd.generator(7)), reloadall unit files, and recreate the entire dependency tree. While the daemon is being reloaded, all socketssystemd listens on behalf of user configuration will stay accessible. This command should not be confused with the reload command. So, it's a "soft" reload, essentially; taking changed configurations from filesystem and regenerating dependency trees . Consequently, systemd.generator states: Generators are small binaries that live in /usr/lib/systemd/user-generators/ and other directories listedabove. systemd(1) will execute those binaries very early at bootup and at configuration reload time — beforeunit files are loaded. Generators can dynamically generate unit files or create symbolic links to unit filesto add additional dependencies, thus extending or overriding existing definitions. Their main purpose is toconvert configuration files that are not native unit files dynamically into native unit files. Generators are loaded from a set of paths determined during compilation, listed above. System and usergenerators are loaded from directories with names ending in system-generators/ and user-generators/,respectively. Generators found in directories listed earlier override the ones with the same name indirectories lower in the list. A symlink to /dev/null or an empty file can be used to mask a generator,thereby preventing it from running. Please note that the order of the two directories with the highestpriority is reversed with respect to the unit load path and generators in /run overwrite those in /etc. After installing new generators or updating the configuration, systemctl daemon-reload may be executed. Thiswill delete the previous configuration created by generators, re-run all generators, and cause systemd toreload units from disk. See systemctl(1) for more information. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/364782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231172/"
]
} |
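A typical edit-and-reload cycle, with the unit name as a placeholder (systemctl edit needs a reasonably recent systemd):

sudo systemctl edit --full myapp.service   # or edit the unit file directly
sudo systemctl daemon-reload               # re-read unit files, rerun generators
sudo systemctl restart myapp.service       # daemon-reload alone does not restart anything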
364,826 | I have attempted to write a script that renames a list of directories in a folder. It is a bash script and I am only using awk to accomplish this task. Current form: [2015] Name of the album Desired form: Name of the album - [2015] My script: #! /usr/bin/env bashfor f in \[*; do mv -t "$f" "$( awk -F '\] ' ' {print $2 " - " $1 "]"}' <<<"$f" )"done When I execute the above script, I get the following error: mv: cannot stat 'In Dreams [EP] - [1963]': No such file or directory | You don't want -t in your situation. That will tell mv to try to move the desired name to the directory named $f . It would expand to something like: mv -t "[2015] Name of the album" "Name of the album - [2015]" which would be likely what you want without the -t . As is it treats $f as the target directory name into which all the other arguments should be moved. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231209/"
]
} |
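For reference, a corrected version of the loop without -t and without awk; it assumes every name really starts with a bracketed year followed by a space:

#!/usr/bin/env bash
for f in \[*; do
    year="${f%%] *}]"            # "[2015]"
    rest="${f#* }"               # "Name of the album"
    mv -- "$f" "$rest - $year"
done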
364,850 | I'm trying to set up my Linux console - the bare TTY terminal without X.I tried to capture this problem with asciinema but interestingly, it didn't show up there, So I captured it with my own camera, here is a link to the video . It doesn't appear in [n]vim only, It is completely random and it appears sometimes on the command line as well. I'm pretty sure it has nothing to do with the font. Has anyone ever encountered such a strange behavior before? Edit: more info : I'm using ArchLinux and I think there was a problem with the way I installed the OS. In the past, I made a terrible mistake which deleted almost all files in /usr/ . Afterwards, I decided I don't need to reinstall the filesystem For example and I only need to reinstall the gnu-core programs and the kernel with pacstrap . This problem appeared after that. Troubleshooting: I tried reset and it doesn't help. I tried LC_ALL=en_US.UTF-8 nvim test.txt and LC_ALL=C nvim test.txt in order to see if it's related to Locale settinga as well and it doesn't help either. | You don't want -t in your situation. That will tell mv to try to move the desired name to the directory named $f . It would expand to something like: mv -t "[2015] Name of the album" "Name of the album - [2015]" which would be likely what you want without the -t . As is it treats $f as the target directory name into which all the other arguments should be moved. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135796/"
]
} |
364,898 | While almost all text-editors can view .json files, I am sure there are some or one which show .json files in all its glory. Could somebody share name or names of native .json viewers rather than text-editors which can also show .json files. I am looking to know about the earliest tools which are/were used to view .json files. Looking to see raw .json output rather than pretty output. | jq is a json processor (like sed for json) which can also be used to pretty-print json documents. cat yourfile.json | jq | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
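Two equivalent ways to get a plain pretty-printed dump without relying on jq's no-filter default:

jq . file.json                   # explicit identity filter, works on any jq version
python3 -m json.tool file.json   # same idea using only the Python standard library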
364,922 | I'm having a problem where grep gets confused when the directory contains a file starting with dashes. For example, I have a file named " ------.js " . When I do something like grep somestring * I get the error: grep: unrecognized option '------.js'Usage: grep [OPTION]... PATTERN [FILE]...Try 'grep --help' for more information. This seems like the kind of question that would be asked all over the internet, but I can't find anything. I can manually resolve the problem with something like find . | while read f; do grep MYSTRING "$f"; done but I'm wondering if there's a simpler / more robust solution. I'm running Arch Linux. | As an addition to Romeo's answer, note that grep pattern --whatever is required by POSIX to look for pattern in the --whatever file. That's because no options should be recognised after non-option arguments (here pattern ). GNU grep in that instance is not POSIX compliant. It can be made compliant by passing the POSIXLY_CORRECT environment variable (with any value) into its environment. That's the case of most GNU utilities and utilities using a GNU or compatible implementation of getopt() / getopt_long() to parse command-line arguments. There are obvious exceptions like env , where env VAR=x grep --version gets you the version of grep , not env . Another notable exception is the GNU shell ( bash ) where neither the interpreter nor any of its builtins accept options after non-option arguments. Even its getopts cannot parse options the GNU way. Anyway, POSIXLY_CORRECT won't save you if you do grep -e pattern *.js (there, pattern is not a non-option argument, it is passed as an argument to the -e option, so more options are allowed after that). So it's always a good idea to mark the end of options with -- when you can't guarantee that what comes after won't start with a - (or + with some tools): grep -e pattern -- *.jsgrep -- pattern *.js or use: grep -e pattern ./*.js (note that grep -- pattern * won't help you if there's a file called - , while grep pattern ./* would work. grep -e "$pattern" should be used instead of grep "$pattern" in case $pattern itself may start with - ). There was an attempt in the mid-90s to have bash be able to tell getopt() which arguments (typically the ones resulting from a glob expansion) were not to be treated as options (via a _<pid>_GNU_nonoption_argv_flags_ environment variable), but that was removed as it was causing more problems than it solved. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22426/"
]
} |
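A variation on the original command that sidesteps the option parsing problem in one go; find supplies the file list and the -- ends grep's options:

find . -maxdepth 1 -type f -name '*.js' -exec grep -H -- somestring {} +

The -H forces the file name prefix even if only one file ends up matching.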
364,923 | I am trying to write a one line shell command (bash) to save a list of movies to a CSV file. I am okay with using a script if needed. My folders are laid out as follows: -Movies/ --A/ ----After Earth (2013).mkv --B/ ----Batman (1989).mkv Using this command: ls Movies/* | grep '.mkv' | cut -d. -f1 This provides me with a list of movies that I know to be of the type MKV. 12 Monkeys (1995) 2001 - A Space Odyssey (1968) A Million Ways To Die In The West (2014) A Series of Unfortionate Events (2004) Bad Company (2002) I would eventually however like to end up with a CSV file that looks as follows: MOVIE_NAME, MOVIE_YEAR, FILE_TYPE, CREATED_DATE (YYYY-MM-DD) Perhaps something involving sed or awk may be necessary? | As an addition to Romeo's answer, note that grep pattern --whatever is required by POSIX to look for pattern in the --whatever file. That's because no options should be recognised after non-option arguments (here pattern ). GNU grep in that instance is not POSIX compliant. It can be made compliant by passing the POSIXLY_CORRECT environment variable (with any value) into its environment. That's the case of most GNU utilities and utilities using a GNU or compatible implementation of getopt() / getopt_long() to parse command-line arguments. There are obvious exceptions like env , where env VAR=x grep --version gets you the version of grep , not env . Another notable exception is the GNU shell ( bash ) where neither the interpreter nor any of its builtins accept options after non-option arguments. Even its getopts cannot parse options the GNU way. Anyway, POSIXLY_CORRECT won't save you if you do grep -e pattern *.js (there, pattern is not a non-option argument, it is passed as an argument to the -e option, so more options are allowed after that). So it's always a good idea to mark the end of options with -- when you can't guarantee that what comes after won't start with a - (or + with some tools): grep -e pattern -- *.jsgrep -- pattern *.js or use: grep -e pattern ./*.js (note that grep -- pattern * won't help you if there's a file called - , while grep pattern ./* would work. grep -e "$pattern" should be used instead of grep "$pattern" in case $pattern itself may start with - ). There was an attempt in the mid-90s to have bash be able to tell getopt() which arguments (typically the ones resulting from a glob expansion) were not to be treated as options (via a _<pid>_GNU_nonoption_argv_flags_ environment variable), but that was removed as it was causing more problems than it solved. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231280/"
]
} |
364,927 | With NetworkManager, how do I set the currently connected connection on my device (say wlp2s0 ) as metered? How do I unset it in such a way that guessing of the metered/unmetered state will still occur? Note: some hotspots will be metered (eg my phone) and some won't (eg home), so setting this on the device isn't what I'm after. | I really hope that this isn't the best answer: it seems convoluted in the simple case, and even more so if allowing for a binary SSID. Anyways, here goes: Get device's current connection nmcli -t -f GENERAL.CONNECTION --mode tabular device show $DEVICE | head -n1 -t is required as there is a space appended at the end otherwise (!?). Show current metered status nmcli -f connection.metered connection show $CONNECTION Where $CONNECTION is the string returned by the previous command. Change metered status The valid statuses are yes , no , and unknown . unknown is the default, which will do the guessing based on things like DHCP option ANDROID_METERED (reference) . Example: set $CONNECTION to be metered: nmcli connection modify $CONNECTION connection.metered yes Allowing for binary SSIDs To do this "right" (allowing for 32 arbitrary octets in the SSID), you'll need to use the device 's GENERAL.CON-PATH : nmcli -t -f GENERAL.CON-PATH --mode tabular device show DEVICE | tail -n1 This will return a string like: /org/freedesktop/NetworkManager/ActiveConnection/39 Then use this path to get connection 's GENERAL.CON-PATH . NB: this has a different value (a device's CON-PATH == connection's GENERAL.DBUS-PATH ) nmcli -t -f GENERAL.CON-PATH -m tabular connection show apath /org/freedesktop/NetworkManager/ActiveConnection/39 This will return a string like: /org/freedesktop/NetworkManager/Settings/5 (note no ActiveConnection ). This can be then used to modify the connection: nmcli connection modify /org/freedesktop/NetworkManager/Settings/<NUMBER> connection.metered <VALUE> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364927",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
364,950 | I prefer programming on vim , however , I heard that vim is not suitable for java development. Is it true? Is it possible to do it without IDE effectively? If yes, how is it possible? | Huge vim user myself, I faced the same question when moving in to Android Development a few years back. Originally I used Eclipse while Android Studio was still in development. Once Android Studio hit full release, the switch from Eclipse to Android Studio was another pain, but now Android Studio feels "good" to use. All I can say is download Android Studio and spend the time needed to use and learn the IDE. Android Studio is made for, and developed to make Android Apps. If you are expanding from HTML / Javascript / CSS / PHP / "Something else" to Android, do not waste your time trying to set-up anything else. You will have nothing but issues and hand made scripts and other stuff to remember and manage. With Android Studio instead of that layer of complexity you will be using a menu option or even a shortcut key. Personally I still use vim for everything except Android Apps. Android Studio is simply another tool that needs to be learned to do the job. The only pain in the butt now is the constant updates, but the medium is still moving pretty quick and that is just how things go in App development. Without Android Studio you will need to update those scripts or need remember to download something else because something changed. Keep it simple and use the correct tool for the job. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168027/"
]
} |
364,984 | What method does the login shell use to read /etc/profile ? | It is sourced. The difference between executing and sourcing is explained in this post . The important difference here is that sourcing causes the commands in the sourced file to be run in the current shell. That means that any variables defined in the file will now be available in the shell. To illustrate the difference, try the following: $ cat foo ## a simple file with a variable definitionvar="hello"$ chmod +x foo ## make file executable$ ./foo ## execute$ echo "$var" ## var is not set in the parent shell$ . foo ## source$ echo "$var" ## var is now set in the parent shellhello So, since /etc/profile needs to be able to affect the shell it was read from, it is sourced and not executed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/364984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231324/"
]
} |
364,997 | In a program which I am writing I want to offer the functionality to open the directory where the file which I am currently processing is located and automatically select that file (so that the user does not need to search for it). I know that I can open a directory in the default file manager using xdg-open /path/to/directory I know that I can open a directory in nautilus and select a file using nautilus /path/to/file.txt I thought that I could use xdg-mime query default inode/directory to get the default file manager and - if it is nautilus - call it as shown above.But, despite the fact that nautilus is the default on my system ( xdg-open opens nautilus and so does the places menu in the gnome shell), xdg-mime returns Thunar.desktop .(I have tried find / -name Thunar.desktop -mount 2>/dev/null but it did not find anything.) Also, I do not know how to open a directory and select a sub directory in nautilus (with the above mentioned approach it would open the subdirectory). How can I open a directory in the default filemanager and select a file in that directory (if selecting a directory, too, was possible that would be great, but for this application not needed)or at least find out the default filemanager so that I can call it directly? | 1. To open a directory and select a subdirectory/file in nautilus: nautilus --select path/to/file/or/directory From nautilus(1) man page : -s, --select Select specified URI in parent folder. 2. xdg-mime returns Thunar.desktop but xdg-open opens nautilus xdg-mime uses mimeapps.list to determine the default application to use. Separate mimeapps.list files exist to handle user-specific, system-specific and distribution-specific requirements. Their lookup order can be found over here . mimeapps.list lists default applications for a given mimetype under [Default Applications] section. It allows to list multiple default applications in their decreasing order of preference. For example : [Default Applications]mimetype1 = default1.desktop;default2.desktop; where mimetype1 is the mime type and *.desktop are the desktop files. xdg-open searches for desktop file down the lookup order, across the preference list till it finds a valid desktop file. If no such file is found across all the files then the most preferred one according to the associations is chosen and is used as default application. So in case of our example, let us suppose default1.desktop is not present on our system, so xdg-open will try to open our file using default2.desktop . However, xdg-mime returns default1.desktop which is the first entry in our mimeapps.list file. In your case default1.desktop must be Thunar.desktop hence the output. However it is not installed on your system. So xdg-open opens your file/directories using nautilus which is present on your system. To verify this, you can check your mimeapps.list file for line containing inode/directory . For Ubuntu 17.10, the location of mimeapps.list file is : /usr/share/applications/defaults.list NOTE: The complete algorithm to determine 'Default Applications' can be found here . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/364997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231334/"
]
} |
365,023 | I'm learning CentOS/RHEL and currently doing some stuff about process management. The RHCSA book I'm reading describes running kill 1234 as sending SIGQUIT. I always thought the kill command without adding a switch for signal type should default to kill -15 SIGTERM is kill -15 and SIGKILL is kill -9 , right? Does CentOS/RHEL use a slightly different method of kill -15 or have I just been mistaken? EDIT: kill -l gives SIGQUIT as kill -3 and it seems to be associated with using the keyboard to terminate a process. man 7 signal also states that SIGQUIT is kill -3 , so I can only assume that my book is wrong in stating that SIGQUIT is kill -15 default. | No, they're not the same. The default action for both is to terminate the process, but SIGQUIT also dumps core. See e.g. the Linux man page signal(7) . kill by default sends SIGTERM, so I can only imagine that the mention of SIGQUIT being default is indeed just a mistake. That default is in POSIX , and so are the numbers for SIGTERM, SIGKILL and SIGQUIT. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/365023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/204207/"
]
} |
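A quick way to see the difference from a shell, assuming core dumps are enabled; where the core ends up depends on /proc/sys/kernel/core_pattern (it may be handed to systemd-coredump or apport instead of appearing as a ./core file):

ulimit -c unlimited
sleep 100 &
kill -QUIT $!     # signal 3: terminates and dumps core
sleep 100 &
kill -TERM $!     # signal 15, the default for plain `kill`: terminates, no core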
365,085 | I have some script to be executed on remote server( server-2 ) through ssh from server-1 and I have to write that output into a log file named file.log on server-1 . I am trying this: sc.sh echo 'testing'cp $HOME/dir1/file1 $HOME/dir2 Now, executing the sc.sh through ssh: sshpass -p 'psswd' ssh username@server-2 "bash -s" < sc.sh | tee -a file.logif [ $? -eq 0]; then echo "successfully executed the script file" . . .else echo "failed copying due to incorrect path"fi Now because of tee -a file.log command, it will always return 0 even though my commands in script file fails. How can I write into log file and should check the if condition after ssh command which should work based on the ssh commands exit code? | checking ${PIPESTATUS[0]} worked for me... if [ ${PIPESTATUS[0]} -eq 0 ]; then echo "successfull"fi echo ${PIPESTATUS[*]} prints the exit codes of all the pipeline commands. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227732/"
]
} |
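An alternative to inspecting PIPESTATUS is to let the pipeline's own exit status reflect the failing stage; like PIPESTATUS this is bash-specific (shown with the question's placeholder credentials):

set -o pipefail
sshpass -p 'psswd' ssh username@server-2 "bash -s" < sc.sh | tee -a file.log
if [ $? -eq 0 ]; then
    echo "successfully executed the script file"
else
    echo "remote script failed"
fi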
365,095 | Server side: nc -l -p 192.168.1.229 1234 Client side: nc 192.168.1.229 1234 But it cannot connect. Why? ~# nc 192.168.1.229 1234(UNKNOWN) [192.168.1.229] 1234 (?) : Connection refused | On the server side you shouldn't provide its IP address. Server side should be: nc -l -p 1234 Client side should be nc 192.168.1.229 1234 Note that the source of the problem might be a firewall/router between those two machines which filter-out traffic on the port you are using. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231411/"
]
} |
365,114 | I am trying a naive: $ cat * | sort -u > /tmp/bla.txt which fails with: -bash: /bin/cat: Argument list too long So in order to avoid a silly solution like (creates an enormous temporary file): $ find . -type f -exec cat {} >> /tmp/unsorted.txt \;$ cat /tmp/unsorted.txt | sort -u > /tmp/bla.txt I thought I could process files one by one using (this should reduce memory consumption, and be closer to a streaming mechanism): $ cat proc.sh#!/bin/shold=/tmp/old.txttmp=/tmp/tmp.txtcat $old "$1" | sort -u > $tmpmv $tmp $old Followed then by: $ touch /tmp/old.txt$ find . -type f -exec /tmp/proc.sh {} \; Is there a simpler, more unix-style replacement for: cat * | sort -u when the number of files reaches MAX_ARG ? It feels awkward writing a small shell script for such a common task. | A simple fix, works at least in Bash, since printf is builtin, and the command line argument limits don't apply to it: printf "%s\0" * | xargs -0 cat | sort -u > /tmp/bla.txt ( echo * | xargs would also work, except for the handling of file names with white space etc.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32896/"
]
} |
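find can also do the batching itself with the '+' terminator, which keeps each cat invocation under the argument-list limit and copes with odd file names:

find . -type f -exec cat {} + | sort -u > /tmp/bla.txt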
365,124 | I have a program that is parallelized using MPI. It thinks that it is able to run across multiple nodes on our (CentOS 6.6)-based HPC grid, when in actual fact it only runs successfully on multiple cores of the same compute node . e.g. If I qsub a job to the grid asking for 20 cores, and Grid Engine decides to split it over two different nodes, the program fails. However, if there is a node with 20 cores available, and Grid Engine sends it all to that one, the program runs successfully. The qsub script contains the command #$ -pe mpi 20 to select the number of cores. So at the moment, I do a qstat -f -u "*" to manually identify a compute node with 20 available cores, and submit to that node with qsub -q general.q@node-X-X What I am looking for is a way to tell Grid Engine to wait and only submit the job to a single compute node that has the required number of available cores. This will allow me to automate my job submission. I am considering writing a bash script to parse the qstat -f -u "*" command, but there must be a more elegant solution. I have looked through the qsub manual but am unable to find a suitable flag or command line argument. I'm not able to modify the program itself at this time and I am not a system administrator. Here is some information on the different software versions I have available: MPI/gridengine info: > ompi_info | grep gridengineMCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.2) Grid engine version is: OGS/GE 2011.11p1 | A simple fix, works at least in Bash, since printf is builtin, and the command line argument limits don't apply to it: printf "%s\0" * | xargs -0 cat | sort -u > /tmp/bla.txt ( echo * | xargs would also work, except for the handling of file names with white space etc.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191891/"
]
} |
365,225 | Is there a clean, simple way to get an IP address for a network interface from /proc , similar to the way I can get the MAC address for a network interface? Ideally I would just type cat /proc/<foo>/{interface_name} and get the IPv4 address. I'd rather not run anything other than cat . | Under the /proc directory, you can also find the IPv4 addresses in the Forwarding Information Base table, at /proc/net/fib_trie The table is pretty intelligible doing a mere cat , first comes the Main: and then Local: cat /proc/net/fib_trie or to see your network, IP addresses and netmask: cat /proc/net/fib_trie | grep "|--" | egrep -v "0.0.0.0| 127." |-- 193.136.1.0 |-- 193.136.1.2 |-- 193.136.1.255 |-- 193.136.1.0 |-- 193.136.1.2 |-- 193.136.1.255 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/365225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115294/"
]
} |
365,235 | After installing package touchegg my laptop will not boot. I want to boot to terminal and run apt-get purge touchegg . How can I boot to terminal and/or otherwise run this command? I can get as far as the login screen. I login > desktop starts to load > Freeze. | Before loging in press Ctrl + Alt + F1 to change to tty1 and use your username and password to login. After that you can use history to get the last commands used, and that should give you a hint on what caused the issue, and possibly a solution. In this particular case what did it, somehow, was : sudo apt-get purge gimp inkscape touchegg | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104015/"
]
} |
365,263 | A CentOS 7 server needs to have a new user created with a specific home directory and shell defined as follows, taken from the instructions at this link : sudo /usr/sbin/useradd --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucket However, when that command is run on a CentOS 7 server, the command fails with the following error: useradd: cannot create directory /opt/atlassian/bitbucket Similarly, creating the /opt/atlassian/bitbucket directory before-hand results in the following error: useradd: warning: the home directory already exists.Not copying any file from skel directory into it. What specific changes need to be made to these commands, so that the new atlbitbucket user can successfully be created? The Complete Terminal Output: The following is the complete series of commands and responses in the CentOS 7 terminal: Manually Creating The Directories: login as: [email protected]'s password:Last login: Mon May 15 14:00:18 2017[my_sudoer_user@localhost ~]$ sudo mkdir /opt/atlassian/[sudo] password for my_sudoer_user:[my_sudoer_user@localhost ~]$ sudo mkdir /opt/atlassian/bitbucket[my_sudoer_user@localhost ~]$ sudo /usr/sbin/useradd --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucketuseradd: warning: the home directory already exists.Not copying any file from skel directory into it.[my_sudoer_user@localhost ~]$ sudo rmdir /opt/atlassian/bitbucket[my_sudoer_user@localhost ~]$ sudo rmdir /opt/atlassian/ The Recommended useradd Syntax: [my_sudoer_user@localhost ~]$ sudo /usr/sbin/useradd --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucketuseradd: user 'atlbitbucket' already exists[my_sudoer_user@localhost ~]$ sudo userdel -r atlbitbucketuserdel: atlbitbucket home directory (/opt/atlassian/bitbucket) not found[my_sudoer_user@localhost ~]$ sudo /usr/sbin/useradd --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucketuseradd: cannot create directory /opt/atlassian/bitbucket[my_sudoer_user@localhost ~]$ adduser Instead Of useradd I then tried @terdon's suggestion from this other posting to use adduser instead, but got the same error, as follows: [my_sudoer_user@localhost ~]$ sudo userdel -r atlbitbucket[sudo] password for my_sudoer_user:userdel: atlbitbucket mail spool (/var/spool/mail/atlbitbucket) not founduserdel: atlbitbucket home directory (/opt/atlassian/bitbucket) not found[my_sudoer_user@localhost ~]$ sudo adduser --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucketadduser: cannot create directory /opt/atlassian/bitbucket[my_sudoer_user@localhost ~]$ Shorter Syntax: Then I tried @rajcoumar's suggestion from the same other posting , but got the same following results: [my_sudoer_user@localhost ~]$ sudo userdel -r atlbitbucketuserdel: atlbitbucket mail spool (/var/spool/mail/atlbitbucket) not founduserdel: atlbitbucket home directory (/opt/atlassian/bitbucket) not found[my_sudoer_user@localhost ~]$ sudo useradd -m -d /opt/atlassian/bitbucket -s /bin/bash atlbitbucketuseradd: cannot create directory /opt/atlassian/bitbucket[my_sudoer_user@localhost ~]$ Elevating To root : I even upgraded to root just to see if the problem could be resolved by running the command as root, but I still got the following error: [my_sudoer_user@localhost ~]$ su -Password:Last login: Mon May 15 14:07:11 PDT 2017 on ttyS0[root@localhost ~]# /usr/sbin/useradd --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucketuseradd: cannot create 
directory /opt/atlassian/bitbucket[root@localhost ~]# | The useradd code calls a mkdir library function to (attempt to) create the specified directory. useradd checks the return code, but only for being non-zero; in this case, I suspect that mkdir is returning ENOENT -- A directory component in pathname does not exist or is a dangling symbolic link because the parent directory (/opt/atlassian) didn't exist, or had been removed during your attempts to add the user. As Kusalananda / roaima point out, the simplest solution here is to create the parent directory structure before calling useradd: sudo mkdir -p /opt/atlassian sudo /usr/sbin/useradd --create-home --home-dir /opt/atlassian/bitbucket --shell /bin/bash atlbitbucket | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
365,285 | I'm using Linux CentOS 7 Server and I already installed OpenVPN and NordVPN servers which I use to connect my Linux to. After establishing the VPN Connection, immediately my SSH access got disconnected. How to allow SSH access to the server while it's connected to VPN Server? And how to make it work whenever the server is rebooted? I used this tutorial on my setup: https://nordvpn.com/tutorials/linux/openvpn/ | I were able to find a solution for my issue by: when you connect to the Server by its public IP address, the return packets get routed over the VPN. You need to force these packets to be routed over the public eth0 interface. These route commands should do the trick: ip rule add from x.x.x.x table 128ip route add table 128 to y.y.y.y/y dev eth0ip route add table 128 default via z.z.z.z Where x.x.x.x is your Server public IP, y.y.y.y/y should be the subnet of your Server public IP address, eth0 should be your Server public Ethernet interface,and z.z.z.z should be the default gateway. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227199/"
]
} |
365,291 | I've configured tmp in /etc/fstab like this: tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 2 The problem is that now /tmp is limited to half of the machine's memory, and when it reaches that limit I'm getting "no space left on device" error. I'd like to make it "unlimited", i.e. grow to the size of the disk. | I get the impression you have a few misconceptions regarding tmpfs . You might find it useful to read the kernel documentation on the topic; I’ll attempt to clarify things for you here. Your question’s title “ tmpfs does not overflow to swap” doesn’t seem to reflect the actual contents of your question, but in any case tmpfs does use swap, although arguably it doesn’t overflow to swap. tmpfs is fundamentally a (virtual) memory-based file system; its contents live in memory only, but since they’re swappable the kernel can store them in swap instead of physical memory if necessary. Nevertheless tmpfs file systems can’t be larger than the total amount of virtual memory available, i.e. physical RAM and swap, as indicated e.g. by free -h . By default tmpfs file systems have a maximum size equal to half the amount of physical memory available. You can increase this using the size parameter, but again it can’t ever be more than the available physical memory and swap (although that limit isn’t enforced at mount time). Once the file system reaches its maximum size (or rather, contains files occupying that much space), it reports that it’s run out of space, as you found out. tmpfs itself doesn’t support overflowing anywhere when it runs out of space. If you need temporary storage space for large files, you should use /var/tmp rather than /tmp . You really don’t want a very large tmpfs file system, that’s a recipe for disaster when it fills up (the kernel’s usual ways of recovering memory don’t work in a tmpfs ). (If you have lots of RAM of course, a large tmpfs can work. I run a few systems with build tmpfs file systems sized at 75% of RAM, out of 32GiB, 64GiB or even more.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6442/"
]
} |
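A minimal sketch of the fstab change implied by the answer above (365,291), assuming a cap of 2 GiB is wanted — the size value is purely illustrative:

    # /etc/fstab — give /tmp an explicit ceiling instead of the default half of RAM
    tmpfs /tmp tmpfs defaults,noatime,mode=1777,size=2G 0 2
    # resize an already-mounted tmpfs on the fly (no reboot needed)
    mount -o remount,size=2G /tmp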
365,328 | I want to grep lines containing - 0 amount - 122,000,000 amount - 50,000 amount I've tried to use grep -rhI " - * amount ". but it will output anything containing - regardless of whether it contains amount The problem is I only want strings with - * amount where * indicates anything in between - (here) amount . It is acceptable if the lines only containing " - " are dismissed. But it doesn't work. | The * will match zero or more of the preceding character or pattern. This means that - * amount will match - amount - amount - amount (etc.) To match the numbers, as you have written them, use - [0-9,]+ amount as the pattern. The + will force at least one match of the preceding regular expression, and [0-9,] will match any digit or a comma. Given the following file: - 0 amount - 122,000,000 amount - 50,000 amount - amount - amount - some amount This will work: $ grep -E -e '- [0-9,]+ amount' file - 0 amount - 122,000,000 amount - 50,000 amount The -E is needed because + is an extended regular expression operator, and the -e is needed because otherwise the - in the pattern would be interpreted as an option to grep ( -e means the next thing is a regular expression for grep to apply to its input). Without -E , use \{1,\} in place of + . You may also anchor the pattern to the start and end of the line: $ grep -E '^ - [0-9,]+ amount$' file This means that the -e is no longer needed and will ensure that the line starts with a space followed by the dash and the the number etc. as before. The string amount must be the last thing on the line (we anchor the word to the end of the line with $ ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218655/"
]
} |
365,336 | So per POSIX specification we have the following definition for * : Expands to the positional parameters, starting from one, initially producing one field for each positional parameter that is set. When the expansion occurs in a context where field splitting will be performed, any empty fields may be discarded and each of the non-empty fields shall be further split as described in Field Splitting. When the expansion occurs in a context where field splitting will not be performed, the initial fields shall be joined to form a single field with the value of each parameter separated by the first character of the IFS variable if IFS contains at least one character, or separated by a if IFS is unset, or with no separation if IFS is set to a null string. For a vast majority of people we are aware of the famous ARG_MAX limitation: $ getconf ARG_MAX2621440 which may lead to: $ cat * | sort -u > /tmp/bla.txt-bash: /bin/cat: Argument list too long Thankfully the good people behind bash ([include all POSIX-like others]) provided us with printf as a built-in, so we can simply: printf '%s\0' * | sort -u --files0-from=- > /tmp/bla.txt And everything is transparent for the user. Could someone please let me know why this is so trivial to bypass the ARG_MAX limitation using a built-in command and why it is so damn hard to provide a conforming POSIX shell interpreter which would handle gracefully * special parameter to a standalone executable: $ cat * Would that break something ? I am not asking bash people to provide cat as a built-in, I am solely interested in the order of operations and why is * expanded in different behavior depending whether the command is build-in or is a standalone executable. | The limitation is not in the shell but in the exec() family of functions. The POSIX standard says in relation to this : The number of bytes available for the new process' combined argument and environment lists is {ARG_MAX} . It is implementation-defined whether null terminators, pointers, and/or any alignment bytes are included in this total. To run utilities that are built into the shell, the shell will not need to call exec() , so it is unaffected by this limitation. Notice, too, that it's not simply the length of the command line that is limited, but the combination of the length of the command, its arguments, and the current environment variables and their values. Also notice that printf is not a built in utility in e.g. pdksh (which happens to act as sh and ksh on OpenBSD). Relying on it being a built-in will need to take the specific shell which is being used into account. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32896/"
]
} |
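For the ARG_MAX discussion above (365,336), a hedged sketch of staying under the exec() limit without leaning on shell built-ins — find packs as many file names into each cat invocation as the limit allows:

    # note: unlike *, this also matches dot files; GNU/BSD find assumed for -maxdepth
    find . -maxdepth 1 -type f -exec cat {} + | sort -u > /tmp/bla.txt
    getconf ARG_MAX    # the limit itself, for reference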
365,369 | I used ls -lS command in my home directory. This command is supposed to list down the contents of a directory by size. This is what I got after running this command total 10148-rw-rw-r-- 1 rahul rahul 8053159 May 15 15:35 Costa_ODE.pdf-rw-rw-r-- 1 rahul rahul 1755507 May 15 17:33 gnuplot.pdf-rw-rw-r-- 1 rahul rahul 218048 May 13 22:14 out.log-rw-rw-r-- 1 rahul rahul 98131 Feb 16 01:53 hs_err_pid8639.log-rw-rw-r-- 1 rahul rahul 12364 Apr 19 14:01 Untitled 1.csvdrwxr-xr-x 4 rahul rahul 12288 Jun 6 2016 cfitsio-rw-r--r-- 1 rahul rahul 8980 Feb 7 2016 examples.desktopdrwxrwxr-x 2 rahul rahul 4096 Mar 10 12:24 bindrwxrwxr-x 8 rahul rahul 4096 May 8 14:51 boxfitv2drwxrwxrwx 2 rahul rahul 4096 Jan 30 11:50 dao2drwxrwxr-x 2 rahul rahul 4096 Mar 12 2016 deja-dupdrwxr-xr-x 6 rahul rahul 4096 May 16 02:12 Desktopdrwxr-xr-x 3 rahul rahul 4096 May 15 10:53 Documentsdrwxr-xr-x 5 rahul rahul 4096 May 8 14:09 Downloads.... and its a pretty big list. But I want you to focus on sub-directories, for example Desktop. Its size is shown as 4096 bytes. But when I tried to see the details of Desktop, this is what I got. In short, the command ls -lS is not calculating the size of the contents of Desktop and other sub-directories. Is there any way to do it? EDIT: Output of ls -lsh command total 10M4.0K drwxrwxr-x 2 rahul rahul 4.0K Mar 10 12:24 bin4.0K drwxrwxr-x 8 rahul rahul 4.0K May 8 14:51 boxfitv24.0K -rw-rw-r-- 1 rahul rahul 3.2K May 13 13:28 c.c 12K drwxr-xr-x 4 rahul rahul 12K Jun 6 2016 cfitsio7.7M -rw-rw-r-- 1 rahul rahul 7.7M May 15 15:35 Costa_ODE.pdf4.0K drwxrwxrwx 2 rahul rahul 4.0K Jan 30 11:50 dao2 0 -rw-rw-r-- 1 rahul rahul 0 May 13 20:37 default.txt4.0K drwxrwxr-x 2 rahul rahul 4.0K Mar 12 2016 deja-dup4.0K drwxr-xr-x 6 rahul rahul 4.0K May 16 17:11 Desktop4.0K drwxr-xr-x 3 rahul rahul 4.0K May 15 10:53 Documents4.0K drwxr-xr-x 5 rahul rahul 4.0K May 8 14:09 Downloads 12K -rw-r--r-- 1 rahul rahul 8.8K Feb 7 2016 examples.desktop... Output of du -sh ~/Desktop command 80M /home/rahul/Desktop | ls -lS is indeed showing the true size of the directory : the directory itself + references to any file contained in the given directory. You could use du instead of ls : du -ha -d 1 | sort -hr du : estimates file space usage recursively for directories h : human readable a : all content, not just directory d 1 : max depth 1, so you only check for the directories within the current directory sort -hr : sorts it decreasingly | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196929/"
]
} |
365,380 | I'm using Centos 7 Server And I Would Like To Save ip Rule And Route Whenever Server Rebooted. ip rule add from x.x.x.x table 128ip route add table 128 to y.y.y.y/y dev eth0ip route add table 128 default via z.z.z.z The mentioned Rule and Route lose once i reboot the server which means i need to run the 3 commands each time server rebooted. I need to make ip rule and route persist whenever server is rebooted. | Take a look at /etc/rc.d/rc.local . The file states Please note that you must run chmod +x /etc/rc.d/rc.local to ensure that this script will be executed during boot. So: chmod +x /etc/rc.d/rc.local Then place your commands above the last line touch /var/lock/subsys/local There is better way using relevant configuration files. Rules and routes can be specified using corresponding file names. All the relevant configuration files are given below. (The device names may differ.) /etc/iproute2/rt_tables/etc/sysconfig/network/etc/sysconfig/network-scripts/ifcfg-eth0/etc/sysconfig/network-scripts/ifcfg-eth1/etc/sysconfig/network-scripts/route-eth0/etc/sysconfig/network-scripts/route-eth1/etc/sysconfig/network-scripts/rule-eth0/etc/sysconfig/network-scripts/rule-eth1 To create a named routing table, use /etc/iproute2/rt_tables . I added 128 mynet . ## reserved values#255 local254 main253 default0 unspec## local#128 mynet The EL 7.x /etc/sysconfig/network file. The default route is GATEWAY . NETWORKING=yesHOSTNAME=hostname.sld.tldGATEWAY=10.10.10.1 THE EL 7.x /etc/sysconfig/network-scripts/ifcfg-eth0 file, without HWADDR and "UUID". This configures a static IP address for eth0 without using NetworkManager . DEVICE=eth0TYPE=EthernetONBOOT=yesNM_CONTROLLED=noBOOTPROTOCOL=noneIPADDR=10.10.10.140NETMASK=255.255.255.0NETWORK=10.10.10.0BROADCAST=10.10.10.255 THE EL 7.x /etc/sysconfig/network-scripts/ifcfg-eth1 file, without HWADDR and UUID . This configures a static IP address for eth1 without using NetworkManager . DEVICE=eth0TYPE=EthernetONBOOT=yesNM_CONTROLLED=noBOOTPROTOCOL=noneIPADDR=192.168.100.140NETMASK=255.255.255.0NETWORK=192.168.100.0BROADCAST=192.168.100.255 The EL 7.x /etc/sysconfig/network-scripts/route-eth1 file. The default route was already specified in /etc/sysconfig/network . 192.168.100.0/24 dev eth1 table mynetdefault via 192.168.100.1 dev eth1 table mynet The EL 7.x /etc/sysconfig/network-scripts/rule-eth1 file: from 192.168.100.0/24 lookup mynet Update for RHEL8 This method described above works with RHEL 6 & RHEL 7 as well as the derivatives, but for RHEL 8 and derivatives, one must first install network-scripts to use the method described above. dnf install network-scripts The installation produces a warning that network-scripts will be removed in one of the next major releases of RHEL and that NetworkManager provides ifup / ifdown scripts as well. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227199/"
]
} |
365,392 | For a bash script I'm making, I need to delete all but the newest 2 files in a directory. I decided to use ls -tUr > somefile.txt and then just tail the newest 2, move them somewhere else, rm * , and move them back. For some reason, somefile.txt actually shows up (among the rest of the files) inside somefile.txt which screws up using the tail command. So my question is, logically ls would return the current folder/files and then be redirected into somefile.txt , but clearly this isn't happening since somefile.txt must exist before ls runs; why is this? | This answers the question "why does this happen?" Yes, somefile.txt will actually be created before ls is run. $ utility >file The first thing that happens is that the shell notices the redirection, creates file (or truncates it if it already exists), then it executes utility with its standard output stream going into file . The utility is not generally aware of or concerned with where its standard output is going (some, like ls , do check to see whether it's a TTY or not and changes their behaviour accordingly), so it doesn't matter whether it's writing to a regular file, pipe, device file or socket. It's certainly not concerned with creating this file. It is therefore the job of the shell to make sure that the plumbing is in place before the process is started, which includes creating or truncating the file named file in the example above. For answers dealing with your issue regarding deleting all but the newest files, see the question " remove oldest files " | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179574/"
]
} |
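A small demonstration of the ordering described above (365,392); the file names are arbitrary examples:

    # the shell creates listing.txt (empty) even though ls itself fails
    ls no-such-directory > listing.txt
    ls -l listing.txt    # a zero-byte file that existed before ls ever ran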
365,399 | I know that, given l="a b c" , echo $l | xargs ls yields ls a b c Which construct yields mycommand -f a -f b -f c | One way to do it: echo "a b c" | xargs printf -- '-f %s\n' | xargs mycommand This assumes a , b , and c don't contain blanks, newlines, quotes or backslashes. :) With GNU findutil you can handle the general case, but it's slightly more complicated: echo -n "a|b|c" | tr \| \\0 | xargs -0 printf -- '-f\0%s\0' | xargs -0 mycommand You can replace the | separator with some other character, that doesn't appear in a , b , or c . Edit: As @MichaelMol notes, with a very long list of arguments there is a risk of overflowing the maximum length of arguments that can be passed to mycommand . When that happens, the last xargs will split the list and run another copy of mycommand , and there is a risk of it leaving an unterminated -f . If you worry about that situation you could replace the last xargs -0 above by something like this: ... | xargs -x -0 mycommand This won't solve the problem, but it would abort running mycommand when the list of arguments gets too long. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/365399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52728/"
]
} |
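A quick way to check what the pipeline from the answer above (365,399) hands to the target command — substituting echo for the placeholder mycommand prints the constructed argument list:

    echo "a b c" | xargs printf -- '-f %s\n' | xargs echo mycommand
    # prints: mycommand -f a -f b -f c   (assuming a, b and c contain no whitespace or quotes)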
365,436 | Is there any way to dynamically choose the interpreter that's executing a script? I have a script that I'm running on two different systems, and the interpreter I want to use is located in different locations on the two systems. What I end up having to to is change the hashbang line every time I switch over. I would like to do something that is the logical equivalent of this (I realize that this exact construct is impossible): if running on system A: #!/path/to/python/on/systemAelif running on system B: #!/path/on/systemB#Rest of script goes here Or even better would be this, so that it tries to use the first interpreter, and if it doesn't find it uses the second: try: #!/path/to/python/on/systemAexcept: #!path/on/systemB#Rest of script goes here Obviously, I can instead execute it as /path/to/python/on/systemA myscript.py or /path/on/systemB myscript.py depending on where I am, but I actually have a wrapper script that launches myscript.py , so I would like to specify the path to the python interpreter programmatically rather than by hand. | No, that won't work. The two characters #! absolutely needs to be the first two characters in the file (how would you specify what interpreted the if-statement anyway?). This constitutes the "magic number" that the exec() family of functions detects when they determine whether a file that they are about to execute is a script (which needs an interpreter) or a binary file (which doesn't). The format of the shebang line is quite strict. It needs to have an absolute path to an interpreter and at most one argument to it. What you can do is to use env : #!/usr/bin/env interpreter Now, the path to env is usually /usr/bin/env , but technically that's no guarantee. This allows you to adjust the PATH environment variable on each system so that interpreter (be it bash , python or perl or whatever you have) is found. A downside with this approach is that it will be impossible to portably pass an argument to the interpreter. This means that #!/usr/bin/env awk -f and #!/usr/bin/env sed -f is unlikely to work on some systems. Another obvious approach is to use GNU autotools (or some simpler templating system) to find the interpreter and place the correct path into the file in a ./configure step, which would be run upon installing the script on each system. One could also resort to running the script with an explicit interpreter, but that's obviously what you're trying to avoid: $ sed -f script.sed | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/365436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230958/"
]
} |
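A minimal sketch of the env approach from the answer above (365,436); the interpreter name and the extra PATH entry are illustrative assumptions for a system where the interpreter lives in a non-standard place:

    # first line of myscript.py, identical on both systems:
    #   #!/usr/bin/env python
    # then make sure each system's PATH reaches its own interpreter, e.g. in ~/.profile:
    export PATH="/opt/custom/python/bin:$PATH"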
365,457 | I'm writing a bash script. I have a series of pipes working to get all the branches on a git repository: git ls-remote $1 'refs/heads/*' \ | rev \ | cut -d'/' -f1 \ | rev \ | if [ -z $2 ] then echo {} else echo {} > $2 fi Currently the if statement part of this doesn't work properly. What do I replace {} with to make this work? | To copy standard input to standard output, use cat , not echo . git ls-remote "$1" 'refs/heads/*' |sed 's~.*/~~' |if [ -z "$2" ] then cat else cat > "$2"fi Notice also the proper use of quotes and the placement of the pipes so you can avoid the backslashes. The use of sed is a very minor optimization but I also find it clearer than the double rev around a cut (provided you grok regex). You could also use awk -F/ '{ print $NF }' (but then that requires you to grok Awk). You could avoid the cat by doing this instead; ${2+exec >"$2"}git ls-remote "$1" | sed 's~.*/~~' (The failure if you pass in an empty string as the second argument should at least be more explicit, if not necessarily more helpful, than with [ -z , which fails to distinguish between an unset and an empty value.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
365,550 | I uploaded my source code to SVN repository. After committing I found many files starting with ._filename. How can I remove all those files starting with ._filename ? I have so many subfolders and each subfolder has same problem.It would be better for me to verify that only those files which match a particular pattern are deleted. So kindly help | find . -type f -name "._*" -print This will find and display the names of all the files matching the filename globbing pattern ._* in the current directory, or in any of its subdirectories. To remove them, change -print to -delete , or just add -delete to the end if you want to see what gets deleted. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/365550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231598/"
]
} |
365,558 | I want the color of my zsh prompt to be decided based on whether I'm inside a tmux session or not. In bash, it can be done by checking the value of $TMUX, but I can't find an equivalent method in zsh. Is it possible in zsh? | In zsh, the prompt_subst option is off by default. If you want to use variable substitutions in your prompt, turn it on. setopt prompt_substPS1='$foo' For $TMUX , though, you don't need this. The value doesn't change during the session, so you can initialize PS1 when the shell starts. setopt prompt_substif (($+TMUX)); then PS1='[tmux:${TMUX_PANE//\%/%%}] %# 'else PS1='[not tmux] %# 'fi Note that prompt expansion happens after variable susbtitution, this is why the percent signs in the variable's value need to be protected. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187585/"
]
} |
365,592 | I have a large set of JPEG pictures all with the same resolution. It would take too long to open each one inside the graphical interface of imagemagic or gimp. How do I achieve each picture being rotated and saved as the same filename? | You can use the convert command: convert input.jpg -rotate <angle in degrees> out.jpg To rotate 90 degrees clockwise: convert input.jpg -rotate 90 out.jpg To save the file with the same name: convert file.jpg -rotate 90 file.jpg To rotate all files: for photo in *.jpg ; do convert $photo -rotate 90 $photo ; done Alternatively, you can also use the mogrify command line tools (the best tool) recommended by @don-crissti : mogrify -rotate 90 *.jpg | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/365592",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
365,598 | when I do ls the output is in columns, however I need the output to be in only one column, line by line, one entry per line. So the only way I could come up with is: echo * | xargs -n1 echo Is this the standard way to achieve it or is this bad style? | That's bad, for all the reasons plain xargs is bad, namely it breaks with filenames containing whitespace or backslashes: $ touch "foo bar"$ echo * | xargs -n1 echofoobar Besides, it runs a copy of (the external) echo command for every file. In most shells you could use printf "%s\n" * to get the listing. Or ls -1 . However, the question is, what do you want to do with the list of files? Just look at them or use them in a script? For the latter, you're probably better off using for f in * ; do something with "$f" ; done or some variant of find ... -exec somecmd {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
365,619 | I am creating a backup tar files of each mount point: /var/usr/image/usr/image/temp I am trying to create tarnames like backup_var_date.tarbackup_usr_image_date.tarbackup_usr_image_temp_date.tar Trying something like below but "mv" not working "$i is a variable calling in for loop" /bin/tar -cvpzf backup_`echo`mv "$i" "${i//"/"/_}``_`hostname`.`date +%m.%d.%Y`.tar.gz | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231793/"
]
} |
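For the tar-naming question above (365,619), a hedged sketch of one way to turn mount-point paths into underscore-separated archive names; the mount points come from the question, everything else is illustrative and assumes bash for the ${var//pattern/repl} expansion:

    for i in /var /usr/image /usr/image/temp ; do
        name=${i#/}            # drop the leading slash
        name=${name//\//_}     # turn the remaining slashes into underscores
        tar -cvpzf "backup_${name}_$(hostname).$(date +%m.%d.%Y).tar.gz" "$i"
    done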
365,697 | I was wondering, is there any way to kill a process that is running on a specific IP and port on Ubuntu 14.04 on a local IP and port? Preferably, this would be in one command, but if not, a bash script would be perfectly fine as well. | There are likely cleaner ways, but something along the lines of: netstat -lnp | grep 'tcp .*127.0.0.1:9984' | sed -e 's/.*LISTEN *//' -e 's#/.*##' | xargs kill | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231851/"
]
} |
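Two shorter alternatives to the netstat pipeline above (365,697), in case lsof or fuser is installed; 127.0.0.1:9984 is just the example address from the question:

    kill $(lsof -t -i TCP@127.0.0.1:9984)   # -t prints bare PIDs
    fuser -k 9984/tcp                       # kills whatever listens on the port, any address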
365,698 | In the string wget -qO- What exactly am I doing? I've never seen a switch that has a dash after the expression. But, to install Docker, I am using wget -qO- https://get.docker.com | sh | There is nothing wrong or strange about this. -qO- is the same as -q (quiet) followed by -O - (output to standard output). See the wget manual . The fact that the options are squished together is common Unix practice, and an option that takes an option-argument doesn't usually need a space in-between. The fact that you're pulling something from the web and feeding it directly into sh should be more of a concern for you. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231852/"
]
} |
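Following on from the caution at the end of the answer above (365,698), a common hedge is to save the script and read it before handing it to sh; the file name is arbitrary:

    wget -qO get-docker.sh https://get.docker.com
    less get-docker.sh    # inspect what is about to be run
    sh get-docker.sh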
365,710 | Is it possible to unset the $1 variable? If not, I can't find out where it is explained in man . [root@centos2 ~]# set bon jour[root@centos2 ~]# echo $1$2bonjour[root@centos2 ~]# unset $1[root@centos2 ~]# echo $1$2bonjour[root@centos2 ~]# EDIT: Finaly, here is what I found out in man ( man set option double-dash ) to empty all the positional parameters (and the man used the word "unset"!): If no arguments follow this option, then the positional parameters are unset. [root@centos2 ~]# echo $1[root@centos2 ~]# set bon jour[root@centos2 ~]# echo $1$2bonjour[root@centos2 ~]# set --[root@centos2 ~]# echo $1$2[root@centos2 ~]# It's @Jeff Schaller's answer that helped me understand that. | You can't unset a positional parameter, but you may shift $2 into $1 , effectively removing the old value of $2 . $ set -- bon jour$ echo "$1$2"bonjour$ shift$ echo "$1$2" # $2 is now empty (i.e. "unset" or "not set")jour The shift command will shift all positional parameters one step lower. Command-line parsing loops (that does not use getopt / getopts ) commonly shift the positional parameters over each iteration while repeatedly examining the value of $1 .It is uncommon to want to unset a positional parameter. By the way, unset takes a variable name , not its value, so unset $1 will, in effect, unset the variable bon (had it been previously set). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103808/"
]
} |
365,740 | I recently started using CentOS. I went to try to use the killall utility but found it missing, with me receiving a command not found message when trying to use it. How can I get this functionality on my system so that I can, for instance, kill all processes whose names match a pattern? | The pkill utility is a much better alternative to killall . killall is not portable as the behavior of the command is very different across OSs. pkill is portable and behaves the same everywhere. It's also a lot more flexible as it provides a lot of different ways of matching the processes. It also shares the same matching behavior and arguments as the pgrep utility , which allows you to see what processes would be matched and signaled without actually signalling them. Usage: pkill foo (which would be the same as killall foo ) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/365740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163827/"
]
} |
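A small usage sketch for the pgrep/pkill pairing described above (365,740); foo is the placeholder name from the answer and the -f pattern is hypothetical:

    pgrep -a foo                  # preview the matches (-a, PID plus command line, is a procps/Linux extension)
    pkill foo                     # signal them, SIGTERM by default
    pkill -f 'python .*worker'    # -f matches against the full command line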
365,775 | When you increase the size of a an existing block device (eg /dev/sdc was given 20 gigs from vmware, now it has been increased to 40 gigs), you need to do some work for LVM to use the space. All the guides I've seen ( 1 , 2 , 3 ) suggest deleting the existing partition and recreate it with the extra space added on to the end, then run pvresize. Instead of resizing /dev/sdc1 then running pvresize /dev/sdc1 , you could also create /dev/sdc2 and run pvcreate /dev/sdc2 and add it to the VG. Is there a technical or performance reason why resizing is better than adding a new partition or is resizing just the way its always been done? | Summary: From a purely technical standpoint, it doesn't make much difference, but resizing is better. Once you add in practical aspects, adding a new partition is a clear winner. From a strictly technical standpoint, new PVs have a few downsides: You get another copy of the LVM metadata, this costs some disk space. If you frequently do LVM operations, this also causes write amplification (as the metadata is mirrored to all PVs) There may be empty space left for alignment Increases amount of LVM metadata (per copy), slightly, to store info about the new PV. If you create a LV that spans both PVs, it's not contiguous (due to the extra metadata and any holes left for alignment). So, e.g, if you were using that LV for big sequential reads/writes, there is now a seek in there. From a practical standpoint, none of those matter for any reasonable number of PVs. The extra copies of the metadata matter first, but there is an LVM option to keep fewer copies (VG --metadatacopies or PV --metadataignore ). Further, continuing the practical standpoint, deleting and recreating a partition is much more likely to suffer from admin error (typo, etc.) than creating a new partition due to better tools for the latter. And any admin errors that occur are likely to be much more destructive for resizing (because your data is on the resized partition, but there is no data on a new partition). This is even worse when you have multiple layers; e.g., mdraid below LVM. Depending on your -e option, the superblocks can be at the end of the array—fun when you resize a partition. There is one exception that comes to mind: if for whatever reason you can't create a new partition. For example, maybe you're using DOS partition tables, and you already used all four primary partitions (without creating an extended partition). Then you don't have a choice. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3852/"
]
} |
365,783 | EDIT: I should mention that my mouse and keyboard are wireless devices. I assume that doesn't matter since a Live version of Windows 7 had no problems with it, and the most basic part of my system (BIOS?) allows me to use my peripherals no problem. EDIT2: It turns out all of my devices work from my USB 3.0 ports, in case this helps anyone. I'll post as I find more. My very, very novice understanding of Linux is that I need to create a bootable .iso with some software that allows me to boot into a no-changes-saved, low memory version of the desired OS that loads and operates purely off RAM, known as a "Live version." Only from there can I do a full install, maybe b/c Windows can't format storage partitions to the whimsical, wonderful, phantasmagorical "Ext 4" filesystem? Welp, I can't do ANYTHING b/c once I manage to boot my Live OS from either my flash drive or external HDD, my keyboard and mouse no longer respond. I have to cut the power to turn off the computer (which I'm sure is not great). I've never used a Linux OS, but I've been trying to for the last week. No matter what bootable USB software I use to set up my device (Rufus, unetbootin, YUMI), I always run into the same problems, starting with a black screen. Monitors turned off automatically, receiving no signal once I got past the boot screen. Eventually I awkwardly found my way past the black screen problem via the "Grub" commands with Google results and found to either select "nomodeset" or edit whatever line of text ends with "splash --", inserting nomodeset in before the double-hash. My computer has an NVIDIA graphics card; I'm assuming that was the problem? Either way, now I just cannot interact with these OS's without a keyboard or mouse, even though I can see their swell-looking desktops. I really thought these community-based OS's caught up to the big-company ones. Am I doing shit wrong? :( This is basically what I see for a few seconds before the every OS loads its desktop. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231913/"
]
} |
365,810 | I'm using avconv for trimming and converting videos. Let's say I want to drop the first 7 and last 2.5 seconds of the video stream and one audio stream of an one-hour mts file: avconv -i input.mts -map 0:0 -map 0:3 -ss 0:0:07 -t 0:59:50.5 out.mov This works so far, but now I want to add two seconds of fading in and out at the beginning and the end by adding: -vf fade=type=in:start_frame=350:nb_frames=100 -vf fade=type=out:start_frame=178750:nb_frames=100 Those frames are calculated with the 50 fps that avconv reports for the video source. But there is neither fading in nor out. 1) What goes wrong with the video fading and how to do it right?2) How to add audio fading. There seems to be an -afade option. but I don't find it documented. Alternatively, you can propose a different tool for this goal (trim and fade video and audio), preferrably available as package for Debian 8. | I finally found the time to try the answer suggested by @Mario G., but it seemed extremely cumbersome. I need to do this many dozens of times. I read the documentation of ffmpeg and found it much more powerful than avconv , including fading for audio and video, so the solution is ffmpeg -i input.mts -map 0:0 -map 0:3 -ss 0:0:07 -to 0:59:57.5 -vf 'fade=t=in:st=7:d=2,fade=t=out:st=3595.5:d=2,crop=out_h=692' -af 'afade=t=in:st=7:d=2,afade=t=out:st=3595.5:d=2' out.mov So the st= and d= parameters for the fade take times in seconds, no need for converting to frames. I also discovered the -to option to take the end time directly instead of calculating the length. This command does all steps channel selection with -map , trimming with -ss and -to , video fading with -vf option fade=t=in and fade=t=out , audio fading with -af option afade=t=in and afade=t=out and cropping with -vf option crop= in a single step. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216004/"
]
} |
365,829 | First thing I noticed about Linux was the different ways of feeding data into a command, I know a few but I do not know how many there are in total. the ways I know so far: piping: echo -e "zorro\nwurst\nlahmacun\nsozialhilfe" | sort from keyboard with End of file signal: sort>gurken>zucker>braunkohle from file: sort foo HERE string (actually called HERE document): sort << HERE>wasserkopf>elternzeit>schuldeingeständnis>währungsreform>alabasterhautHERE Is this all or are there more ways to do it, why are there different ways at all, are some ways redundant? | To begin with, sort << HERE and so on is not a here-string. That's a here-document. The here-string uses the <<< operator. That's in bash (and some other shells). In general, there are two ways. One is through standard input (and redirections), and the other one is through parameters. Standard input and its redirections This is the stream that is by default connected to the terminal, in which a command executes and is associated with fd (file descriptor) 0. It's fed whatever is typed to the terminal. Though it (the input) can be redirected as in using the pipe | or using the here-documents or here-strings. Then the input doesn't come from the terminal, but is attached to the standard output of the command preceding the pipe. Different shells have different redirections, so for this category you should check the shell documentation. Look for input redirections. Parameters This is what in your example is sort foo . Calling a program or a function you pass it one or more parameters. A local file is just one possibility. The parameter might contain a URL or whatever. The options here are endless. To sum up, there are two general ways: the first one abstracts from the application's inner logic and manipulates the standard input on the OS/shell level, while the second one involves the application's innards, and it's them that offer limitless possibilities, basing on the call parameters as an interface. For both ways the answer is there are no limits. Though the actual limits come with the OS/shell and the application itself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
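To round out the answer above (365,829), the bash here-string form of the same sort example from the question:

    # <<< is the here-string operator; $'...' turns \n into real newlines
    sort <<< $'zorro\nwurst\nlahmacun\nsozialhilfe'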
365,858 | What is -o after -eq in the mentioned code: ... [ $sorszam -eq 0 ] && min1=$ertek; [ $sorszam -eq 1 -o $sorszam -eq 2 -o $sorszam -eq 3 ] && [ $ertek -lt $min1 ] && min1=$ertek ... | As you can see in the Linux Documentation Project page about if , -o stands for the logical operator OR . In your case, the variable sorszam is checked whether it equals to 1, 2, or 3. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231998/"
]
} |
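As a side note to the answer above (365,858): POSIX marks the -a/-o test operators as obsolescent, so the same check is often written as separate test commands joined by the shell's || operator; the variable is the one from the question:

    [ "$sorszam" -eq 1 ] || [ "$sorszam" -eq 2 ] || [ "$sorszam" -eq 3 ]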
365,859 | In our GitHub repository, a coworker removed a branch named release . But when I run git checkout release locally, I always get the removed branch release . Same, even when I checked out another branch, deleted the release branch with git branch -D release and ran again git checkout release . Is there something to fix on the GitHub repository, or shall I fix something locally? | After deleting a branch on the remote side you may still see this formerly fetched remote branch locally, see: $ git branch -a[...]releaseremotes/origin/release[...] You only removed the "release" but not "remotes/origin/release". Delete it like this: $ git branch -rd origin/release Or remove all fetched branches which do not exist on the remote side anymore: $ git remote prune origin | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
365,889 | I want to get log details between two dates but I am unable to print anything with the command: egrep "^\[MAY 16 11:00:00\]" alert.log -A 10000 | egrep "^\[MAY 16 16:30:00\]" -B 10000 | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232017/"
]
} |
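For the time-range question above (365,889), a hedged sketch using a sed address range — it assumes both bracketed timestamps really occur at the start of lines in alert.log, otherwise the range will not open or close where expected:

    sed -n '/^\[MAY 16 11:00:00\]/,/^\[MAY 16 16:30:00\]/p' alert.log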
365,932 | In the few years I've been using Linux as my main system, specifically Fedora, I've always seen my hostname set to just "localhost", with the exception of when I connect to some networks and it becomes my IP. Today I experienced the following behavior which I'm having trouble understanding though. I set up an Ubuntu installation on another partition of my laptop, setting a computer name / hostname during the Ubuntu install. When I rebooted back into Fedora though, Fedora had updated my hostname to the name I set in the Ubuntu install. I always thought the hostname was configured and stored on the partition of the distro installation, and indeed the contents of /etc/hostname on Fedora still read "localhost.localdomain", but running the hostname command shows the new hostname. Both installs share an efi boot partition, but are otherwise discrete. I'm wondering from where and why the Fedora install is reading the new hostname? | The hostname program performs a uname syscall, as can be seen from running: strace hostname...uname({sysname="Linux", nodename="my.hostname.com", ...}) = 0... From the uname syscall man page , it says the syscall retrieves the following struct from the kernel: struct utsname { char sysname[]; /* Operating system name (e.g., "Linux") */ char nodename[]; /* Name within "some implementation-defined network" */ char release[]; /* Operating system release (e.g., "2.6.28") */ char version[]; /* Operating system version */ char machine[]; /* Hardware identifier */ #ifdef _GNU_SOURCE char domainname[]; /* NIS or YP domain name */ #endif }; So the domain name comes from the NIS / YP system, if we believe the comment. So more than likely, there may be a NIS / YP service on your network that is trotting back the name to you that is set by the ubuntu OS. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232045/"
]
} |
365,953 | In my /etc/fstab file I have an entry for my swap as follows: /root/swap swap swap sw 0 0 I have other machines and also I've seen online that sometimes they put default or xfs or other options. Then, I'm a little confused on what 'sw' means and what's for, and also which one would be the best option to put there and why. | From the fstab manual on my system : The fourth field, fs_mntops , describes the mount options associated with the filesystem. It is formatted as a comma separated list of options. It contains at least the type of mount (see fs_type below) plus any additional options appropriate to the filesystem type. [...] If fs_type is “rw”, “rq”, or “ro” then the filesystem whose name is given in the fs_file field is normally mounted read-write or read-only on the specified special file. If fs_type is “sw” then the special file is made available as a piece of swap space by the swapon(8) command at the end of the system reboot procedure. So basically, sw is used to tell swapon (or swapctl on my system) that this is a valid candidate for use as swap space that will be added as part of the system start-up routine. From the manual describing swapctl -A : This option causes swapctl to read the /etc/fstab file for devices and files with an “sw” type, and adds all these entries as swap devices. If no swap devices are configured, swapctl will exit with an error code. That's on OpenBSD. On the Ubuntu Linux system that I have access to, neither manual mentions sw as a mount option for swap for some reason. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91570/"
]
} |
365,955 | I'm trying to change the output of lsof -i4TCP:PORT to include a custom name. This will help me identify the server process as started by my daemon. Below is a picture with arrow pointing to what I'd like to control. I've created a custom gem, executed the process there, and it still says Ruby. Rather than go down the 'rabbit hole', wondering if anyone else has had this need. I'd essentially like to do exactly what docker has done, and show the process tagged with my program name. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/365955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232059/"
]
} |
365,956 | My colleague is watching my X session somehow. lightdm runs with --notcp; no vnc is running; he is not connected via ssh. What other ways are there and how can I stop him or turn this game around on him? I'm only used to cooperatively share a session via vnc, so I'm new to this. (I'm on ubuntu 17.04; he is using debian) I also don't want to accidentally turn something off, that is needed. Please help! | Turn on a firewall. Every Ubuntu install comes with ufw , which can be used to effectively block all inbound connections. By default, just running sudo ufw enable will be enough. UFW's default rules are to allow all outgoing connections, and deny all incoming. If you need to unblock specific ports for whatever reason, the command is as follows: sudo ufw allow <port_number> If you want more verbose firewall rules, look at the gufw package or this handy reference for more information as to what UFW can offer you. Now, this won't work if your coworker (somehow) has something like TeamViewer on your computer, as tools like that can bypass the firewall in some cases. Similarly, this won't work if your coworker has access to your physical machine, as they can just disable/alter your firewall rules. Be sure to lock your session whenever you're not at your computer. If you're really paranoid, also encrypt your hard drive to prevent attacks using recovery mode or a drive sled. As mentioned in the comments, this is a social issue as opposed to a technical issue. If you can prove that your coworker is actually spying on you, that can go a long way. You can use Wireshark or any similar network monitor to see what's going on. X11 will usually run on port 6000, but can go up to 6032 in rare cases. Wireshark can automatically detect X11 in some cases, but not all. If you're paranoid about your coworker seeing Wireshark running, it comes with a command-line utility called dumpcap . You can use a TTY ( Ctrl + Alt + F1 - F6 , Ctrl + Alt + F7 to get back to X) to ensure that the process is completely invisible from the X server. Note, though, that if you open up top or your coworker has access to an open shell, it may still be possible to spot the process. Also, UFW will print "block logs" to your dmesg console: [ 367.801540] [UFW BLOCK] IN=wlp2s0 OUT= MAC=[REDACTED] SRC=192.168.1.2 DST=192.168.1.22 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=9359 DF PROTO=TCP SPT=8009 DPT=41672 WINDOW=294 RES=0x00 ACK URGP=0 You can use grep to search for DPT=60* to see if ufw has blocked any requests. Note, however, that just because a port is blocked doesn't mean it's necessarily an attack. Your machine may just be being pinged/scanned by your colleague or some network threat management tool on your network. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/365956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95548/"
]
} |
366,009 | I have a .bash_profile and in that I have a set of aliases. Those aliases at the moment execute only a single command and it was fairly easy. I would however like to do two things with a new alias that I'm trying to create. CD into a directory Run a command from that directory | To execute a command with a specific working directory, one usually does ( cd directory && utility ) The parentheses around the cd ... means that the command(s) therein runs in a subshell. Changing the working directory in a subshell makes it so that the current working directory of the calling shell is not changed, i.e., after having called this command, you would still be located in the same directory where you started. Example: ( cd / && echo "$PWD" ) # will output "/"echo "$PWD" # will output whatever directory you were in at the start This can not be turned into a generic alias as an alias can not take any arguments. For a specific directory and utility, one could do alias cdrun='( cd "$HOME/somedir" && ./script.sh )' but for the general case, you would have to use a shell function: cdrun () { ( cd "$1" && shift && command "$@" )} or cdrun () ( cd "$1" && shift && command "$@") Replacing the curly braces with parentheses around the body of the function makes the function execute in its own subshell. This would be used as $ cdrun "$HOME/somedir" ./script.sh which would run the script script.sh located in the directory $HOME/somedir , with $HOME/somedir as the working directory, or $ cdrun / ls -l which would provide you with a directory listing in "long format" of the root directory. The shell function takes its first argument and tries to change to that directory. If that works, it shifts the directory name off from the positional parameters (the command line argument list) and executes the command given by the rest of the arguments. command is a built-in command in the shell which simply executes its arguments as a command. All of this is needed if you want to execute a command with a changed working directory . If you just want to execute a command located elsewhere, you could obviously use alias thing='$HOME/somedir/script.sh' but this would run script.sh located in $HOME/somedir with the current directory as the working directory. Another way of executing a script located elsewhere without changing the working directory is to add the location of the script to your PATH environment variable, e.g. PATH="$PATH:$HOME/somedir" Now script.sh in $HOME/somedir will be able to be run from anywhere by just using $ script.sh Again, this does not change the working directory for the command. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232103/"
]
} |
366,015 | I suddenly decided I'd like to look at the source code for 'echo' $ which echo/usr/bin/echo so $ ls -al /usr/bin/echo-rwxr-xr-x. 1 root root 32536 Oct 31 2016 /usr/bin/echo so $strings /usr/bin/echo leads me to believe it's a compiled C program Now I'm stuck. How do I: Find out which package it's in Get the source Rebuild it Test it Install the new version system-wide (I know that 5's not a good idea, I'm just curious...) I'm currently on Fedora, but I'd also be interested in the answers for Debian A link to a relevant tutorial would be a good answer. Edit: $ type -a echoecho is a shell builtinecho is /usr/bin/echo So I guess it's the one in /usr/bin/echo I'd like to see rather than trying to read the whole of bash . | RHEL/Fedora Run rpm -qf /path $ rpm -qf /usr/bin/echocoreutils-8.25-17.fc25.x86_64 Download the source package (use yum for RHEL): $ dnf download coreutils --enablerepo="*source" Extract the sources, patches from the SRPM package downloaded in current directory, change to the directory where the files are extracted and find your file: $ rpmbuild -rp coreutils-8.25-17.fc25.src.rpm$ cd ~/rpmbuild/BUILD/coreutils-8.25/$ find src -iname '*echo*'src/echo.c You can rebuild the package using rpmbuild --rebuild coreutils-8.25-17.fc25.src.rpm , which will produce the RPMs that you can directly install on your system. If you need to do some modification to fedora packages, it is much easier to go the maintainer way: Install fedpkg , clone the repository, do the modifications (using patches) and rebuild the package with modifications: $ sudo dnf install fedpkg$ fedpkg clone coreutils$ cd coreutils$ # do the modifications$ fedpkg local | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81213/"
]
} |
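Since the question above (366,015) also asks about Debian, a sketch of the equivalent steps there; it assumes a deb-src line is enabled in /etc/apt/sources.list:

    dpkg -S /bin/echo                 # -> coreutils: /bin/echo
    apt-get source coreutils          # unpacks the source into ./coreutils-*
    sudo apt-get build-dep coreutils
    cd coreutils-*/ && dpkg-buildpackage -us -uc -b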
366,023 | I've flashed my Intel Braswell Chromebook with the RW_LEGACY flash, and installed Gallium. But every time I restart I get the OS verification off screen. I can bypass this with ctrl + l . But since I flashed it, why do I still have to do this? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141725/"
]
} |
366,077 | I'm looking for a Linux command that does literally nothing, doesn't output anything, but stays alive until ^C . while true; do :; done is not a good solution, because it is CPU intensive. | Just add a sleep command. while true; do sleep 600; done will sleep for 10 minutes between loops. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232170/"
]
} |
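Two other low-cost ways to block until ^C , as a sketch (sleep infinity relies on GNU coreutils; tail -f is close to universal):

    sleep infinity       # a single sleep that never returns (GNU coreutils)
    tail -f /dev/null    # follow an empty file; wakes only occasionally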
366,109 | A shell in Linux (for example: bash ) has its stdin, stdout and stderr file descriptors all pointing to the same device file; for example, the following are the stdin, stdout and stderr file descriptors for bash : Now /dev/tty1 is not a "real" file that you can read from and write to; it is a device file that points to a file, to a buffer in memory, or to something else. Now my question is, does /dev/tty1 point to only one file, or does it point to two files? What I mean is, when bash reads from /dev/tty1 ( stdin ), and when bash writes to /dev/tty1 ( stdout or stderr ), is it reading from and writing to the same file , or does /dev/tty1 point to two files, one used when reading from /dev/tty1 , and the other used when writing to /dev/tty1 ? | A device node points to a single device , which in Linux is handled by the kernel. When bash reads from /dev/tty1 , it reads from the device driver managing the first terminal; when it writes to it, it writes to the same device driver. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228808/"
]
} |
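One way to see that all three descriptors refer to the same single device node, rather than separate files for reading and writing, is to inspect the shell's open file descriptors. A sketch, assuming a shell logged in on /dev/tty1 (output abbreviated and illustrative):

    $ ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
    lrwx------ 1 user user 64 ... 0 -> /dev/tty1
    lrwx------ 1 user user 64 ... 1 -> /dev/tty1
    lrwx------ 1 user user 64 ... 2 -> /dev/tty1
    $ ls -l /dev/tty1
    crw--w---- 1 user tty 4, 1 ... /dev/tty1    # one character device (major 4, minor 1)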
366,131 | I know that the file /dev/ptmx is used to generate a master file for a pseudo-terminal. But I have found out that Fedora has another ptmx file ( /dev/pts/ptmx ): What is the purpose of this second file? | The reason, like with many things in the world of computing, is history and backwards compatibility. Back in 2.4.* kernels, before udev (the current virtual filesystem solution for /dev ) existed in Linux, there was two competing solutions, the "traditional Unix way" of having the devices in a real directory on a root filesystem, and devfs , the first virtual filesystem solution for /dev . The problem was, the author of devfs had constructed a completely new naming scheme for various devices, and people felt fairly strongly about it: some wanted to migrate to the new scheme and abolish the old one, others didn't see the need for migration. Some distributions used the old static devices, others chose devfs . At that point, there was a fixed number of pseudo-TTY devices created at installation time. (By the way, this is still possible, if the CONFIG_LEGACY_PTYS option is set while compiling your kernel.) Then, Unix98-style dynamically-allocated PTY devices were introduced. Implementing them on a static /dev directory required a virtual filesystem for /dev/pts , and this became known as the devpts filesystem. Also, having this as a separate filesystem would probably have made it possible to apply it on top of the dynamic devfs too without duplication of code. The dynamically-allocated PTY devices quickly became the favored choice, because having /dev cluttered with hundreds of statically-allocated PTY devices, most of which might well never be used in the system's lifetime, was clearly nonsensical. Then came Linux 2.6 and udev with it. It quickly obsoleted both the static /dev and devfs solutions. For backwards compatibility reasons, devpts filesystem still existed, but now the same functionality could be moved back to the main /dev filesystem, since it was now entirely RAM-based. Today, for example, Debian 9 still mounts devpts filesystem to /dev/pts for legacy compatibility, but assigns /dev/pts/ptmx zero permissions by default - this is a sign that the devpts filesystem is probably being deprecated and will be removed at some future point. # ls -l /dev/ptmx /dev/pts/ptmxcrw-rw-rw- 1 root tty 5, 2 Nov 22 11:47 /dev/ptmxc--------- 1 root root 5, 2 Nov 12 14:59 /dev/pts/ptmx If some program still needs /dev/pts/ptmx , that can be allowed by adjusting the default permissions, but this lets people know which programs are still using the older deprecated device name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228808/"
]
} |
366,219 | I have a program that exits automatically upon reading an EOF in a given stream ( in the following case, stdin ). Now I want to make a shell script, which creates a named pipe and connect the program's stdin to it. Then the script writes to the pipe several times using echo and cat ( and other tools that automatically generates an EOF when they exit ). The problem I'm facing is, when the first echo is done, it sends an EOF to the pipe and make the program exit. If I use something like tail -f then I can't send an EOF when I intend to quit the program. I'm researching a balanced solution but to no avail. I've already found both how to prevent EOFs and how to manually send an EOF but I can't combine them. Is there any hint? #!/bin/shmkfifo Pprogram < P & : # Run in background# < P tail -n +1 -f | programecho some stuff > P # Prevent EOF?cat more_stuff.txt > P # Prevent EOF?send_eof > P # How can I do this?# fg | As others have indicated, the reader of a pipe receives EOF once there are no writers left. So the solution is to make sure there is always one writer holding it open. That writer doesn't have to send anything, just hold it open. Since you're using a shell script, the simplest solution is to tell the shell to open the pipe for writing. And then close it when you're done. #!/bin/shmkfifo Pexec 3>P # open file descriptor 3 writing to the pipeprogram < P# < P tail -n +1 -f | programecho some stuff > Pcat more_stuff.txt > Pexec 3>&- # close file descriptor 3 Note that if you omit the last line, file descriptor 3 will be automatically closed (and thus the reader receive EOF) when the script exits. Aside from convenience, this also provides a safety of sorts if the script were to somehow terminate early. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211239/"
]
} |
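A self-contained sketch of the descriptor-holding technique from the answer above. It assumes, like the question's own script, that the reader runs in the background; program is the placeholder name from the question:

    #!/bin/sh
    mkfifo P
    program < P &      # reader in the background; its open blocks until a writer appears
    exec 3>P           # hold a write end open so the pipe never sees an early EOF
    echo some stuff > P
    cat more_stuff.txt > P
    exec 3>&-          # closing the last write end finally delivers EOF to the reader
    wait               # wait for the reader to exit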
366,231 | How to debug this? This issue has suddenly appeared within the last couple of days. All backups of a website are corrupted. If the backup is just left as tar , there are no problems, but as soon the tar is compressed as gz or xz I can't uncompress them. There is a lot of free disk Local disk space 2.68 TB total / 2.26 TB free / 432.46 GB used error tar: Skipping to next header[===============================> ] 39% ETA 0:01:14tar: A lone zero block at 2291466===============================> ] 44% ETA 0:01:13tar: Exiting with failure status due to previous errors 878MiB 0:00:58 [15.1MiB/s] [===================================> ] 44% And why does it say Skipping to next header ? It has never done that before. Something is terribly wrong the some of the files. There are about 15k pdf, jpg or png files in the directories. command pv $backup_file | tar -izxf - -C $import_dir There must be some data that corrupts the compression. I have also tried to check the HDD health by doing this: # getting the driveslsblk -dpno namesmartctl -H /dev/sdasmartctl -H /dev/sdb On both drives I get this: === START OF READ SMART DATA SECTION ===SMART overall-health self-assessment test result: PASSED How can I find out which files that are corrupting the tar.gz? I just want to delete them. update Have now copied all files to another server and I have the exact same issue. I can tar everything and extract it without problems, but as soon I want to compress the files, I can't uncompress them (gz/xz). | Your file is either truncated or corrupted, so xz can't get to the end of the data. tar complains because the archive stops in the middle, which is logical since xz didn't manage to read the whole data. Run the following commands to check where the problem is: cat /var/www/bak/db/2017-05-20-1200_mysql.tar.xz >/dev/nullxzcat /var/www/bak/db/2017-05-20-1200_mysql.tar.xz >/dev/null If cat complains then the file is corrupted on the disk and the operating system detected the corruption. Check the kernel logs for more information; usually the disk needs to be replaced at this point. If only xz complains then the OS didn't detect any corruption but the file is nevertheless not valid (either corrupted or truncated). Either way, you aren't going to be able to recover this file. You'll need to get it back from your offline backups. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
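Before blaming tar, the compression layer can be tested on its own; both gzip and xz ship a test mode. A quick sketch (file names are placeholders):

    xz -t  backup.tar.xz  && echo "xz stream OK"
    gzip -t backup.tar.gz && echo "gzip stream OK"
    # if the stream tests clean but tar still reports a lone zero block or a bad
    # header, the data was already truncated or corrupted before compression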
366,234 | I'm using dd to copy data from an old hard drive to a new one. I want to be sure that the data's integrity is secure. On this answer , Gilles says If [dd] terminated successfully, then the backup is correct, barring a hardware fault… What does that mean exactly? Does dd have some kind of built in verification? If I were to use rsync instead, I run a second pass with --checksum as well, to verify. Is that kind of paranoia justified? | dd or any other application does not have “some kind of built in verification” in the sense you're probably thinking of: it doesn't read back the data from the storage medium to compare with what was written. That's the job of the operating system. It is not really possible to do a read-verify down to the hardware from an application. It would work in some scenarios, but in most cases it would achieve nothing. The application could read back what it just wrote if it's writing directly to a storage medium , but that would typically read back from an in-memory cache, which wouldn't give any useful assurance. In the example you cite , dd is writing to a pipe, and in that case it has no control over what happens to the data further down the line. In your rsync example, a second pass of rsync --checksum is pointless: in theory it could catch an error, but in practice, if an error does happen, then the second pass would probably not report anything wrong, so you're wasting effort on something that doesn't actually give a useful assurance. However, applications do verify what happens to the data, in the sense that they verify that the operating system has accepted responsibility for the data. All system calls return an error status. If a system call returns an error status, the application should propagate that error to the user, generally by displaying an error message and returning a nonzero exit status. Beware that dd is an exception: depending on the command line parameters, dd might ignore some errors . This is extremely unusual: dd is the only common command with this property. Use cat instead of dd , that way you don't risk corruption and it may well be faster . In a chain of data copying, two kinds of errors can arise. Corruption: a bit is flipped during the transfer. There is no way to verify this at the application level, because if that happens, it's due to a programming bug or hardware error that is highly likely to cause the same corruption when reading back. The only useful way to verify that no such corruption happened is to physically disconnect the media and try again, preferably on a different computer in case the problem was with the RAM. Truncation: all the data that was copied was copied correctly, but some of the data was not copied at all. This one is worth checking sometimes, depending on the complexity of the command. You don't need to read the data to do that: just check the size. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
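If an explicit end-to-end check is still wanted after cloning one disk to another, comparing the two devices afterwards is the usual approach. A sketch, assuming /dev/sdX is the source and /dev/sdY the destination of at least the same size:

    sudo dd if=/dev/sdX of=/dev/sdY bs=1M conv=fsync status=progress
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches   # avoid re-reading from the page cache
    sudo cmp /dev/sdX /dev/sdY                   # byte-by-byte; no output means identical
    # (cmp reports EOF on the smaller device if the sizes differ, which is expected)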
366,256 | I am trying to understand how exit status is communicated when a pipe is used. Suppose I am using which to locate a non-existent program: which lssecho $?1 Since which failed to locate lss I got an exit status of 1. This is fine. However when I try the following: which lss | echo $?0 This indicates that the last command executed has exited normally. The only way I can understand this is that perhaps PIPE also produces an exit status. Is this right way to understand it? | The exit status of a pipe is the exit status of the right-hand command. The exit status of the left-hand command is ignored. (Note that which lss | echo $? doesn't show this. You would run which lss | true; echo $? to show this. In which lss | echo $? , echo $? reports the status of the last command before this pipeline.) The reason shells behave this way is that there is a fairly common scenario where an error in the left-hand side should be ignored. If the right-hand side exits (or more generally closes its standard input) while the left-hand side is still writing, then the left-hand side receives a SIGPIPE signal. In this case, there is usually nothing wrong: the right-hand side doesn't care about the data; if the role of the left-hand side is solely to produce this data then it's ok for it to stop. However, if the left-hand side dies for some reason other than SIGPIPE, or if the job of the left-hand side was not solely to produce data on standard output, then an error in the left-hand side is a genuine error that should be reported. In plain sh, the only solution is to use a named pipe. set -emkfifo pcommand1 >p & pid1=$!command2 <pwait $pid1 In ksh, bash and zsh, you can instruct the shell to make a pipeline exit with a nonzero status if any component of the pipeline exits with a nonzero status. You need to set the pipefail option: ksh: set -o pipefail bash: shopt -s pipefail zsh: setopt pipefail (or setopt pipe_fail ) In mksh, bash and zsh, you can get the status of every component of the pipeline using the variable PIPESTATUS (bash, mksh) or pipestatus (zsh), which is an array containing the status of all the commands in the last pipeline (a generalization of $? ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/200436/"
]
} |
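A small bash demo of the behaviour described above; the numbers in the comments are the expected output:

    $ false | true; echo $?
    0                              # only the right-hand command's status is reported
    $ set -o pipefail
    $ false | true; echo $?
    1                              # any failing component now fails the whole pipeline
    $ set +o pipefail
    $ false | true; echo "${PIPESTATUS[@]}"
    1 0                            # per-command statuses of the last pipeline (bash)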
366,347 | I was following a tutorial on how to find out the dependent libraries of a program and it was explained like this: whereis firefox shows the folders, where it is installed, take the full path to the binary, and ldd /usr/bin/firefox put it as argument of the ldd command. the tutorial also used firefox as an example and therefore I was sure to recreate it, but when I typed: whereis firefoxfirefox: /usr/bin/firefox /usr/lib/firefox /etc/firefox /usr/share/man/man1/firefox.1.gz ldd /usr/bin/firefox not a dynamic executable I got this "not a dynamic executable" message, instead of the list of libraries. Why? | The firefox executable is a shell script on your system. Some applications employ a wrapper script that sets up the execution environment for the application, possibly to allow for better integration with the current flavor of Unix, or to provide alternative ways to run the application (new sets of command line options etc.) that the application itself is not providing. Sometimes a wrapper script is used to pick the correct actual binary to run based on the way that script was called. For example, the MPI ("Message Passing Interface") C compiler is nothing more than a wrapper script around cc (or whatever compiler it's set up to use) that ensures that the MPI headers are in the search path and that the MPI library is linked in when compiling. Have a look at this script to see what binaries it's calling under what circumstances. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
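A hedged way to confirm the wrapper and still get a library list: check what the command really is, read the script, then run ldd on the actual ELF binary it launches (the path below is typical for Firefox but distribution-dependent):

    $ file /usr/bin/firefox        # reports a shell script, not an ELF executable
    $ less /usr/bin/firefox        # see which binary the script finally execs
    $ ldd /usr/lib/firefox/firefox # run ldd against that real binary instead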
366,352 | I have /etc/security/limits.conf , which seems not to be applied: a soft nofile 1048576 # default: 1024a hard nofile 2097152a soft noproc 262144 # default 128039a hard noproc 524288 Where a is my username; when I run ulimit -Hn and ulimit -Sn , it shows: 40961024 There's only one other file in /etc/security/limits.d , whose content is: scylla - core unlimitedscylla - memlock unlimitedscylla - nofile 200000scylla - as unlimitedscylla - nproc 8096 I also tried appending those values to /etc/security/limits.conf and then restarting, and doing this: echo 'session required pam_limits.so' | sudo tee -a /etc/pam.d/common-session but it didn't work. My OS is Ubuntu 17.04 . | https://superuser.com/questions/1200539/cannot-increase-open-file-limit-past-4096-ubuntu/1200818# There's a bug since Ubuntu 16 apparently. Basically: Edit /etc/systemd/user.conf for the soft limit, and add DefaultLimitNOFILE=1048576 . Edit /etc/systemd/system.conf for the hard limit, and add DefaultLimitNOFILE=2097152 . Credit goes to @mkasberg . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27996/"
]
} |
366,361 | If one wants to have a customized prompt, one only has to edit the .bashrc file in one's home directory and add PS1="yourPrompt" at the very end. What if one uses sudo su to become root and wants to have a customized prompt automatically displayed? | To change bash shell options for root, you need to edit/create /root/.bashrc with the options you need and switch to root either via a regular login or via su - (which can be called with sudo ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
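For example, appending a line like the following to /root/.bashrc gives root a distinct red prompt; the escape codes are just one possible choice:

    PS1='\[\e[1;31m\]\u@\h:\w\$\[\e[0m\] '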
366,407 | Is it possible to use a commit message from stdout, like: echo "Test commit" | git commit - Tried also to echo the message content in .git/COMMIT_EDITMSG , but then running git commit would ask to add changes in mentioned file. | You can use the -F <file>, --file=<file> option. echo "Test commit" | git commit -F - Its usage is described in the man page for git commit : Take the commit message from the given file. Use - to read the message from the standard input. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/366407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126666/"
]
} |
366,533 | QUESTION: How might I be able to specifically change the Mac Address of the enp3s0 and wlp2s0 interfaces through the /etc/network/interfaces file? What code would I have to include inside? I have been trying for some time now without success sadly enough. ELABORATING: So I found this great article online explaining how to change a Mac Address permanently through the /etc/network/interfaces file on my Ubuntu. In the article, it says: On Debian, Ubuntu, and similar systems, place the following in the appropriate section of /etc/network/interfaces (within an iface stanza, e.g., right after the gateway line) so that the MAC address is set when the network device is started: hwaddress ether 02:01:02:03:04:08 Source: https://en.wikibooks.org/wiki/Changing_Your_MAC_Address/Linux Now when I use the following code: cat /etc/network/interfaces I get the following output # interfaces(5) file used by ifup(8) and ifdown(8)auto loiface lo inet loopback And when I do ifconfig on my ubuntu, I get back 3 different interfaces: enp3s0 lo wlp2s0 I would like to change the mac address of all of my interfaces (enp3s0, wlp2s0) (lo is loopback so no need there), but I am unfamiliar with the commands in the /etc/network/interfaces file. I have been looking at tutorials online though I can't seem to get stuff right, and my computer even started acting very strangely a few times afterwards. | Use the hwaddress ether inside your interface configuration block. Example: auto enp3s0iface enp3s0 inet static address 192.0.2.7 netmask 255.255.255.0 gateway 192.0.2.254 hwaddress ether 00:11:22:33:44:55 or, if dhcp: allow-hotplug enp3s0iface enp3s0 inet dhcp hwaddress ether 00:11:22:33:44:55 A detail that I had missed: The hwaddress configuration item needs to be after the gateway stanza, if you are setting a static ip address. Related stuff: Good detailed explanation of /etc/network/interfaces syntax? However, if you are having problems while changing the MAC through network/interfaces you could do it through udev. udev method - Create the file /etc/udev/rules.d/75-mac-spoof.rules with the following content: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="XX:XX:XX:XX:XX:XX", RUN+="/usr/bin/ip link set dev %k address YY:YY:YY:YY:YY:YY" You could also do it using systemd units as explained here: Changing mac using systemd units . But at the end of the day, they are also only wrappers for executing ip link set and macchanger . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170387/"
]
} |
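After editing /etc/network/interfaces , a quick way to apply and verify the change, assuming the classic ifupdown tools are in use (which that file implies):

    sudo ifdown enp3s0 && sudo ifup enp3s0
    ip link show enp3s0 | grep ether    # should now print the configured address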