163,691
I am using bash and GNU screen on CentOS 7. I notice that if I ssh to another server, change the title (via Ctrl + a , A), and log out of the server, my new title gets overwritten by USER@HOST:~ . How can I stop it from doing this? I've looked into dynamic titles and determined that's what's at play, but I'm unsure of how to disable that feature...
As documented in the man page, screen looks for a null title-escape-sequence. bash sends this sequence via the PROMPT_COMMAND variable (for example, mine defaults to printf "\033k%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"). To disable this feature for a particular window, I just run unset PROMPT_COMMAND from that window. Of course, one could add this to their ~/.bashrc or to a specific environment file to make it more persistent.
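If you want this to happen automatically only inside screen, a small guard in ~/.bashrc is enough; this sketch assumes screen's usual behavior of exporting the STY variable to the shells it spawns:

# ~/.bashrc: disable dynamic titles only when running under screen
if [ -n "$STY" ]; then
    unset PROMPT_COMMAND
fi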
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9428/" ] }
163,715
I've just discovered that I can add the following lines to ~/.vimrc:

set mouse=a
vmap <C-C> "+y

This has the effect of being able to select text with the mouse (i.e. in visual mode), and then copy the actual text to the X clipboard with Ctrl + c . This differs from the default copy from the terminal, as it's the real text I'm copying, not what the terminal sees. For example, if there were tabs in the text, then previously I'd copy them as spaces. Is there a way to make less behave the same way? i.e. can I copy verbatim what is in the text file I'm viewing in less?
Not a task for less: No, I do not think you can do that directly, because less does not have a cursor to begin with. It would need one to navigate to the start and end of the text to select. less is just not the right tool for character-level navigation. Tabs already expanded: You can use the Shift key and the mouse to make a selection; this is handled by the terminal, not by less. But the terminal does not know how spaces and tabs were arranged - less does the interpretation of tabs internally, and writes only normal " " characters to the screen. There are tools like screen, tmux and byobu, which can do lots of impressive things in this area. I did not check, but I assume that these terminal multiplexers do not have a way around that - being terminals, in the end - and will behave the same. Use vim: If you are showing a file in less, there is a nice solution: press the key v in less to open the current file in vim - assuming your $EDITOR etc. is set up for vim. This does not work when showing stdin from a pipeline or so, although there are workarounds. Mouse scrolling, at least: But you can at least scroll with the mouse wheel: that seems even to be enabled by default, but the mouse wheel events get suppressed by a different option. For a quick test, try: LESS=-r man less. The option -X (--no-init) blocks scrolling - check what your environment variable LESS contains: $ echo $LESS. The option -q (--quiet, --silent) also causes trouble, according to SU: How to make mouse wheel scroll the less pager using bash and gnome-terminal?
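For the pipeline case, one such workaround is to bypass less and hand stdin straight to vim in read-only mode; the trailing - tells vim to read standard input:

$ some-command | vim -R -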
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18887/" ] }
163,721
The command cut has an option -c to work on characters, instead of bytes with the option -b. But that does not seem to work in the en_US.UTF-8 locale: The second byte gives the second ASCII character (which is encoded just the same in UTF-8):

$ printf 'ABC' | cut -b 2
B

but does not give the second of three greek non-ASCII characters in UTF-8 locale:

$ printf 'αβγ' | cut -b 2
�

That's alright - it's the second byte. So we look at the second character instead:

$ printf 'αβγ' | cut -c 2
�

That looks broken. With some experiments, it turns out that the range 3-4 shows the second character:

$ printf 'αβγ' | cut -c 3-4
β

But that's just the same as the bytes 3 to 4:

$ printf 'αβγ' | cut -b 3-4
β

So the -c does no more than the -b for UTF-8. I'd expect the locale setup is not right for UTF-8, but in comparison, wc works as expected; it is often used to count bytes, with option -c (--bytes). (Note the confusing option names.)

$ printf 'αβγ' | wc -c
6

But it can also count characters with option -m (--chars), which just works:

$ printf 'αβγ' | wc -m
3

So my configuration seems to be ok - but something is special about cut. Maybe it does not support UTF-8 at all? But it does seem to support multi-byte characters, otherwise it would not need to support both -b and -c. So, what's wrong? And why? The locale setup looks right for utf8, as far as I can tell:

$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US
LC_CTYPE=en_US.UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

The input, byte by byte:

$ printf 'αβγ' | hd
00000000  ce b1 ce b2 ce b3    |......|
00000006
You haven't said which cut you're using, but since you've mentioned the GNU long option --characters I'll assume it's that one. In that case, note this passage from info coreutils 'cut invocation':

‘-c character-list’
‘--characters=character-list’
     Select for printing only the characters in positions listed in character-list. The same as -b for now, but internationalization will change that.

(emphasis added) For the moment, GNU cut always works in terms of single-byte "characters", so the behaviour you see is expected. Supporting both the -b and -c options is required by POSIX — they weren't added to GNU cut because it had multi-byte support and they worked properly, but to avoid giving errors on POSIX-compliant input. Some other cut implementations treat -c the same way, although not FreeBSD's and OS X's, at least. This is the historic behaviour of -c; -b was newly added to take over the byte role so that -c can work with multi-byte characters. Maybe in a few years it will work as desired consistently, although progress hasn't exactly been quick (it's been over a decade already). GNU cut doesn't even implement the -n option yet, even though it is orthogonal and intended to help the transition. There are potential compatibility problems with old scripts, which may be a concern, although I don't know definitively what the reason is.
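Until then, a multibyte-aware tool can stand in for cut -c. For example, with GNU awk in a UTF-8 locale, substr() counts characters rather than bytes (a sketch; other awk implementations may differ):

$ printf 'αβγ' | awk '{print substr($0, 2, 1)}'
β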
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63775/" ] }
163,726
I have to grep through some JSON files in which the line lengths exceed a few thousand characters. How can I limit grep to display context up to N characters to the left and right of the match? Any tool other than grep would be fine as well, so long as it is available in common Linux packages. This would be example output, for the imaginary grep switch Ф:

$ grep -r foo *
hello.txt: Once upon a time a big foo came out of the woods.
$ grep -Ф 10 -r foo *
hello.txt: ime a big foo came of t
With GNU grep:

N=10; grep -roP ".{0,$N}foo.{0,$N}" .

Explanation:

-o => Print only what you matched
-P => Use Perl-style regular expressions

The regex says match 0 to $N characters followed by foo followed by 0 to $N characters.

If you don't have GNU grep:

find . -type f -exec \
  perl -nle '
    BEGIN{$N=10}
    print if s/^.*?(.{0,$N}foo.{0,$N}).*?$/$ARGV:$1/
  ' {} \;

Explanation: Since we can no longer rely on grep being GNU grep, we make use of find to search for files recursively (the -r action of GNU grep). For each file found, we execute the Perl snippet. Perl switches:

-n Read the file line by line
-l Remove the newline at the end of each line and put it back when printing
-e Treat the following string as code

The Perl snippet is doing essentially the same thing as grep. It starts by setting a variable $N to the number of context characters you want. The BEGIN{} means this is executed only once at the start of execution, not once for every line in every file. The statement executed for each line is to print the line if the regex substitution works. The regex: match any old thing lazily [1] at the start of line (^.*?), followed by .{0,$N} as in the grep case, followed by foo, followed by another .{0,$N}, and finally match any old thing lazily till the end of line (.*?$). We substitute this with $ARGV:$1. $ARGV is a magical variable that holds the name of the current file being read. $1 is what the parens matched: the context in this case. The lazy matches at either end are required because a greedy match would eat all characters before foo without failing to match (since .{0,$N} is allowed to match zero times).

[1] That is, prefer not to match anything unless this would cause the overall match to fail. In short, match as few characters as possible.
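Applied to the question's sample file, the GNU grep version should print something like the following (the match keeps at most ten characters of context on each side):

$ N=10; grep -roP ".{0,$N}foo.{0,$N}" .
./hello.txt:ime a big foo came out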
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
163,747
Under the assumption that disk I/O and free RAM are a bottleneck (while CPU time is not the limitation), does a tool exist that can calculate multiple message digests at once? I am particularly interested in calculating the MD-5 and SHA-256 digests of large files (size in gigabytes), preferably in parallel. I have tried openssl dgst -sha256 -md5, but it only calculates the hash using one algorithm. Pseudo-code for the expected behavior:

for each block:
    for each algorithm:
        hash_state[algorithm].update(block)
for each algorithm:
    print algorithm, hash_state[algorithm].final_hash()
Check out pee ("tee standard input to pipes") from moreutils. This is basically equivalent to Marco's tee command, but a little simpler to type.

$ echo foo | pee md5sum sha256sum
d3b07384d113edec49eaa6238ad5ff00  -
b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c  -

$ pee md5sum sha256sum <foo.iso
f109ffd6612e36e0fc1597eda65e9cf0  -
469a38cb785f8d47a0f85f968feff0be1d6f9398e353496ff7aa9055725bc63e  -
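If pee isn't installed, the tee approach it mimics can be written directly with bash or zsh process substitution; note that the two checksums may print in either order:

$ tee < foo.iso >(md5sum) >(sha256sum) > /dev/null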
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8250/" ] }
163,779
I'm trying to see a list of all the rules in iptables on a Debian 7 server. When I try:

iptables -L -n

I only get one rule (which I entered 5 minutes ago). I have many others, for port 80, mysql and others which all work, but I can't see them anywhere. Any idea how that could be done? Thanks

/* edit */ I'm adding some input I get from the different commands

iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

When I try iptables -L -v -n --line-n

Chain INPUT (policy ACCEPT 43535 packets, 58M bytes)
num   pkts bytes target     prot opt in     out     source               destination
1      126 56529 ACCEPT     tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            tcp spt:443 state ESTABLISHED
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 30151 packets, 7365K bytes)
num   pkts bytes target     prot opt in     out     source               destination

iptables-save
# Generated by iptables-save v1.4.14 on Thu Oct 23 08:58:32 2014
*raw
:PREROUTING ACCEPT [17972:25607074]
:OUTPUT ACCEPT [12416:1953400]
COMMIT
# Completed on Thu Oct 23 08:58:32 2014
# Generated by iptables-save v1.4.14 on Thu Oct 23 08:58:32 2014
*mangle
:PREROUTING ACCEPT [19071:27028289]
:INPUT ACCEPT [19071:27028289]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [13114:2110189]
:POSTROUTING ACCEPT [13114:2110189]
COMMIT
# Completed on Thu Oct 23 08:58:32 2014
# Generated by iptables-save v1.4.14 on Thu Oct 23 08:58:32 2014
*security
:INPUT ACCEPT [19514:27565428]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [13405:2178341]
COMMIT
# Completed on Thu Oct 23 08:58:32 2014
# Generated by iptables-save v1.4.14 on Thu Oct 23 08:58:32 2014
*nat
:PREROUTING ACCEPT [141:11461]
:INPUT ACCEPT [141:11461]
:OUTPUT ACCEPT [11:1030]
:POSTROUTING ACCEPT [11:1030]
COMMIT
# Completed on Thu Oct 23 08:58:32 2014
# Generated by iptables-save v1.4.14 on Thu Oct 23 08:58:32 2014
*filter
:INPUT ACCEPT [43596:58181078]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [30216:7394285]
-A INPUT -i eth0 -p tcp -m tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Oct 23 08:58:32 2014
Netfilter encourages use of the iptables-save command, since it provides a detailed view of all tables: your built-in chains and those you've defined yourself. If you want a human-readable view you can use:

iptables -L -v -n --line-numbers

Note that iptables -L only shows the filter table by default; pass -t nat, -t mangle, etc. to inspect the others.
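To dump the readable listing for every table, you can loop over the five standard tables (the same ones that appear in the iptables-save output above):

for t in filter nat mangle raw security; do
    echo "== table $t =="
    iptables -t "$t" -L -v -n --line-numbers
done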
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88793/" ] }
163,782
cat file1 file2 will combine two text files. But if I want to add some separator between, like a line or two of ******************************** , do I have to open the first file, and add the line at its end, or open the second file and add the line at its top, and then run the cat command? Can it be done with just running a command?
In bash and zsh you can do:

cat file1 <(echo '********************************') file2

or, as mikeserv indicated in his comment (in any shell):

echo '********************************' | cat file1 - file2

and in bash, as David Z commented:

cat file1 - file2 <<< '********************************'

Any newlines in the files will be shown. If you don't want a newline after the "separator" (e.g. in case file2 starts with a newline) you can use echo -n '****' to suppress the newline after the *.
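If you have more than two files, a plain POSIX loop generalizes this, printing the separator after every file (including the last one, which may or may not matter to you):

for f in file1 file2 file3; do
    cat "$f"
    echo '********************************'
done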
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
163,798
Is it possible, using grep, find, etc., to sort a list of directories by the last modified date of the same-named file (e.g., file.php) within? For example:

domain1/file.php (last modified 20-Jan-2014 00:00)
domain2/file.php (last modified 22-Jan-2014 00:00)
domain3/file.php (last modified 24-Jan-2014 00:00)
domain4/file.php (last modified 23-Jan-2014 00:00)

Each directory has the same file name (e.g., file.php). The result should be:

domain3/ (last modified 24-Jan-2014 00:00)
domain4/ (last modified 23-Jan-2014 00:00)
domain2/ (last modified 22-Jan-2014 00:00)
domain1/ (last modified 21-Jan-2014 00:00)
As Vivian suggested, the -t option of ls tells it to sort files by modification time (most recent first, by default; reversed if you add -r). This is most commonly used (at least in my experience) to sort the files in a directory, but it can also be applied to a list of files on the command line. And wildcards ("globs") produce a list of files on the command line. So, if you say

ls -t */file.php

it will list

domain3/file.php
domain4/file.php
domain2/file.php
domain1/file.php

But, if you add the -1 (dash one) option, or pipe this into anything, it will list them one per line. So the command you want is

ls -t */file.php | sed 's|/file.php||'

This is an ordinary s/old_string/replacement_string/ substitution in sed, but using | as the delimiter, because the old_string contains a /, and with an empty replacement_string. (I.e., it deletes the filename and the / before it — /file.php — from the ls output.) Of course, if you want the trailing / on the directory names, just do sed 's|file.php||' or sed 's/file.php//'. If you want, add the -l (lower-case L) option to ls to get the long listing, including modification date/time. And then you may want to enhance the sed command to strip out irrelevant information (like the mode, owner, and size of the file) and, if you want, move the date/time after the directory name. This will look into the directories that are in the current directory, and only them. (This seems to be what the question is asking for.) Doing a one-level scan of some other directory is a trivial variation:

ls -t /path/to/tld/*/file.php | sed 's|/file.php||'

To (recursively) search the entire directory tree under your current directory (or some other top-level directory) is a little trickier. Type the command

shopt -s globstar

and then replace the asterisk (*) in one of the above commands with two asterisks (**), e.g.,

ls -t **/file.php | sed 's|/file.php||'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88922/" ] }
163,806
I have a dual-boot system with an NTFS partition (C:) dedicated to the Windows 8 OS files and an EXT4 partition dedicated to Linux Mint 17. There is also another NTFS partition (E:) which I would like to use for some files shared between the two OSes. The problem is that whenever I hibernate Windows 8 it sets the hibernation flag on both C: and E:, making it impossible to write to the shared partition from Linux. Is there any way to force Linux Mint to mount the partition as R/W, or to prevent Windows 8 from setting this flag on the shared partition? I know that an obvious solution would be to just normally shut down Windows 8 (with fast boot disabled) but I really need to have it hibernated.
ntfs-3g has an option that will force-delete the hibernation file and force a rw mount:

mount -t ntfs-3g -o remove_hiberfile /dev/sdXX /media/windows

If it's still giving you errors, try running ntfsfix as root (e.g. with sudo) to fix errors in the NTFS file system:

sudo ntfsfix /dev/sdXX

From ntfs-3g(8):

SYNOPSIS
       mount -t ntfs-3g [-o option[,...]] volume mount_point

OPTIONS
       remove_hiberfile
              Unlike in case of read-only mount, the read-write mount is denied if the NTFS volume is hibernated. One needs either to resume Windows and shutdown it properly, or use this option which will remove the Windows hibernation file. Please note, this means that the saved Windows session will be completely lost. Use this option under your own responsibility.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163806", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88931/" ] }
163,810
Let's say I have a variable

line="This is where we select from a table."

Now I want to grep how many times select occurs in the sentence.

grep -ci "select" $line

I tried that, but it did not work. I also tried

grep -ci "select" "$line"

It still doesn't work. I get the following error:

grep: This is where we select from a table.: No such file or directory
Have grep read on its standard input. There you go, using a pipe...

$ echo "$line" | grep select

...or a here string...

$ grep select <<< "$line"

Also, you might want to replace spaces by newlines before grepping:

$ echo "$line" | tr ' ' '\n' | grep select

...or you could ask grep to print the match only:

$ echo "$line" | grep -o select

This will allow you to get rid of the rest of the line when there's a match. Edit: Oops, read a little too fast, thanks Marco. In order to count the occurrences, just pipe any of these to wc(1) ;) Another edit made after lzkata's comment, quoting $line when using echo.
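Putting those pieces together for the original goal — counting case-insensitive occurrences of select in the sentence:

$ echo "$line" | grep -o -i select | wc -l
1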
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/163810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68738/" ] }
163,831
I'm working on CentOS 7, and having problematic behaviour when setting a network interface from dhcp to static ip configuration. I edit /etc/resolv.conf, and run

systemctl restart network.service

The changes that I made are gone, and a generic file is created:

cat /etc/resolv.conf
# Generated by NetworkManager
# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com

NOTICE: PEERDNS="yes" in the ifcfg-ens160 file.

PEERDNS=<answer>, where <answer> is one of the following:
yes — Modify /etc/resolv.conf if the DNS directive is set. If using DHCP, then yes is the default.
no — Do not modify /etc/resolv.conf.

Taken from here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/3/html/Reference_Guide/s1-networkscripts-interfaces.html

I guess it has something to do with it, but it's working well when set to dhcp, so I expect that if it configures /etc/resolv.conf automatically it will succeed. A workaround is to edit /etc/resolv.conf after the service is restarted. But I want to understand the behavior, and how I can avoid the file being reset to this default failure message.
You're probably mixing the classic /etc/init.d/network (which gets translated to network.service) with NetworkManager.service. While those are expected to partially coexist, it's much better to choose just one of them and stop and disable the other. Either way, it's better not to write /etc/resolv.conf directly but instead properly configure /etc/sysconfig/network and/or the /etc/sysconfig/network-scripts/ifcfg-* files. You should either enable dhcp or set the name servers manually in /etc/sysconfig. Example (DHCP):

BOOTPROTO=dhcp

Example (static):

BOOTPROTO=none
DNS1=192.168.1.1

If you really want to set /etc/resolv.conf directly and you want to make sure NetworkManager won't overwrite it, you can set it up in /etc/NetworkManager/NetworkManager.conf:

[main]
dns=none

Regarding your additional question on the number of name servers, you should never need more than one or two name servers in /etc/resolv.conf. You shouldn't expect much from the libc resolver behavior; it just attempts the name servers in order, and you'll experience long delays if you have defunct name servers in the list. I don't know your reasons to use more than three name servers. But if there is one, you definitely need to configure a local forwarding DNS server like unbound or dnsmasq and point /etc/resolv.conf to 127.0.0.1. For the best experience with dynamic configuration you should use NetworkManager in this case. NetworkManager with dnsmasq has long been supported and is the default on Ubuntu and possibly other distributions.

[main]
dns=dnsmasq

NetworkManager with unbound is of alpha quality in the latest NetworkManager versions and currently also needs dnssec-trigger, as the main use case is to provide DNSSEC validation on the local host.

[main]
dns=unbound

Both the dnsmasq and unbound plugins configure /etc/resolv.conf to nameserver 127.0.0.1 for you, and each of them configures the respective local DNS server.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67047/" ] }
163,845
I have the below JSON file, with the data stored as columns enumerated by rank in an array:

{
  "data": [
    { "displayName": "First Name",   "rank": 1, "value": "VALUE" },
    { "displayName": "Last Name",    "rank": 2, "value": "VALUE" },
    { "displayName": "Position",     "rank": 3, "value": "VALUE" },
    { "displayName": "Company Name", "rank": 4, "value": "VALUE" },
    { "displayName": "Country",      "rank": 5, "value": "VALUE" }
  ]
}

I would like to have a CSV file in this format, where the headers come from the value of a column's displayName and the data in the column is the singular value key's value:

First Name, Last Name, Position, Company Name, Country
VALUE, VALUE, VALUE, VALUE, VALUE

Is this possible by using only jq? I don't have any programming skills.
jq has a filter, @csv, for converting an array to a CSV string. This filter takes into account most of the complexities associated with the CSV format, beginning with commas embedded in fields. (jq 1.5 has a similar filter, @tsv, for generating tab-separated-value files.) Of course, if the headers and values are all guaranteed to be free of commas and double quotation marks, then there may be no need to use the @csv filter. Otherwise, it would probably be better to use it. For example, if the 'Company Name' were 'Smith, Smith and Smith', and if the other values were as shown below, invoking jq with the "-r" option would produce valid CSV:

$ jq -r '.data | map(.displayName), map(.value) | @csv' so.json2csv.json
"First Name","Last Name","Position","Company Name","Country"
"John (""Johnnie"")","Doe","Director, Planning and Posterity","Smith, Smith and Smith","Transylvania"
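Applied to the question's file as given (every value literally VALUE), the same filter produces exactly the requested layout; file.json below stands for whatever the input file is actually called:

$ jq -r '.data | map(.displayName), map(.value) | @csv' file.json
"First Name","Last Name","Position","Company Name","Country"
"VALUE","VALUE","VALUE","VALUE","VALUE"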
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88962/" ] }
163,848
I have a legitimate need to empty a directory to which my account does not have access (it is a cache directory used by an application, and is owned by the account that runs the application.) My account has full sudo privileges (I think), but I can't figure out how to delete the contents of the directory in question. My account can't cd into the directory, only into the parent directory. sudo rm directory/* gives the response cannot remove `directory/*': no such file or directory This seems like a problem that should have a simple answer, but I can't find it. Edit: The directory in question is definitely not empty: sudo ls directory/ returns over 1000 filenames.
You're having problems because your shell is trying to expand * into the list of files, but it can't, since you don't have rights to read the directory. I can think of two things that would work:

sudo bash -c "rm directory/*"

In this case, the * isn't expanded by you, but by root, who can read the directory. OR:

sudo find directory -type f -exec rm {} \;

The above will only delete files, not directories (otherwise it would delete directory along with its contents), but it feels less error-prone to me. Edit: In the first option, I had accidentally written directory.* instead of directory/*
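With GNU find you can also let find do the deleting itself; -mindepth 1 spares the top-level directory, while everything inside it, files and subdirectories alike, is removed:

sudo find directory -mindepth 1 -delete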
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163848", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48100/" ] }
163,865
I've purchased an SSL certificate and I'm trying to set it up on a server. The port is forwarded in the router to the server, and I believe that the SSL certificate is installed correctly (apache starts OK). I opened the port in the iptables firewall, but when I list the listening ports, I don't see anything listening on port 443. I went over my configuration (default Debian 7 with LAMP server) and I have the following in my ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default
# This is also true if you have upgraded from before 2.2.9-3 (i.e. from
# Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
# README.Debian.gz

NameVirtualHost *:80
Listen 80

<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to <VirtualHost *:443>
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
    NameVirtualHost *:443
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>

And in sites-enabled I have a file called default-ssl containing (it's quite long, I'll just add the host data, not the entire ssl file options, unless someone thinks it could help):

<IfModule mod_ssl.c>
<VirtualHost _default_:443>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
#       Order allow,deny
#       allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
    # SSL Engine Switch:
    # Enable/Disable SSL for this virtual host.
    SSLEngine on
    # A self-signed (snakeoil) certificate can be created by installing
    # the ssl-cert package. See
    # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
    # If both key and certificate are stored in the same file, only the
    # SSLCertificateFile directive is needed.
    SSLCertificateFile    /etc/ssl/dev.webmark.co.il/dev_webmark_co_il.pem
    SSLCertificateKeyFile /etc/ssl/dev.webmark.co.il/dev.webmark.co.il.key
    # Server Certificate Chain:
    # Point SSLCertificateChainFile at a file containing the
    # concatenation of PEM encoded CA certificates which form the
    # certificate chain for the server certificate. Alternatively
    # the referenced file can be the same as SSLCertificateFile
    # when the CA certificates are directly appended to the server
    # certificate for convinience.
    #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
    # Certificate Authority (CA):
    # Set the CA certificate verification path where to find CA
    # certificates for client authentication or alternatively one
    # huge file containing all of them (file must be PEM encoded)
    # Note: Inside SSLCACertificatePath you need hash symlinks
    # to point to the certificate files. Use the provided

So I apologize for the very long post, just thought this is relevant information. I think the ports.conf file enables the listener on 443, but I don't know why it doesn't.
When I list the listening ports:

netstat -a | egrep 'Proto|LISTEN'

I get

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.10:mysql         *:*                     LISTEN
tcp        0      0 *:35563                 *:*                     LISTEN
tcp        0      0 *:sunrpc                *:*                     LISTEN
tcp        0      0 localhost:61619         *:*                     LISTEN
tcp        0      0 *:61620                 *:*                     LISTEN
tcp        0      0 *:ftp                   *:*                     LISTEN
tcp        0      0 *:ssh                   *:*                     LISTEN
tcp        0      0 10.0.0.10:8888          *:*                     LISTEN
tcp        0      0 localhost:smtp          *:*                     LISTEN
tcp        0      0 *:27017                 *:*                     LISTEN
tcp6       0      0 [::]:sunrpc             [::]:*                  LISTEN
tcp6       0      0 [::]:http               [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
tcp6       0      0 localhost:smtp          [::]:*                  LISTEN
tcp6       0      0 [::]:https              [::]:*                  LISTEN
tcp6       0      0 [::]:55644              [::]:*                  LISTEN
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     7400     /tmp/mongodb-27017.sock
unix  2      [ ACC ]     STREAM     LISTENING     7444     /var/run/dbus/system_bus_socket
unix  2      [ ACC ]     STREAM     LISTENING     7215     /var/run/rpcbind.sock
unix  2      [ ACC ]     SEQPACKET  LISTENING     3434     /run/udev/control
unix  2      [ ACC ]     STREAM     LISTENING     7351     /var/run/acpid.socket
unix  2      [ ACC ]     STREAM     LISTENING     7624     /var/run/mysqld/mysql

I'm pretty sure that the condition is true. I hope I gave all the relevant information and not too much of it. Thanks for your time reading this. Yan

Edit: in order to make sure mod_ssl is running, I used

apache2ctl -M

which resulted with:

Loaded Modules:
 core_module (static)
 log_config_module (static)
 logio_module (static)
 version_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 alias_module (shared)
 auth_basic_module (shared)
 authn_file_module (shared)
 authz_default_module (shared)
 authz_groupfile_module (shared)
 authz_host_module (shared)
 authz_user_module (shared)
 autoindex_module (shared)
 cgi_module (shared)
 deflate_module (shared)
 dir_module (shared)
 env_module (shared)
 headers_module (shared)
 mime_module (shared)
 ssl_module (shared)
 negotiation_module (shared)
 php5_module (shared)
 reqtimeout_module (shared)
 rewrite_module (shared)
 setenvif_module (shared)
 status_module (shared)
Syntax OK

/edit
Did you enable mod_ssl? Since you're running Debian, this is the way to do it (run as root, or via sudo):

a2enmod ssl
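If the module turns out to be enabled already (as the apache2ctl -M output in the question suggests), also make sure the SSL virtual host itself is enabled, then restart Apache; on Debian that would be:

a2ensite default-ssl
service apache2 restart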
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88793/" ] }
163,872
I am setting up a server where there are multiple developers working on multiple applications. I have figured out how to give certain developers shared access to the necessary application directories using the setgid bit and default ACLs to give anyone in a group access. Many of these applications run under a terminal while in development, for easy access. When I work alone, I set up a user for an application and run screen as that user. This has the downside that every developer who uses the screen session needs to know the password, and it is harder to keep user and application accounts separate. One way that could work is using screen's multiuser features. They do not work out of the box, however: screen complains about needing suid root. Does giving that have any downsides? I am pretty careful about using suid root anything. Maybe there is a reason why it isn't the default? Should I do it with screen, or is there some other intelligent way of doing what I want?
Yes, you can do it with screen, which has multiuser support. First, create a new session:

screen -d -m -S multisession

Attach to it:

screen -r multisession

Turn on multiuser support: press Ctrl-a and type (NOTE: Ctrl-a is needed just before each single command, i.e. twice here):

:multiuser on
:acladd USER   ← use the username of the user you want to give access to your screen

Now press Ctrl-a d and list the sessions:

$ screen -ls
There is a screen on:
        4791.multisession       (Multi, detached)

You now have a multiuser screen session. Give the name multisession to the acl'd user, so he can attach to it:

screen -x youruser/multisession

And that's it. The only drawback is that screen must run as suid root. But as far as I know, that is the default, normal situation. Another option is to do:

screen -S $screen_id -X multiuser on
screen -S $screen_id -X acladd authorized_user

Hope this helps.
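If screen on your system is not yet setuid root (which the OP's error message suggests), the usual fix looks like the following sketch; the binary and socket-directory paths vary between distributions, so treat them as assumptions:

chmod u+s /usr/bin/screen
chmod 755 /var/run/screen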
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/163872", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17737/" ] }
163,892
Introduction: I just installed tmux onto my CentOS 7 machine, but I have encountered a bewildering issue. Whenever I attempt to start up a session using tmux or tmux new -s session-name, it outputs a random string of characters into my prompt, and fails to start.

$ tmux
$ 1;2c

I have no idea what to even make of this.

tmux.conf: My config for tmux is located in ~/.tmux.conf and is configured as follows.

setw -g mode-keys vi

# 12 hour clock
set-window-option -g clock-mode-style 12

# reload tmux.conf
bind r source-file ~/.tmux.conf \; display-message " ✱ ~/.tmux.conf is reloaded"

I also noticed a few error logs were generated, and they are as follows.

tmux-client-6310.log

got 18 from server
got 3 from server

tmux-server-6312.log

server started, pid 6312
socket path /tmp/tmux-1000/default
new client 8
loading /etc/tmux.conf
/etc/tmux.conf: #Prefix is Ctrl-a
/etc/tmux.conf: set -g prefix C-a
/etc/tmux.conf: bind C-a send-prefix
/etc/tmux.conf: unbind C-b
/etc/tmux.conf: 
/etc/tmux.conf: set -sg escape-time 1
/etc/tmux.conf: set -g base-index 1
/etc/tmux.conf: setw -g pane-base-index 1
/etc/tmux.conf: 
/etc/tmux.conf: #Mouse works as expected
/etc/tmux.conf: setw -g mode-mouse on
/etc/tmux.conf: set -g mouse-select-pane on
/etc/tmux.conf: set -g mouse-resize-pane on
/etc/tmux.conf: set -g mouse-select-window on
/etc/tmux.conf: 
/etc/tmux.conf: setw -g monitor-activity on
/etc/tmux.conf: set -g visual-activity on
/etc/tmux.conf: 
/etc/tmux.conf: set -g mode-keys vi
/etc/tmux.conf: set -g history-limit 10000
/etc/tmux.conf: 
/etc/tmux.conf: # y and p as in vim
/etc/tmux.conf: bind Escape copy-mode
/etc/tmux.conf: unbind p
/etc/tmux.conf: bind p paste-buffer
/etc/tmux.conf: bind -t vi-copy 'v' begin-selection
/etc/tmux.conf: bind -t vi-copy 'y' copy-selection
/etc/tmux.conf: bind -t vi-copy 'Space' halfpage-down
/etc/tmux.conf: bind -t vi-copy 'Bspace' halfpage-up
/etc/tmux.conf: 
/etc/tmux.conf: # extra commands for interacting with the ICCCM clipboard
/etc/tmux.conf: bind C-c run "tmux save-buffer - | xclip -i -sel clipboard"
/etc/tmux.conf: bind C-v run "tmux set-buffer \"$(xclip -o -sel clipboard)\"; tmux paste-buffer"
/etc/tmux.conf: 
/etc/tmux.conf: # easy-to-remember split pane commands
/etc/tmux.conf: bind | split-window -h
/etc/tmux.conf: bind - split-window -v
/etc/tmux.conf: unbind '"'
/etc/tmux.conf: unbind %
/etc/tmux.conf: 
/etc/tmux.conf: # moving between panes with vim movement keys
/etc/tmux.conf: bind h select-pane -L
/etc/tmux.conf: bind j select-pane -D
/etc/tmux.conf: bind k select-pane -U
/etc/tmux.conf: bind l select-pane -R
/etc/tmux.conf: 
/etc/tmux.conf: # moving between windows with vim movement keys
/etc/tmux.conf: bind -r C-h select-window -t :-
/etc/tmux.conf: bind -r C-l select-window -t :+
/etc/tmux.conf: 
/etc/tmux.conf: # resize panes with vim movement keys
/etc/tmux.conf: bind -r H resize-pane -L 5
/etc/tmux.conf: bind -r J resize-pane -D 5
/etc/tmux.conf: bind -r K resize-pane -U 5
/etc/tmux.conf: bind -r L resize-pane -R 5
/etc/tmux.conf: 
/etc/tmux.conf: # I'm not hardcore enough for military time
/etc/tmux.conf: set-window-option -g clock-mode-style 12
/etc/tmux.conf: 
/etc/tmux.conf: # reload tmux.conf
/etc/tmux.conf: bind r source-file /etc/tmux.conf \; display-message " ✱ ~/.tmux.conf is reloaded"
/etc/tmux.conf: 
/etc/tmux.conf: # tmux is so slow by default (this allows for faster key repetition)
/etc/tmux.conf: set -sg escape-time 190
cmdq 0x6afde0: set-option -g prefix C-a (client -1)
cmdq 0x6afde0: bind-key C-a send-prefix (client -1)
cmdq 0x6afde0: unbind-key C-b (client -1)
cmdq 0x6afde0: set-option -gs escape-time 1
(client -1)cmdq 0x6afde0: set-option -g base-index 1 (client -1)cmdq 0x6afde0: set-window-option -g pane-base-index 1 (client -1)cmdq 0x6afde0: set-window-option -g mode-mouse on (client -1)cmdq 0x6afde0: set-option -g mouse-select-pane on (client -1)cmdq 0x6afde0: set-option -g mouse-resize-pane on (client -1)cmdq 0x6afde0: set-option -g mouse-select-window on (client -1)cmdq 0x6afde0: set-window-option -g monitor-activity on (client -1)cmdq 0x6afde0: set-option -g visual-activity on (client -1)cmdq 0x6afde0: set-option -g mode-keys vi (client -1)cmdq 0x6afde0: set-option -g history-limit 10000 (client -1)cmdq 0x6afde0: bind-key Escape copy-mode (client -1)cmdq 0x6afde0: unbind-key p (client -1)cmdq 0x6afde0: bind-key p paste-buffer (client -1)cmdq 0x6afde0: bind-key -t vi-copy v begin-selection (client -1)cmdq 0x6afde0: bind-key -t vi-copy y copy-selection (client -1)cmdq 0x6afde0: bind-key -t vi-copy Space halfpage-down (client -1)cmdq 0x6afde0: bind-key -t vi-copy Bspace halfpage-up (client -1)cmdq 0x6afde0: bind-key C-c run "tmux save-buffer - | xclip -i -sel clipboard" (client -1)cmdq 0x6afde0: bind-key C-v run "tmux set-buffer "$(xclip -o -sel clipboard)"; tmux paste-buffer" (client -1)cmdq 0x6afde0: bind-key | split-window -h (client -1)cmdq 0x6afde0: bind-key - split-window -v (client -1)cmdq 0x6afde0: unbind-key " (client -1)cmdq 0x6afde0: unbind-key % (client -1)cmdq 0x6afde0: bind-key h select-pane -L (client -1)cmdq 0x6afde0: bind-key j select-pane -D (client -1)cmdq 0x6afde0: bind-key k select-pane -U (client -1)cmdq 0x6afde0: bind-key l select-pane -R (client -1)cmdq 0x6afde0: bind-key -r C-h select-window -t :- (client -1)cmdq 0x6afde0: bind-key -r C-l select-window -t :+ (client -1)cmdq 0x6afde0: bind-key -r H resize-pane -L 5 (client -1)cmdq 0x6afde0: bind-key -r J resize-pane -D 5 (client -1)cmdq 0x6afde0: bind-key -r K resize-pane -U 5 (client -1)cmdq 0x6afde0: bind-key -r L resize-pane -R 5 (client -1)cmdq 0x6afde0: set-window-option -g clock-mode-style 12 (client -1)cmdq 0x6afde0: bind-key r source-file /etc/tmux.conf ; display-message " ✱ ~/.tmux.conf is reloaded" (client -1)cmdq 0x6afde0: set-option -gs escape-time 190 (client -1)got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 6 from client 8got 0 from client 8cmdq 0x6af9d0: new-session (client 8)new term: xterm-256colorxterm-256color override: colors 256xterm-256color override: XT xterm-256color override: Ms ]52;%p1%s;%p2%sxterm-256color override: Cc ]12;%p1%sxterm-256color override: Cr ]112xterm-256color override: Cs [%p1%d qxterm-256color override: Csr [2 qnew key Oo: 0x1021 (KP/)new key Oj: 0x1022 (KP*)new key Om: 0x1023 (KP-)new key Ow: 0x1024 (KP7)new key Ox: 0x1025 (KP8)new key Oy: 0x1026 (KP9)new key Ok: 0x1027 (KP+)new key Ot: 0x1028 (KP4)new key Ou: 0x1029 (KP5)new key Ov: 0x102a (KP6)new key Oq: 0x102b (KP1)new key Or: 0x102c (KP2)new key Os: 0x102d (KP3)new key OM: 0x102e (KPEnter)new key Op: 0x102f (KP0)new key On: 0x1030 (KP.)new key OA: 0x101d (Up)new key OB: 0x101e (Down)new key OC: 0x1020 (Right)new key OD: 0x101f (Left)new key [A: 0x101d (Up)new key [B: 0x101e (Down)new key [C: 0x1020 (Right)new key [D: 0x101f (Left)new key OH: 0x1018 (Home)new key 
OF: 0x1019 (End)new key [H: 0x1018 (Home)new key [F: 0x1019 (End)new key Oa: 0x501d (C-Up)new key Ob: 0x501e (C-Down)new key Oc: 0x5020 (C-Right)new key Od: 0x501f (C-Left)new key [a: 0x901d (S-Up)new key [b: 0x901e (S-Down)new key [c: 0x9020 (S-Right)new key [d: 0x901f (S-Left)new key [11^: 0x5002 (C-F1)new key [12^: 0x5003 (C-F2)new key [13^: 0x5004 (C-F3)new key [14^: 0x5005 (C-F4)new key [15^: 0x5006 (C-F5)new key [17^: 0x5007 (C-F6)new key [18^: 0x5008 (C-F7)new key [19^: 0x5009 (C-F8)new key [20^: 0x500a (C-F9)new key [21^: 0x500b (C-F10)new key [23^: 0x500c (C-F11)new key [24^: 0x500d (C-F12)new key [25^: 0x500e (C-F13)new key [26^: 0x500f (C-F14)new key [28^: 0x5010 (C-F15)new key [29^: 0x5011 (C-F16)new key [31^: 0x5012 (C-F17)new key [32^: 0x5013 (C-F18)new key [33^: 0x5014 (C-F19)new key [34^: 0x5015 (C-F20)new key [2^: 0x5016 (C-IC)new key [3^: 0x5017 (C-DC)new key [7^: 0x5018 (C-Home)new key [8^: 0x5019 (C-End)new key [6^: 0x501a (C-NPage)new key [5^: 0x501b (C-PPage)new key [11$: 0x9002 (S-F1)new key [12$: 0x9003 (S-F2)new key [13$: 0x9004 (S-F3)new key [14$: 0x9005 (S-F4)new key [15$: 0x9006 (S-F5)new key [17$: 0x9007 (S-F6)new key [18$: 0x9008 (S-F7)new key [19$: 0x9009 (S-F8)new key [20$: 0x900a (S-F9)new key [21$: 0x900b (S-F10)new key [23$: 0x900c (S-F11)new key [24$: 0x900d (S-F12)new key [25$: 0x900e (S-F13)new key [26$: 0x900f (S-F14)new key [28$: 0x9010 (S-F15)new key [29$: 0x9011 (S-F16)new key [31$: 0x9012 (S-F17)new key [32$: 0x9013 (S-F18)new key [33$: 0x9014 (S-F19)new key [34$: 0x9015 (S-F20)new key [2$: 0x9016 (S-IC)new key [3$: 0x9017 (S-DC)new key [7$: 0x9018 (S-Home)new key [8$: 0x9019 (S-End)new key [6$: 0x901a (S-NPage)new key [5$: 0x901b (S-PPage)new key [11@: 0xd002 (C-S-F1)new key [12@: 0xd003 (C-S-F2)new key [13@: 0xd004 (C-S-F3)new key [14@: 0xd005 (C-S-F4)new key [15@: 0xd006 (C-S-F5)new key [17@: 0xd007 (C-S-F6)new key [18@: 0xd008 (C-S-F7)new key [19@: 0xd009 (C-S-F8)new key [20@: 0xd00a (C-S-F9)new key [21@: 0xd00b (C-S-F10)new key [23@: 0xd00c (C-S-F11)new key [24@: 0xd00d (C-S-F12)new key [25@: 0xd00e (C-S-F13)new key [26@: 0xd00f (C-S-F14)new key [28@: 0xd010 (C-S-F15)new key [29@: 0xd011 (C-S-F16)new key [31@: 0xd012 (C-S-F17)new key [32@: 0xd013 (C-S-F18)new key [33@: 0xd014 (C-S-F19)new key [34@: 0xd015 (C-S-F20)new key [2@: 0xd016 (C-S-IC)new key [3@: 0xd017 (C-S-DC)new key [7@: 0xd018 (C-S-Home)new key [8@: 0xd019 (C-S-End)new key [6@: 0xd01a (C-S-NPage)new key [5@: 0xd01b (C-S-PPage)new key [I: 0x1031 ((null))new key [O: 0x1032 ((null))new key OP: 0x1002 (F1)new key OQ: 0x1003 (F2)new key OR: 0x1004 (F3)new key OS: 0x1005 (F4)new key [15~: 0x1006 (F5)new key [17~: 0x1007 (F6)new key [18~: 0x1008 (F7)new key [19~: 0x1009 (F8)new key [20~: 0x100a (F9)new key [21~: 0x100b (F10)new key [23~: 0x100c (F11)new key [24~: 0x100d (F12)new key [1;2P: 0x100e (F13)new key [1;2Q: 0x100f (F14)new key [1;2R: 0x1010 (F15)new key [1;2S: 0x1011 (F16)new key [15;2~: 0x1012 (F17)new key [17;2~: 0x1013 (F18)new key [18;2~: 0x1014 (F19)new key [19;2~: 0x1015 (F20)new key [2~: 0x1016 (IC)new key [3~: 0x1017 (DC)replacing key OH: 0x1018 (Home)replacing key OF: 0x1019 (End)new key [6~: 0x101a (NPage)new key [5~: 0x101b (PPage)new key [Z: 0x101c (BTab)replacing key OA: 0x101d (Up)replacing key OB: 0x101e (Down)replacing key OD: 0x101f (Left)replacing key OC: 0x1020 (Right)new key [3;2~: 0x9017 (S-DC)new key [3;3~: 0x3017 (M-DC)new key [3;4~: 0xb017 (M-S-DC)new key [3;5~: 0x5017 (C-DC)new key [3;6~: 0xd017 (C-S-DC)new key [3;7~: 0x7017 (C-M-DC)new key [1;2B: 0x901e 
(S-Down)
new key [1;3B: 0x301e (M-Down)
new key [1;4B: 0xb01e (M-S-Down)
new key [1;5B: 0x501e (C-Down)
new key [1;6B: 0xd01e (C-S-Down)
new key [1;7B: 0x701e (C-M-Down)
new key [1;2F: 0x9019 (S-End)
new key [1;3F: 0x3019 (M-End)
new key [1;4F: 0xb019 (M-S-End)
new key [1;5F: 0x5019 (C-End)
new key [1;6F: 0xd019 (C-S-End)
new key [1;7F: 0x7019 (C-M-End)
new key [1;2H: 0x9018 (S-Home)
new key [1;3H: 0x3018 (M-Home)
new key [1;4H: 0xb018 (M-S-Home)
new key [1;5H: 0x5018 (C-Home)
new key [1;6H: 0xd018 (C-S-Home)
new key [1;7H: 0x7018 (C-M-Home)
new key [2;2~: 0x9016 (S-IC)
new key [2;3~: 0x3016 (M-IC)
new key [2;4~: 0xb016 (M-S-IC)
new key [2;5~: 0x5016 (C-IC)
new key [2;6~: 0xd016 (C-S-IC)
new key [2;7~: 0x7016 (C-M-IC)
new key [1;2D: 0x901f (S-Left)
new key [1;3D: 0x301f (M-Left)
new key [1;4D: 0xb01f (M-S-Left)
new key [1;5D: 0x501f (C-Left)
new key [1;6D: 0xd01f (C-S-Left)
new key [1;7D: 0x701f (C-M-Left)
new key [6;2~: 0x901a (S-NPage)
new key [6;3~: 0x301a (M-NPage)
new key [6;4~: 0xb01a (M-S-NPage)
new key [6;5~: 0x501a (C-NPage)
new key [6;6~: 0xd01a (C-S-NPage)
new key [6;7~: 0x701a (C-M-NPage)
new key [5;2~: 0x901b (S-PPage)
new key [5;3~: 0x301b (M-PPage)
new key [5;4~: 0xb01b (M-S-PPage)
new key [5;5~: 0x501b (C-PPage)
new key [5;6~: 0xd01b (C-S-PPage)
new key [5;7~: 0x701b (C-M-PPage)
new key [1;2C: 0x9020 (S-Right)
new key [1;3C: 0x3020 (M-Right)
new key [1;4C: 0xb020 (M-S-Right)
new key [1;5C: 0x5020 (C-Right)
new key [1;6C: 0xd020 (C-S-Right)
new key [1;7C: 0x7020 (C-M-Right)
new key [1;2A: 0x901d (S-Up)
new key [1;3A: 0x301d (M-Up)
new key [1;4A: 0xb01d (M-S-Up)
new key [1;5A: 0x501d (C-Up)
new key [1;6A: 0xd01d (C-S-Up)
new key [1;7A: 0x701d (C-M-Up)
spawn: /bin/bash --
session 0 destroyed
writing 18 to client 8
writing 3 to client 8
lost client 8

Some of the things in this log were from a previous version of my .tmux.conf. Any ideas?

Edit #1: After reading @jasonwryan's answer, I read through the Sourceforge page for tmux and read about the TERM environment setting being a potential issue. My current value for $TERM was as follows:

$ echo $TERM
xterm-256color

I tried running the following commands to try to change it.

$ export TERM=screen
$ tmux
$ echo $TERM
screen

After achieving the same results as before, I re-exported my $TERM value to xterm-256color.

Edit #2: Running tmux sessions as the root user works fine; however, using tmux as any unprivileged user will always result in the above issues.
I also had this problem with CentOS 7 and the bundled tmux binary. It turns out I had to put my user in the tty group:

# /etc/group
tty:x:5:<username>

I had to do this even though my ptmx permissions look like this:

crw-rw-rw- 1 root tty 5, 2 Dec  9 23:17 /dev/ptmx
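Rather than editing /etc/group by hand, the same change can be made with usermod; log out and back in afterwards for the new group membership to take effect:

sudo usermod -aG tty <username>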
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47463/" ] }
163,898
The issue: I need to assign a variable a value that is decently long. All the lines of my script must be under a certain number of columns. So, I am trying to assign it using more than one line. It's simple to do without indents:

VAR="This displays without \
any issues."
echo "${VAR}"

Result:

This displays without any issues.

However with indents:

VAR="This displays with \
    extra spaces."
echo "${VAR}"

Result:

This displays with     extra spaces.

How can I elegantly assign it without these spaces?
Here the issue is that you are surrounding the variable with double quotes (""). Remove them and things will work fine:

VAR="This displays with \
    extra spaces."
echo ${VAR}

Output:

This displays with extra spaces.

The point is that double quoting a variable preserves all whitespace characters. This can be used in case you explicitly need it. For example,

$ echo "Hello    World       ........    ...     ...."

will print

Hello    World       ........    ...     ....

And on removing the quotes, it's different:

$ echo Hello    World       ........    ...     ....
Hello World ........ ... ....

Here Bash removes the extra spaces in the text because in the first case the entire text is taken as a "single" argument and thus the extra spaces are preserved. But in the second case, the echo command receives the text as 5 arguments.

Quoting a variable will also be helpful while passing arguments to commands. In the below command, echo gets only a single argument, "Hello World":

$ variable="Hello World"
$ echo "$variable"

But in the below scenario, echo gets two arguments, Hello and World:

$ variable="Hello World"
$ echo $variable
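If you'd rather keep the quotes (and with them any intentional runs of spaces inside the value), bash's += append operator lets you split the assignment instead of the line:

VAR="This displays with "
VAR+="no extra spaces."
echo "${VAR}"

This avoids relying on word splitting entirely.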
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/163898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88594/" ] }
163,911
I was trying to install Matlab, and used the following commands:

# mkdir -p /mnt/disk
# mount -o loop R2014a_UNIX.iso /mnt/disk
# cd /mnt/disk
# ls -l
# ./install

However, I don't want it in a directory called disk, I want it in a directory called matlab, using:

mkdir /mnt/matlab
mount -o ro,loop ./R2014a_UNIX.iso /mnt/matlab
/mnt/matlab/install
umount /mnt/matlab

(Out of interest, is this the best place to install it?) However, I am unable to uninstall or remove /mnt or /mnt/disk as they have read-only privileges. My searching and attempts with chown, rmdir and rm -r have not helped yet. Please could you help me out.
The best place to install additional software packages on Linux is /opt/. So create a directory for MATLAB there and install it:

# mkdir /opt/matlab
# mount -o ro,loop ./R2014a_UNIX.iso /media/cdrom
# /media/cdrom/install
# umount /media/cdrom

As your installer is in the form of an ISO image, mount it in /media/cdrom. The installer /media/cdrom/install should ask you for the installation location; specify it as /opt/matlab. Once things are done, set the PATH environment variable appropriately so that the MATLAB binaries are accessible without their absolute path.

Why do you want to remove the /mnt/ directory? That directory is important; it is part of the Filesystem Hierarchy Standard. But in case you still want to delete it, log in as the root user and enter rm -rf /mnt and it should get deleted.
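For the PATH step, assuming the installer places its launch scripts in /opt/matlab/bin (a common layout, but check where yours actually ends up), a line like this in ~/.bashrc would do:

export PATH="$PATH:/opt/matlab/bin"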
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89001/" ] }
163,929
I started to study Solaris in my university. Teacher told me to remove a file with mv command. I tried to move a file to /dev/null. But that did not work. How can I remove file with mv command?
mv other-file file-to-be-deleted
mv file-to-be-deleted other-file

The first mv clobbers file-to-be-deleted by renaming other-file over it; the second renames it back. The net effect is that other-file is unchanged and file-to-be-deleted (and its data) is gone.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87434/" ] }
163,949
I have a directory with 10,000 JPEGs. I made a mistake while generating the files and now some are 720px × 480px, but they should all be 448px × 336px. The directory contains both the too large files and the correctly sized 448px × 336px. They need to be all the same size, so I need to scale the 720px × 480px images down to the correct 448px × 336px. Every file must be 448px × 336px. Since there are 10,000 files, it is difficult to check each one to see which are too big. Is there some way to use ImageMagick or a similar batch tool to selectively only resize those wrong-sized images?
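One way to do this with ImageMagick, as a sketch rather than a definitive recipe (mogrify rewrites files in place, so test on copies first): identify reports each image's dimensions, and only the wrongly sized files get resized, with ! forcing the exact target geometry.

for f in *.jpg; do
    # resize only the files that came out at the wrong size
    if [ "$(identify -format '%wx%h' "$f")" = "720x480" ]; then
        mogrify -resize '448x336!' "$f"
    fi
done

If the oversized images shared the target aspect ratio, the loop wouldn't be needed at all: mogrify -resize '448x336>' *.jpg uses ImageMagick's > geometry flag, which resizes only images larger than the given size.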
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13099/" ] }
163,955
Yesterday I read this SO comment which says that in the shell (at least bash) >&- "has the same result as" >/dev/null. That comment actually refers to the ABS guide as the source of its information. But that source says that the >&- syntax "closes file descriptors". It is not clear to me whether the two actions of closing a file descriptor and redirecting it to the null device are totally equivalent. So my question is: are they? On the surface of it, it seems that closing a descriptor is like closing a door, but redirecting it to a null device is opening a door to limbo! The two don't seem exactly the same to me, because if I see a closed door I won't try to throw anything out of it, but if I see an open door I will assume I can. In other words, I have always wondered if >/dev/null means that cat mybigfile >/dev/null would actually process every byte of the file and write it to /dev/null, which forgets it. On the other hand, if the shell encounters a closed file descriptor, I tend to think (but am not sure) that it will simply not write anything, though the question remains whether cat will still read every byte. This comment says >&- and >/dev/null "should" be the same, but that is not so resounding an answer to me. I'd like a more authoritative answer, with some reference to the standard or source code...
No, you certainly don't want to close file descriptors 0, 1 and 2. If you do so, the first time the application opens a file, it will become stdin/stdout/stderr... For instance, if you do:

echo text | tee file >&-

when tee (at least some implementations, like busybox's) opens the file for writing, it will be open on file descriptor 1 (stdout). So tee will write text twice into file:

$ echo text | strace tee file >&-
[...]
open("file", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 1
read(0, "text\n", 8193)                 = 5
write(1, "text\n", 5)                   = 5
write(1, "text\n", 5)                   = 5
read(0, "", 8193)                       = 0
exit_group(0)                           = ?

That has been known to cause security vulnerabilities. For instance:

chsh 2>&-

And chsh (a setuid application) may end up writing error messages in /etc/passwd. Some tools and even some libraries try to guard against that. For instance, GNU tee will move the file descriptor to one above 2 if the files it opens for writing are assigned 0, 1, 2, while busybox tee won't. Most tools, if they can't write to stdout (because for instance it's not open), will report an error message on stderr (in the language of the user, which means extra processing to open and parse localisation files...), so it will be significantly less efficient, and possibly cause the program to fail. In any case, it won't be more efficient. The program will still do a write() system call. It could only be more efficient if the program gave up writing to stdout/stderr after the first failing write() system call, but programs generally don't do that. They generally either exit with an error or keep on trying.
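A quick way to see the asymmetry the question asks about: redirecting to /dev/null succeeds silently (every byte is read and written), while a closed stdout makes the very first write() fail, which cat reports and gives up on; the exact error text varies by implementation:

$ cat mybigfile >/dev/null
$ cat mybigfile >&-
cat: write error: Bad file descriptor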
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/163955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54067/" ] }
163,970
I have a file similar to:

abcdef
123
ghiflk
234
sfhskdhf
483

I would like to search for a string, and the output should include the next (and/or previous) line as well. So for example:

grep "bcd" myfile

Output:

abcdef
123

Or show the previous line:

grep "ifl" myfile

Output:

123
ghiflk

I tried for days now to find a solution online but can't seem to figure it out.
The GNU and BSD grep utilities have a -A option for lines after a match and a -B option for lines before a match. Thus, you can do something like:

$ grep -A 1 bcd myfile
abcdef
123

to show the line after the match and

$ grep -B 1 ifl myfile
123
ghiflk

to show the line preceding the match. Finally, you can use -C to show N lines both before and after the match:

$ grep -C 1 ifl myfile
123
ghiflk
234

If your version of grep doesn't support this feature, you could always try some other methods. Using POSIX grep and sed options along with the shell:

n=$(grep -n bcd myfile | cut -d':' -f1); sed -n "${n},$((n + 1))p" myfile

Using AWK:

awk '/bcd/ {print; getline; print}' myfile

And many, many more.
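The awk example prints the matching line plus the following one; to get the preceding line instead, a one-line buffer works in any awk:

awk '/ifl/ {print prev; print} {prev=$0}' myfile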
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163970", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89041/" ] }
164,005
I want to boot to console instead of a GUI using systemd . How can I do that?
Open a terminal and (as root) run:

systemctl set-default multi-user.target

or with --force:

systemctl set-default -f multi-user.target

to overwrite any existing conflicting symlinks. Double-check with:

systemctl get-default

Another way is to add the following parameter to your kernel boot line:

systemd.unit=multi-user.target
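To return to booting into the GUI later, point the default back at the graphical target; and to switch targets on a running system without rebooting, use isolate:

systemctl set-default graphical.target
systemctl isolate multi-user.target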
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/164005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
164,025
I have several files with the same base filename. I'd like to remove all but one foo.org #keepfoo.tex #deletefoo.fls #deletefoo.bib #deleteetc If I didn't need to keep one, I know I could use rm foo.* . TLDP demonstrates ^ to negate a match. Through trial and error, I was able to find that rm foo.*[^org] does what I need, but I don't really understand the syntax. Also, while not a limitation in my use case, I think this pattern also ignores foo.o and foo.or . How does this pattern work, and what would a glob that ignores only foo.org look like?
    shopt -s extglob
    echo rm foo.!(org)

This is "foo." followed by anything NOT "org".

ref: https://www.gnu.org/software/bash/manual/bashref.html#Pattern-Matching
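A quick way to sanity-check such a pattern before deleting anything is to let the shell expand it with ls or echo first; a sketch, assuming the four files above exist:

    shopt -s extglob
    ls foo.*
    foo.bib  foo.fls  foo.org  foo.tex
    echo foo.!(org)
    foo.bib foo.fls foo.tex

As for the original foo.*[^org]: it matches "foo.", then anything, then one final character that is not o, r or g. That is why it skips foo.org, but also foo.o and foo.or, whose last characters fall inside the bracketed set.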
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/164025", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67205/" ] }
164,035
How do I do this?

    if [ -f town-*.swf ]
    then
    mkdir towns
    fi

This checks if town-*.swf exists, but I also need it to look for city-*.swf among others, so I need something like this:

    if [ -f town-*.swf, city-*.swf ]
    then
    mkdir towns
    fi
POSIXly, you can use ls:

    if ls town-*.swf >/dev/null 2>&1 && ls city-*.swf >/dev/null 2>&1
    then
        mkdir towns
    fi

or a shorter condition:

    if ls town-*.swf city-*.swf >/dev/null 2>&1

or even, if your shell supports brace expansion:

    if ls {town,city}-*.swf >/dev/null 2>&1
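If you would rather not spawn ls at all, a pure-shell sketch is to test the first expansion of each glob; when a glob matches nothing it stays literal, so the -e test fails:

    exists() { [ -e "$1" ]; }
    if exists town-*.swf && exists city-*.swf
    then
        mkdir towns
    fi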
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
164,125
What I am doing is converting mp3's with LAME. I already have an old script I wrote that works, but I want to add this ability to it: no longer delete the file, but instead save it in a new root folder with sub-directories that match the path that it is already in. This is the code I got off the Internet that I am using in my script to get the path:

    c=$FILENAME
    xpath=${c%/*}
    xbase=${c##*/}
    xfext=${xbase##*.}
    xpref=${xbase%.*}
    path=${xpath}
    pref=${xpref}
    ext=${xfext}

The xpath and path give me the directory structure /home/user/working-root-directory/band-folder/album-name/

Using that technique, how do I script this to get just the band-folder into a separate variable and the album-folder into a separate variable? Then I can use them to create new folders, keeping all the mp3s in band/album order, to put them into a different root folder. This would eliminate me having to move them myself, so the next time I run my script I will not re-sample them again (because they will no longer be in the working directory path/folders) and I will still have a backup copy of my files just in case.
When referring to $c, if by filename you mean full-path to file, then this job is really very easy. There are a few ways to do it.

Using just POSIX shell globs and parameter expansion you can do:

    c='/home/user/working-root-directory/band-folder/album-name/music-file.mp3'
    file=${c##*/}
    album=${c#*"${c%/*/"$file"}"/}
    band=${c#*"${c%/*/"$album"}"/}
    band=${band%%/*} album=${album%%/*}

Basically it works by trimming a string from the tail of the variable and then trimming the results from the head of the variable. In the last line any unwanted remains from the tail are stripped away.

It is perhaps easier to understand with real-world data. If you were to wrap the above snippet in a shell with set -x enabled, you'd see something along these lines printed to stderr:

    + c=/home/user/working-root-directory/band-folder/album-name/music-file.mp3
    + file=music-file.mp3
    + album=album-name/music-file.mp3
    + band=band-folder/album-name/music-file.mp3
    + band=band-folder
    + album=album-name

Another way you might do this is to use the shell's internal field separator to split the pathname on / boundaries.

    set -f; IFS=/            #disable globbing so as not to gen more filenames
    set -- $c                #set the shell's arg array to split path
    shift "$(($#-3))"        #remove all but relevant fields
    band=$1 album=$2 file=$3 #assign values

Here is some more set -x output:

    + c=/home/user/working-root-directory/band-folder/album-name/music-file.mp3
    + set -f
    + IFS=/
    + set -- '' home user working-root-directory band-folder album-name music-file.mp3
    + shift 4
    + band=band-folder
    + album=album-name
    + file=music-file.mp3
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52488/" ] }
164,134
I use Gentoo/Linux. For various reasons, I want to set the resolution of the console, so I rewrote the grub file:

    GRUB_GFXMODE=1366x768
    GRUB_GFXPAYLOAD_LINUX=1366x768

But it doesn't work. When I load the system, the screen stays blank. If I change the value, I can load the system, but nothing changes. Then I tried uvesafb. Although I compiled the kernel, there is no uvesafb; I'm sure that I have chosen the options and have v86d and klibc. What can I do to get this to work? In Ubuntu, I edit grub and initramfs-tools to do it, but in Gentoo that didn't work.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89137/" ] }
164,151
When I installed Backtrack 5 R3, I chose to dual boot, it completely erased my BIOS and installed GNU Grub as the first thing that shows up when I boot up. The "Press escape for startup options" output still shows, but if I press escape, it says "BIOS is missing or corrupted". I would like to boot the Windows installer from my USB stick (testing out Windows 10), remove GNU GRUB, and flash back my BIOS while still being able to dual boot Backtrack 5 R3.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89146/" ] }
164,174
I have an image archive I keep up... no, rather: If I have a large file, I can use ln to make "copies" of it that don't use up extra disk space. But what if I don't want an exact copy of the file? Is there a way to create a new version of a file with some modifications without needing to copy the whole file and use twice the disk space? My motivation is editing id3 tags on mp3 files from a torrent download. I don't want to edit the downloaded files directly because that messes up the seeding, but I also don't want to copy the files and use twice the disk space just to edit some id3 tags in their headers.
If there is no built-in capability in the program that you use to overlay new information in some way over a base file, you have to resolve this on the filesystem level, transparently to the application using the file. Because of your space requirement a revision control system would not suffice, although it provides you with multiple versions.

One thing you can investigate is to store the files on a Btrfs filesystem and have the originals in one "originals" snapshot and the updated versions in a view based on this snapshot. This should work well for ID3v1 tags (as they are at the end of a file) and also for those files that have ID3v2 tags¹, as long as they have enough reserved space for the changes and do not require rewriting of the MP3 file. Thus only the actual blocks changed for a file take extra disk space.

If you add additional files in the originals, you have to make an explicit cp --reflink src dst for all the files added at a later stage. Your downloads would then work with the originals, and your id3 editor (e.g. picard) and your music player with the derived view. Unchanged (or not yet changed) files in that view will look exactly the same as under the originals.

Example (starting with a Btrfs volume on /data0 and a test.mp3 file in /tmp):

    /data0$ btrfs subvolume create /data0/mp3org
    Create subvolume '/data0/mp3org'
    /data0$ cp /tmp/test.mp3 mp3org/
    /data0$ btrfs subvolume snapshot /data0/mp3org/ /data0/id3update
    Create a snapshot of '/data0/mp3org/' in '/data0/id3update'

The file test.mp3 is now available in both directories (mp3org and id3update):

    /data0$ ls -l /data0/mp3org
    total 7600
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:21 test.mp3
    /data0$ ls -l /data0/id3update/
    total 7600
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:21 test.mp3

Change the one in the snapshot:

    /data0$ id3v2 -c "This is a change" id3update/test.mp3
    /data0$ ls -l /data0/mp3org
    total 7600
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:21 test.mp3
    /data0$ ls -l /data0/id3update/
    total 7608
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:24 test.mp3

The file size has not changed, but the contents of the second one did. This is because the comment fit in the space reserved for the ID3v2 tag in the original file.

    /data0$ grep -F "is a change" mp3org/* id3update/*
    Binary file id3update/test.mp3 matches

Copy another file into the original subvolume; it doesn't show up in id3update:

    /data0$ cp /tmp/test.mp3 mp3org/abc.mp3
    /data0$ ls -l mp3org/ id3update/
    id3update/:
    total 7600
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:24 test.mp3
    mp3org/:
    total 15200
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:28 abc.mp3
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:21 test.mp3

Make an explicit reflink copy:

    /data0$ cp --reflink mp3org/abc.mp3 id3update/
    /data0$ ls -l mp3org/ id3update/
    id3update/:
    total 15200
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:29 abc.mp3
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:24 test.mp3
    mp3org/:
    total 15200
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:28 abc.mp3
    -rw-rw-r-- 1 avanderneut users 7781043 Oct 25 15:21 test.mp3

And change the new file:

    /data0$ id3v2 -c "another file change" id3update/abc.mp3
    /data0$ grep -F change mp3org/* id3update/*
    Binary file id3update/abc.mp3 matches
    Binary file id3update/test.mp3 matches

If mp3org gets filled automatically, you can keep id3update up to date by running a script on a regular basis that does the cp --reflink src dst if the destination doesn't yet exist.

¹ which are most often at the beginning of a file
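A minimal sketch of the maintenance script mentioned in the last paragraph, using the paths from the example above (only the loop logic matters):

    #!/bin/sh
    # reflink any new original into the editable view, once
    cd /data0/mp3org || exit 1
    for f in *.mp3; do
        [ -e "/data0/id3update/$f" ] || cp --reflink "$f" "/data0/id3update/$f"
    done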
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
164,187
In Bash you can redirect all future stdout output of the currently running script. For example with this script,

    exec > >(logger -t my-awesome-script)
    echo 1
    echo 2
    echo 3

This will end up in syslog:

    Oct 26 01:03:16 mybox my-awesome-script[72754]: 1
    Oct 26 01:03:16 mybox my-awesome-script[72754]: 2
    Oct 26 01:03:16 mybox my-awesome-script[72754]: 3

But this is Bash-specific and the naked exec with redirection doesn't seem to work in Dash.

    Syntax error: redirection unexpected

How can I make it work in Dash, or possibly in both shells?
You can just do:

    { commands....
    } | logger -t my_awesome_script

You can do that with any shell. If you don't like the way it looks, maybe make the script wrap itself in a function.

    #!/bin/sh
    run() if [ "$run" != "$$" ] || return
          then sh -c 'run=$$ exec "$0" "$@"' "$0" "$@" |
               logger -t my-awesome-script
          fi
    #script-body
    run "$@" || do stuff
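If you specifically want the exec-redirection style from the question in a shell without process substitution, a portable workaround is a named pipe; a sketch (cleanup kept deliberately simple):

    #!/bin/sh
    fifo=${TMPDIR:-/tmp}/log.$$
    mkfifo "$fifo"
    logger -t my-awesome-script < "$fifo" &
    exec > "$fifo"
    rm "$fifo"    # the open descriptor keeps the pipe alive
    echo 1
    echo 2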
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210/" ] }
164,210
During linux installation I selected "minimal" option:

When I went to run the nslookup command to look up an IP address I got the error message nslookup: command not found as shown in the example below.

    $ nslookup www.google.com
    bash: nslookup: command not found
The minimal install likely did not come with the bind-utils package, which I believe contains nslookup. You can install bind-utils with:

    sudo yum install bind-utils

In general, you can search for what package provides a command using the yum provides command:

    sudo yum provides '*bin/nslookup'
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/164210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16253/" ] }
164,217
I'm using stderred to color all output streamed to stderr red. It's working fine. But when I write my own bash script and throw an error with echo 'error' 1>&2, it doesn't color the output in red. I reckon this is because the command simply redirects the output to wherever the stderr file descriptor points, but doesn't properly mark the message as belonging to stderr. Is that so? How can I properly write to stderr in bash?
It appears that the program is re-writing the various write() functions to detect whether you are printing to file descriptor 2 and then adding the relevant escape codes to make the output red at the terminal.

Unfortunately, in shell, when you do something like

    echo "foo" 1>&2

the function will still be calling write (or some other similar system call) on file descriptor 1. The output appears on fd 2 since file descriptor 1 has been duped to file descriptor 2 by your shell.

Unfortunately, I don't know of a way to write directly to fd 2 in shell, but you can use awk. A function like this will write the arguments directly to file descriptor 2.

    error() {
        awk " BEGIN { print \"$@\" > \"/dev/fd/2\" }"
    }

I believe this is a feature of GNU awk that isn't part of POSIX, but it also works on the awk provided on OS X.
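Note that interpolating "$@" into the awk program text breaks as soon as the message itself contains double quotes. A slightly more robust sketch passes the text in as an awk variable instead (awk still opens /dev/fd/2 itself):

    error() {
        awk -v msg="$*" 'BEGIN { print msg > "/dev/fd/2" }'
    }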
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58056/" ] }
164,225
    # Automatically generated file; DO NOT EDIT

is at the header of the kernel configuration file /usr/src/linux/.config. My question is: why shouldn't you edit this file? If I know exactly what I need, or what I want to remove, then what is the problem with editing this file directly?
It's considered unsafe to edit .config because there are CONFIG options which have dependencies on other options (needing some to be set, requiring others to be turned off, etc.). Other options aren't meant to be set by the user at all, but are set automatically by make config (resp. Kconfig, to be correct) depending on architecture details, e.g. availability of some hardware dependent on the architecture variant, like an MMU. Changing .config without using Kconfig has a high chance of missing some dependency, which will either result in a non-functioning kernel, build failures, or unexpected behaviour (i.e. the change being ignored, which usually is very confusing).
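If you do want to flip individual options from a script rather than through a menu, recent kernel source trees ship a helper that edits .config for you; re-running Kconfig afterwards resolves the dependencies. A sketch (the option names are placeholders):

    scripts/config --enable CONFIG_EXAMPLE_A
    scripts/config --disable CONFIG_EXAMPLE_B
    make olddefconfig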
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68646/" ] }
164,236
I use Ubuntu 14.04 and have been attempting to redirect the output of a grep command to a file, but I keep getting this error:

    grep: input file 'X' is also the output

I run the following command:

    grep -E -r -o -n r"%}(.*){%" > myfile

As the error states, it seems that somehow it's interpreting the input and output as the same name/object. I searched but couldn't find what exactly the problem is.
It is not possible to use the same file as input and output for grep. You may consider the following alternatives:

temporary file:

    grep pattern file > tmp_file
    mv tmp_file file

sed:

    sed -i -n '/pattern/p' file

put the whole file in a variable (not a bright idea for large files):

    x=$(cat file); echo "$x" | grep pattern > file
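If the moreutils package is available, its sponge tool exists for exactly this soak-up-all-input-then-write pattern and saves the explicit temporary file:

    grep pattern file | sponge file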
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79615/" ] }
164,241
I use Lubuntu 14.04. I've gotten concerned about encrypting my internet traffic after reading about the problems with public WiFi and intend to start using VPN. However, all my internet usage is on public WiFi because I have no private internet access. What's either a good practice or good free service given this limitation? I'm confused about OpenVPN. It sounds good, but unless I misunderstand, it's only for private internet connections, right? It says something about establishing a server, which I assume I can't do since I don't have my own server hardware. However, if I can set up all the web encryption I'd ever need on my laptop alone,then I'd prefer that.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86945/" ] }
164,245
I recently purchased a backlit keyboard that was designed such that the Scroll Lock key was used to toggle the back light. I quickly found that Cinnamon does not respond properly when the Scroll Lock key is pressed. Originally, I thought the keyboard backlight was DOA because everything else worked out of the box. After a reboot I found that before launching Xorg I was able to successfully toggle the backlight using the Scroll Lock key. Then, after starting Xorg (and consequently cinnamon_session), this functionality stopped working once again. In order to get the backlight working after launching Xorg I was forced to issue:

    xset led named "Scroll Lock"

to enable and disable this functionality. After a bit of research I came across a program called xev that dumped key event information to the terminal after it was started. Upon pressing the Scroll Lock key the terminal was populated with this information:

    KeyPress event, serial 34, synthetic NO, window 0x2c00001,
        root 0x2df, subw 0x0, time 2609824, (410,0), root:(1724,142),
        state 0x0, keycode 78 (keysym 0xff14, Scroll_Lock), same_screen YES,
        XLookupString gives 0 bytes:
        XmbLookupString gives 0 bytes:
        XFilterEvent returns: False

    KeyRelease event, serial 37, synthetic NO, window 0x2c00001,
        root 0x2df, subw 0x0, time 2609912, (410,0), root:(1724,142),
        state 0x0, keycode 78 (keysym 0xff14, Scroll_Lock), same_screen YES,
        XLookupString gives 0 bytes:

So I know the key event is being sent to the kernel. Also, interestingly, I observed strange behavior when in the terminal (still prior to launching Xorg or Cinnamon) and using Scroll Lock. Namely, at this point my backlight was toggling as expected, but when Scroll Lock was enabled nothing I typed was written to the screen. After disabling Scroll Lock everything I had written was immediately dumped to the terminal as if it had previously been buffered.

tl;dr What is the deal with Scroll Lock and Xorg?
I'm not familiar with Cinnamon, but it should be possible to enable your Scroll Lock key. First, we need to see if you have a spare keyboard modifier slot. Run:

    xmodmap -pm

That will print a list of your current modifier setup. Hopefully, one of those lines won't have any keys listed; generally that will be mod3. Assuming that's the case, you can enable Scroll Lock with this command:

    xmodmap -e "add mod3 = Scroll_Lock"

Your Scroll Lock LED should now respond to Scroll Lock key presses.

If that works, you probably want X to do that automatically whenever it starts. There are various ways to do that: it can be done on a per user basis, but for something like this I think it makes sense for it to be set globally. But I've never done this before myself, so I'd better test it before giving further details. :)

OK. The method I tried to activate that modmap globally doesn't work for me on KDE. :( But activating it in my home directory seems to work OK. Create a file called .Xmodmap in your home directory containing this line:

    add mod3 = Scroll_Lock

The easiest way to do that is:

    cd ~
    echo >.Xmodmap "add mod3 = Scroll_Lock"

Now restart X (logout & login again). Hopefully, your Scroll Lock key will be working. If it doesn't, please let me know.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72608/" ] }
164,262
I would like to write a script to prevent a computer from locking by virtually wiggling the mouse. However, I do not have xdotool on the computer that I am using, and I cannot install since I am not root. Is there a way to move the cursor without using xdotool ?
That you cannot install xdotool because you are not root doesn't mean you cannot run the program; for that you don't need any special privileges. Just download and compile from source. If you don't have access to a compiler then you can download the package for your system directly and extract the files from the package (for .deb first use ar; extracting from .rpm can be done with rpm2cpio).
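A sketch of building it into your home directory, assuming a compiler and the X development headers are present (PREFIX is how xdotool's Makefile is normally pointed at a non-system prefix):

    git clone https://github.com/jordansissel/xdotool.git
    cd xdotool
    make
    make install PREFIX="$HOME/.local"
    export PATH="$HOME/.local/bin:$PATH"

You may also need to point LD_LIBRARY_PATH at $HOME/.local/lib, since xdotool installs a shared libxdo there.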
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89204/" ] }
164,313
I extracted the metadata of a PDF into a .txt file using pdftk, and now I am trying to decrease the BookmarkPageNumber value for each bookmark by an integer. The .txt has these lines:

    BookmarkBegin
    BookmarkTitle: Preface
    BookmarkLevel: 1
    BookmarkPageNumber: 10
    BookmarkBegin
    BookmarkTitle: Author
    ... and so on

I am trying to do this using sed's substitute command, and here is what I have so far:

    # $1 is the source .txt file; $2 is the decrement
    # __ is a placeholder for the variable with the original value
    cat $1 | sed "s/BookmarkPageNumber: [0-9]*/BookmarkPageNumber: `expr __ - $2`/" | cat > metadata.txt

How can I put the original value in a variable, and then replace the placeholder __ with it, within this same sed expression?
For that purpose it is better to use awk, as it supports arithmetic operations:

    cat $1 | awk -v d=$2 '/BookmarkPageNumber:/{$2-=d}1'
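Wired into the original script, with the parameters quoted, the output written where the sed version put it, and the useless cat dropped:

    awk -v d="$2" '/BookmarkPageNumber:/ { $2 -= d } 1' "$1" > metadata.txt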
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164313", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87862/" ] }
164,314
When using the nemo file browser in icon mode (compact view) the scroll turns to horizontal rather than vertical. For me this is very hard to work with. If the files are in list mode you can scroll vertical to see the next list of files. Hopefully there is a setting to allow this same type of scrolling in the alternate view. I have spent a lot of time in all the setting options and searching the Internet. Hopefully someone with experience can give me the setting option, or advise me that it doesn't exist and I can stop searching.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81664/" ] }
164,325
How do you configure xinput to set multiple devices automatically using a script?

    $ xinput --list
    ⎡ Virtual core pointer                      id=2   [master pointer  (3)]
    ⎜   ↳ Virtual core XTEST pointer            id=4   [slave  pointer  (2)]
    ⎜   ↳ AlpsPS/2 ALPS GlidePoint              id=12  [slave  pointer  (2)]
    ⎜   ↳ ALPS PS/2 Device                      id=13  [slave  pointer  (2)]
    ⎜   ↳ Corsair Corsair M65 Gaming Mouse      id=15  [slave  pointer  (2)]
    ⎜   ↳ Corsair Corsair M65 Gaming Mouse      id=17  [slave  pointer  (2)]
    ⎣ Virtual core keyboard                     id=3   [master keyboard (2)]
        ↳ Virtual core XTEST keyboard           id=5   [slave  keyboard (3)]
        ↳ Power Button                          id=6   [slave  keyboard (3)]
        ↳ Video Bus                             id=7   [slave  keyboard (3)]
        ↳ Power Button                          id=8   [slave  keyboard (3)]
        ↳ Sleep Button                          id=9   [slave  keyboard (3)]
        ↳ Laptop_Integrated_Webcam_HD           id=10  [slave  keyboard (3)]
        ↳ AT Translated Set 2 keyboard          id=11  [slave  keyboard (3)]
        ↳ Dell WMI hotkeys                      id=14  [slave  keyboard (3)]
        ↳ Corsair Corsair M65 Gaming Mouse      id=16  [slave  keyboard (3)]

And the problem is both of these "corsair gaming mouse" entries have different IDs every time. I don't know why there are two mice... but that's the world I live in. How do I write a script to set the properties on both of them? I used this, but it didn't work the next time I booted (the IDs changed):

    #!/bin/sh
    xinput --set-prop 10 "Device Accel Profile" 6
    xinput --set-prop 10 "Device Accel Velocity Scaling" 5
    xinput --set-prop 10 "Device Accel Constant Deceleration" 3
    #xinput --set-prop 10 "Device Accel Velocity Tracker Count" 2

I had tried using the name, but it complains there are multiple matching devices. Any help is appreciated.
If you need to make changes to both, you can use a loop:

    #!/bin/sh
    for id in $(xinput --list | \
        sed -n '/Corsair Corsair M65 Gaming Mouse.*pointer/s/.*=\([0-9]\+\).*/\1/p')
    do
        xinput --set-prop $id "Device Accel Profile" 6
        ... whatever you want to do ...
    done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47870/" ] }
164,333
I have a text file which is of the form:

    b SN:2
    d SN:5
    f SN:10
    g SN:11
    h SN:15
    i SA:3
    j SN:1
    k SN:4

And I want to sort by the second column, actually by the numerical value in the second column. I've tried

    $ sort -n -k2,2 file
    $ sort -k2.4,2.5n file

but nothing seems to work.
Because you don't use the -t option (or -b with GNU sort), you must count from the beginning of the leading spaces. POSIX defines sort -k in its EXTENDED DESCRIPTION as:

    A field comprises a maximal sequence of non-separating characters and, in the absence of option -t, any preceding field separator

So you must use:

    $ sort -nk2.7 file
    j SN:1
    b SN:2
    i SA:3
    k SN:4
    d SN:5
    f SN:10
    g SN:11
    h SN:15

But you can use : as the field separator, then sort numerically by the second field:

    $ sort -t':' -nk2 file
    j SN:1
    b SN:2
    i SA:3
    k SN:4
    d SN:5
    f SN:10
    g SN:11
    h SN:15
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89248/" ] }
164,355
I have used Ubuntu for many years and I like to use Synaptic to manage packages. I'm testing Fedora (version 20) and I'm a little bit lost. Is there an equivalent of Synaptic (a graphical tool) for Fedora? A previous answer from 2012 mentioned YumExtender; has a new tool appeared since? Is there a software manager which is present in the repositories and can be installed with yum?

Edit: Wikipedia mentions Apper and GNOME Software. GNOME Software is installed by default; I didn't find it with GNOME Shell search because its name is Software, and it's not translated even though I have configured the french locale in Fedora.

Edit 2: GNOME Software was also cited by Marcelo, but it seems that it doesn't display all the available software. For example, searching httpd displays no result, but I was able to install it with yum install httpd. So I'm still looking for an equivalent of Synaptic; GNOME Software seems to be too limited.
By default, Fedora 20 installs gnome-software for this purpose. Not sure if this is exactly what you are looking for. It is a GUI for managing and installing packages, but looks more like (or better said, wishes to look like...) the MacOSX app store. This may or may not be to your taste, but it allows you to browse installed and uninstalled packages as well as identifying the currently available updates for the installed ones.

Edit: gnome-software is application oriented (and not package oriented), thus it may not show individual packages for webservers, libraries, etc. Apper, on the other hand, does. To install Apper, just run

    yum install apper

as root in a terminal.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50687/" ] }
164,364
My unsuccessful proposal:

    find ./ -newerct '1 week ago' -print | grep TODO

No output, although there should be. The files are text files like:

    Lorem
    % TODO check this out
    Lorem ipsun

How can you find less than 1 week old files matching TODO? Output should be the line after TODO. A Perl solution is also welcome, since I am practising it too.
Change this:

    find ./ -newerct '1 week ago' -print | grep TODO

to this:

    find ./ -newerct '1 week ago' -exec grep TODO {} +

or this:

    find ./ -newerct '1 week ago' -print | xargs grep TODO

Explanation

Your grep doesn't interpret the output of find as a list of files to search through, but rather as its input. That is, grep tries to match TODO in the names of files rather than their contents. From the grep(1) man page:

    grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name)

To match the line after TODO:

    find ./ -newerct '1 week ago' -exec grep -A1 TODO {} + | grep -v TODO

This assumes you have GNU grep.
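To print only the line after each TODO in a single pass, without the second grep, an awk sketch:

    find ./ -newerct '1 week ago' -exec awk 'p { print; p=0 } /TODO/ { p=1 }' {} +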
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
164,384
I have a backup script that mounts and unmounts a USB drive. I just noticed that it's warning me:

    EXT3-fs warning: maximal mount count reached, running e2fsck is recommended

My question: how can I get it to run e2fsck automatically when the mount command is run? This is how it looks in /etc/fstab:

    UUID=c870ccb3-e472-4a3e-8e82-65f4fdb73b38 /media/backup_disk_1 auto defaults,rw,noauto 0 3

So <pass> is 3, so I was expecting fsck to be run when required.

EDIT: This is how I ended up doing it, based on the given answer (in a Bash script):

    function fsck_disk {
        UUID=$1
        echo "Checking if we need to fsck $UUID"
        MCOUNT=`tune2fs -l "UUID=$UUID" 2> /dev/null | sed -n '/Mount count:\s\+/s///p'`
        if [ "$MCOUNT" -eq "$MCOUNT" ] 2> /dev/null
        then
            echo "Mount count = $MCOUNT"
            if (( $MCOUNT > 30 ))
            then
                echo "Time to fsck"
                fsck -a UUID=$UUID \
                    1>> output.log \
                    2>> error.log
            else
                echo "Not yet time to fsck"
            fi
        fi
    }
    fsck_disk a60b1234-c123-123e-b4d1-a4a111ab2222
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89280/" ] }
164,391
cat < file prints the contents of file to stdout. cat > file reads stdin until Ctrl + D is detected and the input text is written to file . cat <> file , at least in my version of Bash, prints the contents of file happily (without error), but doesn't modify the file nor does it update the modification timestamp. How does the Bash standard justify the seemingly ignored > in the third statement - and, more importantly, is it doing anything?
Bash uses <> to create a read-write file descriptor:

    The redirection operator [n]<>word causes the file whose name is the expansion of word to be opened for both reading and writing on file descriptor n, or on file descriptor 0 if n is not specified. If the file does not exist, it is created.

cat <> file opens file read-write and binds it to descriptor 0 (standard input). It's essentially equivalent to < file for any sensibly-written program, since nobody's likely to try writing to standard input ordinarily, but if one did it'd be able to.

You can write a simple C program to test that out directly - write(0, "hello", 6) will write hello into file via standard input.

<> should also work in any other POSIX-compliant shell with the same effect.
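You can also see the effect without writing C, by duplicating fd 0 onto stdout inside a <>-redirected group; a quick sketch:

    $ printf 'AAAAAA\n' > file
    $ { printf 'BB' >&0; } 0<> file
    $ cat file
    BBAAAA

The write really did go through "standard input", overwriting the first two bytes of the file.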
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/164391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89283/" ] }
164,464
Is there a way to force apt-get to display yes/no prompt? A --force-yes option exists, but there seems to be no --force-prompt or similar option. If you attempt to install a package that has all dependencies already installed, it will begin installation without displaying a yes/no prompt. This can be bothersome if you want to review whether dependencies exist and which ones will be installed because you don't know if the potential dependencies are installed ahead of time. NOTE: When does “apt-get install” ask me to confirm whether I want to continue or not? is somewhat related in that it describes under what standard conditions the prompt is displayed. I'm interested to know how to force it though.
There's just no way to do this with the current implementation of apt-get; you would need to open a feature request and appeal to the maintainer. The current behavior of apt-get is that when the list of packages you explicitly requested to be installed is the same as the list of packages that will get installed, and no other packages are affected by upgrades or breaks, apt-get presumes the user is already sure of what is going to be done. If you are not sure, or want to analyze what will be done without actually installing the package, you can use Costas' recommendation of -s, --simulate, --just-print, --dry-run, --recon, --no-act.

There are other tools, like apt-listbugs, that will analyze the versions of packages to be installed before you actually install them (in this case for bugs) and warn you.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29398/" ] }
164,465
On Arch Linux, whenever a kernel upgrade is applied (via pacman -Su), problems occur. For example, iptables can no longer be used:

    # iptables -L
    iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
    Perhaps iptables or your kernel needs to be upgraded.

VMWare can't find the kernel headers:

Now, I want to have kernel upgrades applied, but I'd like to defer them until I can (or want to) restart the system (remind anyone of Windows?). Otherwise, if my work is blocked on iptables or VMWare or some other program, I have no choice but to restart and rebuild a session.

How can I do that? Ideally I'd like to continue using pacman -Su, but have certain upgrades deferred. I believe "Is it normal that a restart is required to mount USB after a kernel upgrade?" is caused by the same problem.
If you want to keep pacman from up/down grading some package(s), you put a line in /etc/pacman.conf:

    # Pacman won't upgrade packages listed in IgnorePkg and members of IgnoreGroup
    #IgnorePkg =
    #IgnoreGroup =

I think you want to have a line like this in pacman.conf:

    IgnorePkg = linux linux-headers linux-api-headers

As you point out, that just keeps pacman from any upgrading. You could probably write some code like this:

    if pacman -Qu | egrep -q -s 'linux|linux-headers|linux-api-headers'
    then
        pacman -S --noconfirm linux
        pacman -S --noconfirm linux-headers
        pacman -S --noconfirm linux-api-headers
    fi

It looks like you have to write a "systemd unit file" - that part's a bit beyond what I have confidence in telling you what to do. My thought is that the Before=shutdown.target line in the unit file is what makes it run at shutdown. The only unit file I wrote took me several tries to get correct, so I really don't think you want my advice on that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
164,482
I currently use the following setup to redirect the output of multiple commands:

    echo "Some normal commands"
    (
    echo "Error: something happened"
    echo "Warning: this incident will be logged"
    ) >> logfile
    echo "More normal commands"

This is pretty useful, and it also works with pipes. Is this the best way to do this? Is there an alternative that I should consider?
The alternative is to use braces instead of parentheses. This change executes the commands in the current shell, not in a subshell.

    echo "Some normal commands"
    {
    echo "Error: something happened"
    echo "Warning: this incident will be logged"
    } >> logfile
    echo "More normal commands"

ref: https://www.gnu.org/software/bash/manual/bashref.html#Command-Grouping

This is particularly relevant when you're modifying variables inside the group:

    $ x=5; ( x=10; echo inside: $x; ); echo outside: $x
    inside: 10
    outside: 5
    $ x=5; { x=10; echo inside: $x; }; echo outside: $x
    inside: 10
    outside: 10
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37380/" ] }
164,492
I need to use the acpi_listen command. It wasn't installed so I did:

    pacman -S acpid

Then when I run the command acpi_listen I get:

    acpi_listen: can't open socket /var/run/acpid.socket: No such file or directory

I checked in /var/run and the file does not exist. If I do ps -ef | grep acpi, it outputs:

    [acpi_thermal_pm]
    [ktpacpid]

What can I do?
In Arch Linux, this will make it work:

    systemctl start acpid.service
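That only starts the daemon for the current boot. To have the socket available after every reboot, also enable the unit:

    systemctl enable acpid.service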
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
164,508
I have a text file named links.txt which looks like this:

    link1
    link2
    link3

I want to loop through this file line by line and perform an operation on every line. I know I can do this using a while loop, but since I am learning, I thought I'd use a for loop. I actually used command substitution like this:

    a=$(cat links.txt)

Then used the loop like this:

    for i in $a; do ###something###; done

Also I can do something like this:

    for i in $(cat links.txt); do ###something###; done

Now my question is: when I substituted the cat command output into the variable a, the newline characters between link1, link2 and link3 were removed and replaced by spaces;

    echo $a

outputs

    link1 link2 link3

and then I used the for loop. Is it always the case that a newline is replaced by a space when we do command substitution?

Regards
Newlines get swapped out at some points because they are special characters. In order to keep them, you need to make sure they're always interpreted, by using quotes:

    $ a="$(cat links.txt)"
    $ echo "$a"
    link1
    link2
    link3

Now, since I used quotes whenever I was manipulating the data, the newline characters (\n) always got interpreted by the shell, and therefore remained. If you forget to use them at some point, these special characters will be lost.

The very same behaviour will occur if you use your loop on lines containing spaces. For instance, given the following file...

    mypath1/file with spaces.txt
    mypath2/filewithoutspaces.txt

The output will depend on whether or not you use quotes:

    $ for i in $(cat links.txt); do echo $i; done
    mypath1/file
    with
    spaces.txt
    mypath2/filewithoutspaces.txt
    $ for i in "$(cat links.txt)"; do echo "$i"; done
    mypath1/file with spaces.txt
    mypath2/filewithoutspaces.txt

Now, if you don't want to use quotes, there is a special shell variable which can be used to change the shell field separator (IFS). If you set this separator to the newline character, you will get rid of most problems.

    $ IFS=$'\n'; for i in $(cat links.txt); do echo $i; done
    mypath1/file with spaces.txt
    mypath2/filewithoutspaces.txt

For the sake of completeness, here is another example, which does not rely on command output substitution. After some time, I found out that this method was considered more reliable by most users due to the very behaviour of the read utility.

    $ cat links.txt | while read i; do echo $i; done

Here is an excerpt from read's man page:

    The read utility shall read a single line from standard input.

Since read gets its input line by line, you're sure it won't break whenever a space shows up. Just pass it the output of cat through a pipe, and it'll iterate over your lines just fine.

Edit: I can see from other answers and comments that people are quite reluctant when it comes to the use of cat. As jasonwryan said in his comment, a more proper way to read a file in shell is to use stream redirection (<), as you can see in val0x00ff's answer here. However, since the question isn't "how to read/process a file in shell programming", my answer focuses more on the quotes behaviour, and not the rest.
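As the final note hints, the cat | while form can be replaced by plain input redirection, which avoids the extra process and, in most shells, the subshell a pipeline creates. Adding IFS= and -r also preserves leading whitespace and backslashes:

    while IFS= read -r i; do echo "$i"; done < links.txt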
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/164508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63486/" ] }
164,512
The output of

    cat /sys/devices/system/clocksource/clocksource0/available_clocksource

lists the available hardware clocks. I have changed the clocks, without any visible difference.

    sudo /bin/sh -c 'echo acpi_pm > current_clocksource'

What are the practical implications of changing the hardware clock? Is there a way to check the resolutions (or some other visible change) of the available clocks?
Well, first of all, the kernel chooses the best one automatically. It is usually TSC if it's available, because it's kept by the CPU and it's very fast (RDTSC and reading EDX:EAX). But that wasn't always the case: in the early days, when SMP systems were mostly built with several discrete CPUs, it was very important that the CPUs were as "equal" as possible (a perfect match of model, speed and stepping), but even then it sometimes occurred that one was slightly faster than the other, so the TSC counter between them was "unstable". That was the reason to allow changing it (or disabling it with the "notsc" kernel parameter).

And even with these restrictions the TSC is still the best source, but the kernel has to take great care to rely on only one CPU in multicore systems or actively try to keep them synchronized, and also take into account things like suspend/resume (which resets the counter) and CPU frequency scaling (which affects the TSC in some CPU models). Some people in those early days of SMP even built systems with CPUs of different speeds (kind of like the new big.LITTLE architecture on ARM); that created big problems in the timekeeping area.

As for a way to check the resolutions, you have clock_getres() and you have an example here. And a couple of extra links: the official kernel doc (there are other interesting files in this dir) and TSC resynchronization in chromebooks with some benchmarks of different clocksources.

In short, there shouldn't be any userspace-visible changes when changing the clocksource, only a slower gettimeofday().
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164512", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
164,517
I am trying to install python 3.2, and to get setuptools and pip in python 3.2. Everything seems to work right in python 2.7. However when I try to install setuptools using this code

    wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python3.2

I get the following error

    Extracting in /tmp/tmpcwnav_
    Traceback (most recent call last):
      File "<stdin>", line 332, in <module>
      File "<stdin>", line 329, in main
      File "<stdin>", line 51, in _install
      File "/usr/local/lib/python3.2/contextlib.py", line 28, in __enter__
        return next(self.gen)
      File "<stdin>", line 101, in archive_context
      File "/usr/local/lib/python3.2/zipfile.py", line 1004, in extractall
        self.extract(zipinfo, path, pwd)
      File "/usr/local/lib/python3.2/zipfile.py", line 992, in extract
        return self._extract_member(member, path, pwd)
      File "/usr/local/lib/python3.2/zipfile.py", line 1035, in _extract_member
        source = self.open(member, pwd=pwd)
      File "/usr/local/lib/python3.2/zipfile.py", line 978, in open
        close_fileobj=not self._filePassed)
      File "/usr/local/lib/python3.2/zipfile.py", line 487, in __init__
        self._decompressor = zlib.decompressobj(-15)
    AttributeError: 'NoneType' object has no attribute 'decompressobj'

Based on some googling, it looks like I am getting the problem because zlib has not been installed. I do not have this problem when trying to install setuptools for python 2.7. I went into python 3.2 and tried to import zlib and got an error message when I tried that. I also tried to do sudo apt-get install zlib and got the error message E: Unable to locate package zlib. I did not get error messages when I tried sudo apt-get install zlib1g or sudo apt-get install zlib1g-dev.

I really have no idea what's going on. How do I get zlib for python 3.2 (or otherwise fix this problem)?
Your problem seems to be that you compiled Python without support for zlib. Make sure you have the zlib development headers installed (sudo apt-get install zlib1g-dev) before compiling Python. There's nothing wrong with using a Python compiled by you in addition to, or instead of, the system one. However, you have to remember to be explicit when invoking Python and invoke the one you intend to use by specifying its full path, like /usr/local/bin/python, instead of plain python. Alternatively you can add /usr/local/bin/ to your PATH before /usr/bin/, so that when you type python the system runs your compiled Python.
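After installing the headers, Python has to be rebuilt so that its configure step picks zlib up. A sketch, assuming 3.2 was built from an unpacked source tree (the directory name is a placeholder):

    sudo apt-get install zlib1g-dev
    cd Python-3.2.3
    ./configure
    make
    sudo make install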
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89376/" ] }
164,518
I am having a hard time figuring out exactly how to run rsync to get it to do what I need it to do. Basically what I need is as follows, given a single source folder with multiple sub-directories:

- If files for a given subdirectory are changed in the source folder, sync those changes to the destination (update files and delete files no longer found in the source folder).
- If a folder is found in the source but not the destination, sync the folder and all of its contents to the destination.
- If a folder is found in the destination but not in the source, do nothing (i.e. don't delete it).

This is what the directory structure would look like:

    Source Folder
        Folder 1
            File 1 unchanged.txt
        Folder 2
            File 2 newer.txt
        Folder 3
            File 3.txt
    Destination Folder
        Folder 1
            File 1 unchanged.txt
        Folder 2
            File 2 old.txt (to be replaced with File 2 newer.txt)
        (Folder 3 not yet in destination, to be added from source)
        Folder X (not in source, to be left untouched)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89374/" ] }
164,539
I have recently upgraded to Ubuntu 14.10. When I suspend my laptop and wake it up later from sleep, or when I close my laptop lid and open it later on, or when I want to change my network, I lose the wifi connection and it never works again until I restart. This bugs me because I have programs open, and when I close my laptop and go to a different place, Ubuntu can't connect to wifi unless I restart (which means closing all programs). Do you have any idea what might be the problem?
I have the same issue. Might be a bug. A quick workaround is to run

    sudo killall NetworkManager

from the command line. It will respawn quickly and connect.
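If it does not respawn on its own (behaviour differs between setups), explicitly restarting the service should have the same effect; on Ubuntu releases of that era:

    sudo service network-manager restart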
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15135/" ] }
164,541
I have an image archive I keep up. Sometimes, the sites I pull them from reformat the file while keeping the extension the same, most often making PNG images into JPG's that are still named ".png". Is there a way to discover when this has happened and fix it automatically? When on Windows, I used IrfanView for this, but that needs a Wine wrapper.
You can use the file command:

    $ file file.png
    file.png: PNG image data, 734 x 73, 8-bit/color RGB, non-interlaced
    $ mv file.png file.txt
    $ file file.txt
    file.txt: PNG image data, 734 x 73, 8-bit/color RGB, non-interlaced

file runs some tests on the file to determine its type. Probably the most important test is comparing a magic number (a string in the file header) with a pre-defined list.
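To also fix the misnamed files automatically (the JPEG-named-.png case from the question), one approach is to loop over file's MIME output; a sketch:

    for f in *.png; do
        if [ "$(file -b --mime-type "$f")" = image/jpeg ]; then
            mv -i "$f" "${f%.png}.jpg"
        fi
    done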
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88324/" ] }
164,577
I have a node.js process that uses fs.appendFile to add lines to file.log. Only complete lines of about 40 chars per line are appended, e.g. calls are like fs.appendFile("start-end"), not 2 calls like fs.appendFile("start-") and fs.appendFile("end"). If I move this file to file2.log, can I be sure that no lines are lost or copied partially?
As long as you don't move the file across file-system borders, the operation should be safe. This is due to the mechanism of how "moving" is actually done.

If you mv a file on the same file-system, the file isn't actually touched; only the file-system entry is changed.

    $ mv foo bar

actually does something like

    $ ln foo bar
    $ rm foo

This would create a hard link (a second directory entry) for the file (actually the inode pointed to by the file-system entry) foo, named bar, and remove the foo entry. Since there is now a second file-system entry pointing to foo's inode, removing the old entry foo doesn't actually remove any blocks belonging to the inode.

Your program would happily append to the file anyway, since its open file-handle points to the inode of the file, not the file-system entry.

Note: If your program closes and reopens the file between writes, you would end up having a new file created with the old file-system entry!

Cross file-system moves:

If you move the file across file-system borders, things get ugly. In this case you couldn't guarantee that your file stays consistent, since mv would actually

- create a new file on the target file-system,
- copy the contents of the old file to the new file,
- remove the old file,

or

    $ cp /path/to/foo /path/to/bar
    $ rm /path/to/foo

resp.

    $ touch /path/to/bar
    $ cat < /path/to/foo > /path/to/bar
    $ rm /path/to/foo

Depending on whether the copying reaches end-of-file during a write of your application, it could happen that you have only half of a line in the new file.

Additionally, if your application does not close and reopen the old file, it would continue writing to the old file, even if it seems to be deleted: the kernel knows which files are open, and although it would delete the file-system entry, it won't delete the old file's inode and associated blocks until your application closes its open file-handle.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/164577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48453/" ] }
164,580
I'm doing a study about how a CD-ROM can be mounted virtually, and all I could find out was mounting using loop devices:

    mount -o loop disk1.iso /mount-point

This is fairly easy. I understand that /dev/sr0 is a block device, that it points to some buffer in the kernel, that the kernel device driver puts the filesystem (or whatever it puts, I am not sure) in that buffer, and that when we use mount it mounts the filesystem to the specified mount-point. But I am wondering whether we can mount an ISO of our choice (e.g. disk1.iso) by using the SCSI CD-ROM device /dev/sr0 (without changing anything in the kernel), as is done in VMware and VirtualBox, where we can specify the ISO and it automatically emulates CD-ROM hardware so the ISO can be mounted using the /dev/sr0 device.

The major problem I see here is: how would /dev/sr0 be linked to the ISO?
The thing here is that /dev/sr0 is linked to a kernel device driver. That device driver will allow access to a physical CDROM, if available, through that node; VMWare and VirtualBox emulate hardware as you mention, and hence the kernel and device driver think they're communicating with hardware. /dev/sr0 doesn't point to a certain buffer directly; it provides a block-device interface that allows userspace processes to access the contents of the hardware device.

If you want to make an image available as a block device, then your only choice (besides virtualization and emulating hardware) is to use loop devices with losetup ... or to write your own replacement device driver, but I expect that's not a viable option for now.

If you want to make that image available as /dev/sr0 (are we talking about faking out some software that demands access to a CDROM at that location?) then you could move that file to e.g. /dev/sr0.moved and then symlink the appropriate /dev/loopX to /dev/sr0. Of course, if the software in question tries special commands that only apply to CDROM devices, then this won't work. Otherwise it shouldn't be a problem.
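A sketch of that loop-plus-symlink trick done by hand (as root; device names will vary, and the symlink does not survive a reboot):

    losetup --find --show disk1.iso   # prints the allocated device, e.g. /dev/loop0
    mv /dev/sr0 /dev/sr0.moved
    ln -s /dev/loop0 /dev/sr0
    mount /dev/sr0 /mount-point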
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164580", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89441/" ] }
164,596
I have a txt file that looks like this:

    <sss>ss<

or

    <firstword>anotherword<

I want it to look like this:

    <sss>ss</sss>

or

    <firstword>anotherword</firstword>

Basically, taking the first word and placing it in an end tag. And before you ask what I have tried, the answer is nothing; I couldn't think of anything.
With input:

    <abc>def<
    <firstword>anotherword</firstword>
    <ghi>klm<

use:

    sed 's/<\([^>]*\)>\(.*\)<$/<\1>\2<\/\1>/' input

Output:

    <abc>def</abc>
    <firstword>anotherword</firstword>
    <ghi>klm</ghi>

The sed line only affects lines ending in < (because of the <$). It captures the pattern between the first <> pair and the pattern between '>' and the final '<', then pastes everything back, duplicating the first capture at the end (plus a closing '>').
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
164,653
The following are the memory usages of mysql and apache, respectively, on my server. As per the output of pmap, mysql is using about 379M and apache is using 277M.

    [root@server ~]# pmap 10436 | grep total
     total 379564K
    [root@server ~]# pmap 10515 | grep total
     total 277588K

Comparing this with the output of top, I see the values are almost matching.

      PID USER    PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
    10515 apache  20 0  271m 32m  3132 S 0.0  6.6  0:00.73 /usr/sbin/httpd
    10436 mysql   20 0  370m 21m  6188 S 0.0  4.3  0:06.07 /usr/libexec/mysqld --basedir=....

Now these values definitely are not the current memory usage of those two processes, since if they were, they would've exceeded the 512M RAM on my system. I understand that these are the sizes of the pages assigned to these two processes, and not really the size of the memory actively used by them.

Now, when we use pmap -x, I see an extra column, Dirty, which shows far less memory usage for the process. As seen in the example shown below, the Dirty column shows 15M as opposed to 379M in the first column. My question is: is the value under the Dirty column the 'real' amount of memory actively used by that process? If it's not, then how can we find out the real memory usage of a process? Not ps and top, for the same reasons above. Do we have anything under /proc that will give this info?

    [root@server ~]# pmap -x 10436 | grep total
    total kB  379564  21528  15340
    [root@server ~]#
    [root@server ~]# free -m
                 total  used  free  shared  buffers  cached
    Mem:           489   447    41       0       52      214
    -/+ buffers/cache:   180   308
    Swap:         1023     0  1023
    [root@server ~]#
There is no command that gives the “actual memory usage of a process” because there is no such thing as the actual memory usage of a process . Each memory page of a process could be (among other distinctions): Transient storage used by that process alone. Shared with other processes using a variety of mechanisms. Backed up by a disk file. In physical memory or swap. I think the “dirty” figure adds up everything that is in RAM (not swap) and not backed by a file. This includes both shared and non-shared memory (though in most cases other than forking servers, shared memory consists of memory-mapped files only). The information displayed by pmap comes from /proc/ PID /maps and /proc/ PID /smaps . That is the real memory usage of the process — it can't be summarized by a single number.
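That said, if you want a single defensible number on Linux, the Pss ("proportional set size") lines in /proc/PID/smaps charge each shared page to a process in proportion to how many processes share it, so the Pss values of all processes add up sensibly. A sketch, assuming a kernel recent enough to expose Pss (2.6.25 or later):
$ awk '/^Pss:/ {sum += $2} END {print sum " kB"}' /proc/10436/smaps
You need to be the process owner or root to read its smaps.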
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68296/" ] }
164,660
Here is the content of /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
post-up /etc/network/if-up.d/sshstart
And sshstart is a script with the following in it:
curl something something darkside send a file over ftps in the background &
/usr/bin/autossh -M 0 -f -N -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -R 127.0.0.1:2005:127.0.0.1:22 -R 192.168.1.10:2006:192.168.2.110:1912 -L 127.0.0.1:5249:192.168.1.212:5249 [email protected] -p 8080
When the machine reboots, the curl command is executed multiple times, the file ends up 2 or 3 times on the ftp server, and when I look at the processes it seems like there are multiple instances of autossh running... Not sure if this is how autossh does things or not, but for sure curl shouldn't upload the file multiple times. My hunch is that the whole sshstart script is run multiple times but I don't understand why. I tried searching for details on the network setup process at boot but all I could find was syntax information for the interfaces file. Can someone help please? Thank you. ---Edit--- As suggested below I have modified my interfaces file as follows (removed the empty lines above post-up):
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
post-up /etc/network/if-up.d/sshstart
And added the following line to sshstart:
echo $(date)>>/run/shm/sshstart.log
Here is the content of /run/shm/sshstart.log after a reboot:
Wed Oct 29 08:07:00 EDT 2014
Wed Oct 29 08:07:07 EDT 2014
Wed Oct 29 08:07:07 EDT 2014
Wed Oct 29 08:07:07 EDT 2014
So it's been run 4 times :( what's going on?
Files in /etc/network/if-up.d already run automatically whenever an interface (any interface) comes up. When you specify the same script to run again in an explicit post-up command, you only cause the script to run again. So my guess is this is what should happen: It runs once when lo comes up (with environment variable IFACE=lo ) due to being located in /etc/network/if-up.d . It runs once when eth0 comes up (with environment variable IFACE=eth0 ) for the same reason. It runs again when eth0 comes up (with environment variable IFACE unset) because you asked for this in a post-up directive. I'm not sure where the fourth time comes from, but anyway that's three already. You need to either locate the script somewhere else and run it once using a post-up directive, or leave it where it is but don't mention it in a post-up directive and check the value of $IFACE so that it does nothing unless the desired interface ( eth0 ) has come up.
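A concrete way to apply the second option: leave sshstart in /etc/network/if-up.d, drop the post-up line, and guard the script with the IFACE variable that ifupdown exports to if-up.d hooks, e.g. at the top of sshstart:
[ "$IFACE" = "eth0" ] || exit 0
so the body only runs when eth0 (and not lo or any other interface) comes up.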
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65034/" ] }
164,667
If I follow a file like this: tail -f /var/log/syslog|grep s I see all lines containing an "s". Why does this give no output when I grep again for the same "s"? tail -f /var/log/syslog|grep s|grep s
As Rubo77 mentioned, the issue is solved by adding the --line-buffered flag to the first grep command: tail -f /var/log/syslog|grep --line-buffered s|grep s However, you may then ask: why isn't this needed for a single grep command? The difference between the two is that in the following command: tail -f /var/log/syslog|grep s STDOUT for grep is pointed to a terminal. grep most likely writes to STDOUT via functions contained in the stdio library. Per the documentation ( stdio(3) ): Output streams that refer to terminal devices are always line buffered by default; Thus, the underlying library calls are flushing the buffer after each line without any action on grep's part. In this command: tail -f /var/log/syslog|grep --line-buffered s|grep s the first grep's STDOUT is now going to a pipe rather than a terminal device, and the library functions that grep uses to write to STDOUT fully buffer these writes rather than using line buffering. When the --line-buffered flag is used, grep will call fflush, which flushes all of the buffered writes.
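For filters that lack such a flag, GNU coreutils ships stdbuf, which asks a program to line-buffer its stdio output from the outside, e.g.:
tail -f /var/log/syslog | stdbuf -oL sed 's/error/ERROR/' | grep s
Note that stdbuf only helps programs that leave buffering to stdio's defaults; tools that manage their own buffers are better served by their dedicated options, such as grep --line-buffered, sed -u, or awk's fflush().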
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
164,704
Is there a way to get the number of, or a list of, the system calls supported by the currently running Linux kernel? So I want to find a way to 'read' the syscall table of a running kernel.
The file /proc/kallsyms lists all the symbols of the running kernel. By convention, system calls have a name that begin with sys_ . On a 64-bit system, system calls for 32-bit programs have a name that begin with sys32_ . Strictly speaking, this lists internal kernel functions, not system call, but I think that the correspondence does work (every system call invokes an internal kernel function to do the job, and I think the name is always the name of the system call with sys_ prepended). </proc/kallsyms sed -n 's/.* sys_//p' This is usually not useful information, because system calls change very slowly. Optional components provide functionality in terms of existing system calls, using general features such as devices (with ioctl when read and write doesn't cut it), filesystems, sockets, etc. Determining the list of supported syscalls won't tell you anything about the features that the system supports. Other internal function names won't help either because these change very quickly: the name of the function that implements some feature on one kernel version may change on the next version.
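If you only need the count rather than the list, extend the same pipeline (sort -u guards against a symbol appearing more than once in kallsyms):
</proc/kallsyms sed -n 's/.* sys_//p' | sort -u | wc -l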
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23930/" ] }
164,741
I'd like to build an error handler which will make an empty file marked with the error_occur time. The core idea is to use the result of the date command as a parameter. I did:
time_stamp=$(date)
touch $time_stamp
But this turns out to create a series of empty files like 2014 , Wed , 11:15:20 . How do I pass time_stamp as a whole string here?
You must double quote your variable: time_stamp="$(date)"touch "$time_stamp" In this case, double quote in "$(date)" is not mandatory, but it's a good practice to do that. You can read this for more understanding.
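In practice you'll probably also want a filename without spaces, which date's format strings give you directly; for example (format chosen for illustration):
touch "error_$(date +%Y%m%d-%H%M%S).csv"
produces names like error_20141030-111520.csv, which sort chronologically and need no quoting later.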
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
164,823
When I start Thunderbird (31.2.0 under Ubuntu 14.04), my Outlook 365 account [email protected] accessed over IMAP is completely unreachable (I can't even see the list of folders, even though I have local archives). I see the following message in a D-Bus notification (several times): The current operation on 'Inbox' did not succeed. The mail server for account [email protected] responded: User is authenticated but not connected. This account has been working for a while. I changed my password during my previous Thunderbird session; after entering the new password, Thunderbird still worked. What does this seemingly nonsensical message “User is authenticated but not connected” mean? How do I get my email?
The message “User is authenticated but not connected” is due to a bug in the Exchange server's IMAP implementation. If the client presents a valid user name but an invalid password, the server accepts the login, but subsequent commands fail with the aforementioned error message. Source: SaneBox blog . So I need to change the password stored by Thunderbird . There's no actual way to change the password except at a password prompt (which Thunderbird doesn't show since the server never tells it that the password is invalid). I first need to make Thunderbird forget my saved password: use the “Edit” → “Preferences” menu, go to the “Security” → “Passwords” tab, click the “Saved Passwords...” button and remove the entry (or entries) that have the old password. I don't know why I still had the old password there. It's possible that Thunderbird had worked after the password change only because existing connections had remained open. I then restarted Thunderbird (I couldn't find an easier way to make it re-attempt to connect to the Exchange server). Thunderbird deleted all my local archives for this account. I think it decided that the folders had been deleted on the server when it received an unknown error from the server, instead of treating the error as an error. This is a bug in Thunderbird, I haven't tracked it down. When I subsequently changed my password again, I closed Thunderbird first, and it didn't delete any archives.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
164,826
This answer and comments mention --rfc-3339 and a "hidden" --iso-8601 option that I have used for a long time and now seems to be undocumented. When did that option documentation get removed from the --help text? Will the option go away anytime soon?
The option was introduced in the coreutils date (which is probably what you have) in 1999 (Apr. 8). The documentation was removed in 2005 without much explanation in the commit. In 2011 , the help for --iso-8601 was reintroduced with the following explanation: We deprecated and undocumented the --iso-8601 (-I) option mostly because date could not parse that particular format. Now that it can, it's time to restore the documentation. * src/date.c (usage): Document it. * doc/coreutils.texi (Options for date): Reinstate documentation. Reported by Hubert Depesz Lubaczewski in http://bugs.gnu.org/7444. It looks like the help was taken out in version 5.90 and put back in in version 8.15 (it is not in my 8.13), and the comment above suggests that it is now back to stay and not likely to be disappearing any time soon. In version 8.31 (as provided by Solus, July 2020) the man page descriptions for the two options are:
-I[FMT], --iso-8601[=FMT] output date/time in ISO 8601 format. FMT='date' for date only (the default), 'hours', 'minutes', 'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14T02:34:56-06:00
--rfc-3339=FMT output date/time in RFC 3339 format. FMT='date', 'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14 02:34:56-06:00
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/164826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89609/" ] }
164,842
I am new to software development, and over the course of compiling about 20 programs and dependencies from source I have seen a rough pattern, but I don't quite get it. I'm hoping you could shed some light on it. I am SSHing to a SLC6 machine and, without root permissions, I have to install all the software dependencies and - the most difficult part - LINK them to the right place. For instance: I need to install log4cpp. I download a tarball and unpack it
./autogen.sh (if there isn't one, just continue to the next step)
./configure
make
So it is installed in the folder itself along with the source code, just lying there dormant, until I can call it in the right way. Then there is another program which I need to install, and it requires me to specify the lib and include dirs for some dependencies
--with-log4cpp-inc=
--with-log4cpp-lib=
For SOME source compilations, the folder has a lib, bin and inc or include dir - Perfect! For some, the folder has just lib and inc dirs. For some, the folder has just an inc dir. I have no problem when they all have a nice folder, easy to find. But I often run into problems, like with log4cpp. locate log4cpp.so returns null (the lib dirs have .so files in them? or do they?). So I have a problem, in this specific instance, that the library dir is missing and I cannot find it. But I want to know how to solve the problem every time, and also have some background information. However my googling seems to return nothing when searching for how library, include and bin environment variables work. I have also tried looking up the documentation for the program, but it seems that the questions I have: "Where is the lib dir, where is the include dir, where is the bin dir?" are so trivial that they do not even need to communicate it. So:
What is an include dir, what does it do, contain, how do I find it?
What is a library dir, what does it do, contain, how do I find it - every time - useful commands perhaps?
What is a binary dir, what does it do, contain, how do I find it?
These three directories are conventions of the configure/make world, and each maps to a compiler or linker flag. An include directory holds header files (.h, .hpp) that declare a library's interface; the compiler is pointed at it with -I, which is what --with-log4cpp-inc= ends up feeding. A library directory holds the compiled code itself: shared objects named lib<name>.so and/or static archives named lib<name>.a; the linker finds them via -L (where to look) and -l<name> (what to link), which is what --with-log4cpp-lib= wants. A binary directory holds executables, and is what you add to PATH. The reason the layout varies between your source trees is that you never ran the final step: after ./configure && make, the build products are scattered around the build tree (libtool-based projects typically hide freshly built libraries in .libs/ subdirectories, which is one reason they are hard to find). Since you lack root, give configure a prefix you own:
./configure --prefix=$HOME/local
make
make install
Every package will then install its headers into $HOME/local/include, its libraries into $HOME/local/lib and its programs into $HOME/local/bin, so you can always answer dependent packages with those two paths. Finally, locate log4cpp.so returns nothing because shared libraries carry a lib prefix; search for the real name instead, e.g. find ~ -name 'liblog4cpp*' (locate also only knows about files indexed by its last updatedb run, so it misses things you just built).
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/164842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89583/" ] }
164,853
I know what chgrp and chown do. My question is since chown does the same thing as chgrp (and even more), what is the point of chgrp ?
When you use chgrp you are using a simple tool to change one thing: group ownership. For many people this is preferred over using chown , especially when you run the risk of mistyping a character while using the chown command and completely breaking permissions to whatever files/folder you specified. So instead of doing one of the following:
chown user:group [file/dir]
chown :group [file/dir]
You just do:
chgrp group [file/dir]
This keeps the risk of botched ownership changes in a production grade environment down. Which is always good for SysAdmins.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89629/" ] }
164,873
The command $ find ~ -name .DS_Store -ls -delete works on Mac OS X, but $ find ~ -name __pycache__ -type d -ls -delete does not - the directories are found but not deleted. Why? PS. I know I can do $ find ~ -name __pycache__ -type d -ls -exec rm -rv {} + the question is why find -delete does not work.
find 's -delete flag works similarly to rmdir when deleting directories. If the directory isn't empty when it's reached, it can't be deleted. You need to empty the directory first. Since you are specifying -type d , find won't do that for you. You can solve this by doing two passes: first delete everything within dirs named __pycache__ , then delete all dirs named __pycache__ :
find ~ -path '*/__pycache__/*' -delete
find ~ -type d -name '__pycache__' -empty -delete
Somewhat less tightly controlled, but in a single line:
find ~ -path '*/__pycache__*' -delete
This will delete anything within your home that has __pycache__ as part of its path.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/164873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31443/" ] }
164,884
I'm sharing a directory, /home/pi/pydev, on a debian box (a raspberry pi, in fact) with Samba. I'm reading from and writing to that directory from a Windows 7 machine. When I create, under W7, a file in that directory, it gets 0764 rights, and it's owned by user rolf and group rolf - that's me on the W7 machine. User pi on the debian box and user rolf (on W7) both need to be able to modify files in that directory, so I made them both members of group coders, hoping I could configure it so that members of coders have at least read & write access to files in that directory. But user pi can't modify any file that belongs to group rolf. I could chown rolf:coders <filename> file by file. Adding user pi to group rolf is ugly, and doesn't work (didn't expect that. Does Samba maintain an entirely different user administration with groups, beside Debian's?). I could also log on to the debian machine as rolf, and navigate to that folder. But the most elegant way (to me) would be if a file created by rolf from the W7 machine would get userid rolf and groupid coders, by default. Can I configure Samba to do that, or is there some other way to automate that task?
If I understand what you are asking correctly then what you want is inside the smb.conf located here: /etc/samba/smb.conf Add these options to the [global] section: force user = rolf force group = coders
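If the goal is group read/write for coders rather than forcing everything onto one owner, a variant worth trying in the share's own section (the share name and masks here are illustrative; the option names are standard smb.conf):
[pydev]
 path = /home/pi/pydev
 force group = coders
 create mask = 0664
 directory mask = 0775
Reload Samba afterwards (e.g. sudo service samba reload) so new files arrive group-writable by coders.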
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81185/" ] }
164,895
I've followed the instructions on the Elixir site for Ubuntu by downloading and installing their erlang-solutions_1.0_all.deb but no install target is found when trying to install.
$ sudo apt-get install elixir
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package elixir
No matching target in apt-cache
$ sudo apt-cache search elixir
elyxer - standalone LyX to HTML converter
libelixirfm-perl - perl implementation for Functional Arabic Morphology
python-elixir - Declarative Mapper for SQLAlchemy
Erlang solutions repo in sources
$ ll /etc/apt/sources.list.d
total 12K
-rw-r--r-- 1 root root 183 Oct 29 23:38 erlang-solutions.list
-rw-r--r-- 1 root root 58 Nov 26 2013 getdeb.list
-rw-r--r-- 1 root root 458 Apr 20 2014 official-package-repositories.list
I'm running Linux Mint 16 (Petra) based on Ubuntu 13.10 (Saucy Salamander)
$ cat /etc/apt/sources.list.d/erlang-solutions.list
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb http://binaries.erlang-solutions.com/debian saucy contrib
$ sudo apt-get update | grep erlang
Ign http://binaries.erlang-solutions.com saucy InRelease
Hit http://binaries.erlang-solutions.com saucy Release.gpg
Hit http://binaries.erlang-solutions.com saucy Release
Hit http://binaries.erlang-solutions.com saucy/contrib amd64 Packages
Hit http://binaries.erlang-solutions.com saucy/contrib i386 Packages
Ign http://binaries.erlang-solutions.com saucy/contrib Translation-en_GB
Ign http://binaries.erlang-solutions.com saucy/contrib Translation-en
Not sure why this repo doesn't provide me with an install target for elixir.
I tried too from Elixir's documentation; at first I failed, then eventually installed elixir successfully on my MintDebian1 (Debian wheezy). I don't really know what's going on. I tend to think they have a typo on their documentation and wrote apt-get install elixir instead of erlang , because all other blog posts I found that use the same .deb do install erlang and then install elixir manually. I went to /etc/apt/sources.list.d/erlang-solutions.list , changed squeeze to wheezy , ran apt-get update and finally I found elixir and all is well. PS: it is possible to not install Elixir but still run it, and the iex repl too, from a Docker image. See https://registry.hub.docker.com/u/nifty/elixir/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164895", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15634/" ] }
164,899
I was fiddling with my GRUB 2 config files ( /boot/grub/grub.cfg ) and I noticed that the menuentry line for the automatically added Ubuntu boot looks like this: menuentry 'Ubuntu 14.04 Trusty Tahr (on sda5)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-fe3a2033-d77c-4d8c-ba04-3bb27b267dc2' { What is that $menuentry_id_option 'gnulinux-simple-fe3a2033-d77c-4d8c-ba04-3bb27b267dc2' part at the end and do I need it when I add new boot options? So, what does the $menuentry_id_option mean? Do I need to use it when I add another boot menu item for some other distro? What would happen if I didn't include it? Also, is there some GRUB reference I can look to for questions about what these things do?
The line you are looking for is:
if [ x"${feature_menuentry_id}" = xy ]; then
 menuentry_id_option="--id"
else
 menuentry_id_option=""
fi
This checks the value of feature_menuentry_id; if it's equal to y, the --id parameter is prepended to the identifier in your menu entries:
menuentry 'Ubuntu 14.04 Trusty Tahr (on sda5)' --class ubuntu --class gnu-linux --class gnu --class os --id 'gnulinux-simple-fe3a2033-d77c-4d8c-ba04-3bb27b267dc2' {
If it's not, the identifier is left bare:
menuentry 'Ubuntu 14.04 Trusty Tahr (on sda5)' --class ubuntu --class gnu-linux --class gnu --class os 'gnulinux-simple-fe3a2033-d77c-4d8c-ba04-3bb27b267dc2' {
The --id parameter for menuentry isn't defined in the manual for menuentry, but one can reasonably guess that the identifier embeds the UUID of the filesystem the kernel is supposed to boot from.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89671/" ] }
164,903
I have a text file that looks something like this:
foo
bar
zip
rar
tar
I need to use a bash script on OSX to make a new text file after every new line, like this:
cat text1.txt
foo
cat text2.txt
bar
cat text3.txt
zip
cat text4.txt
rar
cat text5.txt
tar
You can use csplit . It does the job well, except that it's somewhat inflexible regarding the output file names (you can only specify a prefix, not a suffix) and you need a first pass to calculate the number of pieces.
csplit -f text -- input.txt '//' "{$(wc -l <input.txt)}"
for x in text[0-9]*; do mv -- "$x" "$x.txt"; done
The GNU version, but not the OSX version, has extensions that solve both issues.
csplit -b '%d.txt' -f text -- input.txt '//' '{*}'
Alternatively, if csplit is too inflexible, you can use awk.
awk '{filename = sprintf("text%d.txt", NR); print >filename; close(filename)}' input.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
164,909
What does the output of echo $-1 , echo $-2 , echo $-3 .. mean? On one of my Linux boxes, it shows me:
echo $-1
imsBEl1
echo $-2
imsBEl2
And on another Linux box, it shows:
echo $-1
himBH1
echo $-2
himBH2
Are they mapping to some memory locations or something else?
echo $- prints the option letters of the current shell (without the - prefix); each letter is a set option that is currently on, e.g. in bash i for interactive, m for monitor (job control), H for history expansion. The 1 and 2 are additional, unrelated tokens: the shell parses $-1 as the expansion of $- immediately followed by a literal 1, so the digit is simply echoed after the option letters. Nothing is being read from memory.
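You can watch the letters change as options are toggled; for example, in bash (the + line is the xtrace output going to stderr):
$ echo $-
himBH
$ set -x
$ echo $-
+ echo himxBH
himxBH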
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89676/" ] }
164,944
One of the applications I use at work sometimes screws with my bash so that I don't see my own input anymore. I.e.
$ echo foo
foo
$
becomes
$ foo
$
I incorrectly tried to run stty -echo, which made matters worse: now it stopped accepting commands altogether, and my input is in some state where a line break just causes > to appear and nothing else. What should I have done?
The usual remedy for things like this is
stty sane
The stty -echo should not have made this worse, as that just turns off echoing of input, and you already had that. The fact that you say pressing Return just causes > to appear means that you've started something that is causing continuation over the next lines, e.g. echo ' will do that, because the shell is waiting for the closing ' to terminate the string. Other things will cause this as well, such as if something ; it's waiting for the then ... fi part. You could probably have hit ctrl - c at that stage to stop it waiting for the rest of the command, unless the terminal was so messed up that interrupts were also not being generated.
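If the terminal is so far gone that Return itself does nothing useful, a classic trick is to bracket the command with literal newlines:
<Ctrl+J>stty sane<Ctrl+J>
Ctrl+J sends a raw line feed (LF), which the shell accepts even when the terminal's carriage-return mapping is broken.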
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/164944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3125/" ] }
164,955
I have a file called temp.csv. In my script, one of the intermediate steps is echo "some info" > final.csv | cat temp.csv >> final.csv At times the file final.csv is created without the data that is in temp.csv (when it is run through the scheduler). And then, when I rerun the job, final.csv is created as I expected. Why is it happening like this (what exactly is happening in the command echo "some info" > final.csv | cat temp.csv >> final.csv )? If I replace the command in the following way: echo "some info" > final.csv ; cat temp.csv >> final.csv will this modification be helpful?
The two halves of a pipeline run concurrently, and the pipe only connects the left command's stdout to the right command's stdin. Here echo "some info" > final.csv sends its output to final.csv instead of the pipe, so nothing flows through the pipe at all; meanwhile cat temp.csv >> final.csv is started at the same time. That gives you a race: if cat opens final.csv for appending before echo's > truncates it, the appended data is destroyed by the truncation, which is why the result depends on timing and misbehaves under the scheduler but works on an interactive rerun. Your proposed replacement is exactly right: echo "some info" > final.csv ; cat temp.csv >> final.csv runs the two commands sequentially, so the truncation always happens before the append. Equivalently, you can do it in one redirection: { echo "some info"; cat temp.csv; } > final.csv
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/164955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69047/" ] }
164,957
I would like to know what filesystem mount point I am currently on, similarly to using pwd to know what directory I am in. I know you can use df . (or df $(pwd) or many variants), but I find it somewhat overkill to check the filesystem usage just to know where I am. So: is there any command showing what filesystem mount point I am on?
I think df . is your best bet. The filesystem usage check is not that expensive (it doesn't have to count any blocks on disk, that information is readily available and stored in memory once the filesystem is mounted). Alternatives like comparing the current path against mount points by using a script would be more expensive.
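If your system has util-linux's findmnt (or a reasonably recent GNU df), you can also ask for the mount point alone, without the usage figures:
findmnt -T .               # full record for the filesystem containing .
findmnt -nT . -o TARGET    # just the mount point
df --output=target .       # GNU coreutils 8.21 or later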
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40596/" ] }
164,978
I want to be able to list files showing the number of lines each file has and the date. I can happily get the line count using wc -l * . Not a problem. I can get the date using ls -l . Is there a way to combine the two commands to give me a single output in columns?
Here is something with find + wc + date .
find . -maxdepth 1 -exec sh -c '[ -f "$0" ] && \
 printf "%6s\t\t%s\t%s\n" "$(wc -l<"$0")" "$(date -r "$0")" "$0"' {} \;
Instead of date -r one can also use for example stat -c%y . The output looks like this:
 394 Thu Oct 16 22:38:14 UTC 2014 ./.zshrc
 7 Thu Oct 30 11:19:01 UTC 2014 ./tmp.txt
 2 Thu Oct 30 06:02:00 UTC 2014 ./tmp2.txt
 40 Thu Oct 30 04:16:30 UTC 2014 ./pp.txt
Using this as starting point one can create a function which accepts directory and pattern as parameters:
myls () { find "$1" -maxdepth 1 -name "$2" -exec sh -c '[ -f "$0" ] && \
 printf "%6s \t\t%s\t%s\n" "$(wc -l<"$0")" "$(date -r "$0")" "$0"' {} \;; }
After that myls /tmp '*.png' will list only images from /tmp (notice single quotes around pattern to prevent shell from expanding a glob operator * ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164978", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86140/" ] }
164,982
Is there a command either standalone or in vi or similar (can be gui) that will show me the numeric sequence of a string? Input: The cat hopped in a box. Output:
T h e c a t h o p p e d i n a b o x .
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Here's a hybrid perl/fold approach:
$ echo "The cat hopped in a box." | fold -w 1 | perl -lne 'push @k, "$_ "; push @l,sprintf "%-2s",$.; END{print "@k\n@l"}'
T h e   c a t   h o p p e d   i n   a   b o x .
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Explanation
fold -w 1 : this will fold the input at a width of one character, resulting in each input character printed on a separate line.
perl -lne : the -l removes trailing newlines from the input and adds a newline to each print call; the n reads input line by line and the e provides the script to run on it.
push @k, "$_ "; : Append a space to the current line ( $_ ) and save it in the array @k .
push @l,sprintf "%-2s",$.; : sprintf will return a formatted string; here we are giving it the current line number ( $. ) and telling it to print it with spaces added as needed to make its length 2. The string is then added to the @l array.
END{print "@k\n@l"}' : once the whole file has been read, print the two arrays.
If you just need to number the characters and don't mind multi-line output, a simpler approach is (using a shorter string for brevity):
$ echo "foo bar" | fold -w1 | cat -n
 1 f
 2 o
 3 o
 4
 5 b
 6 a
 7 r
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/164982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20107/" ] }
164,988
I have a shell script which uses find -print0 to save a list of files to be processed into a temporary file. As part of the logging I'd like to output the number of files found, and so I need a way to get that count. If the -print0 option weren't being used for safety I could use wc -l to get the count.
Some options:
tr -cd '\0' | wc -c
tr '\n\0' '\0\n' | wc -l    # Generic approach for processing NUL-terminated records with line-based utilities (that support NUL characters in their lines like GNU ones).
grep -cz '^'                # GNU grep
sed -nz '$='                # recent GNU sed, no output for empty input
awk -v RS='\0' 'END{print NR}'    # not all awk implementations
Note that for an input that contains data after the last NUL character (or non-empty input with no NUL characters), the tr approaches will always count the number of NUL characters, but the awk / sed / grep approaches will count an extra record for those extra bytes.
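Applied to the setup in the question, a sketch (files.list stands in for whatever temporary file your find ... -print0 wrote):
count=$(tr -cd '\0' < files.list | wc -c)
Since find terminates every name with a NUL, the NUL count is exactly the file count, regardless of newlines or other oddities in the names.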
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/164988", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26360/" ] }
165,002
I'm aware of the uptime command, but it returns seconds since booted, so if I just substract that number from current timestamp, in theory I can get a different result if second changes after I've read the uptime and current timestamp. uptime -s is what I want, but it is not available on centos (how is it calculated btw?). Can I just get ctime of /proc dir? This seems to give me the proper number, but I wonder if every linux system has /proc created on boot.
First of all, crtime is tricky on Linux . That said, running something like
$ stat -c %z /proc/
2014-10-30 14:00:03.012000000 +0100
or
$ stat -c %Z /proc/
1414674003
is probably exactly what you need. The /proc file system is defined by the LFS standard and should be there for any Linux system as well as for most (all?) UNIXen. Alternatively, assuming you don't really need seconds precision, but only need the timestamp to be correct, you can use who :
$ who -b
 system boot 2014-10-30 14:00
From man who :
-b, --boot time of last system boot
You can convert that to seconds since the epoch using GNU date :
$ date -d "$(who -b | awk '{print $4,$3}' | tr - / )" +%s
1414674000
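There is also a kernel-provided field that gives the boot time directly as seconds since the epoch, with no subtraction and therefore none of the read-then-read race you were worried about: btime in /proc/stat:
$ awk '/^btime/ {print $2}' /proc/stat
1414674003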
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48453/" ] }
165,030
On my new computer I have 2 screens connected to the iGPU of an i7 processor, one using HDMI, the other using DVI. Both screens are the same, but the one connected using HDMI has a wrong resolution inside X, and it can't be selected inside the settings. How can this be solved?
first run
$ xrandr
this will give output like this:
Screen 0: minimum 320 x 200, current 5120 x 1080, maximum 8192 x 8192
HDMI1 connected 2560x1080+2560+0 (normal left inverted right x axis y axis) 677mm x 290mm
 1920x1080 60.00*
 1680x1050 59.88
 1600x900 59.98
 ...
HDMI2 connected 2560x1080+0+0 (normal left inverted right x axis y axis) 677mm x 290mm
 2560x1080 60.00*+
 2560x1080_60.00 59.98
 1920x1080 60.00 50.00 59.94 30.00 29.97
 1920x1080i 60.00 50.00 59.94
 ...
to add 2560x1080 to HDMI1 first run the next command
$ cvt 2560 1080
sample output:
2560x1080 59.98 Hz (CVT) hsync: 67.17 kHz; pclk: 230.00 MHz
Modeline "2560x1080_60.00" 230.00 2560 2720 2992 3424 1080 1083 1093 1120 -hsync +vsync
now you need to use this output to test the resolution and then add the settings for use in the next sessions. testing:
$ xrandr --newmode "2560x1080_60.00" 230.00 2560 2720 2992 3424 1080 1083 1093 1120 -hsync +vsync
$ xrandr --addmode HDMI1 2560x1080_60.00
$ xrandr --output HDMI1 --mode 2560x1080_60.00
if it works you can add it to the xorg configuration
$ sudo vim /etc/X11/xorg.conf.d/40-monitor.conf
or
$ sudo vim /usr/share/X11/xorg.conf.d/40-monitor.conf
Section "Monitor"
 Identifier "HDMI1"
 Modeline "2560x1080_60.00" 230.00 2560 2720 2992 3424 1080 1083 1093 1120 -hsync +vsync
 Option "PreferredMode" "2560x1080"
EndSection
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78496/" ] }
165,049
Without creating a file, can I use any inode which is free? I want to write a script that will use all the free inodes in the system. Is it possible?
Yes you can consume all the inodes of a system. They are a limited resource just like disk space, and they're pre-allocated when you perform a mkfs.ext4, for example. You can use tools such as tune2fs -l <device> or df -i <path> to see how many are allocated and used. Example
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355850 2920950 11% /
So this filesystem has 2920950 inodes free. If I started making files, directories, or links on the filesystem, that would be all I needed to do to consume them all. Realize that I could consume all these inodes with small files or links, and still have roughly all the disk space available to me. Consuming inodes without files? I'm not sure what you're getting at here, but the only way I'm aware of to consume inodes is to create files, directories, or links. I'm not familiar with any other way to consume them. Example Here you can see I'm consuming one inode when I create an empty directory.
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355850 2920950 11% /
$ sudo mkdir /somedir
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355851 2920949 11% /
The easiest way to consume the inodes is likely to make a directory tree of directories.
$ sudo mkdir /somedir/1
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355852 2920948 11% /
$ sudo mkdir /somedir/2
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355853 2920947 11% /
$ sudo mkdir /somedir/3
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355854 2920946 11% /
Here's another example where I'm consuming inodes by creating several links using ln to the same file.
$ ln -s afile ln1
$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora_greeneggs-home 26722304 1153662 25568642 5% /home
$ ln -s afile ln2
$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora_greeneggs-home 26722304 1153663 25568641 5% /home
$ ln -s afile ln3
$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora_greeneggs-home 26722304 1153664 25568640 5% /home
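If the goal really is a script that uses up all the free inodes (say, to test monitoring), the only way is to create filesystem objects until creation fails; a minimal sketch, with /mnt/test standing in for a throwaway filesystem:
i=0
while touch "/mnt/test/f$i" 2>/dev/null; do i=$((i+1)); done
echo "created $i files before running out of inodes"
Do not run this on a filesystem you care about; once the inode count hits zero, nothing on that filesystem can create files until you delete some.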
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165049", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89716/" ] }
165,057
For the past 3 days (after an update) my Debian Jessie refuses to mount NTFS disks. I reinstalled libfuse2 and ntfs-3g, yet I get the same Input/output error. I tried the same disks under Windows 7 and OSX Mavericks (using ntfs-3g) and they work fine. I purged ntfs-3g and reinstalled, and still the same problem. The disks will sometimes mount and sometimes won't mount. If they do mount, I am sometimes able to go into the mount directory, whereas some other times I get a bash error Input/output error for the mount directory. The times I am able to go into the mount directory, when I try an ls -l, I see tons of question marks instead of file/dir attributes. I have tried ntfsfix and chkdsk under Windows, and they both reported no problems; it is only under this Jessie install that all of a sudden I can't mount them properly. dmesg has no useful info other than the external disk being attached:
[12816.210969] scsi 20:0:0:0: Direct-Access Seagate External SG16 PQ: 0 ANSI: 4
[12816.211825] sd 20:0:0:0: Attached scsi generic sg7 type 0
[12816.212542] sd 20:0:0:0: [sdg] 732566642 4096-byte logical blocks: (3.00 TB/2.72 TiB)
[12816.213591] sd 20:0:0:0: [sdg] Write Protect is off
[12816.213595] sd 20:0:0:0: [sdg] Mode Sense: bf 00 00 00
[12816.214782] sd 20:0:0:0: [sdg] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[12816.215561] sd 20:0:0:0: [sdg] 732566642 4096-byte logical blocks: (3.00 TB/2.72 TiB)
[12816.242055] sdg: sdg1 sdg2
[12816.243244] sd 20:0:0:0: [sdg] 732566642 4096-byte logical blocks: (3.00 TB/2.72 TiB)
[12816.246031] sd 20:0:0:0: [sdg] Attached SCSI disk
parted /dev/sdg 'print'
Model: Seagate External (scsi)
Disk /dev/sdg: 3001GB
Sector size (logical/physical): 4096B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
 1 258kB 1038GB 1038GB primary
 2 1038GB 3001GB 1962GB primary
fdisk -l /dev/sdg
Note: sector size is 4096 (not 512)
Disk /dev/sdg: 3000.6 GB, 3000592965632 bytes
255 heads, 63 sectors/track, 45600 cylinders, total 732566642 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00090a06
 Device Boot Start End Blocks Id System
/dev/sdg1 63 253473569 1013894028 7 HPFS/NTFS/exFAT
/dev/sdg2 253473792 732566527 1916370944 83 Linux
mount -t ntfs-3g /dev/sdg1 /media/Downloads
ntfs-3g-mount: failed to access mountpoint /media/Downloads: Input/output error
If I manage to mount it via mount -t ntfs-3g /dev/sdg1 /media/Downloads and then cd into it:
cd /media/Downloads
root@athena:/media/Downloads# ls -l
ls: reading directory .: Input/output error
total 0
root@athena:/media/Downloads#
mount, however, says:
/dev/sdf1 on /media/Downloads type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)
What did I break?
EDIT
ntfsinfo -m /dev/sdg1
Volume is scheduled for check.
Please boot into Windows TWICE, or use the 'force' option.
NOTE: If you had not scheduled check and last time accessed this volume using ntfsmount and shutdown system properly, then init scripts in your distribution are broken. Please report to your distribution developers (NOT to us!) that init scripts kill ntfsmount or mount.ntfs-fuse during shutdown instead of proper umount.
Failed to open '/dev/sdg1'.
EDIT#2
ntfsinfo -fm /dev/sdg1
WARNING: Dirty volume mount was forced by the 'force' mount option.
Volume Information
 Name of device: /dev/sdg1
 Device state: 11
 Volume Name:
 Volume State: 91
 Volume Flags: 0x0001 DIRTY
 Volume Version: 3.1
 Sector Size: 4096
 Cluster Size: 4096
 Index Block Size: 4096
 Volume Size in Clusters: 253473506
MFT Information
 MFT Record Size: 4096
 MFT Zone Multiplier: 0
 MFT Data Position: 24
 MFT Zone Start: 0
 MFT Zone End: 31684192
 MFT Zone Position: 4
 Current Position in First Data Zone: 31684192
 Current Position in Second Data Zone: 0
 Allocated clusters 145403 (0.1%)
 LCN of Data Attribute for FILE_MFT: 4
 FILE_MFTMirr Size: 4
 LCN of Data Attribute for File_MFTMirr: 126736753
 Size of Attribute Definition Table: 2560
 Number of Attached Extent Inodes: 0
FILE_Bitmap Information
 FILE_Bitmap MFT Record Number: 6
 State of FILE_Bitmap Inode: 80
 Length of Attribute List: 0
 Number of Attached Extent Inodes: 0
FILE_Bitmap Data Attribute Information
 Decompressed Runlist: not done yet
 Base Inode: 6
 Attribute Types: not done yet
 Attribute Name Length: 0
 Attribute State: 3
 Attribute Allocated Size: 31686656
 Attribute Data Size: 31684192
 Attribute Initialized Size: 31684192
 Attribute Compressed Size: 0
 Compression Block Size: 0
 Compression Block Size Bits: 0
 Compression Block Clusters: 0
 Free Clusters: 199331046 (78.6%)
I will try mounting it under Windows in a few hours (I'm running a check on another disk I don't want to interrupt).
EDIT#3
I went back into Windows and scanned the disks. Windows indeed found problems with one of them, but both were fixed, mountable and browsable. Yet, under Debian, I still cannot do anything. I opened Gparted, and interestingly enough, it complains:
Unable to read the contents of this file system! Because of this some operations may be unavailable. The cause might be a missing software package. The following list of software packages is required for ntfs file system support: ntfsprogs / ntfs-3g.
However, apt-cache policy ntfs-3g says:
ntfs-3g:
 Installed: 1:2014.2.15AR.2-1
 Candidate: 1:2014.2.15AR.2-1
 Version table:
 *** 1:2014.2.15AR.2-1 0
!!! So, have I run into some kind of ntfs-3g bug, or is my system now broken???
It is an ntfs-3g bug . Downgrade ntfs-3g and it will work. I had the same problem with the 1:2014 version, and no problem with the 1:2012 version (which is in the "stable" repository).
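A sketch of the downgrade on Debian, assuming your sources.list still carries a stable (wheezy) entry so apt can see the older build:
sudo apt-get install -t stable ntfs-3g
sudo apt-mark hold ntfs-3g    # keep apt from upgrading back to the broken version
Release the hold later with apt-mark unhold ntfs-3g once a fixed build lands.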
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67204/" ] }
165,101
When I try to install Linux as dual boot on my laptop, it does not show any available drives for me to install it on when I get to the install screen on a live boot CD. I have tried creating EXT and FAT32 partitions, however it's still not finding any drives. This is what my Windows partitions look like using the Windows 7 partition tool. This is what the Linux install shows. The laptop is a Dell laptop -- Inspiron 14z http://www.dell.com/uk/business/p/inspiron-14z-5423/pd
I didn't want to disable Intel(R) Smart Response Technology as it does offer a performance improvement; changing the BIOS to get rid of the RAID setup would have done just this. The bulk of my resolution came from this Super User answer: How do I install Windows 7 (with Intel RST) and Linux to dual boot on a Dell XPS 15? Mine differed in a few ways though - mainly I used the Windows 7 bootloader and not the Linux grub one. Here are the steps I did. I created some free space partitions using the Windows partition tool before booting into the Ubuntu live DVD. Control Panel -> type "partition" in the search -> open the Windows partition tool. I created 100GB and 4GB partitions. Then I used the Windows tool to keep the new partitions as unallocated. I then used the Acronis disk manager partition tool to convert them to EXT3 and Linux swap (other free alternatives exist). I had to use the Windows tool to make the space free first, as the Acronis tool seemed to crash when I tried to resize the C drive directly. Next I disabled Intel Smart Response from the Windows task bar: right click -> options -> disable. You can enable it again after you have installed Linux. I then re-booted my laptop and loaded Linux from the live DVD disk. When Linux booted up I typed in the terminal
modprobe dm_mod
dmraid -ay
ls -la /dev/mapper/
The above commands made the drives visible. After the commands I clicked "install Linux". Then on the 'choose partition' section I selected the EXT3 partition I created and set that as my Linux root, then the 4GB one and set that as the swap. I then installed Linux. Upon restart it didn't boot straight into Linux - this is because I did not overwrite the Windows bootloader. Everything was installed only on the partitions I created. I was worried about messing it up and not being able to get into Windows. To enable my laptop to load into Linux on this new setup, I booted it into Windows (it didn't give any other options at this point). Then in Windows I downloaded and installed EasyBCD . This is quite a handy tool as it allows you to add entries to the Windows bootloader - you just select the partition. When I load my laptop now it allows me to load Linux or Windows 7 from the bootloader. After this was working I then re-enabled Intel Smart Response.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89792/" ] }
165,105
Which packages should be rebuilt after upgrading gcc on a gentoo system? Is it sufficient to run # emerge -a --oneshot `equery depends gcc |awk '{print " ="$1}'` as suggested similar for perl in this FAQ ?
TL;DR I have a different take on this as a Gentoo user. While I agree with peterph's approach of "Let the System Decide," I disagree when it comes to an ABI Update. An ABI Update is sometimes a major shift in behavior. In the case of GCC 4.7, the ABI Change was the adoption of the new C++11 Standard, which peterph also pointed out. Here is why I write this answer. I'm a standards junkie. I started in the web world when there were about 4 different browsers, and a plethora of tags in HTML that were only supported by certain browsers. At the time, all those tags increased confusion, and IMO made work harder. C++ has been standardized for this same reason, in short so that you can compile code that I write, and I can compile code that you write . If we chose not to follow a standard, we lose the freedom to share. C++98 has been the approved Standard for 13 years. C++11 was ratified by the ISO Committee in 2011, and was completely integrated into GCC 4.7. See the current ISO status , and the new ISO Standard . Why We Should Feel Privileged as Gentoo Users As users of a source-based distribution, we have the unique opportunity to shape the future behavior of a package because we compile it before we use it. As such, to prepare for that opportunity, I feel that the following commands should be run when updating to the new compiler:
emerge -ev system
gcc-config -l && gcc-config *new compiler name*
env-update && source /etc/profile
emerge -1v libtool
emerge -ev system
The first pass through system builds the new compiler, and its dependencies, with the old compiler. The second pass through system rebuilds the new compiler and its dependencies with the new compiler. Specifically, we want to do this so that our Build Chain takes advantage of the new features of the new compiler, if the Build Chain packages have been updated also... Some people replace the 2nd pass through system with the world set, although I find this to be overkill, as we don't know which packages already support the new standard, but we do want our build chain to behave sanely. Doing this to at least the system set prepares us to test every package that we compile against the new standard, because we use a rolling release. In this way, adding -std=c++11 to CXXFLAGS after updating the build chain allows us to test for breakage, and be able to submit bugs directly to either our bugzilla or upstream to the actual developers for the simple reason of: Hey, your package blah blah breaks using the new C++ standard, and I've attached my build log. I consider this a courtesy to the developers, as they now have time to prepare as the standard becomes more widely adopted, and the old standard is phased out. Imagine the commotion on the developer's part if he received hundreds of bugs because he or she waited until the standard was phased out... No other distribution that I know of can use this method, as the actual package maintainers exist as middlemen before a patch or update can be used by the respective user community. We do have maintainers, but we also have the ability to use a local portage tree. Regarding Insightful Thoughts Posted in the Bounty Request I don't know if the bounty was posted because you all like my insightful, well thought out answers, but in an attempt at the bounty, I'll attempt to answer your insightful, well thought out bounty offering.
First off, let me say in response that as a user of a source based distribution, I firmly believe what connects the dots are all the things you've asked for in your bounty request. Someone can be a great coder, but have crappy care for software. In the same way, there are people that are crappy coders that have great care for software. Before I came here, I was an avid poster over at the Gentoo Forums . I finally realized when I started coming here that everyone has some degree of some talent they can use. It's what they choose to do with it that makes the contributory difference. Some of us are great writers (not I), so if you want to contribute to some project, but you don't or can't write code, or fix bugs, remember that great writers can write great documentation, or great Wiki articles. The standard is there for another reason: in a community, certain rules are expected of its members . Follow that statement here too. If I submit a fix, patch, enhancement etc and there are no standards, the patch will only work in the situations that I deem important, i.e. if I'm using whizbang compiler 2.0, and the patch is built against whizbang compiler 1.0, it will fail. Since the effort is for a community, the community expects everything to work in most situations, so instead of forcing all users to upgrade to compiler 2, I can stipulate in a standard: This package chooses to allow Backwards Compatibility with Whizbang Compiler 1.0 In this way, as a developer, crappy coder or not, I know that I must use or at least test against Compiler Version 1.0. As a user on the other hand, I can choose what I want to do. If I'm unhappy, I can request a patch by submitting a bug, or go to the other extreme of "This software is a piece of crap!," and do nothing. Regardless, the user and the developer understand the standard because it's been written. Bridging the gap takes action of some form on a user's part, and that requires all the things you asked me and others to comment on, and we must rely on the user community and their talents of all forms to bridge that gap. If you choose to be one of the contributing users, I applaud you. For those of you who choose to be inactive, remember that if you want something fixed, the active ones need your input. So I'm telling you, don't be shy about submitting a bug, or telling us we need to update documentation, and if we're rude tell us, or find someone else, until you find your area of expertise. Other Interesting Reading Related to This Topic
The Biggest Changes in C++11 (and Why You Should Care)
C++0x/C++11 Support in GCC
News, Status & Discussion about Standard C++
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26440/" ] }
165,108
During the last few days I've read a lot about PF as a nice alternative to iptables and a much more improved and secure way of filtering.Nevertheless I came across the following statement which I thought could be a security issue: Keeping State for UDP: One will sometimes hear it said that, "One can not create state with UDP as UDP is a stateless protocol!" While it is true that a UDP communication session does not have any concept of state (an explicit start and stop of communications), this does not have any impact on PF's ability to create state for a UDP session. In the case of protocols without "start" and "end" packets, PF simply keeps track of how long it has been since a matching packet has gone through. If the timeout is reached, the state is cleared. The timeout values can be set in the options section of the pf.conf file. My concerns: UDP does not have SequenceNr. So if an attacker eavesdrops an UDP-stream (which has already received a state in pf's state table) he could easily inject spoofed packages which will then pass the firewall, no? Isn't that a big security issue? Or have I misunderstood something in pf's mechanism?
TL;DR I have a different take on this as a Gentoo user. While I agree with peterph's approach of "Let the System Decide," I disagree when it comes to an ABI Update. An ABI Update is sometimes a major shift in behavior. In the case of GCC 4.7, the ABI Change was the adoption of the new C++11 Standard, which peterph also pointed out. Here is why I write this answer. I'm a standards junkie. I started in the web world when there were about 4 different browsers, and a plethora of tags in HTML that were only supported by certain browsers. At the time, all those tags increased confusion, and IMO made work harder. C++ has been standardized for this same reason, in short so that you can compile code that I write, and I can compile code that you write . If we chose not to follow a standard, we lose the freedom to share. C++98 has been the approved Standard for 13 years. C++11 was ratified by the ISO Committee in 2011, and was completely integrated into GCC 4.7. See the current ISO status , and the new ISO Standard . Why We Should Feel Privileged as Gentoo Users As users of a source-based distribution, we have the unique opportunity to shape the future behavior of a package because we compile it before we use it. As such, to prepare for that opportunity, I feel that the following commands should be run, when updating to the new compiler: emerge -ev systemgcc-config -l && gcc-config *new compiler name*env-update && source /etc/profileemerge -1v libtoolemerge -ev system The first pass through system builds the new compiler, and it's dependencies, with the old compiler. The second pass through system rebuilds the new compiler and it's dependencies with the new compiler. Specifically, we want to do this so that our Build Chain takes advantage of the new features of the new compiler, if the Build Chain packages have been updated also... Some people replace the 2nd pass through system with the world set, although I find this to be overkill, as we don't know which packages already support the new standard, but we do want our build chain to behave sanely. Doing this to at least the system set, prepares us to test every package that we compile against the new standard, because we use a rolling release. In this way, adding -std=c++11 to CXXFLAGS after updating the build chain allows us to test for breakage, and be able to submit bugs directly to either our bugzilla or upstream to the actual developers for the simple reason of: Hey, your package blah blah breaks using the new C++ standard, and I've attached my build log. I consider this a courtesy to the developers, as they now have time to prepare as the standard becomes more widely adopted, and the old standard is phased out. Imagine the commotion on the developer's part if he received hundreds of bugs, because he or she waited until the standard was phased out... No other distribution that I know of can use this method as the actual package maintainers exist as middlemen before a patch or update can be used by the respective user community. We do have maintainers, but we also have the ability to use a local portage tree. Regarding Insightful Thoughts Posted in the Bounty Request I don't know if the bounty was posted because you all like my insightful, well thought out answers, but in an attempt at the bounty, I'll attempt to answer your insightful, well thought out bounty offering. 
First off, let me say in response that, as a user of a source-based distribution, I firmly believe what connects the dots are all the things you've asked for in your bounty request. Someone can be a great coder, but have crappy care for software. In the same way, there are people that are crappy coders that have great care for software. Before I came here, I was an avid poster over at the Gentoo Forums. I finally realized when I started coming here that everyone has some degree of some talent they can use. It's what they choose to do with it that makes the contributory difference. Some of us are great writers (not I), so if you want to contribute to some project, but you don't or can't write code, or fix bugs, remember that great writers can write great documentation, or great wiki articles. The standard is there for another reason: in a community, certain rules are expected of its members. Follow that statement here too. If I submit a fix, patch, enhancement, etc. and there are no standards, the patch will only work in the situations that I deem important, i.e. if I'm using whizbang compiler 2.0, and the patch is built against whizbang compiler 1.0, it will fail. Since the effort is for a community, the community expects everything to work in most situations, so instead of forcing all users to upgrade to compiler 2, I can stipulate in a standard: This package chooses to allow backwards compatibility with Whizbang Compiler 1.0. In this way, as a developer, crappy coder or not, I know that I must use, or at least test against, compiler version 1.0. As a user, on the other hand, I can choose what I want to do. If I'm unhappy, I can request a patch by submitting a bug, or go to the other extreme of "This software is a piece of crap!" and do nothing. Regardless, the user and the developer understand the standard because it's been written. Bridging the gap takes action of some form on a user's part, and that requires all the things you asked me and others to comment on, and we must rely on the user community and their talents of all forms to bridge that gap. If you choose to be one of the contributing users, I applaud you. For those of you who choose to be inactive, remember that if you want something fixed, the active ones need your input. So I'm telling you, don't be shy about submitting a bug, or telling us we need to update documentation; and if we're rude, tell us, or find someone else, until you find your area of expertise.

Other Interesting Reading Related to This Topic:
The Biggest Changes in C++11 (and Why You Should Care)
C++0x/C++11 Support in GCC
News, Status & Discussion about Standard C++
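As a concrete illustration of the CXXFLAGS testing step mentioned above (a sketch; -std=c++11 is the real GCC flag, but whether you want it set globally is your call), you could append the new standard to your toolchain flags in /etc/portage/make.conf:

# /etc/portage/make.conf
# Build everything against the C++11 standard to surface breakage early
CXXFLAGS="${CXXFLAGS} -std=c++11"

Then rebuild a package you suspect, e.g. emerge -1v some/package, and attach the build log to any bug you file.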
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89794/" ] }
165,113
I know there is the ls -r option for listing in decreasing order. But is there any simple way to do it in combination with another command, using only plain ls without its arguments?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89800/" ] }
165,124
Is there an equivalent product/method to AutoHotkey? For those that don't know, it's a product that allows you to program your mouse movements and keyboard. This allows me to "macro" certain functions in programs instead of having to do them manually. In the old days they used to call them keyboard stuffers. But that is only half of the solution, as I need a "mouse movement" stuffer as well. Is there such a beast?
There's a port of AutoHotKey to Linux called IronAHK and a similar tool called Autokey . I haven't used either, I don't know how well they work. Other than such programs, you won't find exactly the same kind of automation tools. The basic user automation tool on unix systems is the shell. The shell is the glue that combines other tools: it can launch external programs and control how they exchange data. The shell itself doesn't include anything to manipulate GUI concepts like windows and the mouse pointer¹. What you do is call specialized tools in a shell script, in particular xdotool to manipulate windows, and inject keystrokes and mouse events. For example, the following script clicks at the position (40,20) in the window of Myapp.

#!/bin/sh
xdotool search --class Myapp \
    mousemove --window %1 40 20 \
    click 1

¹ Except for dtksh , but I've never seen a Linux port of it.
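Since the question also asks about keyboard stuffing, here is a companion sketch (the window class Myapp is a placeholder; xdotool's type and key commands are real):

#!/bin/sh
# Focus the Myapp window, then send text and a keystroke to it
xdotool search --class Myapp windowactivate --sync
xdotool type 'hello world'
xdotool key Return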
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33863/" ] }
165,155
In vim I want to substitute a long path /a/b/f/g/d/g with another /s/g/w/d/g/r in a text file. Usually I use \/ but there are too many slashes here. Is there an easier way?
You can use another character instead of slash / as the delimiter for the substitution command. Example using # :

:%s#/a/b/f/g/d/g#/s/g/w/d/g/r#
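In fact almost any punctuation character can serve as the delimiter, so you can pick whatever does not occur in your pattern. For example, the same substitution with a comma as separator:

:%s,/a/b/f/g/d/g,/s/g/w/d/g/r,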
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165155", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
165,160
Let's say you want to cat the contents of a really big file, but want to view it a few bits at a time. Let's say one were to do the following: $ cat /dev/sda1 | less As a programmer of languages such as Java and ActionScript, when I look at that code I imagine Bash first running the command cat /dev/sda1 (loading everything the command returns into RAM), and then running the command less which has access to that really big "pseudo-variable" represented as - . Is that the way Bash does things (meaning that command is a really bad idea if the file is larger than the amount of RAM on your system, and you should use another command), or does it have a way of optimising the piping of large amounts of data?
No, it doesn't load everything into memory; that would be an impractical way to design this. It uses buffers to buffer the output from the left side of the pipe, and then connects these buffers to the input of the command on the right side of the pipe. The man page man 7 pipe has all the details, as well as this other U&L Q&A titled: How big is the pipe buffer?
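A quick way to convince yourself (a sketch; 64 KiB is the default pipe capacity on modern Linux, see pipe(7)) is to push a large stream through a pipe and note that memory use stays flat:

# 1 GiB flows through the pipe, but at most ~64 KiB is buffered at any moment;
# dd simply blocks whenever wc hasn't drained the buffer yet
dd if=/dev/zero bs=1M count=1024 | wc -c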
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5769/" ] }
165,192
I have to extract from the command hcitool dev only the MAC address of the bluetooth dongle. Output of hcitool dev is:

Devices:
    hci0    xx:xx:xx:xx:xx:xx

I write this output to a file and try to get the info with awk :

hcitool dev > /home/pi/mario/BT.txt
awk '{ print $2 }' /home/pi/mario/BT.txt

The output contains an empty first line (from the Devices: header, which has no second field) before the MAC address:

xx:xx:xx:xx:xx:xx

How can I get rid of that empty first line?
For your purpose, grep is quite enough:

hcitool dev | grep -o "[[:xdigit:]:]\{11,17\}"

-o outputs just the matched part of the line; [[:xdigit:]:] means any hexadecimal digit plus the : character; \{11,17\} requires the match to be between 11 and 17 characters long.
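If you would rather stay with awk, a sketch that fixes the original attempt is to print the second field only on lines that actually have one (field positions per the hcitool output quoted in the question):

# skip the "Devices:" header line, which has no second field
hcitool dev | awk 'NF >= 2 { print $2 }'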
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89837/" ] }
165,200
When I run ls | sort -S I get:

sort: option requires an argument -- 'S'

Why can't I sort the list of my files with the sort option by size? I know that I can just use the ls command alone.
First of all, the ls command has the option -S . From man ls :

-S     sort by file size

So the proper command is:

ls -S

The sort command is for sorting lines of text files. From man sort :

-S, --buffer-size=SIZE
       use SIZE for main memory buffer

SIZE is an integer and optional unit (example: 10M is 10*1024*1024). Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, ... (powers of 1000). That's why you are getting the error: sort: option requires an argument -- 'S' . Use ls -S for sorting files by size!
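A couple of variations that may be handy (standard ls flags, shown as a sketch):

ls -lhS    # long listing, human-readable sizes, largest first
ls -Sr     # smallest first (reverse the size sort)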
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89800/" ] }
165,201
I accidentally deleted the /var/log/mail file. Until that point I was able to monitor Postfix with it. Now it seems that Postfix doesn't send its logs to /var/log/mail , since the file is not getting updated with new log messages. How can I get mail logging working again?
When you delete the mail.log file, rsyslog (on Ubuntu) loses its handle to the file. To get it working again on Ubuntu, run:

sudo service rsyslog restart

This will not only create a new file but also start writing logs to it again.
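On a systemd-based system the equivalent would be (a sketch; the unit name may vary by distribution):

sudo systemctl restart rsyslog

If you want to avoid a full restart, you can also recreate the file with the ownership rsyslog expects and then signal the daemon to reopen its log files:

sudo touch /var/log/mail.log
sudo chown syslog:adm /var/log/mail.log   # typical Ubuntu ownership; check your system
sudo pkill -HUP rsyslogd                  # HUP makes rsyslogd reopen its files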
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165201", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72520/" ] }
165,214
Are there any relatively straightforward options with top to track a specific process? Ideally by identifying the process by a human-readable value, e.g. chrome or java . In other words, I want to view all the typical information top provides, but with the results filtered to the given name, i.e. 'chrome' or 'java'.
You can simply use grep :

NAME
       grep, egrep, fgrep, rgrep - print lines matching a pattern
SYNOPSIS
       grep [OPTIONS] PATTERN [FILE...]
       grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...]
DESCRIPTION
       grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name) for lines containing a match to the given PATTERN. By default, grep prints the matching lines.

Run the following command to get the output you want (example: chrome):

top | grep chrome

Here we are using grep in a pipeline | , so top and grep run in parallel: top's output is given to grep (as input), and grep chrome filters out the lines matching chrome until top is stopped.
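Two refinements worth knowing (a sketch; both flags are standard): top only produces grep-friendly plain text in batch mode, and top can also be pointed at PIDs directly:

# batch mode: a clean snapshot you can filter
top -b -n 1 | grep chrome

# or let top itself watch only the matching processes
top -p "$(pgrep -d, chrome)"

The second form fails if no process matches, since pgrep then prints nothing.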
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/165214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89437/" ] }
165,225
I have a large number of files and directories in one directory. I need to sort them in terms of the permissions. For example drwx------drwxr-xr-x drwxr-x--- I am just wondering if we can sort the files and dirs using ls ?
ls does not directly support sorting by permissions, but you can combine it with the sort command: ls -l | sort You can use the -k option to sort to start matching from a specific character, the format is -k FIELD.CHAR , the permissions are the first field in the ls output. So e.g. -k 1.2 will start from the second character of the permission string, which will ignore any directory / device / link etc. flag, or -k 1.5 for sorting by group permissions. If you don't want the additional output of ls -l , you can remove it with awk: ls -l | sort | awk '{ print $1, $NF}' This will print only the first field (the permissions) and the last one (the filename).
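To make the -k usage concrete (a sketch using the field/character positions described above):

ls -l | sort -k 1.2                             # skip the file-type character, sort by permission bits
ls -l | sort -k 1.5 | awk '{ print $1, $NF }'   # sort by group permissions, show mode and name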
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44259/" ] }
165,240
When I use setfacl to manage what permissions child files/directories should have, for some reason the files get all the permissions except the execute ("x") one.

someuser@someuser-MS-7816:/opt/lampp/htdocs/project$ getfacl .
# file: .
# owner: someuser
# group: webs
# flags: -s-
user::rwx
group::rwx
other::rwx
default:user::rwx
default:group::rwx
default:other::rwx
someuser@someuser-MS-7816:/opt/lampp/htdocs/project$ touch file
someuser@someuser-MS-7816:/opt/lampp/htdocs/project$ mkdir dir
someuser@someuser-MS-7816:/opt/lampp/htdocs/project$ ls -l
total 4
drwxrwsrwx+ 2 someuser webs 4096 paź 31 13:35 dir
-rw-rw-rw-  1 someuser webs    0 paź 31 13:35 file

I thought it had something to do with umask, but changing it in various ways never gives the expected result, unless I'm missing something. How can this be fixed?
Hauke Laging’s answer is trying to say: Any program that creates a file or directory specifies the mode (permissions) that it wants that file to have. This is almost always hard-coded in the C program (or whatever language is used) and is hardly ever directly accessible to the user. Then the umask value and the default ACL can turn off permission bits, but not add them. Your problem is that, while mkdir specifies a mode of 777 ( rwxrwxrwx ), almost all programs that create files specify 666 ( rw-rw-rw- ). This includes touch , the shell (for I/O redirection; e.g., program > file ), the editors ( vi , vim , emacs , etc…), dd , split , and so on. Therefore, you will not get permissions of rwxrwxrwx on a plain file immediately upon creation (by any of these programs), no matter what you do with ACLs; you must create the file and then chmod it. There are a couple of exceptions to this rule: cp and related programs (e.g., cpio , tar , etc.) that copy or otherwise re-create a file, which will (attempt to) set the new file to the same mode as the original file. Compilers, which create binary executable files, specify a mode of 777 (at least, if the compilation succeeds), so the user will actually be able to execute the program they just compiled.
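You can watch this happen (a sketch; the exact syscall line varies by system, but the 0666 constant is what touch passes):

$ strace -e trace=openat touch file 2>&1 | grep '"file"'
openat(AT_FDCWD, "file", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3

And the fix after creation is simply:

chmod a+x file    # add the execute bits the creating program never requested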
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73704/" ] }
165,246
I have a couple of commands in my bash script that should do certain tasks, say like...

#!/bin/bash
yum -y update
service nginx restart
yum install atop
cat /var/log/somelog.log > /home/cron/php/script

It goes on and on, but the problem is that with each task run, bash shows me the output. Like service nginx restart , for example, outputs some message. And I want to hide all these messages. What is generally the accepted way of achieving this? Because I was about to redirect STDOUT to /dev/null but, considering I have over 50 consecutive tasks to run, it would mean I have to /dev/null that much, which for some reason does not seem efficient to me.
Redirect all the output as a block:

(
    yum -y update
    service nginx restart
    yum install atop
    cat /var/log/somelog.log > /home/cron/php/script
) > /dev/null 2>&1
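An alternative sketch that avoids the subshell entirely: redirect the script's own stdout and stderr once, near the top, with exec :

#!/bin/bash
exec > /dev/null 2>&1   # everything below runs silently
yum -y update
service nginx restart
# ...

This silences every subsequent command without wrapping or touching them individually.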
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77324/" ] }
165,256
Is there a multi-threaded cp command on Linux? I know how to do this on Windows, but I don't know how this is approached in a Linux environment.
As Celada mentioned, there would be no point to using multiple threads of execution since a copy operation doesn't really use the cpu. As ryekayo mentioned, you can run multiple instances of cp so that you end up with multiple concurrent IO streams, but even this is typically counter-productive. If you are copying files from one location to another on the same disk, trying to do more than one at a time will result in the disk wasting time seeking back and forth between each file, which will slow things down. The only time it is really beneficial to copy multiple files at once is if you are, for instance, copying several files from several different slow, removable disks onto your fast hard disk, or vice versa.
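For the one case where parallelism does help, i.e. several independent slow source devices, a minimal sketch (the paths are made up) is simply to background the copies:

# each cp reads from a different physical device, so they don't fight over seeks
cp /mnt/usb1/bigfile1 /data/ &
cp /mnt/usb2/bigfile2 /data/ &
wait    # block until both background copies finish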
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
165,326
I am using a set of local yum repositories and trying to install a set of packages from those repos. I noticed that when one of the packages on the command line does not exist, Yum just prints out that it was not found and goes along its merry way. Can I make Yum quit when this happens? Is there some other Yum utility that I can use to give it my repos and my packages and tell me if there is a problem?

yum --disablerepo=* --enablerepo=myrepo --nogpgcheck \
    --installroot=/var/some/place/test install \
    abasdfasfeafseasfeasef bash coreutils utils-linux
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
No package abasdfasfeafseasfeasef available.
No package utils-linux available.
<snip>
Complete!

I am calling Yum from another script and don't appear to have a way to tell if the packages that I installed are really installed.
Modern versions of yum (yum-3.4.3-133.el7+, ticket ) provide two options that should help with this use-case: skip_missing_names_on_install If set to False, 'yum install'will fail if it can't find any of the provided names (package,group, rpm file). Boolean (1, 0, True, False, yes, no).Defaults to True. skip_missing_names_on_update If set to False, 'yum update'will fail if it can't find any of the provided names (package,group, rpm file). It will also fail if the provided name is apackage which is available, but not installed. Boolean (1, 0,True, False, yes, no). Defaults to True. Source: man-pages Usage: yum --setopt=skip_missing_names_on_install=False <commands-here>
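A sketch of how this looks in a calling script (the package names here are placeholders):

yum --setopt=skip_missing_names_on_install=False \
    install bash coreutils nosuchpackage
if [ $? -ne 0 ]; then
    echo "at least one requested package could not be found" >&2
    exit 1
fi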
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165326", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89623/" ] }
165,329
I have an SQL query that must be run on every startup after a software update (updates come with new installations of mysql and thus change the debian-sys-maint user's password, which my script updates in the database). I have a script in /etc/init.d that does this exact thing when I run it as the root user: ./update . But when I boot it does not run correctly. I run the service command to get it to run through the init.d process, but it says:

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

I put a plaintext password in /root/.my.cnf to avoid having to use it in various other scripts and to improve security. The script can make its mysql call (without -p ) perfectly when I run it manually, but not when I use the service process and not on boot. There was a mention on another question that the environment might not be set up correctly for the script, but I have no idea what environment variables I'd have to set up to call mysql in a script so that it will read /root/.my.cnf . I have already checked to make sure that the script is owned by root and has 755 permissions. What do I have to do to get this to work?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89919/" ] }