source_id (int64, 1 to 4.64M) | question (string, lengths 0 to 28.4k) | response (string, lengths 0 to 28.8k) | metadata (dict)
---|---|---|---|
238,180 | I'm currently studying penetration testing and Python programming. I just want to know how I would go about executing a Linux command in Python. The commands I want to execute are: echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 8080 If I just use print in Python and run it in the terminal, will it do the same as executing the command as if you were typing it yourself and pressing Enter? | You can use os.system() , like this: import os
os.system('ls') Or in your case: os.system('echo 1 > /proc/sys/net/ipv4/ip_forward')
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 8080') Better yet, you can use subprocess's call, it is safer, more powerful and likely faster: from subprocess import call
call('echo "I like potatos"', shell=True) Or, without invoking shell: call(['echo', 'I like potatos']) If you want to capture the output, one way of doing it is like this: import subprocess
cmd = ['echo', 'I like potatos']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
o, e = proc.communicate()
print('Output: ' + o.decode('ascii'))
print('Error: ' + e.decode('ascii'))
print('code: ' + str(proc.returncode)) I highly recommend setting a timeout in communicate , and also catching the exceptions you can get when calling it. This code is very error-prone, so you should expect errors to happen and handle them accordingly. https://docs.python.org/3/library/subprocess.html | {
"source": [
"https://unix.stackexchange.com/questions/238180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122522/"
]
} |
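A note on the answer above: it recommends a timeout and exception handling but does not show them. A minimal sketch using subprocess.run (Python 3.5+); the echo command is only an illustration:
import subprocess
try:
    proc = subprocess.run(['echo', 'I like potatos'],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          timeout=10)
    print('Output: ' + proc.stdout.decode('ascii'))
    print('code: ' + str(proc.returncode))
except subprocess.TimeoutExpired:
    print('command did not finish within the timeout')
except OSError as e:
    print('command could not be started: ' + str(e))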
238,640 | My rental Linux server doesn't respond to nmap the way I thought it would. When I run nmap it shows three open ports: 80, 443 and 8080. However, I know ports 2083, 22 and 2222 should all be open, as they're used for the web-based C-Panel, SSH and SFTP, respectively. Has my server rental company not opened these ports fully, or does nmap not give a complete list (by default)? | By default, nmap scans the thousand most common ports. Ports 2083 and 2222 aren't on that list. In order to perform a complete scan, you need to specify "all ports" ( nmap -p 1-65535 , or the shortcut form nmap -p- ). Port 22, on the other hand, is on the list. If nmap isn't reporting it, it's because something's blocking your access, or the SSH server isn't running. | {
"source": [
"https://unix.stackexchange.com/questions/238640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138973/"
]
} |
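As a quick illustration of the point above, you can also hand nmap just the ports you care about instead of scanning everything (the hostname is a placeholder):
nmap -p 22,2083,2222 your-server.example.com
nmap -p- your-server.example.com
The first command checks only the three ports in question; the second is the full 1-65535 scan mentioned in the answer.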
238,738 | I want to find files newer than 15 seconds but older than 2 seconds.
Here is the script I'm currently using that grabs files newer than 15 seconds: find /my/directory -name '*.jpg' -not -newermt '-15 seconds' Any help is greatly appreciated | You can combine multiple predicates by chaining them. There's no -oldermt , but you can write that as -not -newermt . You want: -newermt '-15 seconds' to say the file is less than 15 seconds old, and -not -newermt '-2 seconds' to say the file is more than 2 seconds old Try: find /my/directory -newermt '-15 seconds' -not -newermt '-2 seconds' Or, to be POSIX compliant: find /my/directory -newermt '-15 seconds' \! -newermt '-2 seconds' Also, just so you (and other readers) are aware, "newer" means modified more recently than, not created more recently than. | {
"source": [
"https://unix.stackexchange.com/questions/238738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140279/"
]
} |
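If you still want the *.jpg filter from the original command, the tests simply chain together; a sketch using the POSIX-style negation from the answer:
find /my/directory -name '*.jpg' -newermt '-15 seconds' \! -newermt '-2 seconds'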
239,118 | I am learning Linux. I was surprised to see that the parameter order seems to matter when making a tarball. tar -cfvz casual.tar.gz snapback.txt bucket.txt gives the error: tar: casual.tar.gz: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors But if I issue the command like this: tar -cvzf casual.tar.gz snapback.txt bucket.txt the tarball is created without errors Can anyone explain to me why the parameter order matters in this example or where I can find that information to learn why myself? I tried it the way I did in my first example that received an error with the logic of putting the required parameters c and f first followed by my other parameters. I want to completely absorb Linux, which includes understanding why things like this occur. Thanks in advance! | Whether the order matters depends on whether you start the options with a minus $ tar -cfvz casual.tar.gz snapback.txt bucket.txt
tar: casual.tar.gz: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
$ tar cfvz casual.tar.gz snapback.txt bucket.txt
snapback.txt
bucket.txt This unusual behavior is documented in the man page: Options to GNU tar can be given in three different styles.
In traditional style
...
Any command line words that remain after all options has
been processed are treated as non-optional arguments: file or archive
member names.
...
tar cfv a.tar /etc
...
In UNIX or short-option style, each option letter is prefixed with a
single dash, as in other command line utilities. If an option takes
argument, the argument follows it, either as a separate command line
word, or immediately following the option.
...
tar -cvf a.tar /etc
...
In GNU or long-option style, each option begins with two dashes and
has a meaningful name
...
tar --create --file a.tar --verbose /etc

tar , which is short for "tape archive", has been around since before the current conventions were decided on, so it keeps the different modes for compatibility. So to "absorb Linux", I'd suggest a few starting lessons:
- always read the man page
- minor differences in syntax are sometimes important
- the position of items matters - most commands require options to be the first thing after the command name
- whether a minus is required (like tar , ps works differently depending on whether there is a minus at the start)
- whether a space is optional, required, or must not be there ( xargs -ifoo is different from xargs -i foo )
- some things don't work the way you'd expect
To get the behavior you want in the usual style, put the output file name directly after the f or -f , e.g. $ tar -cvzf casual.tar.gz snapback.txt bucket.txt
snapback.txt
bucket.txt or: $ tar -c -f casual.tar.gz -z -v snapback.txt bucket.txt or you could use the less common but easier to read GNU long style: $ tar --create --verbose -gzip --file casual.tar.gz snapback.txt bucket.txt | {
"source": [
"https://unix.stackexchange.com/questions/239118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109278/"
]
} |
239,295 | People say you shouldn't use spaces in Unix file naming. Are there good reasons to not use capital letters in file names (i.e., File_Name.txt vs. file_name.txt )? Or is this just a matter of personal preference? | People say you shouldn't spaces in Unix file naming. People say a lot of things. There are some tools that may screw up, but hopefully they are few in number at this point in time, since spaces are a virus proliferated by giant consumer proprietary OS corporations and now impossible to avoid. Spaces make specifying filenames on the command line, etc., awkward. That's about it. The only categorically prohibited characters on *nix systems are NUL (don't worry, it's not on your keyboard, or anyone else's) and / , since that is the path separator. 1 Other than that anything goes. Individual path elements (file names) are limited to 255 bytes (a possible complication if you are using extended character sets) and complete paths to 4 KiB. Or is this just a matter of personal preference I would say it is. Most DE's seem to create a slew of capitalized directories in your $HOME ( Downloads , Desktop , Documents -- the D is very popular), so there's nothing bizarre about it. There are also very commonplace traditional files with capitals in them, such as .Xclients and .Xauthority . A value of capitalizing things at the beginning is that when listed lexicographically they'll come before lower case things -- at least, with many tools, and subject to locale. I'm a fan of camel case (aka. camelCase) and use it with filenames, e.g., /home/goldilocks/blueSuedeShoes -- never mind what's in there. Definitely a matter of personal preference but it has yet to cause me grief. Java class files tend to contain capitals by nature, because Java class names do. And of course, let's not forget NetworkManager , even if some of us would prefer to. 1. There is a much more delimited, recommended by POSIX "Portable Filename Character Set" that doesn't include the space -- but it does include upper case! POSIX also specifies the more general restriction regarding "the slash character and the null byte" elsewhere in the same document . This reflects, or is reflected in, long standing conventional practices . | {
"source": [
"https://unix.stackexchange.com/questions/239295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139554/"
]
} |
239,309 | I have setup a 2-seat computer. I start one X11 server using the computer's onboard graphics card (intel) and another in the dedicated one (nvidia).
Everything runs fine, except opengl. Currently, only the nvidia-seat has opengl due to conflicting files from nvidia and intel opengl packages in /lib. Is there any way to force one user to use libs from a different path? Every general /lib thing I found affects the whole system (ldconfig).
I've also considered FUSE, but I worry about general security and performance issues. chroot is only viable if I don't have to double and maintain all files. unionfs seemed right if it would allow for user-dependent overlays, but I never messed with unionfs and nothing I found suggests it's possible. | | {
"source": [
"https://unix.stackexchange.com/questions/239309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140667/"
]
} |
239,479 | I know I can use du -h to output the total size of a directory. But when it contains other subdirectories, the output would be something like: du -h /root/test
.
.
.
.
24K /root/test/1
64K /root/test/2
876K /root/test/3
1.1M /root/test/4
15M /root/test/5
17M /root/test I only want the last line because there are too many small directories in the /root/test directory. What can I do? | Add the --max-depth parameter with a value of 0: du -h --max-depth=0 /root/test Or, use the -s (summary) option: du -sh /root/test Either of those should give you what you want. For future reference, man du is very helpful. | {
"source": [
"https://unix.stackexchange.com/questions/239479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
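A related variant, in case you also want one total per immediate subdirectory without the deeper noise (GNU du assumed, as in the answer):
du -h --max-depth=1 /root/test
du -sh /root/test/*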
239,489 | I keep getting the following error messages in the syslog of one of my servers: # tail /var/log/syslog
Oct 29 13:48:40 myserver dbus[19617]: [system] Failed to activate service 'org.freedesktop.login1': timed out
Oct 29 13:48:40 myserver dbus[19617]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service'
Oct 29 13:49:05 myserver dbus[19617]: [system] Failed to activate service 'org.freedesktop.login1': timed out
Oct 29 13:49:05 myserver dbus[19617]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service' They seem to correlate to FTP Logins on the ProFTPd daemon: # tail /var/log/proftpd/proftpd.log
2015-10-29 13:48:40,433 myserver proftpd[17872] myserver.example.com (remote.example.com[192.168.22.33]): USER switch: Login successful.
2015-10-29 13:48:40,460 myserver proftpd[17872] myserver.example.com (remote.example.com[192.168.22.33]): FTP session closed.
2015-10-29 13:48:40,664 myserver proftpd[17881] myserver.example.com (remote.example.com[192.168.22.33]): FTP session opened.
2015-10-29 13:49:05,687 myserver proftpd[17881] myserver.example.com (remote.example.com[192.168.22.33]): USER switch: Login successful.
2015-10-29 13:49:05,705 myserver proftpd[17881] myserver.example.com (remote.example.com[192.168.22.33]): FTP session closed.
2015-10-29 13:49:05,908 myserver proftpd[17915] myserver.example.com (remote.example.com[192.168.22.33]): FTP session opened. The FTP logins themselves seem to work without problems for the user, though. I've got a couple of other servers also running ProFTPd but so far never got these errors. They might be related to a recent upgrade from Debian 7 to Debian 8 though. Any ideas what the messages want to tell me or even what causes them? I already tried restarting the dbus and proftpd daemons and even the server and made sure that the DBUS socket /var/run/dbus/system_bus_socket exists, but so far the messages keep coming. EDIT:
The output of journalctl as requested in the comment: root@myserver:/home/chammers# systemctl status -l dbus-org.freedesktop.login1.service
● systemd-logind.service - Login Service
Loaded: loaded (/lib/systemd/system/systemd-logind.service; static)
Active: active (running) since Tue 2015-10-27 13:23:32 CET; 1 weeks 0 days ago
Docs: man:systemd-logind.service(8)
man:logind.conf(5)
http://www.freedesktop.org/wiki/Software/systemd/logind
http://www.freedesktop.org/wiki/Software/systemd/multiseat
Main PID: 467 (systemd-logind)
Status: "Processing requests..."
CGroup: /system.slice/systemd-logind.service
└─467 /lib/systemd/systemd-logind
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3308 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3308.
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3309 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3309.
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3310 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3310.
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3311 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3311.
Oct 28 10:19:52 myserver systemd-logind[467]: New session 909 of user chammers.
Oct 28 10:27:11 myserver systemd-logind[467]: Failed to abandon session scope: Transport endpoint is not connected And more journalctl output: Nov 03 16:21:19 myserver dbus[19617]: [system] Failed to activate service 'org.freedesktop.login1': timed out
Nov 03 16:21:19 myserver proftpd[23417]: pam_systemd(proftpd:session): Failed to create session: Activation of org.freedesktop.login1 timed out
Nov 03 16:21:19 myserver proftpd[23418]: pam_systemd(proftpd:session): Failed to create session: Activation of org.freedesktop.login1 timed out
Nov 03 16:21:19 myserver proftpd[23417]: pam_unix(proftpd:session): session closed for user switch
Nov 03 16:21:19 myserver proftpd[23418]: pam_unix(proftpd:session): session closed for user switch
Nov 03 16:21:19 myserver proftpd[23420]: pam_unix(proftpd:session): session opened for user switch by (uid=0)
Nov 03 16:21:19 myserver dbus[19617]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service'
Nov 03 16:21:19 myserver proftpd[23421]: pam_unix(proftpd:session): session opened for user switch by (uid=0) | Restart logind: # systemctl restart systemd-logind Beware that restarting dbus will break their connection again. | {
"source": [
"https://unix.stackexchange.com/questions/239489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48404/"
]
} |
239,543 | Is there a way to convert a file from big endian to little endian, and vice versa? I am running RedHat, if relevant. | You cannot do this because for such a conversion, you need to know the meaning of the binary content. If, e.g., there is a string inside a binary file, it must not be converted, and a 4 byte integer may need different treatment than a two byte integer. In other words, for a byte order conversion, you need a data type description. | {
"source": [
"https://unix.stackexchange.com/questions/239543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86960/"
]
} |
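To make that concrete, here is a small hypothetical sketch in Python: the same four bytes decode to different values depending on the type and byte order you assume, which is why a blind whole-file byte swap cannot work.
import struct
data = b'\x01\x02\x03\x04'
print(struct.unpack('>I', data)[0])   # one big-endian 32-bit int: 16909060
print(struct.unpack('<I', data)[0])   # one little-endian 32-bit int: 67305985
print(struct.unpack('>HH', data))     # two big-endian 16-bit ints: (258, 772)
print(struct.unpack('<HH', data))     # two little-endian 16-bit ints: (513, 1027)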
239,708 | After searching in google I found out that we can telnet to web-server to its http port and use GET to retrieve a html page. For ex: $ telnet web-server-name 80 But I am not able to understand how is this possible ? I thought that if port 80 is for http server, then port 80 will only listen for http requests. But how am I able to telnet to an HTTP port ? Aren't telnet and HTTP two different protocol ? | Congratulations, you've just delved into the concept of networking layers by realizing that ports and protocols are not directly connected with each other. As others are saying, telnet can be used to connect to any TCP port. However to understand why this is possible you need to understand a bit about networking layers. If you've ever heard of the OSI 7 layer model this is what allows you to use telnet to connect to another port. Although on the Internet, they only concern themselves with 4 of the layers and its called the Internet Protocol Suite . Without layers of networking, each program would not only need to understand its own protocol, but would have to define its own IP addressing scheme and port system, which means each router would need to understand how to route these schemes and different protocols would be much harder to learn and diagnose. To put it simply, the Internet wouldn't work nearly as well without layers. What you are concerned with are the transport layer and the application layer. At the transport layer we have Internet protocols like TCP and UDP with port numbers ranging from 1 to 65535 on each. At the application layer we have protocols such as HTTP, SMTP and DNS. Usually each Internet standards document that defines a protocol specifies a default TCP or UDP port that the protocol should use by default. Such as TCP port 80 for HTTP, TCP port 25 for SMTP, UDP port 53 for DNS and TCP port 23 for Telnet. The telnet program actually speaks the TELNET protocol, which is a standard protocol , but mostly an ancient one by current standards. Because its protocol sequences are made from 8-bit characters, you rarely see the protocol itself and its mostly transparent when compared with other more modern protocols like HTTP and SMTP that use human visible words in ASCII such as GET, POST, HELO, LOGIN, etc. Because its protocol isn't generally visible, telnet made for a decent tool for connecting to other TCP ports and allowing the user to type in protocols manually. Some network administrators use this technique in order to diagnose problems with servers. However because the telnet program still has its own protocol and may send extra bits of data sometimes, you can still experience problems with this technique. When you use telnet you really are "making a connection" at the application layer as well as the transport layer. It just happens that other application layer protocols may work ok through it for most diagnostics and won't interfere with the telnet protocol. There is a better program for doing this through called nc (Net Cat. It gets its name from being a Network based version of the cat command). $ nc www.stackexchange.com 80 The nc program doesn't speak any application layer protocol and when you make a connection with it you are "making a connection" only at the Internet layer (IP address) and Transport layer (TCP or UDP). What that means is that you control what application layer protocol is used. Almost anything is fair game, even binary protocols. 
This also allows you to do useful things like transfer files without them being corrupted and listen on ports for incoming traffic: nc -l 9000 < movie.mp4 (Your friend runs this)
nc friends.computer.hostname 9000 > movie.mp4 (you run this) And then movie.mp4 is transfered over the network using no application layer protocol (such as FTP) at all. The application protocol is actually your friend telling you that they are ready for you to run your command. nc can also handle UDP packets and UNIX-domain sockets. Using it to listen can also be interesting. nc -l 12345 Now in your web browser visit http://localhost:12345/ and in your nc session you should see the browser's GET / HTTP/1.1 request. At this point you can type something in and press Ctrl-D and it should show up in your browser in plain text (If you want HTML to show up, you have to send it back the proper HTTP protocol response followed by HTML code). Sometimes, programs which natively speak one protocol like HTTP can connect to other ports that are meant for a different protocol. You typically can't do this in a GUI browser anymore because they have restricted them from connecting to some ports, but if you use a program like curl to connect to port 25 (SMTP for sending mail) you'll probably see a couple of errors about breaking protocol. $ curl yourispsmtpserverhost.com:25
220 yourispsmtpserverhost.com ESMTP Postfix
221 2.7.0 Error: I can break rules, too. Goodbye. This happens because curl normally speaks the HTTP protocol, so after it establishes a TCP handshake, it starts sending data like this: GET / HTTP/1.1
Host: yourispsmtpserverhost.com:25
User-agent: curl But what the SMTP server is expecting is SMTP, which is more like this: HELO myhomecomputername.local At which point the server sends back its identification line: 250 yourispsmtpserverhost.com So you see that there is nothing that prevents curl from establishing a transport layer connection with the SMTP server, it just can't speak the protocol. But you can speak the protocol yourself with a program like telnet or more preferably nc. | {
"source": [
"https://unix.stackexchange.com/questions/239708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109220/"
]
} |
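To try the point above yourself, you can speak a minimal HTTP request by hand through nc (example.com is just a placeholder host):
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80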
239,772 | I thought this would be simple - but it is proving more complex than I expected. I want to iterate through all the files of a particular type in a directory, so I write this: #!/bin/bash
for fname in *.zip ; do
echo current file is ${fname}
done This works as long as there is at least one matching file in the directory. However if there are no matching files, I get this: current file is *.zip I then tried: #!/bin/bash
FILES=`ls *.zip`
for fname in "${FILES}" ; do
echo current file is ${fname}
done While the body of the loop does not execute when there are no files, I get an error from ls: ls: *.zip: No such file or directory How do I write a loop which cleanly handles no matching files? | In bash , you can set the nullglob option so that a pattern that matches nothing "disappears", rather than treated as a literal string: shopt -s nullglob
for fname in *.zip ; do
echo "current file is ${fname}"
done In POSIX shell script, you just verify that fname exists (and at the same time with [ -f ] , check it is a regular file (or symlink to regular file) and not other types like directory/fifo/device...): for fname in *.zip; do
[ -f "$fname" ] || continue
printf '%s\n' "current file is $fname"
done Replace [ -f "$fname" ] with [ -e "$fname" ] || [ -L "$fname ] if you want to loop over all the (non-hidden) files whose name ends in .zip regardless of their type. Replace *.zip with .*.zip .zip *.zip if you also want to consider hidden files whose name ends in .zip . | {
"source": [
"https://unix.stackexchange.com/questions/239772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9178/"
]
} |
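A related bash idiom, if you also want to know whether anything matched at all: collect the matches into an array first (nullglob assumed, as in the answer above).
shopt -s nullglob
files=(*.zip)
if (( ${#files[@]} == 0 )); then
    echo "no zip files found" >&2
else
    for fname in "${files[@]}"; do
        echo "current file is $fname"
    done
fi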
239,808 | When you type control characters in the shell they get displayed using what is called "caret notation". Escape for example gets written as ^[ in caret notation. I like to customize my bash shell to make it look cool. I have for example changed my PS1 and PS2 to become colorized. I now want control characters to get a unique appearance as well to make them more distinguishable from regular characters. $ # Here I type CTRL-C to abort the command.
$ blahblah^C
^^ I want these two characters to be displayed differently Is there a way to make my shell highlight control characters differently? Is it possible to make it display them in a bold font or maybe make them appear in different colors from regular text? I am using bash shell here but I did not tag the question with bash because maybe there is a solution that applies to many different shells. Note that I do not know at what level highlighting of control characters takes place. I first thought it was in the shell itself. Now I have heard that it is readline that controls how control characters are in shells like bash . So the question is now tagged with readline and I am still looking for answers. | When you press Ctrl+X , your terminal emulator writes the byte 0x18 to the master side of the pseudo-terminal pair. What happens next depends on how the tty line discipline (a software module in the kernel that sits in between the master side (under control of the emulator) and the slave side (which applications running in the terminal interact with)) is configured. A command to configure that tty line discipline is the stty command. When running a dumb application like cat that is not aware of and doesn't care whether its stdin is a terminal or not, the terminal is in a default canonical mode where the tty line discipline implements a crude line editor . Some interactive applications that need more than that crude line editor typically change those settings on start-up and restore them on leaving. Modern shells, at their prompt are examples of such applications. They implement their own more advanced line editor. Typically, while you enter a command line, the shell puts the tty line discipline in that mode, and when you press enter to run the current command, the shell restores the normal tty mode (as was in effect before issuing the prompt). If you run the stty -a command, you'll see the current settings in use for the dumb applications . You're likely to see the icanon , echo and echoctl settings being enabled. What that means is that: icanon : that crude line editor is enabled. echo : characters you type (that the terminal emulator writes to the master side) are echoed back (made available for reading by the terminal emulator). echoctl : instead of being echoed asis, the control characters are echoed as ^X . So, let's say you type A B Backspace-aka-Ctrl+H/? C Ctrl+X Backspace Return . Your terminal emulator will send: AB\bC\x18\b\r . The line discipline will echo back: AB\b \bC^X\b \b\b \b\r\n , and an application that reads the input from the slave side ( /dev/pts/x ) will read AC\n . All the application sees is AC\n , and only when your press Enter so it can't have any control on the output for ^X there. You'll notice that for echo , the first ^H ( ^? with some terminals, see the erase setting) resulted in \b \b being sent back to the terminal. That's the sequence to move the cursor back, overwrite with space, move cursor back again, while the second ^H resulted in \b \b\b \b to erase those two ^ and X characters. The ^X (0x18) itself was being translated to ^ and X for output. Like B , it didn't make it to the application, as we deleted it with Backspace. \r (aka ^M ) was translated to \r\n ( ^M^J ) for echo, and \n ( ^J ) for the application. So, what are our options for those dumb applications: disable echo ( stty -echo ). That effectively changes the way control characters are echoed, by... not echoing anything. Not really a solution. disable echoctl . 
That changes the way control characters (other than ^H , ^M ... and all the other ones used by the line editor) are echoed. They are then echoed as-is. That is for instance, the ESC character is send as the \e ( ^[ / 0x1b ) byte (which is recognised as the start of an escape sequence by the terminal), ^G you send a \a (a BEL, making your terminal beep)... Not an option. disable the crude line editor ( stty -icanon ). Not really an option as the crude applications would become a lot less usable. edit the kernel code to change the behaviour of the tty line discipline so the echo of a control character sends \e[7m^X\e[m instead of just ^X (here \e[7m usually enables reverse video in most terminals). An option could be to use a wrapper like rlwrap that is a dirty hack to add a fancy line editor to dumb applications. That wrapper in effect tries to replace simple read() s from the terminal device to calls to readline line editor (which do change the mode of the tty line discipline). Going even further, you could even try solutions like this one that hijacks all input from the terminal to go through zsh's line editor (which happens to highlight ^X s in reverse video) relying on GNU screen's :exec feature. Now for applications that do implement their own line editor, it's up to them to decide how the echo is done. bash uses readline for that which doesn't have any support for customizing how control characters are echoed. For zsh , see: info --index-search='highlighting, special characters' zsh zsh does highlight non-printable characters by default. You can customize the highlighting with for instance: zle_highlight=(special:fg=white,bg=red) For white on red highlighting for those special characters. The text representation of those characters is not customizable though. In a UTF-8 locale, 0x18 will be rendered as ^X , \u378 , \U7fffffff (two unassigned unicode code points) as <0378> , <7FFFFFFF> , \u200b (a not-really printable unicode character) as <200B> . \x80 in a iso8859-1 locale would be rendered as ^� ... etc. | {
"source": [
"https://unix.stackexchange.com/questions/239808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
239,920 | How do I set the fully qualified hostname on CentOS 7.0? I have seen a few posts online for example using: $ sudo hostnamectl set-hostname nodename.domainname However, running domainname returns nothing: $ domainname
(none) Also: $ hostname
nodename.domainname However, $ hostname -f
hostname: Name or service not known
$ hostname -d
hostname: Name or service not known Some debug output: $ cat /etc/hostname
nodename.domainname
$ grep ^hosts /etc/nsswitch.conf
hosts: files dns | To set the hostname do use hostnamectl , but only with the hostname, like this: hostnamectl set-hostname nodename To set the (DNS) domainname edit /etc/hosts file and ensure that: There is a line <machine's primary, non-loopback IP address> <hostname>.<domainname> <hostname> there There are NO other lines with <some IP> <hostname> , and this includes lines with 127.0.0.1 and ::1 (IPv6) addresses. Note that unless you’re using NIS, (none) is the correct output when running the domainname command. To check if your DNS domainname is set correctly use dnsdomainname command and check output of hostname vs hostname -f (FQDN). NIS vs. DNS domain This issue confused me when I first came across it. It seems that the domainname command predates the popularity of the Internet. Instead of the DNS domain name, it shows or sets the system’s NIS (Network Information Service) aka YP (Yellow Pages) domain name (a group of computers which have services provided by a master NIS server). This command simply displays the name returned by the getdomainname(2) standard library function. ( nisdomainname and ypdomainname are alternative names for this command.) Display the FQDN or DNS domain name To check the DNS (Internet) domain name, you should run the dnsdomainname command or hostname with the -d, --domain options. (Note that the dnsdomainname command can’t be used to set the DNS domain name – it’s only used to display it.) To display the FQDN (Fully Qualified Domain Name) of the system, run hostname with the -f, --fqdn, --long options (likewise, this command can’t be used to set the domain name part). The above commands use the system’s resolver (implemented by the gethostbyname(3) function from the standard library, as specified by POSIX) to determine the DNS domain name and the FQDN. Name Resolution In modern operating systems such as RHEL 7, the hosts entry in /etc/nsswitch.conf is used for resolving host names. In your CentOS 7 machine, this line is configured as (default for CentOS 7): hosts: files dns This means that when when the resolver functions look up hostnames or IP address, they first check for an entry in the /etc/hosts file and next try the DNS server(s) which are listed in /etc/resolv.conf . When running hostname -f to obtain the FQDN of a host, the resolver functions try to get the FQDN for the system’s hostname. If the host is not listed in the /etc/hosts file or by the relevant DNS server, the attempt fails and hostname reports that Name or service not known . When hostname -d is run to obtain the domain name, the same operations are carried out, and the domain name part is determined by stripping the hostname part and the first dot from the FQDN. Configure the domain name Update the relevant DNS name server In my case, I had already added an entry for my new CentOS 7 machine in the DNS server for my local LAN so when the FQDN wasn’t found in the /etc/hosts file when I ran hostname with the -d or -f option, the local DNS services were able to fully resolve the FQDN for my new hostname. Use the /etc/hosts file If the DNS server haven’t been configured, the fully qualified domain name can be specified in the /etc/hosts file. The most common way to do this is to specify the primary IP address of the system followed by its FQDN and its short hostname. E.g., 172.22.0.9 nodename.domainname nodename Excerpt from hostname man page You cannot change the FQDN with hostname or dnsdomainname . The recommended method of setting the FQDN is to make the hostname be
an alias for the fully qualified name using /etc/hosts, DNS, or
NIS. For example, if the hostname was "ursula", one might have a line
in /etc/hosts which reads: 127.0.1.1 ursula.example.com ursula Technically: The FQDN is the name getaddrinfo(3) returns for the host
name returned by gethostname(2). The DNS domain name is the part
after the first dot. Therefore it depends on the configuration of the resolver (usually in /etc/host.conf ) how you can change it. Usually the hosts file is
parsed before DNS or NIS, so it is most common to change the FQDN in /etc/hosts . | {
"source": [
"https://unix.stackexchange.com/questions/239920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24554/"
]
} |
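Putting the pieces of the answer together, a condensed sequence for a host with no DNS entry might look like this (run as root; nodename, domainname and the IP address are placeholders):
hostnamectl set-hostname nodename
echo '172.22.0.9 nodename.domainname nodename' >> /etc/hosts
hostname -f       # should now print nodename.domainname
dnsdomainname     # should now print domainname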
239,973 | Problem: When trying to install guest additions in Kali linux the following error occurs. Oops! There was a problem running this software. Unable to locate program This occurred after a fresh install of Kali Linux 2.0 in Virtual Box 4.3.32 Action taken to get this error: Virtualbox -> Devices -> Insert Guest Additions CD image then from Kali Linux GUI the message "VBOXADDITIONS_4.3.32_103443" contains software intended to be automatically started. Would you like to run it? Select run and the error occurs How to solve this problem? What is the cause? | The question is a bit old, but deserves an answer to the root cause of the error, not a work-around. The root cause of your issue is in /etc/fstab . If yours looks anything like mine, the mount options for /dev/sr0 are probably user,noauto . The user option automatically implies noexec which strips executable bits off all binary files on the mounted file system. You simply need to add the exec option to your mount statement in /etc/fstab from: /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0 to: /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto,exec 0 0 This will allow you to execute binaries from optical media. Cheers, Rich | {
"source": [
"https://unix.stackexchange.com/questions/239973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137195/"
]
} |
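After editing /etc/fstab as described, a remount should apply the new option to an already-mounted disc without rebooting; a sketch using the mount point from the fstab line above:
sudo mount -o remount,exec /media/cdrom0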
240,013 | I'm getting a bizarre error message while using git: $ git clone git@github.com:Itseez/opencv.git
Cloning into 'opencv'
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
X11 forwarding request failed on channel 0
(...) I was under the impression that X11 wasn't required for git, so this seemed strange. This clone worked successfully, so this is more of a "warning" issue than an "error" issue, but it seems unsettling. After all, git shouldn't need X11. Any suggestions? | Note that to disable ForwardX11 just for github.com you need something like the following in your ~/.ssh/config Host github.com
ForwardX11 no
Host *
ForwardX11 yes The last two lines assume that in general you /do/ want to forward your X connection. This can cause confusion because the following is WRONG: ForwardX11 yes
Host github.com
ForwardX11 no Which is what I had (and caused me no end of confusion). This is because in .ssh/config, the first setting wins, and isn't overwritten by subsequent customizations. HTH, Dan. | {
"source": [
"https://unix.stackexchange.com/questions/240013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141083/"
]
} |
240,252 | I ran sudo pacman -Syu and I got some interesting errors reading: error: failed to commit transaction (conflicting files) and a long list of files followed by exists in filesystem . Full output is here: http://ix.io/lLw It appears that many of these files are not associated with a package when I checked them with pacman -Qo <path-to-file> , but I did not check them all. I had a weak connection when I ran pacman -Syu , but I get the same errors when I updated later: http://ix.io/lLx What should I do? Should I check all files and delete the ones that do not have an associated package? Should I force update (with sudo pacman -S --force <package-name> ?) Update I tried running sudo pacman -S --force <package-name> and got this: [my-pc]/home/average-joe$ pacman -Qo /usr/lib/python3.5/site-packages/PyYAML-3.11-py3.5.egg-info
error: No package owns /usr/lib/python3.5/site-packages/PyYAML-3.11-py3.5.egg-info It looks like pacman -S --force <package> does not overwrite directories that contain files. From the man: Using --force will not allow overwriting a directory with a file or installing packages with conflicting files and directories. Should I just delete the conflicting directories? (they do not have associated packages) | After pacman finally deprecated the --force option and made the surrogate --overwrite option work as expected, the following usage pattern should be noted. A command to reproduce the --force option that blindly overwrites anything that conflicts is this: sudo pacman -S --overwrite \* <package_name>
"source": [
"https://unix.stackexchange.com/questions/240252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59802/"
]
} |
240,344 | I always boot up my GNU/Linux laptop in console mode. But sometimes I need to bring up the GUI mode. it always requires entering the root password. So I wrote the following script "gogui.sh" and put it in /usr/bin : #!/bin/bash
echo "mypassword" | sudo service lightdm start This is a really stupid idea, as anyone who reads the file can easily see my password. Is there an alternative to this? | Passing a password to sudo in a script is utterly pointless. Instead, add a sudo rule adding the particular command you want to run with the NOPASSWD tag . Take care that the command-specific NOPASSWD rule must come after any general rule. saeid ALL = (ALL:ALL) ALL
saeid ALL = (root) NOPASSWD: service lightdm start But this is probably not useful anyway. lightdm start starts a login prompt, but you only need that if you want to let other users log in graphically. You don't need it if all you want is to start a GUI session. Instead, call startx to start a GUI session from your text mode session. This does not require any extra privilege. You may need to explicitly specify your window manager or desktop environment, as startx might not pick up the same default session type that lightdm uses. startx -- gnome-session | {
"source": [
"https://unix.stackexchange.com/questions/240344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122146/"
]
} |
240,444 | I am working on a remote Debian Jessie server. I have started a screen session, started running a script, then been disconnected by a network timeout. Now I have logged in again and want to resume the session. This is what I see when I list screens: $ screen -ls
There are screens on:
30608.pts-8.myserver (11/03/2015 08:47:58 AM) (Attached)
21168.pts-0.myserver (11/03/2015 05:29:24 AM) (Attached)
7006.pts-4.myserver (10/23/2015 09:05:45 AM) (Detached)
18228.pts-4.myserver (10/21/2015 07:50:49 AM) (Detached)
17849.pts-0.myserver (10/21/2015 07:43:53 AM) (Detached)
5 Sockets in /var/run/screen/S-me. I seem to be attached to two screens at once. Now I want to resume the session I was running before, to see the results of my script: $ screen -r 30608.pts-8.myserver
There is a screen on:
30608.pts-8.OpenPrescribing (11/03/2015 08:47:58 AM) (Attached)
There is no screen to be resumed matching 30608.pts-8.myserver. Why I can't I re-attach? I have the same problem with the other screen: $ screen -r 21168.pts-0.myserver
There is a screen on:
21168.pts-0.OpenPrescribing (11/03/2015 05:29:24 AM) (Attached)
There is no screen to be resumed matching 21168.pts-0.myserver. | The session is still attached on another terminal. The server hasn't detected the network outage on that connection: it only detects the outage when it tries to send a packet and gets an error back or no response after a timeout, but this hasn't happened yet. You're in a common situation where the client detected the outage because it tried to send some input and failed, but the server is just sitting there waiting for input. Eventually the server will send a keepalive packet and detect that the connection is dead. In the meantime, use the -d option to detach the screen session from the terminal where it's in. screen -r -d 30608 screen -rd is pretty much the standard way to attach to an existing screen session. | {
"source": [
"https://unix.stackexchange.com/questions/240444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118973/"
]
} |
240,448 | I am using a shared server at HG and I want to automate a bash script that will run hourly once and notify me to my gmail account with details of authorized/non-authorized users who have logged into the system in the past hour. HG doesn't allow tools like inotify in their shared plans. Is this possible? Do you think it's a decent idea? Although I am the only user, what happens if someone illicitly logs in without my knowledge? The problem is I can't run who every time or scan the logs as it is a tedious process. | | {
"source": [
"https://unix.stackexchange.com/questions/240448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140396/"
]
} |
241,173 | There are special files in Linux that are not really files. The most notable and clear examples of these are in the dev folder, "files" like: /dev/null - Ignores anything you write to the file /dev/random - Outputs random data instead of the contents of a file /dev/tcp - Sends any data you write to this file over the network First of all, what is the name of these types of "files" that are really some sort of script or binary in disguise? Second, how are they created? Are these files built into the system at a kernel level, or is there a way to create a "magic file" yourself (how about a /dev/rickroll )? | /dev/zero is an example of a "special file" — particularly, a "device node". Normally these get created by the distro installation process, but you can totally create them yourself if you want to. If you ask ls about /dev/zero : # ls -l /dev/zero
crw-rw-rw- 1 root root 1, 5 Nov 5 09:34 /dev/zero The "c" at the start tells you that this is a "character device"; the other type is "block device" (printed by ls as "b"). Very roughly, random-access devices like harddisks tend to be block devices, while sequential things like tape drives or your sound card tend to be character devices. The "1, 5" part is the "major device number" and the "minor device number". With this information, we can use the mknod command to make our very own device node: # mknod foobar c 1 5 This creates a new file named foobar , in the current folder, which does exactly the same thing as /dev/zero . (You can of course set different permissions on it if you want.) All this "file" really contains is the three items above — device type, major number, minor number. You can use ls to look up the codes for other devices and recreate those too. When you get bored, just use rm to remove the device nodes you just created. Basically the major number tells the Linux kernel which device driver to talk to, and the minor number tells the device driver which device you're talking about. (E.g., you probably have one SATA controller, but maybe multiple harddisks plugged into it.) If you want to invent new devices that do something new... well, you'll need to edit the source code for the Linux kernel and compile your own custom kernel. So let's not do that! :-) But you can add device files that duplicate the ones you've already got just fine. An automated system like udev is basically just watching for device events and calling mknod / rm for you automatically. Nothing more magic than that. There are still other kinds of special files: Linux considers a directory to be a special kind of file. (Usually you can't directly open a directory, but if you could, you'd find it's a normal file that contains data in a special format, and tells the kernel where to find all the files in that directory.) A symlink is a special file. (But a hard link isn't.) You can create symlinks using the ln -s command. (Look up the manpage for it.) There's also a thing called a "named pipe" or "FIFO" (first-in, first-out queue). You can create one with mkfifo . A FIFO is a magical file that can be opened by two programs at once — one reading, one writing. When this happens, it works like a normal shell pipe. But you can start each program separately... A file that isn't "special" in any way is called a "regular file". You will occasionally see mention of this in Unix documentation. That's what it means; a file that isn't a device node or a symlink or whatever. Just a normal, every day file with no magical properties. | {
"source": [
"https://unix.stackexchange.com/questions/241173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
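A tiny experiment with the named pipe mentioned above, to see the FIFO behaviour for yourself (mypipe is just a made-up name):
mkfifo mypipe            # create the FIFO
cat mypipe &             # the reader blocks until a writer shows up
echo "hello" > mypipe    # the writer; "hello" appears from the backgrounded cat
rm mypipe                # a FIFO is removed like any other file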
241,178 | I have a bash script which uses set -o errexit so that on error the entire script exits at the point of failure. The script runs a curl command which sometimes fails to retrieve the intended file - however when this occurs the script doesn't error exit. I have added a for loop to pause for a few seconds then retry the curl command use false at the bottom of the for loop to define a default non-zero exit status - if the curl command succeeds - the loop breaks and the exit status of the last command should be zero. #! /bin/bash
set -o errexit
# ...
for (( i=1; i<5; i++ ))
do
echo "attempt number: "$i
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
if [ -f ~/.vim/autoload/pathogen.vim ]
then
echo "file has been retrieved by curl, so breaking now..."
break;
fi
echo "curl'ed file doesn't yet exist, so now will wait 5 seconds and retry"
sleep 5
# exit with non-zero status so main script will errexit
false
done
# rest of script ..... | Replace: done with: done || exit 1 This will cause the code to exit if the for loop exits with a non-zero exit code. As a point of trivia, the 1 in exit 1 is not needed. A plain exit command would exit with the exit status of the last executed command which would be false (code=1) if the download fails. If the download succeeds, the exit code of the loop is the exit code of the echo command. echo normally exits with code=0, signaling success. In that case, the || does not trigger and the exit command is not executed. Lastly, note that set -o errexit can be full of surprises. For a discussion of its pros and cons, see Greg's FAQ #105 . Documentation From man bash : for (( expr1 ; expr2 ; expr3 )) ; do list ; done First, the arithmetic expression expr1 is evaluated
according the rules described below under ARITHMETIC EVALUATION. The
arithmetic expression expr2 is then evaluated repeatedly until it
evaluates to zero. Each time expr2 evaluates to a non-zero value,
list is executed and the arithmetic expression expr3 is evaluated. If
any expression is omitted, it behaves as if it evaluates to 1. The return value is the exit status of the last command in list
that is executed, or false if any of the expressions is
invalid. [Emphasis added] | {
"source": [
"https://unix.stackexchange.com/questions/241178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
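For completeness, this is roughly how the original retry loop looks with the suggested fix applied. Because the loop now sits on the left-hand side of || , failures inside its body no longer abort the script mid-loop under errexit, which is what lets the retries proceed:
#!/bin/bash
set -o errexit
for (( i=1; i<5; i++ ))
do
    echo "attempt number: "$i
    curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
    if [ -f ~/.vim/autoload/pathogen.vim ]
    then
        echo "file has been retrieved by curl, so breaking now..."
        break
    fi
    echo "curl'ed file doesn't yet exist, so now will wait 5 seconds and retry"
    sleep 5
    false
done || exit 1
# rest of script .....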
241,215 | I've recently been creating new users and assigning them to certain groups. I was wondering if there is a command that shows all the users assigned to a certain group?
I have tried using the 'groups' command, however whenever I use this it says 'groups: not found' | You can use grep: grep '^group_name_here:' /etc/group This only lists supplementary group memberships, not the users who have this group as their primary group. And it only finds local groups, not groups from a network service such as LDAP. | {
"source": [
"https://unix.stackexchange.com/questions/241215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133708/"
]
} |
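If the group could come from LDAP or another NSS source rather than /etc/group, getent queries whatever sources nsswitch.conf lists:
getent group group_name_here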
241,726 | When I run ls on a folder with directories that have a 777 permission, the ls colors are purple text with a green background, which is unreadable: What can I do to make this more pleasant to look at? | If you are using Linux (and not, e.g., using a Mac which does things differently) you can use dircolors with a custom database to specify which colors are used for which file attributes. First, create a dircolors database file. $ dircolors -p > ~/.dircolors Then edit it, you probably want to change the STICKY_OTHER_WRITABLE and OTHER_WRITABLE lines to something more pleasant than 34;42 (34 is blue, 42 is green - dircolors -p helpfully includes comments with the color codes listed). Then run eval $(dircolors ~/.dircolors) Edit your ~/.profile (or ~/.bash_profile etc) and find the line that runs eval $(dircolors) and change it to include the filename as above. Or if there isn't such a line in your .profile (etc) add it. Or, if you want it to work whether there is a ~/.dircolors file or not, change it to: [ -e ~/.dircolors ] && eval $(dircolors -b ~/.dircolors) ||
eval $(dircolors -b) | {
"source": [
"https://unix.stackexchange.com/questions/241726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141502/"
]
} |
241,972 | I have the following general question regarding cron jobs. Suppose I have the following in my crontab : * 10 * * * * someScript.sh
* 11 * * * * someScript2.sh
30 11 */2 * * someScript3.sh <-- Takes a long time let's say 36 hours.
* 12 * * * someScript4.sh Is it smart enough to run the remaining jobs at the appropriate times? For example, the long script doesn't need to terminate? Also, what happens if the initial long script is still running and it gets called by cron again? | Each cron job is executed independent of any other jobs you may have specified. This means that your long-lived script will not impede other jobs from being executed at the specified time. If any of your scripts are still executing at their next scheduled cron interval, then another, concurrent, instance of your script will be executed. This can have unforeseen consequences depending on what your script does. I would recommend reading the Wikipedia article on File Locking , specifically the section on Lock files . A lock file is a simple mechanism to signal that a resource — in your case the someScript3.sh script — is currently 'locked' (i.e. in use) and should not be executed again until the lock file is removed. Take a look at the answers to the following question for details of ways to implement a lock file in your script: How to make sure only one instance of a bash script runs? | {
"source": [
"https://unix.stackexchange.com/questions/241972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81787/"
]
} |
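One common way to implement such a lock file on Linux is flock(1); a sketch of a crontab entry that simply skips a run while the previous one is still going (paths are placeholders):
30 11 */2 * * flock -n /tmp/someScript3.lock /path/to/someScript3.sh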
242,087 | I have a script including multiple commands. How can I group commands to run together? ( I want to make several groups of commands. Within each group, the commands should run in parallel (at the same time). The groups should run sequentially, waiting for one group to finish before starting the next group)
... i.e. #!/bin/bash
command #1
command #2
command #3
command #4
command #5
command #6
command #7
command #8
command #9
command #10 how can I run every 3 commands together? I tried: #!/bin/bash
{
command #1
command #2
command #3
} &
{
command #4
command #5
command #6
} &
{
command #7
command #8
command #9
}&
command #10 But this didn't work properly ( I want to run the groups of commands in parallel at the same time. Also I need to wait for the first group to finish before running the next group) The script is exiting with an error message! | The commands within each group run in parallel, and the groups run sequentially, each group of parallel commands waiting for the previous group to finish before starting execution. The following is a working example: Assume 3 groups of commands as in the code below. In each group the three commands are started in the background with & . The 3 commands will be started almost at the same time and run in parallel while the script waits for them to finish. After all three commands in the third group exit, command 10 will execute.
#!/bin/sh
command() {
echo $1 start
sleep $(( $1 & 03 )) # keep the seconds value within 0-3
echo $1 complete
}
echo First Group:
command 1 &
command 2 &
command 3 &
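# wait pauses the script here until commands 1, 2 and 3 (the background jobs started above) have all exited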
wait
echo Second Group:
command 4 &
command 5 &
command 6 &
wait
echo Third Group:
command 7 &
command 8 &
command 9 &
wait
echo Not really a group, no need for background/wait:
command 10
$ sh command_groups.sh
First Group:
1 start
2 start
3 start
1 complete
2 complete
3 complete
Second Group:
4 start
5 start
6 start
4 complete
5 complete
6 complete
Third Group:
7 start
8 start
9 start
8 complete
9 complete
7 complete
Not really a group, no need for background/wait:
10 start
10 complete
$ | {
"source": [
"https://unix.stackexchange.com/questions/242087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
242,129 | I am using GNOME 3.18.1 on Arch Linux 4.2.5-1-ARCH x86_64 on a Dell E6530 laptop. Since I installed this OS years ago, the power button on my laptop has always led my OS to completely power down. However, in the last few weeks this behaviour has changed, so that pressing the power button now puts my laptop into energy savings mode. I did not change my power settings. I always keep my system up to date using pacman -Syyu , however, so I suspect that an update changed this functionality. In the power settings there is no option for this. How can I restore the initial behaviour, so that pressing that button powers the system off? | That's caused by the latest gnome-settings-daemon updates... There is no such option in power settings because it was removed by the GNOME devs (the shutdown/power off action is considered "too destructive" ). Bottom line: you can no longer power off your laptop by pressing the power off button. You could however add a new dconf / gsettings option (i.e. shutdown ) to the settings daemon power plugin if you're willing to patch and rebuild gnome-settings-daemon : --- gnome-settings-daemon-3.18.2/data/gsd-enums.h 2015-11-10 09:07:12.000000000 -0500
+++ gnome-settings-daemon-3.18.2/data/gsd-enums.h 2015-11-11 18:43:43.240794875 -0500
@@ -114,7 +114,8 @@
{
GSD_POWER_BUTTON_ACTION_NOTHING,
GSD_POWER_BUTTON_ACTION_SUSPEND,
- GSD_POWER_BUTTON_ACTION_HIBERNATE
+ GSD_POWER_BUTTON_ACTION_HIBERNATE,
+ GSD_POWER_BUTTON_ACTION_SHUTDOWN
} GsdPowerButtonActionType;
typedef enum
--- gnome-settings-daemon-3.18.2/plugins/media-keys/gsd-media-keys-manager.c 2015-11-10 09:07:12.000000000 -0500
+++ gnome-settings-daemon-3.18.2/plugins/media-keys/gsd-media-keys-manager.c 2015-11-11 18:47:52.388602012 -0500
@@ -1849,6 +1849,9 @@
action_type = g_settings_get_enum (manager->priv->power_settings, "power-button-action");
switch (action_type) {
+ case GSD_POWER_BUTTON_ACTION_SHUTDOWN:
+ do_config_power_action (manager, GSD_POWER_ACTION_SHUTDOWN, in_lock_screen);
+ break;
case GSD_POWER_BUTTON_ACTION_SUSPEND:
do_config_power_action (manager, GSD_POWER_ACTION_SUSPEND, in_lock_screen);
break; Once you install the patched version, a new shutdown option will be available in dconf-editor under org > gnome > settings-daemon > plugins > power > power-button-action : so select that to shutdown via power button or, if you prefer CLI, run in terminal: gsettings set org.gnome.settings-daemon.plugins.power power-button-action shutdown Sure, for the above to work you also need the right settings in /etc/systemd/logind.conf : HandlePowerKey=poweroff
PowerKeyIgnoreInhibited=yes Keep in mind that pressing the power button will shutdown your system without any warning. | {
"source": [
"https://unix.stackexchange.com/questions/242129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22513/"
]
} |
242,149 | I have a series of software files downloaded into my subdirectory ~/Downloads on my personal computer. I am also using bash to connect remotely to a computer using ssh . Is it possible to transfer this file via ssh to the remote computer? | You may want to use scp for this purpose. It is a secure means to transfer files using the SSH protocol. For example, to copy a file named yourfile.txt from ~/Downloads to remote computer, use: scp ~/Downloads/yourfile.txt [email protected]:/some/remote/directory You can see more examples here . | {
"source": [
"https://unix.stackexchange.com/questions/242149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115891/"
]
} |
242,298 | Respectable projects release tar archives that contain a single directory, for instance zyrgus-3.18.tar.gz contains a zyrgus-3.18 folder which in turn contains src , build , dist , etc. But some punk projects put everything at the root :'-( This results in a total mess when unarchiving. Creating a folder manually every time is a pain, and unnecessary most of the time. Is there a super-fast way to tell whether a .tar or .tar.gz file contains more than a single directory at its root? Even for a big archive. Or even better, is there a tool that in such cases would create a directory (name of the archive without the extension) and put everything inside? | patool handles different kinds of archives and creates a subdirectory in case the archive contains multiple files to prevent cluttering the working directory with the extracted files. Extract archive patool extract archive.tar To obtain a list of the supported formats, use patool formats . | {
"source": [
"https://unix.stackexchange.com/questions/242298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
242,496 | I want to make a bash script to delete the oldest file from a folder. Every time I run the script, only one file should be deleted: the oldest one. Can you help me with this?
Thanks | As Kos pointed out, it might not be possible to know the oldest file (as per creation date). If modification time is good enough for you, and if file names contain no newlines: rm "$(ls -t | tail -1)" | {
"source": [
"https://unix.stackexchange.com/questions/242496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108074/"
]
} |
242,503 | For example: RXOTG-1388 holds 3 objects RM4FD1,RM4FD2,RM4FD3 RXOTG-1398 holds 3 objects VT08D1 VT08D2,VT08D3 and so on. Based on this text file I would like to count, using awk , how many objects each RXOTG holds. RXOTG-1388 RM4FD1 0
RM4FD2 0
RM4FD3 0
END
RXOTG-1398 VT08D1 0
VT08D2 0
VT08D3 0
END
RXOTG-1400 VT08S1 0
VT08S2 0
VT08S3 0
END | As Kos pointed out, it might not be possible to know the oldest file (as per creation date). If modification time is good enough for you, and if file names contain no newlines: rm "$(ls -t | tail -1)" | {
"source": [
"https://unix.stackexchange.com/questions/242503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142803/"
]
} |
242,551 | I use Bash as my interactive shell and I was wondering if there was an easy way to get Bash to run a system command instead of a shell builtin command in the case where they both share the same name. For example, use the system kill (from util-linux ) to print the process id (pid) of the named process(es) instead of sending a signal: $ /bin/kill -p httpd
2617
... Without specifying the full path of the system command, the Bash builtin is used instead of the system command. The kill builtin doesn’t have the -p option so the command fails: $ kill -p httpd
bash: kill: p: invalid signal specification I tried the answers listed in Make bash use external `time` command rather than shell built-in but most of them only work because time is actually a shell keyword – not a shell builtin . Other than temporarily disabling the Bash builtin with enable -n kill , the best solution I’ve seen so far is to use: $(which kill) -p httpd Are there other easier (involve less typing) ways to execute an external command instead of a shell builtin? Note that kill is just an example and I’d like a generalised solution similar to the way that prefixing with the command builtin prevents functions which have the same name as an external command from being run. In most cases, I usually prefer to use the builtin version as it saves forking a new process and some times the builtin has features that the external command doesn’t. | Assuming env is in your path: env kill -p http env runs the executable file named by its first argument in a (possibly) modified environment; as such, it does not know about or work with shell built-in commands. This produces some shell job control cruft, but doesn't rely on an external command: exec kill -p bash & exec requires an executable to replace the current shell, so doesn't use any built-ins. The job is run in the background so that you replace the forked background shell, not your current shell. | {
"source": [
"https://unix.stackexchange.com/questions/242551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22812/"
]
} |
242,946 | I am trying to sum certain numbers in a column using awk . I would like to sum just column 3 of the "smiths" to get a total of 212. I can sum the whole column using awk but not just the "smiths". I have: awk 'BEGIN {FS = "|"} ; {sum+=$3} END {print sum}' filename.txt Also I am using putty. Thank you for any help. smiths|Login|2
olivert|Login|10
denniss|Payroll|100
smiths|Time|200
smiths|Logout|10 | awk -F '|' '$1 ~ /smiths/ {sum += $3} END {print sum}' inputfilename The -F flag sets the field separator; I put it in single quotes because it is a special shell character. Then $1 ~ /smiths/ applies the following {code block} only to lines where the first field matches the regex /smiths/ . The rest is the same as your code. Note that since you're not really using a regex here, just a specific value, you could just as easily use: awk -F '|' '$1 == "smiths" {sum += $3} END {print sum}' inputfilename Which checks string equality. This is equivalent to using the regex /^smiths$/ , as mentioned in another answer, which includes the ^ anchor to only match the start of the string (the start of field 1) and the $ anchor to only match the end of the string. Not sure how familiar you are with regexes. They are very powerful, but for this case you could use a string equality check just as easily. | {
"source": [
"https://unix.stackexchange.com/questions/242946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142178/"
]
} |
242,995 | Say I have a folder: ./folder/ Inside it there are many files and even sub-directories. When I execute: mkdir -p folder I won't see any errors or even warnings. So I just want to confirm: is anything lost or changed as a result of this command? | mkdir -p would not give you an error if the directory already exists, and the contents of the directory will not change. Manual entry for mkdir | {
"source": [
"https://unix.stackexchange.com/questions/242995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
243,195 | From many docs, I read that startx is starting LXDE in Raspbian OS. I am a little bit confused. Will always startx run LXDE GUI? Also I have seen example with using startlxde command. How is that command different and why startx and startlxde are running the same GUI(LXDE)? Or maybe it runs it because it is the default GUI? How can I choose default GUI if I have multiple ones? Could you please explain more details around the GUI in Linux systems? | startx runs xinit which starts an X server and a client session. The client session is ~/.xinitrc if present, and otherwise /etc/X11/xinit/xinitrc (the location may vary between distributions). What this script does varies between distributions. On Debian (including derivatives such as Raspbian), /etc/X11/xinit/xinitrc runs /etc/X11/Xsession which in turn runs scripts in /etc/X11/Xsession.d . The Debian scripts look for a user session in other files ( ~/.xsession , ~/.xsessionrc , ~/.Xsession ) and, if no user setting is applicable, runs x-session-manager (falling back to x-window-manager if no [session manager] is installed, falling back to x-terminal-emulator in the unlikely case that no window manager is installed). If you want control over what gets executed, you can create one of the user files, either ~/.xsession or ~/.xinitrc . The file ~/.xsession is also used if you log in on a display manager (i.e. if you type your password in a GUI window). The file ~/.xinitrc is specific to xinit and startx . Using ~/.xsession goes through /etc/X11/Xsession so it sets up things like input methods, resources, password agents, etc. If you use .xinitrc , you'll have to do all of these manually. Once again, I'm describing Debian here, other Unix variants might set things up differently. The use of ~/.xinitrc to specify what gets executed when you run startx or xinit is universal. Whether you use ~/.xinitrc or ~/.xsession , this file (usually a shell script, but it doesn't have to be if you really want to use something else) must prepare whatever needs to be prepared (e.g. keyboard settings, resources, applets that aren't started by the window manager, etc.), and then at the end run the program that manages the session. When the script ends, the session terminates. Typically, you would use exec at the end of the script, to replace the script by the session manager or window manager. Your system presumably has /usr/bin/startlxde as the system-wide default session manager. On Debian and derivatives, you can check the available session managers with update-alternatives --list x-session-manager or get a more verbose description indicating which one is current with update-alternatives --display x-session-manager If LXDE wasn't the system-wide default and you wanted to make it the default for your account, you could use the following ~/.xsession file: #!/bin/sh
exec startlxde On some Unix variants, that would only run for graphical logins, not for startx , so you'd also need to create an identical ~/.xinitrc . (Or not identical: in ~/.xsession , you might want to do other things, because that's the first file that's executed in a graphical session; for example you might put . ~/.profile near the top, to set some environment variables.) If you want to try out other environments as a one-off, you can specify a different program to run on the command line of startx itself. The startx program has a quirk: you need to use the full path to the program. startx /usr/bin/startkde The startx command also lets you specify arguments to pass to the server. For example, if you want to run multiple GUI sessions at the same time, you can pass a different display number each time. Pass server arguments after -- on the command line of startx . startx /usr/bin/startkde -- :1 | {
"source": [
"https://unix.stackexchange.com/questions/243195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39990/"
]
} |
243,207 | In the following file: Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut eu metus id lectus vestibulum ultrices. Maecenas rhoncus. I want to delete everything before consectetuer and everything after elit . My desired output: consectetuer adipiscing elit. How can I do this? | I'd use sed sed 's/^.*\(consectetuer.*elit\).*$/\1/' file Decoded the sed s/find/replace/ syntax: s/^.* -- substitute starting at the beginning of the line ( ^ ) followed by anything ( .* ) up to... \( - start a named block consectetuer.*elit\. - match the first word, everything ( .* ) up to the last word (in this case, including the trailing (escaped)dot) you want to match \) - end the named block match everything else ( .* ) to the end of the line ( $ ) / - end the substitute find section \1 - replace with the name block between the \( and the \) above / - end the replace | {
"source": [
"https://unix.stackexchange.com/questions/243207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143276/"
]
} |
243,317 | I need to run a command, and then run the same command again with just one string changed. For example, I run the command $ ./myscript.sh xxx.xxx.xxx.xxx:8080/code -c code1 -t query Now from there, without going back in the command history (via the up arrow), I need to replace code1 with mycode or some other string. Can it be done in Bash? | I renamed your script, but here's an option: $ ./myscript.sh xxx.xxx.xxx.xxx:8080/code -c code1 -t query after executing the script, use: $ ^code1^code2 ... which results in: ./myscript.sh xxx.xxx.xxx.xxx:8080/code -c code2 -t query man bash and search for "Event Designators": ^string1^string2^ Quick substitution. Repeat the last command, replacing string1 with string2. Equivalent to !!:s/string1/string2/ Editing to add global replacement, which I learned just now from @slm's answer at https://unix.stackexchange.com/a/116626/117549 : $ !!:gs/string1/string2 which says: !! - recall the last command
g - perform the substitution over the whole line
s/string1/string2 - replace string1 with string2 | {
"source": [
"https://unix.stackexchange.com/questions/243317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19072/"
]
} |
243,350 | According to its documentation, bash waits until all commands in a pipeline have finished running before continuing The shell waits for all commands in the pipeline to terminate before returning a value. So why does the command yes | true finish immediately? Shouldn't the yes loop forever and cause the pipeline to never return? And a subquestion: according to the POSIX spec , shell pipelines may choose to either return after the last command finishes or wait until all the commands finish. Do common shells have different behavior in this sense? Are there any shells where yes | true will loop forever? | When true exits, the read side of the pipe is closed, but yes continues trying to write to the write side. This condition is called a "broken pipe", and it causes the kernel to send a SIGPIPE signal to yes . Since yes does nothing special about this signal, it will be killed. If it ignored the signal, its write call would fail with error code EPIPE . Programs that do that have to be prepared to notice EPIPE and stop writing, or they will go into an infinite loop. If you do strace yes | true 1 you can see the kernel preparing for both possibilities: write(1, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 4096) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=17556, si_uid=1000} ---
+++ killed by SIGPIPE +++ strace is watching events via the debugger API, which first tells it about the system call returning with an error, and then about the signal. From yes 's perspective, though, the signal happens first. (Technically, the signal is delivered after the kernel returns control to user space, but before any more machine instructions are executed, so the write "wrapper" function in the C library does not get a chance to set errno and return to the application.) 1 Sadly, strace is Linux-specific. Most modern Unixes have some command that does something similar, but it often has a different name, it probably doesn't decode syscall arguments as thoroughly, and sometimes it only works for root. | {
"source": [
"https://unix.stackexchange.com/questions/243350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
243,428 | I have been searching for a solution to my question but didn't find one, or, better said, I did not get anywhere with what I found.
My problem is:
I am using a Smart Home Control Software on a Raspberry Pi.
Using pilight-receive ,
I can capture the data from my outdoor temperature sensor.
The output of pilight-receive looks like that: {
"message": {
"id": 4095,
"temperature": 409.5
},
"origin": "receiver",
"protocol": "alecto_wsd17",
"uuid": "0000-b8-27-eb-0f3db7",
"repeats": 3
}
{
"message": {
"id": 1490,
"temperature": 25.1,
"humidity": 40.0,
"battery": 1
},
"origin": "receiver",
"protocol": "alecto_ws1700",
"uuid": "0000-b8-27-eb-0f3db7",
"repeats": 3
}
{
"message": {
"id": 2039,
"temperature": 409.5
},
"origin": "receiver",
"protocol": "alecto_wsd17",
"uuid": "0000-b8-27-eb-0f3db7",
"repeats": 4
} Now my question is:
How can I extract the temperature and humidity from messages where the id is 1490?
And how would you recommend I check this frequently?
By a cron job that runs every 10 minutes, creates an output of the pilight-receive ,
extracts the data of the output and pushes it to the Smart Home Control API? | You can use jq to process json files in shell. For example, I saved your sample json file as raul.json and then ran: $ jq .message.temperature raul.json
409.5
25.1
409.5
$ jq .message.humidity raul.json
null
40
null jq is available pre-packaged for most linux distros. There's probably a way to do it in jq itself, but the simplest way I found to get both the wanted values on one line is to use xargs . For example: $ jq 'select(.message.id == 1490) | .message.temperature, .message.humidity' raul.json | xargs
25.1 40 or, if you want to loop through each .message.id instance, we can add .message.id to the output and use xargs -n 3 as we know that there will be three fields (id, temperature, humidity): jq '.message.id, .message.temperature, .message.humidity' raul.json | xargs -n 3
4095 409.5 null
1490 25.1 40
2039 409.5 null You could then post-process that output with awk or whatever. Finally, both python and perl have excellent libraries for parsing and manipulating json data. As do several other languages, including php and java. | {
"source": [
"https://unix.stackexchange.com/questions/243428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143439/"
]
} |
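As a follow-up to the jq answer above: the same two values can also be pulled out in one pass with jq alone, no xargs needed, using select() plus string interpolation (still assuming the sample saved as raul.json):
$ jq -r 'select(.message.id == 1490) | "\(.message.temperature) \(.message.humidity)"' raul.json
25.1 40
The -r flag makes jq print the interpolated string raw, without surrounding quotes.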
243,756 | I haven't found a slam-dunk document on this, so let's start one. On a CentOS 7.1 host, I have gone through the linuxconfig HOW-TO , including the firewall-cmd entries, and I have an exportable filesystem. [root@<server> ~]# firewall-cmd --list-all
internal (default, active)
interfaces: enp5s0
sources: 192.168.10.0/24
services: dhcpv6-client ipp-client mdns ssh
ports: 2049/tcp
masquerade: no
forward-ports:
rich rules:
[root@<server> ~]# showmount -e localhost
Export list for localhost:
/export/home/<user> *.localdomain However, if I showmount from the client, I still have a problem. [root@<client> ~]# showmount -e <server>.localdomain
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) Now, how am I sure that this is a firewall problem? Easy. Turn off the firewall. Server side: [root@<server> ~]# systemctl stop firewalld And client side: [root@<client> ~]# showmount -e <server>.localdomain
Export list for <server>.localdomain:
/export/home/<server> *.localdomain Restart firewalld. Server side: [root@<server> ~]# systemctl start firewalld And client side: [root@<client> ~]# showmount -e <server>.localdomain
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) So, let's go to town, by adapting the iptables commands from a RHEL 6 NFS server HOW-TO ... [root@ ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp
success
[root@<server> ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp \
> --permanent
success
[root@<server> ~]# firewall-cmd --list-all
internal (default, active)
interfaces: enp5s0
sources: 192.168.0.0/24
services: dhcpv6-client ipp-client mdns ssh
ports: 32803/tcp 662/udp 662/tcp 111/udp 875/udp 32769/udp 875/tcp 892/udp 2049/tcp 892/tcp 111/tcp
masquerade: no
forward-ports:
rich rules: This time, I get a slightly different error message from the client: [root@<client> ~]# showmount -e <server>.localdomain
rpc mount export: RPC: Unable to receive; errno = No route to host So, I know I'm on the right track. Having said that, why can't I find a definitive tutorial on this anywhere? I can't have been the first person to have to figure this out! What firewall-cmd entries am I missing? Oh, one other note. My /etc/sysconfig/nfs files on the CentOS 6 client and the CentOS 7 server are unmodified, so far. I would prefer to not have to change (and maintain!) them, if at all possible. | This should be enough: firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
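# --permanent only updates the stored configuration; reload so the new services take effect in the running firewall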
firewall-cmd --reload | {
"source": [
"https://unix.stackexchange.com/questions/243756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31778/"
]
} |
243,757 | I use Ubuntu 14. About three days ago my computer started hanging while I work - I can't even move the mouse, so I press the power button to restart the computer. It happens about 1-7 times an hour. For example, I work in OpenOffice and it hangs. Or I work in the console - it hangs. The log is below. What does it mean and how do I fix it? Nov 17 21:18:16 pavel-desktop kernel: [ 275.292875] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.292908] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.292929] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.292938] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0f04 data 0x3f800000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.293036] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.293048] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.293057] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.293071] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304073] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304093] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304105] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304120] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304156] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304187] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304202] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304376] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304388] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304397] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.304411] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329457] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329477] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329488] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329504] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329541] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329566] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329575] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329737] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329750] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329758] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.329772] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338336] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338357] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338368] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338384] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338415] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338443] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338452] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338618] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338630] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338639] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.338654] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.354981] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355001] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355013] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355031] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355061] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355085] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355097] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0f04 data 0x3f800000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355195] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355206] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355214] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.355229] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371365] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371386] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371397] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371413] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371440] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371459] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371465] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371624] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371636] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371645] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.371659] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387885] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387905] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387916] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387931] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387952] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387962] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.387970] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.388133] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.388145] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.388153] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.388168] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405478] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405500] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405514] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405532] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405554] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405565] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405573] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405733] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405745] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405753] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.405767] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424274] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424290] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424299] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424313] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424449] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424459] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424464] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.424475] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437337] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437351] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437360] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437373] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437388] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437396] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437401] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437561] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437569] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437575] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.437586] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.458990] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459003] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459008] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459019] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459041] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459067] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459072] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459232] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459239] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459242] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.459252] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471067] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471088] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471102] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471119] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471140] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471151] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471160] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471320] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471332] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471340] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010
Nov 17 21:18:16 pavel-desktop kernel: [ 275.471355] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487181] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487202] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487215] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487233] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMIT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487264] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487275] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487280] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487444] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULT
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487456] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000
Nov 17 21:18:16 pavel-desktop kernel: [ 275.487465] nouveau E[ PGRAPH][0000:01 | This should be enough: firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload | {
"source": [
"https://unix.stackexchange.com/questions/243757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139986/"
]
} |
244,064 | While setting up a sudo environment I noticed that the include directive is prefixed with the pound (#) character. Solaris shows this as: ## Read drop-in files from /etc/sudoers.d
## (the '#' here does not indicate a comment)
#includedir /etc/sudoers.d The manual (Linux as well as Solaris) states: Including other files from within sudoers
It is possible to include other sudoers files from within
the sudoers file currently being parsed using the #include
and #includedir directives. And: Other special characters and reserved words
The pound sign (`#') is used to indicate a comment (unless
it is part of a #include directive or unless it occurs in
the context of a user name and is followed by one or more
digits, in which case it is treated as a uid). Both the
comment character and any text after it, up to the end of
the line, are ignored. Does anybody know why the choice was made to use the pound character in the #include and #includedir directives? As a side note: I often use something like egrep -v '^#|^$' configfile to get the non-default/active configured settings, and this obviously does not work for the sudoers file. | #include was added in 2004 . It had to be compatible with what was already there. I don't think include /path/to/file would have been ambiguous, but it might have been a little harder to parse, because the parser would have to distinguish include /path/to/file (include directive) from include = foo (allow the user include to run the command foo ). But I think mostly the reason was to look like the C preprocessor, which the manual explicitly cites as inspiration. | {
"source": [
"https://unix.stackexchange.com/questions/244064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107263/"
]
} |
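On the side note in the question above (filtering out comments and blank lines without losing the #include / #includedir directives), a rough sketch that covers the common case is to match the lines to keep rather than the ones to drop; the caveat is that an ordinary comment whose text happens to begin with #include would also show up:
grep -E '^[^#[:space:]]|^#include' /etc/sudoers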
244,323 | When I check on my system's environment, a lot of environmental variables will pop up. How can I just search for a particular variable? A book I'm reading says: Sometimes the number of variables in your environment grows quite
large, so much so that you don't want to see all of the values
displayed when you are interested in just one. If this is the case,
you can use the echo command to show an environment variable's current
value. How do I do this in a Linux terminal? | Just: echo "$VARIABLENAME" For example for the environment variable $HOME , use: echo "$HOME" Which then prints something similar to: /home/username Edit : according to the comment of Stéphane Chazelas , it may be better if you use printenv instead of echo : printenv HOME | {
"source": [
"https://unix.stackexchange.com/questions/244323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124780/"
]
} |
244,367 | This is a rather low-level question, and I understand that it might not be the best place to ask. But, it seemed more appropriate than any other SE site, so here goes. I know that on the Linux filesystem, some files actually exist , for example: /usr/bin/bash is one that exists. However, (as far as I understand it), some also don't actually exist as such and are more virtual files, eg: /dev/sda , /proc/cpuinfo , etc. My questions are (they are two, but too closely related to be separate questions): How does the Linux kernel work out whether these files are real (and therefore read them from the disk) or not when a read command (or such) is issued? If the file isn't real: as an example, a read from /dev/random will return random data, and a read from /dev/null will return EOF . How does it work out what data to read from this virtual file (and therefore what to do when/if data written to the virtual file too) - is there some kind of map with pointers to separate read/write commands appropriate for each file, or even for the virtual directory itself? So, an entry for /dev/null could simply return an EOF . | So there are basically two different types of thing here: Normal filesystems, which hold files in directories with data and metadata, in the familiar manner (including soft links, hard links, and so on). These are often, but not always, backed by a block device for persistent storage (a tmpfs lives in RAM only, but is otherwise identical to a normal filesystem). The semantics of these are familiar; read, write, rename, and so forth, all work the way you expect them to. Virtual filesystems, of various kinds. /proc and /sys are examples here, as are FUSE custom filesystems like sshfs or ifuse . There's much more diversity in these, because really they just refer to a filesystem with semantics that are in some sense 'custom'. Thus, when you read from a file under /proc , you aren't actually accessing a specific piece of data that's been stored by something else writing it earlier, as under a normal filesystem. You're essentially doing a kernel call, requesting some information that's generated on-the-fly. And this code can do anything it likes, since it's just some function somewhere implementing read semantics. Thus, you have the weird behavior of files under /proc , like for instance pretending to be symlinks when they aren't really. The key is that /dev is actually, usually, one of the first kind. It's normal in modern distributions to have /dev be something like a tmpfs, but in older systems, it was normal to have it be a plain directory on disk, without any special attributes. The key is that the files under /dev are device nodes, a type of special file similar to FIFOs or Unix sockets; a device node has a major and minor number, and reading or writing them is doing a call to a kernel driver, much like reading or writing a FIFO is calling the kernel to buffer your output in a pipe. This driver can do whatever it wants, but it usually touches hardware somehow, e.g. to access a hard disk or play sound in the speakers. To answer the original questions: There are two questions relevant to whether the 'file exists' or not; these are whether the device node file literally exists, and whether the kernel code backing it is meaningful. The former is resolved just like anything on a normal filesystem. Modern systems use udev or something like it to watch for hardware events and automatically create and destroy the device nodes under /dev accordingly. 
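For a concrete picture (the paths and numbers below are just the conventional Linux ones, not something taken from the question): a device node stores a major and minor number where a regular file would show a size, and mknod can create such a node by hand, which is essentially what udev automates:
$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Nov 20 10:00 /dev/null
$ sudo mknod -m 666 /tmp/mynull c 1 3
$ echo discarded > /tmp/mynull # behaves just like /dev/null, because character major 1, minor 3 selects the same driver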
But older systems, or light custom builds, can just have all their device nodes literally on the disk, created ahead of time. Meanwhile, when you read these files, you're doing a call to kernel code which is determined by the major and minor device numbers; if these aren't reasonable (for instance, you're trying to read a block device that doesn't exist), you'll just get some kind of I/O error. The way it works out what kernel code to call for which device file varies. For virtual filesystems like /proc , they implement their own read and write functions; the kernel just calls that code depending on which mount point it's in, and the filesystem implementation takes care of the rest. For device files, it's dispatched based on the major and minor device numbers. | {
"source": [
"https://unix.stackexchange.com/questions/244367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142558/"
]
} |
244,531 | This is the process I want to kill: sooorajjj@Treako ~/Desktop/MerkMod $ sudo netstat -tunap | grep :80
tcp6 0 0 :::80 :::* LISTEN 20570/httpd | There are several ways to find which running process is using a port. Using fuser it will give the PID(s) of the multiple instances associated with the listening port. sudo apt-get install psmisc
sudo fuser 80/tcp
80/tcp: 1858 1867 1868 1869 1871 After finding out, you can either stop or kill the process(es). You can also find the PIDs and more details using lsof sudo lsof -i tcp:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1858 root 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1867 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1868 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1869 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx 1871 www-data 6u IPv4 5043 0t0 TCP ruir.mxxx.com:http (LISTEN) To limit to sockets that listen on port 80 (as opposed to clients that connect to port 80): sudo lsof -i tcp:80 -s tcp:listen To kill them automatically: sudo lsof -t -i tcp:80 -s tcp:listen | sudo xargs kill | {
"source": [
"https://unix.stackexchange.com/questions/244531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137002/"
]
} |
244,970 | My current directory is buried deep in multiple subfolder layers from my home directory. If I want to open this directory in a gui-based file browser, I have to double click folder after folder to reach it. This is very time consuming. On the other hand, with very few key strokes and several times hitting the tab button, it is very easily reachable via a terminal. I want to know if there is a way to open the current directory in a terminal onto a a file browser. What is the command to do this? For reference, I have an ubuntu system, but I'd like to know what the commands are across the various distributions of linux. | xdg-open . xdg-open is part of the xdg-utils package, which is commonly installed by default in many distributions (including Ubuntu). It is designed to work for multiple desktop environments, calling the default handler for the file type in your desktop environment. You can pass a directory, file, or URL , and it will open the proper program for that parameter. For example, on my KDE system: xdg-open . opens the current directory in the Dolphin file manager xdg-open foo.txt opens foo.txt in emacsclient, which I've configured to be the default handler for .txt files xdg-open http://www.google.com/ opens google.com in my default web browser The application opens as a separate window, and you'll get a prompt back in your terminal and can issue other commands or close your terminal without affecting your new GUI window. I usually get a bunch of error message printed to stderr , but I just ignore them. Edit: Adding the arguments xdg-open . >/dev/null 2>&1 redirects the errors and the output. This call won't block your terminal. Binding this to an alias like filemanager='xdg-open . >/dev/null 2>&1' can come in handy. | {
"source": [
"https://unix.stackexchange.com/questions/244970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15417/"
]
} |
245,145 | I've created a simple C program like so: int main(int argc, char *argv[]) {
if (argc != 5) {
fputs("Not enough arguments!\n", stderr);
exit(EXIT_FAILURE);
} And I have my PATH modified in /etc/bash.bashrc like so: PATH=.:$PATH I've saved this program as set.c and am compiling it with gcc -o set set.c in the folder ~/Programming/so . However, when I call set 2 3 nothing happens. There is no text that appears. Calling ./set 2 3 gives the expected result. I've never had a problem with PATH before and which set returns ./set . So it seems the PATH is the correct one. What is happening? | Instead of using which , which doesn't work when you need it most , use type to determine what will run when you type a command: $ which set
./set
$ type set
set is a shell builtin The shell always looks for builtins before searching the $PATH , so setting $PATH doesn't help here. It would be best to rename your executable to something else, but if your assignment requires the program to be named set , you can use a shell function: $ function set { ./set; }
$ type set
set is a function
set ()
{
./set
} (That works in bash , but other shells like ksh may not allow it. See mikeserv's answer for a more portable solution.) Now typing set will run the function named "set", which executes ./set . GNU bash looks for functions before looking for builtins, and it looks for builtins before searching the $PATH . The section named "COMMAND EXECUTION" in the bash man page gives more information on this. See also the documentation on builtin and command : help builtin and help command . | {
"source": [
"https://unix.stackexchange.com/questions/245145",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137641/"
]
} |
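One caveat worth adding to the shell-function workaround above: as written, the function ignores any arguments passed to it, so a variant that forwards them (just a sketch) keeps ./set callable the same way as before:
set() { ./set "$@"; }
With that definition, set 2 3 runs ./set 2 3 .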
245,293 | I am trying to compare two command output (no files) vimdiff "$(tail /tmp/cachain.pem)" "$(tail /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem)" I tried playing with redirection, pipe, and vim - -c but I must be missing something. Can anyone help please ? | You are confusing $(…) with <(…) . You used the former, which passes the output as arguments to vimdiff . For example, if the last line of /path/to/foo contains bar bar bar , then the following command echo $(tail -1 /path/to/foo) is equivalent to echo bar bar bar Instead, you need to use <(…) . This is called process substitution , and passes the output as a pseudo-file to the vimdiff command. Hence, use the following. vimdiff <(tail /tmp/cachain.pem) <(tail /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem) This works in bash and zsh , but apparently there is no way to do process substitution in tcsh . | {
"source": [
"https://unix.stackexchange.com/questions/245293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74152/"
]
} |
245,920 | I was wondering whether using wget it was possible to download an RPM and then pipe it through sudo rpm -i to install it, in a single line? I realize I could just run: wget -c <URL>
sudo rpm -i <PACKAGE-NAME>.rpm to install the package but I was wondering whether it might be possible to do this in a single line using the quiet and write to standard output options of wget. I have tried using: wget -cqO- <URL> | sudo rpm -i but it returned: rpm: no packages given for install | RPM has native support to download a package from a URL. You can do: sudo rpm -i <URL> There is no need to download the RPM manually. If this support didn't exist, you could use bash 's process substitution. sudo bash -c 'rpm -i <(wget -O - <URL>)' | {
"source": [
"https://unix.stackexchange.com/questions/245920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
246,338 | On 2013-01-10 Glenn Fowler posted this to the ast-users mailing list : As has been pointed out several times on the AST and UWIN lists, AT&T gives very little support to OpenSouce software, which is why we have so few people involved with our rather large collection of AST software. In spite of this, ksh , nmake , vczip , UWIN and other AST tools continue to be used in several AT&T projects. It turns out that software isn't the only thing lacking support: both dgk (David Korn) (AT&T fellow, 36 years of service) and gsf (Glenn Fowler) (AT&T fellow, 29 years of service) have been terminated, effective October 10. Our third major partner, Phong Vo (AT&T fellow, 32 years of service), left a few months ago for Google. The UWIN maintainer, Jeff Fellin, is still with AT&T and provides UWIN support for some critical operations. Both dgk and gsf will continue to work on AST software, and might actually have more time (at least in the short run) to focus on it. The download site and mail groups will remain within AT&T for at least the next several months. Our AT&T colleague, dr.ek, AST user and bug detector, will maintain the site. We have secured the astopen.org domain and are investigating non-AT&T hosting options, including a repository with bug tracking. The process of change will take time; the patience of the user community will be greatly appreciated. Its quite a shock to have 3 weeks to plan personal, career, and hacking futures after working in an environment that has essentially been stable for almost 30 years. The user groups will be informed as plans solidify. Korn's own wikipedia page says he worked for AT&T Labs Research until 2013..., but he is now working for Google citation needed . A dgkorn github user account was created in November 2014, but it has been the source of exactly 0 public contributions since that time, and subscribes to as many repos. Since 2013, the related mailing-lists have grown progressively less active. For example, the fourth-quarter ast-developers list for 2013 had posted 156 messages by 2013-12-01, but the same list for fourth-quarter 2015 lists only three messages, and this is the last of them: Subject: Re: [ast-developers] Transitioning ast to GitHub Is there any intention to transition the ast codebase to a source code
repository like GitHub? That would make it much easier for the community to contribute. I'm concerned that without such a collaborative environment, ast-related development will stall as bug reports and source-code patches get lost in the ether. Does anyone have a full git repo they can publish somewhere
(repo.or.cz, github, whatever)?
Git server is down for ages, now even www2.research.att.com (204.178.8.28)
went down. This makes one wonder about the future of Kornshell. Has it died? Are we to see no more releases? And, indeed, though AT&T lists all of the AST links at their labs research landing page, none of these seem to work. These are the same dead links listed at kornshell.com for download. Even if the current server state should prove only temporary for now, the dried-up mailing-list doesn't seem to bode well. And so, is the korn shell now kaput? Or is there more activity along these lines elsewhere? | NO tldr: github.com/att/ast and github.com/att/uwin On Jan 19-20, 2016 the following ( 1 | 2 ) messages were posted to the ast-users mailing-list : (and I consider the dgk has some patches comment especially encouraging) Wed, Jan 20 2016; From Glenn Fowler : Thanks Lefty for all the work getting this up and running. I know dgk has
some patches in the works. He may be offline for the next few weeks. Tue, Jan 19, 2016; From Eleftherios Koutsofios : hi AST and UWIN users. as many of you noticed, the download site on www.research.att.com went
off the air shortly before the end of the year due to some security issue. the timing was unfortunate because several people including me were on
vacation so it's been down for a long time. but we've finally managed to move most of that software on GitHub. you can find the AST and UWIN software packages at: https://github.com/att/uwin and https://github.com/att/ast (btw. the /att tree on GitHub hosts a lot of open source software developed by the AT&T Research group. feel free to browse. I'll be putting up some of my code there soon) . /att/ast corresponds to the ast-open package. it includes the software that
was also available under individual packages, like ast-ksh, ast-dss, etc.,
so I decided to only create this one. it has 3 branches, matching the old structure: master (i.e. official), alpha, and beta. beta is the most recent one. it includes the last package I had gotten from Glenn and Dave with some minor fixes to get it to compile on some new OS versions, like Centos 7 and Ubuntu 14. /att/uwin is the source code for the UWIN system. it has a master and a beta branch. I don't have an environment to build and test this on, so I don't know how well it builds. cloning either of these git repos is equivalent to downloading the INIT and
ast-open (or INIT and uwin) packages from the old site and then running: ./bin/package read so the next step after the clone step is to run: ./bin/package make vanilla build, where no previous version of NMAKE is available should still
work and on some systems that was actually the way to go for me. as an example, to get and compile the beta branch of AST: git clone --branch beta \
https://github.com/att/ast.git
cd ast
./bin/package make very little of the documentation from the old site has moved to the GitHub
site, I'll try to migrate the rest later, I just wanted to get the software up again. thanks
lefteris | {
"source": [
"https://unix.stackexchange.com/questions/246338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52934/"
]
} |
246,535 | I have a file with .rar extension, ex: foo.rar I want to extract content from that file, how do I extract it? | You can install unrar - "Unarchiver for .rar files" or unp - "unpack (almost) everything with one command" To unrar a file: unrar x <myfile> To unp a file: unp <myfile.rar> Since unrar is not open source, some distros might not have it in their package manager already. If it's not, try unrar-free . Notice that unrar x <myfile> will preserve directory structure in archive, in difference with unrar e <myfile> which will flatten it | {
"source": [
"https://unix.stackexchange.com/questions/246535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119746/"
]
} |
246,846 | I've been trying to set my locale to en_US.UTF-8 without any success. Based off of other answers around the internet, I should first generate the locale with sudo locale-gen en_US.UTF-8 And then apply it with sudo dpkg-reconfigure locales However, running locale-gen does something weird: user@Host /home/user $ sudo locale-gen en_US.UTF-8
Generating locales (this might take a while)...
en_US.ISO-8859-1... done
Generation complete. As you see, it never actually generates UTF-8, but instead keeps falling back to ISO-8859-1. I can never manage to set LC_ALL to en_US.UTF-8 , probably because it can't generate. Am I doing something wrong? I'm running Debian 8.1. | You've tried to apply a recipe for Ubuntu under Debian. That usually works, but in this specific case it doesn't. Ubuntu is derived from Debian, and doesn't change much apart from the installer and the GUI. The locale-gen command is one of those few other things that it changes. I don't know why. Under Debian, the locale-gen command takes no arguments and regenerates the compiled locale definitions according to the configured list of locales. To modify the selection of locales that you want to use, edit the file /etc/locale.gen then run the locale-gen command. Alternatively, run dpkg-reconfigure locales as root, select the additional locales you want (and deselect the ones you don't want), and press OK. Under Ubuntu, if you run the locale-gen command without arguments, it regenerates the compiled locale definitions according to the configured list of locales. But if you pass some arguments, they're added to the list and generated immediately. The list of locales is kept in /var/lib/locales/supported.d/local . Running dpkg-reconfigure locales just regenerates the compiled locales without giving you an opportunity to modify the selection. In summary, to add en_US.UTF-8 to the list of usable locales: Debian, interactive: dpkg-reconfigure locales Debian, automated: sed -i 's/^# *\(en_US.UTF-8\)/\1/' /etc/locale.gen && locale-gen Ubuntu, automated: locale-gen en_US.UTF-8 | {
"source": [
"https://unix.stackexchange.com/questions/246846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86192/"
]
} |
246,859 | I'm running Debian Jessie 8.2 from a live boot, from a USB stick. It works; I can boot and use it normally, but I can't connect to WiFi. I've read that I probably need non-free drivers, (but right now i just want it to work, and it doesn't matter if the drivers are non-free). I've been using this tutorial , which I've been following pretty well. My network controller is Broadcom Corporation bcm4313 802.11bgm wireless network adapter. The first weird thing was that the sources.list file that was mentioned didn't have the line exactly as in the article -- the difference was that it said http instead of ftp.us -- all the rest of that line was the same. Would the fact that I'm in Argentina be a reason for this? I saved the changes, and then as said in the article, ran apt-get update in the terminal. Question: Why is there a # before the instruction in the tutorial? The terminal responded with output saying failed to fetch and could not resolve . Also, how do I copy text from Debian? By the way, to understand this process better, we aren't trying to download things, right? Because that's my problem: Debian has no Internet connection. | You've tried to apply a recipe for Ubuntu under Debian. That usually works, but in this specific case it doesn't. Ubuntu is derived from Debian, and doesn't change much apart from the installer and the GUI. The locale-gen command is one of those few other things that it changes. I don't know why. Under Debian, the locale-gen command takes no arguments and regenerates the compiled locale definitions according to the configured list of locales. To modify the selection of locales that you want to use, edit the file /etc/locale.gen then run the locale-gen command. Alternatively, run dpkg-reconfigure locales as root, select the additional locales you want (and deselect the ones you don't want), and press OK. Under Ubuntu, if you run the locale-gen command without arguments, it regenerates the compiled locale definitions according to the configured list of locales. But if you pass some arguments, they're added to the list and generated immediately. The list of locales is kept in /var/lib/locales/supported.d/local . Running dpkg-reconfigure locales just regenerates the compiled locales without giving you an opportunity to modify the selection. In summary, to add en_US.UTF-8 to the list of usable locales: Debian, interactive: dpkg-reconfigure locales Debian, automated: sed -i 's/^# *\(en_US.UTF-8\)/\1/' /etc/locale.gen && locale-gen Ubuntu, automated: locale-gen en_US.UTF-8 | {
"source": [
"https://unix.stackexchange.com/questions/246859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133124/"
]
} |
246,935 | I'm working on a systemd .service script that is supposed to start after a CIFS network location is mounted via /etc/fstab to /mnt/ on boot-up. The script waits for an OpenVPN dependency script to launch first, but I also want it to wait for mount to complete. /etc/systemd/system/my-daemon.service : [Unit]
Description=Launch My Daemon
After=network.target vpn-launch.service
Requires=vpn-launch.service I tried to add systemd.mount to the line: After=network.target vpn-launch.service systemd.mount , but it didn't give the results I was hoping for. | a CIFS network location is mounted via /etc/fstab to /mnt/ on boot-up. No, it is not. Get this right, and the rest falls into place naturally. The mount is handled by a (generated) systemd mount unit that will be named something like mnt-wibble.mount . You can see its actual name in the output of systemctl list-units --type=mount command. You can look at it in detail just like any other unit with systemctl status . Very simply, then: you have to order your unit to be started after that mount unit is started. After=network.target vpn-launch.service mnt-wibble.mount Further reading https://unix.stackexchange.com/a/236968/5132 | {
"source": [
"https://unix.stackexchange.com/questions/246935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136753/"
]
} |
247,187 | I want to combine multiple conditions in a shell if statement, and negate the combination. I have the following working code for a simple combination of conditions: if [ -f file1 ] && [ -f file2 ] && [ -f file3 ] ; then
# do stuff with the files
fi This works fine. If I want to negate it, I can use the following working code: if ! ( [ -f file1 ] && [ -f file2 ] && [ -f file3 ] ) ; then
echo "Error: You done goofed."
exit 1
fi
# do stuff with the files This also works as expected. However, it occurs to me that I don't know what the parentheses are actually doing there. I want to use them just for grouping, but is it actually spawning a subshell? (How can I tell?) If so, is there a way to group the conditions without spawning a subshell? | You need to use { list;} instead of (list) : if ! { [ -f file1 ] && [ -f file2 ] && [ -f file3 ]; }; then
: do something
fi Both of them are Grouping Commands , but { list;} executes commands in the current shell environment. Note that the ; in { list;} is needed to delimit the list from the } reserved word; you can use another delimiter as well. The space (or other delimiter) after { is also required.
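An equivalent check that avoids grouping altogether is to push the negation inward (De Morgan's law); this is just an alternative sketch, not something taken from the answer above: if [ ! -f file1 ] || [ ! -f file2 ] || [ ! -f file3 ]; then
: handle the missing file
fi | {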
"source": [
"https://unix.stackexchange.com/questions/247187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
247,311 | I get that I can use mount to set up / directories and that I can use the /etc/fstab to remount them on reboot. Testing the fstab file is also fun with mount -faV . When I'm looking at the fstab file, the number of space is disconcerting. I would have expected one space (like a separator between command parameters) or four spaces (like a tab). I'm seeing seven spaces at a time, almost as convention. My question is: What are all the spaces in the /etc/fstab for? (Perhaps also - Will it matter if I get the wrong number?) | The number of spaces is a way to cosmetically separate the columns/fields. It has no meaning other than that. I.e. no the amount of white space between columns does not matter . The space between columns is comprised of white space (including tabs), and the columns themselves, e.g. comma-separated options, mustn't contain unquoted white space. From the fstab(5) man page: [...] fields on each line are separated by tabs or spaces. and If the name of the mount point contains spaces these can be escaped as `\040'. Example With the following lines alignment using solely a single tab becomes hard to achieve. In the end the fstab without white space looks messier than what you consider disconcerting now. /dev/md3 /data/vm btrfs defaults 0 0
/var/spool/cron/crontabs /etc/crontabs bind defaults,bind
//bkpsrv/backup /mnt/backup-server cifs iocharset=utf8,rw,credentials=/etc/credentials.txt,file_mode=0660,dir_mode=0770,_netdev Can you still see the "columns"? | {
"source": [
"https://unix.stackexchange.com/questions/247311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112231/"
]
} |
247,329 | In vim editor, I want to replace a newline character ( \n ) with two new line characters ( \n\n ) using vim command mode. Input file content: This is my first line.
This is second line. Command that I tried: :%s/\n/\n\n/g But it replaces the string with unwanted characters as This is my first line.^@^@This is second line.^@^@ Then I tried the following command :%s/\n/\r\r/g It is working properly. Can you explain why it is working fine with second command? | Oddly enough, \n in vim for replacement does not mean newline, but null. ASCII nul is ^@ ( Ctrl + @ ). Historically, vi replaces ^M ( Ctrl + M ) as the line-ending, which is the newline. vim added an extension \r (like the C language) to mean the same as ^M , but the developers chose to make \n mean null when replacing text. This is inconsistent with its use in searches , which find a newline. Further reading: Search and replace (Vim wiki) vim replace character to \n | {
"source": [
"https://unix.stackexchange.com/questions/247329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53844/"
]
} |
247,418 | So I'm setting up an nginx server with SSL enabled with a server definition something like: server {
listen :80;
listen [::]:80;
server_name example.org;
root /foo/bar;
ssl on;
ssl_certificate /path/to/public/certificate;
ssl_certificate_key /path/to/private/key;
...
} You get the idea (please forgive any typos). Anyway, what I'm wondering is; if I renew my certificate(s), is there a way to install them without having to restart nginx? For example, if I were to use symbolic links from /path/to/public/certificate and /path/to/private/key , pointing to my current certificate(s), would I still need to restart nginx if I were to simply change these to point to new (renewed) certificates? Are there alternatives? | You will need to RELOAD Nginx in order for the renewed certificates to display the correct expiration date (read the clarification below and the other comments for an explanation of the difference between RELOADING and RESTARTING Nginx). After reloading Nginx, a simple cache-clearing and browse should allow you to view this the updated expiration dates on the SSL cert. Or if you prefer cli, you could always use the old trusty OpenSSL command: echo | openssl s_client -connect your.domain.com:443 | openssl x509 -noout -dates That would give you the current dates on the certificate. In your case the port would be 80 instead of 443 (it was later stated by OP that the ports 80 in the question should have actually been 443, but Nginx will listen on HTTP or HTTPS on whatever ports you give it, as long as they are not currently in use by another process). Many times nginx -s reload does not work as expected. On many systems (Debian, etc.), you would need to use /etc/init.d/nginx reload . Edit to update and clarify this answer: On modern systems with systemd , you can also run systemctl reload nginx or service nginx reload . All of these reload methods are different from restart by the fact that they send a SIGHUP signal that tells Nginx to reload its configuration without killing off existing connections (which would happen with a full restart and would almost certainly be user-impacting). If for some reason, Nginx does not reload your certificate, you can restart it, but note that it will have much more of an impact than reload . To restart Nginx, you would simply run systemctl restart nginx , or on systems without systemd , you would do nginx -s stop && nginx -s start . If all else fails (for whatever reason), just kill the Nginx PID(s), and you can always start it up manually by specifying the configuration file directly using nginx -c /path/to/nginx.conf . | {
"source": [
"https://unix.stackexchange.com/questions/247418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38840/"
]
} |
247,576 | I have a USER variable in my script, and I want to see that user's HOME path based on the USER variable. How can I do that? | There is a utility which will look up user information regardless of whether that information is stored in local files such as /etc/passwd or in LDAP or some other method. It's called getent . In order to get user information out of it, you run getent passwd $USER . You'll get a line back that looks like: [jenny@sameen ~]$ getent passwd jenny
jenny:*:1001:1001:Jenny Dybedahl:/home/jenny:/usr/local/bin/bash Now you can simply cut out the home dir from it, e.g. by using cut, like so: [jenny@sameen ~]$ getent passwd jenny | cut -d: -f6
/home/jenny
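Since the question starts from a USER variable, in a script the same pipeline can be captured directly into a variable (a minimal sketch; user_home is just an illustrative name and USER is assumed to hold a valid login name): user_home=$(getent passwd "$USER" | cut -d: -f6) | {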
"source": [
"https://unix.stackexchange.com/questions/247576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
247,612 | Every time I ssh onto my remote server, I need to provide the password. I copied my public key (id_dsa.pub) to the remote server using: ssh-copy-id -i id_dsa.pub user@server I checked it was correctly added to authorized_keys. All the file/directory permissions are correct: ~user 755
~user/.ssh 700
~user/.ssh/authorized_keys 640
~user/.ssh/id_dsa.pub 644 The PasswordAuthentication field in /etc/ssh/sshd_config is set to yes. I put the sshd in debug mode and added the verbose switch to the ssh command. I get the impression that the server did not try to use id_pub.dsa because of the line Skipping ssh-dss key: ........... not in PubkeyAcceptedKeyTypes There is no encrypted disc on server side. Any ideas how to progress?
Here is the ssh daemon debug info: sudo /usr/sbin/sshd -d
====
debug1: sshd version OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: private host key: #0 type 1 RSA
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type DSA
debug1: private host key: #1 type 2 DSA
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type ECDSA
debug1: private host key: #2 type 3 ECDSA
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-d'
Set /proc/self/oom_score_adj from 0 to -1000
debug1: Bind to port 22 on 0.0.0.0.
Server listening on 0.0.0.0 port 22.
debug1: Bind to port 22 on ::.
Server listening on :: port 22.
debug1: Server will not fork when running in debugging mode.
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
debug1: inetd sockets after dupping: 3, 3
Connection from xxx port 63521 on yyy port 22
debug1: Client protocol version 2.0; client software version OpenSSH_7.1
debug1: match: OpenSSH_7.1 pat OpenSSH* compat 0x04000000
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3
debug1: permanently_set_uid: 115/65534 [preauth]
debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 [preauth]
debug1: SSH2_MSG_KEXINIT sent [preauth]
debug1: SSH2_MSG_KEXINIT received [preauth]
debug1: kex: client->server [email protected] <implicit> none [preauth]
debug1: kex: server->client [email protected] <implicit> none [preauth]
debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]
debug1: SSH2_MSG_NEWKEYS sent [preauth]
debug1: expecting SSH2_MSG_NEWKEYS [preauth]
debug1: SSH2_MSG_NEWKEYS received [preauth]
debug1: KEX done [preauth]
debug1: userauth-request for user damian service ssh-connection method none [preauth]
debug1: attempt 0 failures 0 [preauth]
debug1: PAM: initializing for "damian"
debug1: PAM: setting PAM_RHOST to "freebox-server.local"
debug1: PAM: setting PAM_TTY to "ssh"
Connection closed by xxxx [preauth]
debug1: do_cleanup [preauth]
debug1: monitor_read_log: child log fd closed
debug1: do_cleanup Here is the ssh verbose output: $ ssh -v user@server
OpenSSH_7.1p1, OpenSSL 1.0.2d 9 Jul 2015
debug1: Connecting to server [xxxx] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_rsa-cert type -1
debug1: identity file /home/user/.ssh/id_dsa type 2
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/user/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3 pat OpenSSH_6.6.1* compat 0x04000000
debug1: Authenticating to server:22 as 'user'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client [email protected] <implicit> none
debug1: kex: client->server [email protected] <implicit> none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:v4BNHM0Q33Uh6U4VHenA9iJ0wEyi8h0rFVetbcXBKqA
debug1: Host 'server' is known and matches the ECDSA host key.
debug1: Found key in /home/user/.ssh/known_hosts:2
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /home/user/.ssh/id_rsa
debug1: Skipping ssh-dss key /home/user/.ssh/id_dsa for not in PubkeyAcceptedKeyTypes
debug1: Trying private key: /home/user/.ssh/id_ecdsa
debug1: Trying private key: /home/user/.ssh/id_ed25519
debug1: Next authentication method: password
user@server's password: | The new openssh version (7.0+) deprecated DSA keys and is not using DSA keys by default (not on server or client). The keys are not preferred to be used anymore, so if you can, I would recommend to use RSA keys where possible. If you really need to use DSA keys, you need to explicitly allow them in your client config using PubkeyAcceptedKeyTypes +ssh-dss Should be enough to put that line in ~/.ssh/config , as the verbose message is trying to tell you. | {
"source": [
"https://unix.stackexchange.com/questions/247612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146170/"
]
} |
247,924 | I wanted to delete some package in my home file, but the filename was too long ( google-chrome-stable_current_i386.deb ). So, I decided to use the command ls|grep chrome|rm to pipe the files to grep to filter out the chrome file, and then remove it. It didn't work, so I would like to see how I can do this. | This almost made me wince. You might want to stop pointing that shotgun at your foot. Basically any kind of parsing of ls is going to be more complicated and error-prone than established methods like find [...] -exec or globs . Unless someone installed a troll distro for you, your shell has Tab completion. Just type rm google and press Tab . If it doesn't complete immediately, press Tab again to see a list of matching files. Type more characters of the filename to narrow it down until it does complete, then run the command. Pipes != parameters . Standard input is a binary data stream which can be fed to a command asynchronously. Parameters are space separated strings which are passed once and only once to a command when running it. These are very rarely interchangeable. | {
"source": [
"https://unix.stackexchange.com/questions/247924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146386/"
]
} |
247,999 | There are many questions on SE that show how to recover from terminal broken by cat /dev/urandom . For those that are unfamiliar with this issue - here what it is about: You execute cat /dev/urandom or equivalent (for example, cat binary_file.dat ). Garbage is printed. That would be okay... except your terminal continues to print garbage even after the command has finished! Here's a screenshot of a misrendered text that is in fact g++ output: I guess people were right about C++ errors sometimes being too cryptic! The usual solution is to run stty sane && reset , although it's kind of annoying to run it every time this happens. Because of that, what I want to focus on in this question is the original reason why this happens, and how to prevent the terminal from breaking after such command is issued. I'm not looking for solutions such as piping the offending commands to tr or xxd , because this requires you to know that the program/file outputs binary before you actually run/print it, and needs to be remembered each time you happen to output such data. I noticed the same behavior in URxvt, PuTTY and Linux frame buffer so I don't think this is terminal-specific problem. My primary suspect is that the random output contains some ANSI escape code that flips the character encoding (in fact, if you run cat /dev/urandom again, chances are it will unbreak the terminal, which seems to confirm this theory). If this is right, what is this escape code? Are there any standard ways to disable it? | No: there is no standard way to "disable it", and the details of breakage are actually terminal-specific, but there are some commonly-implemented features for which you can get misbehavior. For commonly-implemented features, look to the VT100-style alternate character set, which is activated by ^N and ^O (enable/disable). That may be suppressed in some terminals when using UTF-8 mode, but the same terminals have ample opportunity for trashing your screen (talking about GNU screen, Linux console, PuTTY here) with the escape sequences they do recognize. Some of the other escape sequences for instance rely upon responses from the terminal to a query (escape sequence) by the host. If the host does not expect it, the result is trash on the screen. In other cases (seen for instance in network devices with hardcoded escape sequences for the Linux console), other terminals will see that as miscoded, and seem to freeze. So... you could focus on just one terminal, prune out whatever looks like a nuisance (as for instance, some suggest removing the ability to use the mouse for positioning in editors), and you might get something which has no apparent holes. But that's only one terminal. | {
"source": [
"https://unix.stackexchange.com/questions/247999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41991/"
]
} |
248,245 | Let's say I have the program: Calculate.py Is there a unix command-line that counts the number of lines outputted from my program, Calculate.py? | You can pipe the output into wc . You can use the -l flag to count lines. Run the program normally and use a pipe to redirect to wc. python Calculate.py | wc -l Alternatively, you can redirect the output of your program to a file, say calc.out , and run wc on that file. python Calculate.py > calc.out
wc -l calc.out
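If you want to both see the program's output and count its lines in one pass, tee can split the stream (a sketch, reusing the hypothetical calc.out name): python Calculate.py | tee calc.out | wc -l Also note that wc -l counts newline characters, so a final line that is not terminated by a newline will not be counted. | {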
"source": [
"https://unix.stackexchange.com/questions/248245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146637/"
]
} |
248,248 | I have samba installed, selinux in enforcing mode, samba shares works fine. But I am not sure where samba_share_t is defined? semanage fcontext -l | grep samba_share_t does not return anything. Another questions is how do I know if there are other selinux security context not listed by semanage. I found this link http://danwalsh.livejournal.com/14195.html but man samba_selinux says No manual entry for samba_selinux man samba_selinux worked after installed selinux-policy-doc I assumed that semanage fcontext -l will give me a complete list of all selinux security file context but it is not that case. Thanks | You can pipe the output in to wc . You can use the -l flag to count lines. Run the program normally and use a pipe to redirect to wc. python Calculate.py | wc -l Alternatively, you can redirect the output of your program to a file, say calc.out , and run wc on that file. python Calculate.py > calc.out
wc -l calc.out | {
"source": [
"https://unix.stackexchange.com/questions/248248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39569/"
]
} |
248,321 | I can ssh into a remote machine that has 64 cores. Lets say I need to run 640 shell scripts in parallel on this machine. How do I do this? I can see splitting the 640 scripts into 64 groups each of 10 scripts. How would I then run each of these groups in parallel , i.e. one group on each of one of the available cores. Would a script of the form ./script_A &
./script_B &
./script_C &
... where script_A corresponds to the first group, script_B to the second group etc., suffice? | This looks like a job for gnu parallel: parallel bash -c ::: script_* The advantage is that you don't have to group your scripts by cores, parallel will do that for you. Of course, if you don't want to babysit the SSH session while the scripts are running, you should use nohup or screen .
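By default parallel starts one job per CPU core, so on the 64-core machine described here the 640 scripts would be processed roughly 64 at a time; if you want a different degree of parallelism it can be set explicitly with -j (a sketch, the exact number being your choice): parallel -j 64 bash -c ::: script_* | {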
"source": [
"https://unix.stackexchange.com/questions/248321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146709/"
]
} |
248,327 | I need to manage multiple cluster nodes from my PC. The catch is that the cluster nodes are accessible only from a remote PC. So first I have to ssh to the remote PC, and then ssh to a cluster node from the remote PC. If I had direct access to the nodes, I would use something like clusterssh to manage all the nodes simultaneously. Is there a tool similar to clusterssh that I can use in my situation? | This looks like a job for gnu parallel: parallel bash -c ::: script_* The advantage is that you don't have to group your scripts by cores, parallel will do that for you. Of course, if you don't want to babysit the SSH session while the scripts are running, you should use nohup or screen | {
"source": [
"https://unix.stackexchange.com/questions/248327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56755/"
]
} |
248,544 | Can I use mv file1 file2 in a way that it only moves file1 to file2 if file2 doesn't exist? I've tried yes n | mv -i file1 file2 (this lets mv ask if file2 should be overridden and automatically answer no) but besides abusing -i it also doesn't give me nice error codes (always 141 instead of 0 if moved and something else if not moved) | mv -vn file1 file2 . This command will do what you want. You can skip -v if you want. -v makes it verbose - mv will tell you that it moved file if it moves it(useful, since there is possibility that file will not be moved) -n moves only if file2 does not exist. Please note however, that this is not POSIX as mentioned by ThomasDickey . | {
"source": [
"https://unix.stackexchange.com/questions/248544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146871/"
]
} |
248,761 | I would like to (recursively) find all files with "ABC" in their file name, which also contain "XYZ" in the file. I tried: find . -name "*ABC*" | grep -R 'XYZ' but it's not giving the correct output. | That's because grep can't read file names to search through from standard input. What you're doing is printing file names that contain XYZ . Use find 's -exec option instead: find . -name "*ABC*" -exec grep -H 'XYZ' {} + From man find : -exec command ;
Execute command; true if 0 status is returned. All following
arguments to find are taken to be arguments to the command until
an argument consisting of `;' is encountered. The string `{}'
is replaced by the current file name being processed everywhere
it occurs in the arguments to the command, not just in arguments
where it is alone, as in some versions of find.
[...]
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of invoca‐
tions of the command will be much less than the number of
matched files. The command line is built in much the same way
that xargs builds its command lines. Only one instance of `{}'
is allowed within the command. The command is executed in the
starting directory. If you don't need the actual matching lines but only the list of file names containing at least one occurrence of the string, use this instead: find . -name "*ABC*" -exec grep -l 'XYZ' {} + | {
"source": [
"https://unix.stackexchange.com/questions/248761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50597/"
]
} |
249,039 | How is "/a/./b/../../c/" equal to /c ? I saw this as a question on one of the Stack Exchange sites. Apparently .. means to pop the stack(?). Why is this the case? | Assume root looks like: /a/b
/c Let's break it down to components: / -> root /a -> in (a) . -> THIS dir path /a/./ -> still in /a /a/./b -> in /a/b .. -> go "up" one level /a/./b/.. -> /a/b/.. -> /a /a/./b/../.. -> /a/.. -> / /a/./b/../../c -> /c
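You can verify this kind of normalization from the shell without creating any of those directories; for example, GNU realpath with the -m option (allow missing components) performs the same simplification (a sketch): realpath -m /a/./b/../../c prints /c (assuming no symlinks are involved, since a symlinked b would change what .. refers to). | {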
"source": [
"https://unix.stackexchange.com/questions/249039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145794/"
]
} |
249,342 | I'm reading and trying to understand why anyone would want to use Pulse Audio, and I'm failing to understand.
I read this https://www.linux.com/news/hardware/drivers/8100-why-you-should-care-about-pulseaudio-and-how-to-start-doing-it , and I'm still not getting a convincing answer.
I have a set up, with one sound card. ( I don't need to multiplex sounds from or to several sound cards).
I know that all applications are written with different APIs, ALSA, OSS, JACK etc. So if I configure all those frameworks to route the sound through pulse audio, what benefit do I get, vs allowing all those frameworks to talk directly to the sound card driver?
In addition, I don't see that Pulse Audio has its own Application API. So I need to choose a framework anyway (like ALSA).
Thanks | It's all about multiplexing. I don't need to multiplex sounds from or to several sound cards Ah, but you do! If you want to be able to play audio from two sources at once, ever, for any reason, you need multiplexing. OSX and Windows handle Multiplexing in the Kernel (but still in software), which is why this never/rarely comes up on those platforms. However, on Linux, with software like ALSA, multiplexing is left up to the specific sound card / implementation / driver. Unfortunately, not all cards and all drivers actually support this out of the box. That's where PulseAudio comes in, doing the multiplexing in software, regardless of your sound card / driver situation. Without this functionality, if you were say, using ALSA directly sans-PulseAudio, with a sound card that had poor PCM multiplexing support on Linux, you would only ever be able to hear sound from one application at a time. E.g. if you had a video playing in your web browser, and received a notification in Pidgin, you would not hear the notification sound because your web browser would already have control of your sound card. By routing all sound through PulseAudio first, this problem is avoided. Source: A long IRC conversation I once had with the Ubuntu maintainer for ALSA, where I asked them the exact same question you're asking now. | {
"source": [
"https://unix.stackexchange.com/questions/249342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140660/"
]
} |
249,394 | I have 2 files containing a list of songs.
hdsongs.txt and sdsongs.txt I wrote a simple script to list all songs and output to text files, to then run a diff against.
It works fine for the most part, but the actual diff command in the script is showing the same line as being different. This is actually happening for multiple lines, but not all. Here is an example of a song in both files: $ grep Apologize \*songs\*
hdsongs.txt:Timbaland/Apologize.mp3
sdsongs.txt:Timbaland/Apologize.mp3 There is no trailing special character that I can see: $ cat -A hdsongs.txt sdsongs.txt | grep Apologize
Timbaland/Apologize.mp3$
Timbaland/Apologize.mp3$ When I run diff, it shows the same line being in each file; but aren't the lines the same? $ diff hdsongs.txt sdsongs.txt | grep Apologize
> Timbaland/Apologize.mp3
< Timbaland/Apologize.mp3 This is similar to the thread here: diff reports two files differ, although they are the same! but this is for lines within the file, not the whole file, and the resolution there doesn't seem to fit in this case. $ diff <(cat -A phonesongsonly.txt) <(cat -A passportsongsonly.txt) | grep Apologize
< Timbaland/Apologize.mp3$
> Timbaland/Apologize.mp3$
$ wdiff -w "$(tput bold;tput setaf 1)" -x "$(tput sgr0)" -y "$(tput bold;tput setaf 2)" -z "$(tput sgr0)" hdsongs.txt sdsongs.txt | grep Apologize
>Timbaland/Apologize.mp3
>Timbaland/Apologize.mp3 Does anyone know why diff would report the same line twice like this? | My guess is you simply haven't sorted the files. That's one of the behaviors you can get on unsorted input: $ cat file1
foo
bar
$ cat file2
bar
foo
$
$ diff file1 file2
1d0
< foo
2a2
> foo But, if you sort: $ diff <(sort file1) <(sort file2)
$ The diff program's job is to tell you whether two files are identical and, if not, where they differ. It is not designed to find similarities between different lines. If line X of the one file is not the same as line X of the other, then the files are not the same. It doesn't matter if they contain exactly the same information, if that information is organized in a different way, the files are reported as different. | {
"source": [
"https://unix.stackexchange.com/questions/249394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147481/"
]
} |
249,425 | New to docker. Installed docker from software management tool in mint 17 . When I run docker run hello-world I receive: FATA[0000] Error response from daemon: Cannot start container a6bcc1ede2c38cb6b020cf5ab35ebd51b64535af57fa44f5966c37bdf89c8781: [8] System error: mountpoint for devices not found When I look at the service logs ( /var/log/upstart/docker.log ) I see: ERRO[0617] Couldn't run auplink before unmount: exec: "auplink": executable file not found in $PATH
ERRO[0617] Couldn't run auplink before unmount: exec: "auplink": executable file not found in $PATH : docker version Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.2.1
Git commit (client): 7c8fca2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.2.1
Git commit (server): 7c8fca2
OS/Arch (server): linux/amd64 : docker info Containers: 2
Images: 1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 5
Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-24-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 8
Total Memory: 15.6 GiB
Name: DWDEV-HOME-HBABAI
ID: K4GX:DTV6:547V:U3BO:YEOA:WVNU:NZEZ:L3GG:4W7U:IXNS:X3QK:5PVR
WARNING: No memory limit support
WARNING: No swap limit support Update: Installed sudo apt-get install aufs-tools , restarted docker service. I no longer see the following error: ERRO[0617] Couldn't run auplink before unmount: exec: "auplink": executable file not found in $PATH However, in the logs I see that when docker is starting it is warning me about memory mount point: INFO[0000] -job init_networkdriver() = OK (0)
/var/run/docker.sock is up
WARN[0000] mountpoint for memory not found
INFO[0000] Loading containers: start. I have a feeling it has to do with cgroup...but I don't know anything about that technology (yet)... | It turned out that I needed to install cgroup-lite . It was a shot in the dark but I followed this answer
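On this Mint 17 / Ubuntu 14.04-based system that amounts to something like (a sketch; cgroup-lite is the package name in the Ubuntu 14.04 repositories): sudo apt-get install cgroup-lite and then restarting the Docker service, e.g. sudo service docker restart . | {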
"source": [
"https://unix.stackexchange.com/questions/249425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79893/"
]
} |
249,436 | I am using PulseAudio to manage my sound on Debian 8 with i3wm. Everything works properly, except the volume level on my on-board sound card. I can get sound out of it if I crank all settings to max (153% volume on both input and output on pavucontrol) and turn my speakers up pretty loud. The expected audio is played, just very quietly. The setting is on "analog stereo output" everything is recognized and acts appropriately, except the volume. I have a USB headset that works fine with the correct volume when selected in pavucontrol. When I boot into Windows, sound is fine. This used to work properly with the correct volume, then stopped. My guess is that it was an update, but I don't use sound that often and am not able to correlate the events. There may also have been a reboot in there. I have already killed the configs and restarted pulse. I've tried a few other troubleshooting steps, but none have yielded any results. I can provided any data / logs requested, not sure what to look at at this point. I've played around in pacmd, but didn't find anything useful in there. So I guess my question is, is there a baseline volume setting that is being set or calculated that is incorrect for my sound card that I can set statically? Or what in the world else could be happening here. amixer output: $›amixer
Simple mixer control 'Master',0
Capabilities: pvolume pswitch pswitch-joined
Playback channels: Front Left - Front Right
Limits: Playback 0 - 65536
Mono:
Front Left: Playback 103525 [158%] [on]
Front Right: Playback 103525 [158%] [on]
Simple mixer control 'Capture',0
Capabilities: cvolume cvolume-joined cswitch cswitch-joined
Capture channels: Mono
Limits: Capture 0 - 65536
Mono: Capture 55141 [84%] [off] alsamixer cap: Link to amixer -D hw:0 contents: http://pastebin.com/bB7ERZ13 | It turned out that I needed to install cgroup-lite . It was a shot in the dark but I followed this answer | {
"source": [
"https://unix.stackexchange.com/questions/249436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43724/"
]
} |
249,494 | Simple question, but no resources found about this. Is there any way to install only a PostgreSQL client, the terminal-based one, psql , on a CentOS7 system, without installing the complete PostgreSQL server? There is no dedicated postgresql-client or postgresql94-client or anything similar on the repositories. | I think the naming convention might simply be backwards from what you expect there: there's a package postgresql-server The programs needed to create and run a PostgreSQL server and a package postgresql PostgreSQL client programs (and postgresql does not have a dependency on postgresql-server , at least not in CentOS 6, though they both depend on a common postgresql-libs package). | {
"source": [
"https://unix.stackexchange.com/questions/249494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111149/"
]
} |
249,654 | At shutdown I often get the message watchdog did not stop! and then the laptop freezes after few other lines without shutting down. Any idea on how to fix this? Recently it happened very often, usually when the laptop was powered on for some time. I am using Debian 8 on an Asus UX32LA I found this systemd file (it shows a conflict with the shutdown.target), if it may help. My impression is that the problem depends on some issue coming from me trying to fix the backlight (which actually only works with the grub paramenter "acpi_osi=" ) [Unit]
Description=Load/Save Screen Backlight Brightness of %i
Documentation=man:[email protected](8)
DefaultDependencies=no
RequiresMountsFor=/var/lib/systemd/backlight
Conflicts=shutdown.target
After=systemd-readahead-collect.service systemd-readahead-replay.service systemd-remount-fs.service
Before=sysinit.target shutdown.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/systemd/systemd-backlight load %i
ExecStop=/lib/systemd/systemd-backlight save %i | The watchdog did not stop! line is normal behavior. systemd sets a " hardware watchdog " timer as a failsafe, to ensure that if the normal shutdown process freezes/fails that the computer will still shutdown after the specified period of time. This time period is defined in the variable ShutdownWatchdogSec= in the file /etc/systemd/system.conf . Here is the description from the docs : RuntimeWatchdogSec=, ShutdownWatchdogSec= Configure the hardware watchdog at runtime and at reboot. Takes a timeout value in seconds (or in other time units if suffixed with
"ms", "min", "h", "d", "w"). If RuntimeWatchdogSec= is set to a
non-zero value, the watchdog hardware (/dev/watchdog) will be
programmed to automatically reboot the system if it is not contacted
within the specified timeout interval. The system manager will ensure
to contact it at least once in half the specified timeout interval.
This feature requires a hardware watchdog device to be present, as it
is commonly the case in embedded and server systems. Not all hardware
watchdogs allow configuration of the reboot timeout, in which case the
closest available timeout is picked. ShutdownWatchdogSec= may be used
to configure the hardware watchdog when the system is asked to reboot.
It works as a safety net to ensure that the reboot takes place even if
a clean reboot attempt times out. By default RuntimeWatchdogSec=
defaults to 0 (off), and ShutdownWatchdogSec= to 10min. These settings
have no effect if a hardware watchdog is not available. It sounds likely, as you indicated, that your actual problem is related to changing ACPI settings. The answers on this Debian forum thread suggest the following: 1) Edit the file at /etc/default/grub and edit the GRUB_CMDLINE_LINUX line to look like this: GRUB_CMDLINE_LINUX="reboot=bios" 2) run: update-grub If reboot=bios doesn't work, they suggest retrying with reboot=acpi Do either of these work for you? | {
"source": [
"https://unix.stackexchange.com/questions/249654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140154/"
]
} |
249,701 | I'm not that versed in unix and I cannot have Java, so YUI Compressor does not apply, but I have this known Minify tool , which will get a minified version of a JS/CSS file from a specific URI /min/?f=/path/to/file.js.css Which unix commands may I use, using such method, to minify all the js/css files on the public_html folder, replacing all js/css files by their minified versions? | After searching and implementing it, I give the answer here through a bash file. I use the npm packages uglifyjs and uglifycss for compressing JS and CSS files respectively. I use command find to loop through those files. I assume js and css files are in a js/ and a css/ folder respectively. #minification of JS files
find js/ -type f \
-name "*.js" ! -name "*.min.*" ! -name "vfs_fonts*" \
-exec echo {} \; \
-exec uglifyjs -o {}.min {} \;
#minification of CSS files
find css/ -type f \
-name "*.css" ! -name "*.min.*" \
-exec echo {} \; \
-exec uglifycss --output {}.min {} \; This will minify all js and css files in respective js/ and css/ directories. If you want to exclude some particular folders or patterns within them use the option ! -name If you want to simply replace the minified file by the original file, that is, removing original file: #minification of JS files
find js/ -type f \
-name "*.js" ! -name "*.min.*" ! -name "vfs_fonts*" \
-exec echo {} \; \
-exec uglifyjs -o {}.min {} \; \
-exec rm {} \; \
-exec mv {}.min {} \;
#minification of CSS files
find css/ -type f \
-name "*.css" ! -name "*.min.*" \
-exec echo {} \; \
-exec uglifycss --output {}.min {} \; \
-exec rm {} \; \
-exec mv {}.min {} \;
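If the two minifiers are not installed yet, they come from npm; assuming a global install is acceptable on your host, something like the following provides the uglifyjs and uglifycss commands used above: npm install -g uglify-js uglifycss | {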
"source": [
"https://unix.stackexchange.com/questions/249701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147671/"
]
} |
249,723 | Given a command that changes its behaviour when its output is going to a terminal (e.g. produce coloured output), how can that output be redirected in a pipeline while preserving the changed behaviour? There must be a utility for that, which I am not aware of. Some commands, like grep --color=always , have option flags to force the behaviour, but the question is how to work around programs that rely solely on testing their output file descriptor. If it matters, my shell is bash on Linux. | You might get what you need by using unbuffer . unbuffer is a tcl / expect script. Look at the source if you want. Also note the CAVEATS section in man. Also note that it does not execute aliases such as: alias ls='ls --color=auto' unless one add a trick as noted by Stéphane Chazelas: If you do a alias unbuffer='unbuffer ' (note the trailing space), then aliases will be expanded after unbuffer . | {
"source": [
"https://unix.stackexchange.com/questions/249723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147685/"
]
} |
249,804 | If I want to make the contents of file2 match the contents of file1 , I could obviously just run cp file1 file2 . However, if I want to preserve everything about file2 except the contents—owner, permissions, extended attributes, ACLs, hard links, etc., etc., then I wouldn't want to run cp .* In that case I just want to plop the contents of file1 into file2 . It seems like the following would do it: < file1 > file2 But it doesn't work. file2 is truncated to nothing and not written to. However, cat < file1 > file2 does work. It surprised me that the first version doesn't work. Is the second version a UUOC? Is there a way to do this without invoking a command, merely by using redirections? Note: I'm aware that UUOC is more of a pedantic point than a true anti-pattern. * As tniles09 discovered , cp will in fact work in this case. | cat < file1 > file2 is not a UUOC. Classically, < and > do redirections which correspond to file descriptor duplications at the system level.
File descriptor duplications by themselves don’t do a thing (well, > redirections open with O_TRUNC , so to be accurate, output redirections do truncate the output file). Don’t let the < > symbols confuse you. Redirections don’t move data—they assign file descriptors to other file descriptors. In this case you open file1 and assign that file descriptor to file descriptor 0 ( <file1 == 0<file1 ) and file2 and assign that file descriptor to file descriptor 1 ( >file2 == 1>file2 ). Now that you’ve got two file descriptors, you need a process to shovel data between the two—and that’s what cat is for. | {
"source": [
"https://unix.stackexchange.com/questions/249804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
249,869 | I have the following script to launch a MySQL process: if [ "${1:0:1}" = '-' ]; then
set -- mysqld_safe "$@"
fi
if [ "$1" = 'mysqld_safe' ]; then
DATADIR="/var/lib/mysql"
... What does 1:0:1 mean in this context? | It's a test for a - dashed argument option, apparently. It's a little strange, really. It uses a non-standard bash expansion in an attempt to extract the first and only the first character from $1 . The 0 is the head character index and the 1 is string length. In a [ test like that it might also be: [ " -${1#?}" = " $1" ] Neither comparison is particularly suited to test though, as it interprets - dashed arguments as well - which is why I use the leading space there. The best way to do this kind of thing - and the way it is usually done - is : case $1 in -*) mysqld_safe "$@"; esac | {
"source": [
"https://unix.stackexchange.com/questions/249869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119023/"
]
} |
249,874 | Let's say I have two files, file1 and file2 . file1 : passd:xxx
hopla:alli
gnar:gungg
araf:utap file2 : alli
utap How can i check what lines/words from file2 match file? Indeed I could probably do it with comm -1 -2 file1 file2 but is it possible to do it with awk? | It's a test for a - dashed argument option, apparently. It's a little strange, really. It uses a non-standard bash expansion in an attempt to extract the first and only the first character from $1 . The 0 is the head character index and the 1 is string length. In a [ test like that it might also be: [ " -${1#?}" = " $1" ] Neither comparison is particularly suited to test though, as it interprets - dashed arguments as well - which is why I use the leading space there. The best way to do this kind of thing - and the way it is usually done - is : case $1 in -*) mysqld_safe "$@"; esac | {
"source": [
"https://unix.stackexchange.com/questions/249874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144031/"
]
} |
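For the awk part of the question above, a common idiom (assuming file1 uses : as the field separator and file2 holds one word per line, as in the sample data) is:
awk -F: 'NR==FNR { words[$0]; next } $2 in words' file2 file1
It prints the lines of file1 whose second field appears in file2, here hopla:alli and araf:utap.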
249,931 | I have a file called -l in my directory. Now I tried to do for i in *; do stat -c "%s %n" "$i"; done and it lists all files with sizes, but in the middle of the output there is something like 395 koko.pub
stat: invalid option -- 'l'
Try 'stat --help' for more information.
2995974 list.txt so it cannot process -l as a normal filename. How do I get the desired behavior from stat ? | Use ./ before the filename: for i in *; do stat -c "%s %n" "./$i"; done Or use -- to indicate the end of options for stat : for i in *; do stat -c "%s %n" -- "$i"; done Though that one will still fail for a file called - (it will report information for the file open on stdin instead of the - file in the current directory). | {
"source": [
"https://unix.stackexchange.com/questions/249931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147765/"
]
} |
250,263 | Suppose I search for a package to install using nix-env 's --query operation: $ nix-env -qa 'aspell.*en'
aspell-dict-en-7.1-0 I write this package name in /etc/nixos/configuration.nix , NixOS's main configuration file : environment.systemPackages = with pkgs; [
aspell-dict-en
]; Yet if I run sudo nixos-rebuild switch , the NixOS command to update the configuration and install all system-wide packages specified declaratively, it aborts with an error: error: undefined variable ‘aspell-dict-en’ at /etc/nixos/configuration.nix:44:5 I know that for many packages, although not all, the name that nix-env returns and the name one should specify in the environment.systemPackages configuration option are different, but I don't understand the logic. How do I install a package that I found through nix-env ? | The NixOS community has three manuals; always consult them first if you're stuck: Nix manual , for the package manager NixOS manual , for the operating system Nixpkgs manual , for Nix package infrastructure Every package on Nix is specified by a Nix expression. A Nix expression is some text, written in the Nix language, typically residing in a file with the extension .nix . Every expression has the so-called “symbolic name”, a human-readable name that is printed when you use nix-env . See sample Nix expression . Nix itself doesn't use this symbolic name anywhere internally, so it doesn't matter if your package is named aspell-dict-en ; it's there just for your convenience as a human. What actually matters is the so-called “attribute path”. So your confusion is between symbolic name and attribute path. Every package has an attribute path, which you can use in the environment.systemPackages configuration option to install system-wide using declarative package management . To find out your package's attribute path, add another flag -P to your query: $ nix-env -qaP 'aspell.*en'
nixos.aspellDicts.en aspell-dict-en-7.1-0 You should be comfortable using nix-env on a daily basis, so practice calling nix-env with the --query and --install options. However, you can also browse packages and find out their attribute paths online on Nix packages search . Type aspell , click on aspell-dict-en and you'll see various package properties, including the attribute path as part of the install command: $ nix-env -iA nixos.pkgs.aspellDicts.en Now you can put this attribute path into /etc/nixos/configuration.nix : environment.systemPackages = with pkgs; [
aspellDicts.en
]; Then update the system by running sudo nixos-rebuild switch . | {
"source": [
"https://unix.stackexchange.com/questions/250263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
250,681 | In bash you can use exec -a , and in zsh you can alternatively set ARGV0 , to execute a program with a certain zeroth argument, but is there also a POSIX way of doing so? As suggested in this one comment, you could create a (temporary) symbolic link to achieve this, but that way I couldn't set the new zeroth argument to truly any arbitrary value, e.g. a command with a certain absolute path. So is there any other solution? | No, there's no POSIX way, other than compiling a C program that does it. As a quick and dirty one: $ echo 'int main(int c,char*v[]){
execvp(v[1],&v[2]);perror(v[1]);return 127;}'>r.c && make r
$ ./r ps zzz -f
UID PID PPID C STIME TTY TIME CMD
chazelas 7412 7411 0 10:44 pts/4 00:00:00 /bin/zsh
chazelas 21187 7412 0 22:33 pts/4 00:00:00 zzz -f exec -a is supported by ksh93 , bash , zsh , busybox ash (since version 1.27.0), yash , mksh ( since version r50e ), the Schily Bourne Shell (since August 2015) so is the most widespread among shells. Probably the most portable would be to resort to perl which is more likely to be available than a C compiler. $ perl -e 'exec {shift} @ARGV' ps zzz -f
UID PID PPID C STIME TTY TIME CMD
chazelas 7554 7411 0 10:58 pts/12 00:00:00 /bin/zsh
chazelas 7630 7554 0 11:02 pts/12 00:00:00 zzz -f | {
"source": [
"https://unix.stackexchange.com/questions/250681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117599/"
]
} |
250,688 | No OS is installed on my system. So I am trying to install bunsenlabs linux from BIOS through live usb, but when I click on install option I get an error saying CD is not inserted in CD-ROM. I want to install the linux in hard disk but not in CD, please help. | No, there's no POSIX way, other than compiling a C program that does it. As a quick and dirty one: $ echo 'int main(int c,char*v[]){
execvp(v[1],&v[2]);perror(v[1]);return 127;}'>r.c && make r
$ ./r ps zzz -f
UID PID PPID C STIME TTY TIME CMD
chazelas 7412 7411 0 10:44 pts/4 00:00:00 /bin/zsh
chazelas 21187 7412 0 22:33 pts/4 00:00:00 zzz -f exec -a is supported by ksh93 , bash , zsh , busybox ash (since version 1.27.0), yash , mksh ( since version r50e ), the Schily Bourne Shell (since August 2015) so is the most widespread among shells. Probably the most portable would be to resort to perl which is more likely to be available than a C compiler. $ perl -e 'exec {shift} @ARGV' ps zzz -f
UID PID PPID C STIME TTY TIME CMD
chazelas 7554 7411 0 10:58 pts/12 00:00:00 /bin/zsh
chazelas 7630 7554 0 11:02 pts/12 00:00:00 zzz -f | {
"source": [
"https://unix.stackexchange.com/questions/250688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147499/"
]
} |
250,690 | I want to delete a word with Ctrl + W in zsh, like this: vim /foo/bar^W
vim /foo/ I found a solution for bash , but bind is not available in zsh. Is it possible to configure ctrl-w (delete word)? How can I configure Ctrl + W as a delete-word? | Here's a snippet from my .zshrc that I've been using: my-backward-delete-word() {
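    # remove / from WORDCHARS just for this widget, so Ctrl+W stops at path separators instead of deleting the whole path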
local WORDCHARS=${WORDCHARS/\//}
zle backward-delete-word
}
zle -N my-backward-delete-word
bindkey '^W' my-backward-delete-word I recall this was the original source: http://www.zsh.org/mla/users/2001/msg00870.html | {
"source": [
"https://unix.stackexchange.com/questions/250690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
250,740 | I have the following pattern in a string (an IP address): 123.444.888.235 I want to replace the last number after the dot with 0 , so it becomes: 123.444.888.0 How could I do it in bash or another shell script language? | In any POSIX shell: var=123.444.888.235
new_var="${var%.*}.0" ${var%pattern} is an operator introduced by ksh in the 80s, standardized by POSIX for the standard sh language and now implemented by all shells that interpret that language, including bash . ${var%pattern} expands to the content of $var stripped of the shortest string that matches pattern off the end of it (or to the same as $var if that pattern doesn't match). So ${var%.*} (where .* is a pattern that means dot followed by any number of characters) expands to $var without the right-most . and what follows it. By contrast, ${var%%.*} where the longest string that matches the pattern is stripped would expand to $var without the left-most . and what follows it. | {
"source": [
"https://unix.stackexchange.com/questions/250740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148294/"
]
} |
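A short sketch contrasting the two operators described in the answer above (same example value):
var=123.444.888.235
echo "${var%.*}"      # 123.444.888  : shortest suffix matching .* removed
echo "${var%%.*}"     # 123          : longest suffix matching .* removed
echo "${var%.*}.0"    # 123.444.888.0, the result the question asked for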
250,818 | An essential convenience feature every reasonable shell provides is command / file name completion (usually triggered by pressing Tab ). I miss it heavily when I use command line in Midnight Commander. Is there a way to use it (other than by hiding the panels with Ctrl + O )? | You just need to prepend it with Esc : Esc-Tab does completion, and it will even give you a tiny dropdown if you do it twice. (That being said, you probably won't get the more fancy expansion possibilities of some shells.) | {
"source": [
"https://unix.stackexchange.com/questions/250818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
250,913 | In most shell scripts I've seen (besides ones I haven't written myself), I noticed that the shebang is set to #!/bin/sh . This doesn't really surprise me on older scripts, but it's there on fairly new scripts, too. Is there any reason for preferring /bin/sh over /bin/bash , since bash is pretty much ubiquitous, and often default, on many Linux and BSD machines going back well over a decade? | There are systems not shipping bash by default (e.g. FreeBSD). Even if bash is installed, it might not be located in /bin . Most simple scripts don't require bash. Using the POSIX shell is more portable and the scripts will run on a greater variety of systems. | {
"source": [
"https://unix.stackexchange.com/questions/250913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19463/"
]
} |
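A small demonstration of the portability point above, assuming dash is installed as a strictly POSIX-style shell (as it is for /bin/sh on Debian and Ubuntu); the snippets are illustrative only:
dash -c 'x=(1 2 3); echo "${x[0]}"'   # fails with a syntax error: arrays are a bashism
dash -c '[[ -n hello ]] && echo ok'   # fails: [[ ]] is a bash/ksh extension, not POSIX sh
bash -c 'x=(1 2 3); echo "${x[0]}"'   # prints 1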
250,938 | I have a USB ADC/DAC and a HASP-protected proprietary data acquisition system for it, neither of which works in Linux. I am trying to make them work in a Windows virtual machine using qemu.
Here are the devices: $ lsusb
...
Bus 003 Device 011: ID 0529:0001 Aladdin Knowledge Systems HASP copy protection dongle
Bus 003 Device 010: ID 16b2:1001
$ ls -l /dev/bus/usb/003
...
crw-rw-r-- 1 root qemu 189, 265 дек 22 18:29 010
crw-rw-rw- 1 root qemu 189, 266 дек 22 18:29 011 My user is a member of qemu group.
Qemu command line: qemu-system-x86_64 \
-enable-kvm \
-m 2G \
-device usb-ehci,id=usb,bus=pci.0,addr=0x4 \
--device usb-host,vendorid=0x16b2,productid=0x1001 \ # ADC/DAC
-device piix3-usb-uhci,id=usb1,bus=pci.0,addr=0x5 \
--device usb-host,vendorid=0x0529,productid=0x0001 \ # HASP
-usbdevice tablet \
-net nic \
-net bridge,br=br0 \
-vga qxl \
-spice port=5930,disable-ticketing \
-device virtio-serial-pci \
-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
-chardev spicevmc,id=spicechannel0,name=vdagent \
-drive file=/mnt/data/win-patch.img,if=virtio The problem is, both devices are showing up in the guest, but do not work. The ADC/DAC should identify as a USB block drive, and shows up as one in the device list, but doesn't work. I've installed HASP drivers for my dongle on the guest system, but the DAS software doesn't recognize it. What am I doing wrong? | I finally got help on another forum. The issue seems to be with the USB bus implementation in the I440FX chipset that qemu emulates by default (details here ). The workaround is emulating the ICH9 chipset instead. This is done by adding the -M q35 parameter. I also changed the way the USB devices are specified, and the final command line looks like this: qemu-system-x86_64 \
-enable-kvm \
-M q35 \
-m 2G \
-usb -usbdevice host:16b2:1001 \
-usb -usbdevice host:0529:0001 \
-usbdevice tablet \
-net nic \
-net bridge,br=br0 \
-vga qxl \
-spice port=5930,disable-ticketing \
-device virtio-serial-pci \
-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
-chardev spicevmc,id=spicechannel0,name=vdagent \
-drive file=/mnt/data/win-patch.img,if=virtio Everything works perfectly now. Update for 2019: the -usbdevice option was deprecated; you can achieve the same by replacing it with -usb -device and specifying the vendor and product IDs as hexadecimal numbers, like so: qemu-system-x86_64 \
-enable-kvm \
-M q35 \
-m 2G \
-usb -device usb-host,vendorid=0x16b2,productid=0x1001 \
-usb -device usb-host,vendorid=0x0529,productid=0x0001 \
-usbdevice tablet \
-net nic \
-net bridge,br=br0 \
-vga qxl \
-spice port=5930,disable-ticketing \
-device virtio-serial-pci \
-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
-chardev spicevmc,id=spicechannel0,name=vdagent \
-drive file=/mnt/data/win-patch.img,if=virtio reference: https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/usb2.txt;hb=HEAD | {
"source": [
"https://unix.stackexchange.com/questions/250938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148488/"
]
} |
251,013 | I'm trying to match multiple alphanumeric values (this number could vary) from a string and save them to a bash capture group array. However, I'm only getting the first match: mystring1='<link rel="self" href="/api/clouds/1/instances/1BBBBBB"/> dsf <link rel="self" href="/api/clouds/1/instances/2AAAAAAA"/>'
regex='/instances/([A-Z0-9]+)'
[[ $mystring1 =~ $regex ]]
echo ${BASH_REMATCH[1]}
1BBBBBB
echo ${BASH_REMATCH[2]} As you can see- it matches the first value I'm looking for, but not the second. | It's a shame that you can't do global matching in bash. You can do this: global_rematch() {
local s=$1 regex=$2
while [[ $s =~ $regex ]]; do
echo "${BASH_REMATCH[1]}"
s=${s#*"${BASH_REMATCH[1]}"}
done
}
global_rematch "$mystring1" "$regex" 1BBBBBB
2AAAAAAA This works by chopping the matched prefix off the string so the next part can be matched. It destroys the string, but in the function it's a local variable, so who cares. I would actually use that function to populate an array: $ mapfile -t matches < <( global_rematch "$mystring1" "$regex" )
$ printf "%s\n" "${matches[@]}"
1BBBBBB
2AAAAAAA | {
"source": [
"https://unix.stackexchange.com/questions/251013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143799/"
]
} |
251,054 | I would expect xdg-open command to use the same application that opens when I double-click the file in the default file manager, but this is not always true. For example my DE is XFCE, my file manager is Thunar and my default picture viewer is Ristretto. However, xdg-open example.png opens the example PNG file in Pinta. Why? | xdg-open is a desktop-independent tool for configuring the default
applications of a user. Many applications invoke the xdg-open command
internally. Inside a desktop environment (like GNOME, KDE, or Xfce),
xdg-open simply passes the arguments to those desktop environment's
file-opener application (eg. gvfs-open, kde-open, or exo-open). which
means that the associations are left up to the desktop environment.
When no desktop environment is detected (for example when one runs a
standalone window manager like eg. Openbox), xdg-open will use its own
configuration files. from archwiki Specific to your question, you could try this to set the default application associated with the png file: xdg-mime default <ristretto.desktop> image/png You need to find out the exact desktop file name of Ristretto.
Afterwards, you can check it with this: xdg-mime query default image/png | {
"source": [
"https://unix.stackexchange.com/questions/251054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
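To find the exact desktop file name needed for the xdg-mime command above, one option is to search the standard application directories (paths can vary by distribution):
ls /usr/share/applications ~/.local/share/applications 2>/dev/null | grep -i ristretto
On Xfce the file is often simply ristretto.desktop, in which case the command becomes: xdg-mime default ristretto.desktop image/png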
251,090 | An existing directory is needed as a mount point . $ ls
$ sudo mount /dev/sdb2 ./datadisk
mount: mount point ./datadisk does not exist
$ mkdir datadisk
$ sudo mount /dev/sdb2 ./datadisk
$ I find it confusing since it overlays existing contents of the directory. There are two possible contents of the mount point directory which may get switched unexpectedly (for a user who is not performing the mount). Why doesn’t mount happen into a newly created directory? This is the way how graphical operating systems display removable media. It would be clear if the directory is mounted (exists) or not mounted (does not exist). I am pretty sure there is a good reason but I haven’t been able to discover it yet. | This is a case of an implementation detail that has leaked. In a UNIX system, every directory consists of a list of names mapped to inode numbers. An inode holds metadata which tells the system whether it is a file, directory, special device, named pipe, etc. If it is a file or directory it also tells the system where to find the file or directory contents on disk. Most inodes are files or directories. The -i option to ls will list inode numbers. Mounting a filesystem takes a directory inode and sets a flag on the kernel's in-memory copy to say "actually, when looking for the contents of this directory look at this other filesystem instead" (see slide 10 of this presentation ). This is relatively easy as it's changing a single data item. Why doesn't it create a directory entry for you pointing at the new inode instead? There are two ways you could implement that, both of which have disadvantages. One is to physically write a new directory into the filesystem - but that fails if the filesystem is readonly! The other is to add to every directory listing process a list of "extra" things that aren't really there. This is fiddly and potentially incurs a small performance hit on every file operation. If you want dynamically-created mount points, the automount system can do this. Special non-disk filesystems can also create directories at will, e.g. proc , sys , devfs and so on. Edit: see also the answer to What happens when you 'mount over' an existing folder with contents? | {
"source": [
"https://unix.stackexchange.com/questions/251090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54675/"
]
} |
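A small demonstration of the answer's point that mounting only redirects the directory inode in the kernel's view (device and paths are examples only):
$ stat -c %i ./datadisk        # inode number of the empty directory on the parent filesystem
$ sudo mount /dev/sdb2 ./datadisk
$ stat -c %i ./datadisk        # now reports the root inode of the mounted filesystem (typically 2 on ext4)
$ sudo umount ./datadisk       # the original inode number reappears; the parent filesystem was never modified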
251,159 | How can we concatenate results from stdout (or stderr) and a file into a final file? For example, concatenate the output of ls -a | grep text1 with file2.txt into a final result (not into file2.txt ), without storing the grep text1 output in something intermediate such as grep text1 > file1.txt | ls -a | grep text1 | cat file2.txt - The - stands for standard input. Alternatively you may write ls -a | grep text1 | cat - file2.txt to have the output in a different order. Yet another possibility using process substitution: cat <(ls -a | grep text1) file2.txt or in a different order: cat file2.txt <(ls -a | grep text1) | {
"source": [
"https://unix.stackexchange.com/questions/251159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144104/"
]
} |
251,163 | I'm installing "Red Hat Enterprise Linux 7.2 (Linux version 3.10.0-327.el7.x86_64 ([email protected]) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu Oct 29 17:29:29 EDT 2015)" I am trying to switch from LANG="en_US.UTF-8" to LANG="en_US" as we need to operate the OS in 8 bits ASCII mode. I have tried to change /etc/locale.conf and reboot. It doesn't work for gnome. For instance, when I try to launch a terminal session, I get this error: Dec 23 14:27:56 cmt22 gnome-session: Error constructing proxy for org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling StartServiceByName for org.gnome.Terminal: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process /usr/libexec/gnome-terminal-server exited with status 8 Accordingly to gnome documentation , it says the locale is not defined but localectl list-locales shows it is defined. | ls -a | grep text1 | cat file2.txt - The - stands for standard input. Alternatively you may write ls -a | grep text1 | cat - file2.txt to have the output in different order. Yet another possibility using process substitution: cat <(ls -a | grep text1) file2.txt or in different order: cat file2.txt <(ls -a | grep text1) | {
"source": [
"https://unix.stackexchange.com/questions/251163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148636/"
]
} |
251,388 | I have run into a problem trying to write a Bash script. When grep produces output, it (usually) returns many lines. I would like to add a prefix string and a suffix string to each of these output lines. I would also like to note that I'm piping ls into grep , like: ls | grep | With sed: ls | grep txt | sed 's/.*/prefix&suffix/' | {
"source": [
"https://unix.stackexchange.com/questions/251388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146386/"
]
} |
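In the sed expression above, the pattern .* matches the whole line and & in the replacement stands for whatever was matched, which is why each line gets wrapped. An equivalent awk form, if preferred (prefix/suffix are placeholders):
ls | grep txt | awk '{print "prefix" $0 "suffix"}'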
251,691 | Why is this not possible? pv ${dest_file} | gzip -1 ( pv is a progress bar.) The error: gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
0 B 0:00:00 [ 0 B/s] [> ] 0% This works: pv ${file_in} | tar -Jxf - -C /outdir | What you are trying to achieve is to see a progress bar for the compression process. But that is not possible using pv . It shows only transfer progress, which you can achieve with something like this (anyway, it is the first link on Google): pv input_file | gzip > compressed_file The progress bar will run fast, and then it will wait for compression, which is not observable anymore using pv . But you can do it the other way round and watch the output stream, but here you will not be able to see the actual progress, because pv does not know the actual size of the compressed file: gzip <input_file | pv > compressed_file The best I found so far is the one from commandlinefu, even with rate limiting and compression of directories: D=directory
tar pcf - $D | pv -s $(du -sb $D | awk '{print $1}') --rate-limit 500k | gzip > target.tar.gz | {
"source": [
"https://unix.stackexchange.com/questions/251691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
251,893 | For example, I have a variable: env_name="GOPATH" Now I want to get the environment variable GOPATH as if like this: echo $GOPATH How can I get $GOPATH by $env_name ? | Different shells have different syntax for achieving this. In bash , you use variable indirection : printf '%s\n' "${!env_name}" In ksh , you use nameref aka typeset -n : nameref env_name=GOPATH
printf '%s\n' "$env_name" In zsh , you use P parameter expansion flag : print -rl -- ${(P)env_name} In other shell, you must use eval , which put you under many security implications if you're not sure the variable content is safe: eval "echo \"\$$name_ref\"" | {
"source": [
"https://unix.stackexchange.com/questions/251893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109888/"
]
} |
251,902 | I've set up an encrypted home directory for user piranha3: root@raspberrypi:~# ecryptfs-verify -u piranha3 -h
INFO: [/home/piranha3/.ecryptfs] exists
INFO: [/home/piranha3/.ecryptfs/Private.sig] exists
INFO: [/home/piranha3/.ecryptfs/Private.sig] contains [2] signatures
INFO: [/home/piranha3/.ecryptfs/Private.mnt] exists
INFO: [/home/piranha3] is a directory
INFO: [/home/piranha3/.ecryptfs/auto-mount] Automount is set
INFO: Mount point [/home/piranha3] is the user's home
INFO: Ownership [piranha3] of mount point [/home/piranha3] is correct
INFO: Configuration valid But after piranha3 logs out, the directory is not unmounted: root@raspberrypi:~# mount | grep ecryptfs
/home/.ecryptfs/piranha3/.Private on /home/piranha3 type ecryptfs (rw,nosuid,nodev,relatime,ecryptfs_fnek_sig=729061d7fa17b3a4,ecryptfs_sig=eb5ec4d9c13e2d74,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs) lsof output: lsof: WARNING: can't stat() cifs file system /media/cifs
Output information may be incomplete.
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete. System Information: root@raspberrypi:~# dpkg -l ecryptfs-utils
Deseado=desconocido(U)/Instalar/eliminaR/Purgar/retener(H)
| Estado=No/Inst/ficheros-Conf/desempaqUetado/medio-conF/medio-inst(H)/espera-disparo(W)/pendienTe-disparo
|/ Err?=(ninguno)/requiere-Reinst (Estado,Err: mayúsc.=malo)
||/ Nombre Versión Arquitectura Descripción
+++-========================-=================-=================-======================================================
ii ecryptfs-utils 103-5 armhf ecryptfs cryptographic filesystem (utilities)
root@raspberrypi:~# uname -a
Linux raspberrypi 4.1.13-v7+ #826 SMP PREEMPT Fri Nov 13 20:19:03 GMT 2015 armv7l GNU/Linux And finally about PAM: root@raspberrypi:~# grep -r ecryptfs /etc/pam.d
/etc/pam.d/common-session:session optional pam_ecryptfs.so unwrap
/etc/pam.d/common-password:password optional pam_ecryptfs.so
/etc/pam.d/common-auth:auth optional pam_ecryptfs.so unwrap
/etc/pam.d/common-session-noninteractive:session optional pam_ecryptfs.so unwrap Why is the /home/piranha3 directory not unmounted? | Different shells have different syntax for achieving this. In bash , you use variable indirection : printf '%s\n' "${!env_name}" In ksh , you use nameref aka typeset -n : nameref env_name=GOPATH
printf '%s\n' "$env_name" In zsh , you use P parameter expansion flag : print -rl -- ${(P)env_name} In other shell, you must use eval , which put you under many security implications if you're not sure the variable content is safe: eval "echo \"\$$name_ref\"" | {
"source": [
"https://unix.stackexchange.com/questions/251902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47954/"
]
} |
251,969 | Someone sent me a ZIP file containing files with Hebrew names (and created on Windows, not sure with which tool). I use LXDE on Debian Stretch. The Gnome archive manager manages to unzip the file, but the Hebrew characters are garbled. I think I'm getting UTF-8 octets extended into Unicode characters, e.g. I have a file whose name has four characters and a .doc suffic, and the characters are: 0x008E 0x0087 0x008E 0x0085 . Using the command-line unzip utility is even worse - it refuses to decompress altogether, complaining about an "Invalid or incomplete multibyte or wide character". So, my questions are: Is there another decompression utility that will decompress my files with the correct names? Is there something wrong with the way the file was compressed, or is it just an incompatibility of ZIP implementations? Or even misfeature/bug of the Linux ZIP utilities? What can I do to get the correct filenames after having decompressed using the garbled ones? | It sounds like the filenames are encoded in one of Windows' proprietary codepages ( CP862 , 1255 , etc). Is there another decompression utility that will decompress my files with the correct names? I'm not aware of a zip utility that supports these code pages natively. 7z has some understanding of encodings, but I believe it has to be an encoding your system knows about more generally (you pick it by setting the LANG environment variable) and Windows codepages likely aren't among those. unzip -UU should work from the command line to create files with the correct bytes in their names (by disabling all Unicode support). That is probably the effect you got from GNOME's tool already. The encoding won't be right either way, but we can fix that below. Is there something wrong with the way the file was compressed, or is it just an incompatibility of ZIP implementations? Or even misfeature/bug of the Linux ZIP utilities? The file you've been given was not created portably. That's not necessarily wrong for an internal use where the encoding is fixed and known in advance, although the format specification says that names are supposed to be either UTF-8 or cp437 and yours are neither. Even between Windows machines, using different codepages doesn't work out well, but non-Windows machines have no concept of those code pages to begin with. Most tools UTF-8 encode their filenames (which still isn't always enough to avoid problems). What can I do to get the correct filenames after having decompressed using the garbled ones? If you can identify the encoding of the filenames, you can convert the bytes in the existing names into UTF-8 and move the existing files to the right name. The convmv tool essentially wraps up that process into a single command: convmv -f cp862 -t utf8 -r . will try to convert everything inside . from cp862 to UTF-8. Alternatively, you can use iconv and find to move everything to their correct names. Something like: find -mindepth 1 -exec sh -c 'mv "$1" "$(echo "$1" | iconv -f cp862 -t utf8)"' sh {} \; will find all the files underneath the current directory and try to convert the names into UTF-8. In either case, you can experiment with different encodings and try to find one that makes sense. After you've fixed the encoding for you, if you want to send these files back in the other direction it's possible you'll have the same problem on the other end. In that case, you can reverse the process before zipping the files up with -UU , since it's likely to be very hard to fix on the Windows end. | {
"source": [
"https://unix.stackexchange.com/questions/251969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
252,016 | The second field in the Linux /etc/shadow file represents a password. However, what we have seen is that: Some of the password fields may have a single exclamation <account>:!:..... Some of the password fields may have a double exclamation <account>:!!:..... Some of the password fields may have an asterisk sign <account>:*:..... From some research on the internet and through this thread , I understand that * means the password was never established, and ! means locked. Can someone explain what a double exclamation ( !! ) means, and how it is different from ( ! )? | Both "!" and "!!" being present in the password field mean an account is locked. As can be read in the following document, "!!" in an account entry in shadow means the account of a user has been created, but not yet given a password. Until being given an initial password by a sysadmin, it is locked by default. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/System_Administration_Guide/s2-redhat-config-users-process.html | {
"source": [
"https://unix.stackexchange.com/questions/252016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95872/"
]
} |
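A quick way to observe the !! state described above on a RHEL-style system (the user name is made up; run as root):
useradd demo_user
grep '^demo_user:' /etc/shadow   # the second field is !! until a password is set
passwd -S demo_user              # reports the account as locked (LK)
passwd demo_user                 # assigning a password replaces !! with a hash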
252,349 | I am familiar with the kill command, and most of the time we just use kill -9 to kill a process forcefully, though there are many other signals that can be used with kill . But I wonder what the use cases of pkill and killall are, if there is already a kill command. Do pkill and killall use the kill command in their implementation? I mean, are they just wrappers over kill or do they have their own implementations? I would also like to know how the pgrep command gets the process id from the process name. Do all these commands use the same underlying system calls? Is there any difference from a performance point of view; which one is faster? | The kill command is a very simple wrapper around the kill system call , which knows only about process IDs (PIDs). pkill and killall are also wrappers around the kill system call (actually, around the libc wrapper which directly invokes the system call), but they can determine the PIDs for you, based on things like process name, owner of the process, session id, etc. How pkill and killall work can be seen by using ltrace or strace on them. On Linux, they both read through the /proc filesystem, and for each pid (directory) found, traverse the path in a way that identifies a process by its name or other attributes. How this is done is, technically speaking, kernel- and system-specific. In general, they read from /proc/<PID>/stat which contains the command name as the 2nd field. With the -f option, pkill and pgrep examine the cmdline entry of each PID's proc directory instead. pkill and pgrep use the readproc library routine, whereas killall does not. I couldn't say if there's a performance difference: you'll have to benchmark that on your own. | {
"source": [
"https://unix.stackexchange.com/questions/252349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148778/"
]
} |
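To poke at the /proc data the answer describes, a quick interactive sketch (PIDs and the shell name will differ):
$ cut -d' ' -f2 /proc/$$/stat          # prints (bash) when run from bash: the comm field that pgrep/pkill match by default
$ tr '\0' ' ' < /proc/$$/cmdline; echo # the full command line, which pgrep -f and pkill -f match instead
$ pgrep -x bash                        # PIDs of every process whose name is exactly bash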