205,016
While I am connecting to my server I get: -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: Resource temporarily unavailable I tried the following commands as well, and the result is the same. -bash-4.1$ df -h -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: Resource temporarily unavailable -bash-4.1$ -bash-4.1$ ls -lrth -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: Interrupted system call -bash-4.1$ -bash-4.1$ ps -aef | grep `pwd` -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: retry: Resource temporarily unavailable -bash: fork: Resource temporarily unavailable -bash-4.1$ Why is this happening, and how can I resolve it?
This could be due to some resource limit, either on the server itself or specific to your user account. Limits in your shell can be checked via ulimit -a . Especially check ulimit -u (max user processes): if you have reached the maximum number of processes, fork is unable to create any new ones and fails with that error. This could also be due to a swap/memory resource issue.
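As a quick check (a sketch, assuming a GNU/Linux userland), you can compare your current process count against the limit:

ulimit -u                       # max user processes allowed for this shell
ps -u "$USER" -o pid= | wc -l   # rough count of processes currently owned by you

If the second number is at or near the first, something (often a fork bomb or a runaway script) has exhausted your process quota.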
{ "source": [ "https://unix.stackexchange.com/questions/205016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105701/" ] }
205,076
I'm using the timeout command on Debian to wait 5 seconds for my script. It works great, but the problem is that I need a return value, like 1 for a timeout and 0 for no timeout. How can I do this? Have a look at my code: timeout 5 /some/local/script/connect_script -x 'status' > output.txt # here I need the return value of timeout As you can see, my connect_script -x 'status' returns the status as a string and prints it to the screen (you probably can't see this). The background of this issue is that if the server (for connect_script) freezes, the script does nothing. That's why I need the timeout around it. And when it times out I want to restart the server. I can do that, but I have no idea how I can tell whether it timed out or not...
If timeout times out, it exits with status 124 ; you can check this to determine whether the script timed out or not.
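For example (a minimal sketch built around the command from the question):

timeout 5 /some/local/script/connect_script -x 'status' > output.txt
if [ $? -eq 124 ]; then
    echo "connect_script timed out"   # restart the server here
else
    echo "connect_script finished in time"
fi

Note that 124 is the exit status used by GNU coreutils timeout ; if the command itself exits first, timeout passes that command's exit status through.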
{ "source": [ "https://unix.stackexchange.com/questions/205076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116430/" ] }
205,180
I have a MySQL password saved in a file foo.php , for example P455w0rd . When I try to use it: $ cat foo.php | grep '$dbpwd=' | cut -d '"' -f 2 | mysql -U root -p mydb -h friendserver Enter password: (holds) $ echo P455w0rd | mysql -u root -p mydb -h friendserver Enter password: (holds) Both options still ask for the password. What's the correct way to send the password from stdin?
You have to be very careful how you pass passwords to command lines as, if you're not careful, you'll end up leaving it open to sniffing using tools such as ps . The safest way to do this would be to create a new config file and pass it to mysql using either the --defaults-file= or --defaults-extra-file= command line option. The difference between the two is that the latter is read in addition to the default config files whereas with the former, only the one file passed as the argument is used. Your additional configuration file should contain something similar to: [client] user=foo password=P@55w0rd Make sure that you secure this file. Then run: mysql --defaults-extra-file=<path to the new config file> [all my other options]
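For instance (illustrative file name; put the file wherever suits your setup), lock the file down so only you can read it, then use it:

chmod 0600 ~/.my-extra.cnf
mysql --defaults-extra-file=$HOME/.my-extra.cnf mydb -h friendserver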
{ "source": [ "https://unix.stackexchange.com/questions/205180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
205,233
I am using Ubuntu 14.04 with the Cinnamon desktop. After trying to create a shortcut to a PDF file on Cinnamon's taskbar, I figured I should probably look for a folder containing the taskbar's configuration information and create a launcher there. However, I don't know whether I've guessed right, and if so, where that folder would be! How would I create a shortcut to the PDF file and then place it in the taskbar?
A simple GUI method: Right-click Menu and then click Configure . Click Open the Menu Editor . Optionally create a new folder for your custom links. Create a new item that opens the file, using the command, evince /path/to/file.pdf , or whichever PDF viewer you want to use. Close the menu editor and right-click on your new menu item, selecting Add to Panel . If you chose to make a new folder in the menu, it exists in ~/.local/share/desktop-directories/ as a file with the extension, .directory . If you chose to make a new menu item, it exists in ~/.local/share/applications/ as a file with the extension, .desktop . These were created by alacarte . They are regular text files; and, now that you know their location, you could do this manually, too. The rest of the files for the menu are located in /usr/share/desktop-directories and /usr/share/applications .
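For reference, a launcher is a plain-text .desktop entry. A minimal hand-written one (the name and path here are placeholders) would look roughly like:

[Desktop Entry]
Type=Application
Name=My PDF
Exec=evince /path/to/file.pdf
Icon=application-pdf

Save it as ~/.local/share/applications/my-pdf.desktop and it should appear in the menu just like an entry created by alacarte.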
{ "source": [ "https://unix.stackexchange.com/questions/205233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49856/" ] }
205,546
One thing I really miss in Midnight Commander (compared to some GUI file explorers, e.g. Thunar) is the ability to go to a certain directory by just typing a prefix of its name. For example, for a current directory containing: files other many many_other some typing man would take me to (focus) the directory many . Is there any plugin that would let me configure MC that way?
You don't need any plugins. You have two options: In the current directory panel, type Alt + s or Ctrl + s , then type your search pattern; the cursor will jump to the matches sequentially. To cycle through all results that match the current pattern, repeat the keystroke. Note : The Ctrl + s combination will freeze many terminal implementations (press Ctrl + q to unfreeze), so use Alt + s instead if that happens to you. Disable Command prompt in Options/Layout .
{ "source": [ "https://unix.stackexchange.com/questions/205546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
205,708
I have a PC (kernel 3.2.0-23-generic ) which has 192.168.1.2/24 configured on the eth0 interface and also uses the 192.168.1.1 and 192.168.1.2 addresses for the tun0 interface: root@T42:~# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:16:41:54:01:93 brd ff:ff:ff:ff:ff:ff inet 192.168.1.2/24 scope global eth0 inet6 fe80::216:41ff:fe54:193/64 scope link valid_lft forever preferred_lft forever 3: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 4: irda0: <NOARP> mtu 2048 qdisc noop state DOWN qlen 8 link/irda 00:00:00:00 brd ff:ff:ff:ff 5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:13:ce:8b:99:3e brd ff:ff:ff:ff:ff:ff inet 10.30.51.53/24 brd 10.30.51.255 scope global eth1 inet6 fe80::213:ceff:fe8b:993e/64 scope link valid_lft forever preferred_lft forever 6: tun0: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc pfifo_fast state DOWN qlen 100 link/none inet 192.168.1.1 peer 192.168.1.2/32 scope global tun0 root@T42:~# ip route show dev eth0 192.168.1.0/24 proto kernel scope link src 192.168.1.2 root@T42:~# As seen above, tun0 is administratively disabled ( ip link set dev tun0 down ). Now, when I receive ARP requests for 192.168.1.2 , the PC does not reply to those requests: root@T42:~# tcpdump -nei eth0 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 15:30:34.875427 00:1a:e2:ae:cb:b7 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.1, length 46 15:30:36.875268 00:1a:e2:ae:cb:b7 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.1, length 46 15:30:39.138651 00:1a:e2:ae:cb:b7 > 00:1a:e2:ae:cb:b7, ethertype Loopback (0x9000), length 60: ^C 3 packets captured 3 packets received by filter 0 packets dropped by kernel root@T42:~# Only after I delete the tun0 interface ( ip link del dev tun0 ) will the PC reply to ARP requests for 192.168.1.2 on the eth0 interface.
The routing table looks exactly the same before and after ip link del dev tun0 : root@T42:~# netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 10.30.51.254 0.0.0.0 UG 0 0 0 eth1 10.30.51.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 192.168.1.0 192.168.1.2 255.255.255.0 UG 0 0 0 eth0 root@T42:~# ip link del dev tun0 root@T42:~# netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 10.30.51.254 0.0.0.0 UG 0 0 0 eth1 10.30.51.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 192.168.1.0 192.168.1.2 255.255.255.0 UG 0 0 0 eth0 root@T42:~# The routing entry below was already removed by the ip link set dev tun0 down command: Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 However, while the routing tables are exactly alike before and after the ip link del dev tun0 command, the actual routing decisions the kernel makes are not: T42:~# ip route get 192.168.1.1 local 192.168.1.1 dev lo src 192.168.1.1 cache <local> T42:~# ip link del dev tun0 T42:~# ip route get 192.168.1.1 192.168.1.1 dev eth0 src 192.168.1.2 cache ipid 0x8390 T42:~# Is this expected behavior? Why does the kernel ignore the routing table?
Your routing table isn't being ignored, exactly. It's being overruled by a higher-priority routing table. What's Going On The routing table you see when you type ip route show isn't the only routing table the kernel uses. In fact, there are three routing tables by default, and they are searched in the order shown by the ip rule command: # ip rule show 0: from all lookup local 32766: from all lookup main 32767: from all lookup default The table you're most familiar with is main , but the highest-priority routing table is local . This table is managed by the kernel to keep track of local and broadcast routes: in other words, the local table tells the kernel how to route to the addresses of its own interfaces. It looks something like this: # ip route show table local broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 192.168.1.0 dev eth0 proto kernel scope link src 192.168.1.2 local 192.168.1.1 dev tun0 proto kernel scope host src 192.168.1.1 local 192.168.1.2 dev eth0 proto kernel scope host src 192.168.1.2 broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.2 Check out that line referencing tun0 . That's what's causing your strange results from route get . It says 192.168.1.1 is a local address, which means if we want to send an ARP reply to 192.168.1.1, it's easy; we send it to ourself. And since we found a route in the local table, we stop searching for a route, and don't bother checking the main or default tables. Why multiple tables? At a minimum, it's nice to be able to type ip route and not see all those "obvious" routes cluttering the display (try typing route print on a Windows machine). It can also serve as some minimal protection against misconfiguration: even if the main routing table has gotten mixed up, the kernel still knows how to talk to itself. (Why keep local routes in the first place? So the kernel can use the same lookup code for local addresses as it does for everything else. It makes things simpler internally.) There are other interesting things you can do with this multiple-table scheme. In particular, you can add your own tables, and specify rules for when they are searched. This is called "policy routing", and if you've ever wanted to route a packet based on its source address, this is how to do it in Linux. If you're doing especially tricky or experimental things, you can add or remove local routes yourself by specifying table local in the ip route command. Unless you know what you're doing, though, you're likely to confuse the kernel. And of course, the kernel will still continue to add and remove its own routes, so you have to watch to make sure yours don't get overwritten. Finally, if you want to see all of the routing tables at once: # ip route show table all For more info, check out the ip-rule(8) man page or the iproute2 docs . You might also try the Advanced Routing and Traffic Control HOWTO for some examples of what you can do.
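As an illustration of that last point (a sketch with made-up addresses; adapt the selector and next hop to your network), source-based policy routing with a custom table looks like:

ip rule add from 192.168.100.0/24 table 100       # packets from this subnet consult table 100
ip route add default via 10.0.0.1 dev eth1 table 100
ip route show table 100                           # inspect the custom table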
{ "source": [ "https://unix.stackexchange.com/questions/205708", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
205,759
I pressed ~ Tab Tab on the bash command prompt and got an unexpected set of completions. First it looked like all the folks in the /Users directory, and a lot more. Then I thought it was doing the reverse lookup of folks with "home" directories in /etc/passwd , or perhaps the ones that were /var/empty -- this seems about right. What I'm curious about is what's really going on and why this works as it does.
I don't have an OSX system handy to check on but on all *nixes, ~foo is a shorthand for the home directory of user foo . For example, this command will move into my user's $HOME ( cd ~ alone will move into your home directory): cd ~terdon So, ~ and Tab will expand to all possible user names. The list should be the same as the list of users in /etc/passwd . I can confirm that that is exactly what happens when I try this on my Debian.
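For example (illustrative paths, assuming those users exist with those home directories):

$ echo ~terdon
/home/terdon
$ echo ~root
/root

The shell performs this tilde expansion before the command runs, which is why completion after ~ offers user names.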
{ "source": [ "https://unix.stackexchange.com/questions/205759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13077/" ] }
205,830
EDIT The issue as exposed here is solved (it was about the file modes of the .ssh folder). But another issue persists, so I created a new question: > Unable to login with SSH-RSA key I can no longer connect with an ssh-rsa key for a specific user, but it still works for other users. The git user is defined as follows: # cat /etc/passwd | grep git git:x:1002:1002:,,,:/var/git:/bin/bash So you noticed that this is the git user, thus its home is /var/git , not in /home . Now, ssh always prompts me for a password: $ ssh git@srv git@srv's password: I checked the logs: # tail -n 1 /var/log/auth.log [...] Authentication refused: bad ownership or modes for file /var/git/.ssh/authorized_keys So authorized_keys has some ownership or modes misconfiguration. I don't understand, because here is the configuration for this file: # ls -l /var/git/.ssh/ | grep auth -rw-rw-r-- 1 git git 394 mai 22 17:39 authorized_keys And here is (just in case...) the parent .ssh dir: # ls -al /var/git/ | grep ssh drwxrwxr-x 2 git git 4096 mai 22 17:39 .ssh And the $HOME directory: # ls -l /var/ | grep git drwxr-xr-x 7 git git 4096 mai 27 10:49 git So the owner is always git , as is the owning group. And the files are readable, so where's the catch?
The problem is the fact that the file and directory permissions do not meet the requirements of StrictModes , which in OpenSSH is yes by default and should not be changed. Try setting the permissions of authorized_keys to 0600 and the .ssh directory to 0700 . # chmod 0700 .../.ssh/ # chmod 0600 .../.ssh/authorized_keys Note that the ... will differ based on installation (e.g., in this question it is /var/git/ but for regular users it will be /home/username/ ).
{ "source": [ "https://unix.stackexchange.com/questions/205830", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80244/" ] }
205,867
Is there a way to view iptables rules in a bit more detail? I recently added masquerade to a range of IPs: iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE service iptables save service iptables restart This did what I wanted, but when I use: iptables -L I get the same output as I normally get: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination How can I see the rules, including the ones I added? (The system is CentOS 6.)
When using the -L , --list option to list the current firewall rules, you also need to specify the appropriate Netfilter table (one of filter , nat , mangle , raw or security ). So, if you’ve added a rule for the nat table, you should explicitly specify this table using the -t , --table option: iptables --table nat --list Or using the options short form: iptables -t nat -L If you don’t specify a specific table, the filter table is used as the default. For faster results, it can be useful to also include the -n , --numeric option to print numeric IP addresses instead of hostnames, thus avoiding the need to wait for reverse DNS lookups. You can get even more information by including the -v , --verbose option.
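Putting those options together, a single command to inspect the NAT rules added in the question would be:

iptables -t nat -L -n -v

which lists the nat table with numeric addresses and per-rule packet/byte counters; the MASQUERADE rule should show up in the POSTROUTING chain.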
{ "source": [ "https://unix.stackexchange.com/questions/205867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89568/" ] }
205,876
I'm trying to work remotely on a project that I have stored on a server, but the computer I am on belongs to the university and I don't have any keys or permission to install anything. Would it be possible to log into my account using Eclipse, gedit or something similar? Or, since I'm using a guest account, to somehow create a local folder connected to the remote one? I have been able to connect using Firefox, but it doesn't allow me to work remotely. Update: The host is active24.com (owned by Mamut, I think); it is a simple web host with FTP and MySQL. It's running on Linux. I'm the owner, but I don't administer the server, only the domain and DB. Locally I'm on an Ubuntu machine. I need the FTP access for editing the web files, because the website is not yet ready and I want to modify it, so I want to either create a remote folder (which I don't think is possible) or log in remotely to the files. I thought Eclipse would allow me, but it requires installing the Remote System Explorer. I have also tried logging in with ssh, but the host doesn't allow me.
{ "source": [ "https://unix.stackexchange.com/questions/205876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116986/" ] }
205,883
As I understand it, the Linux kernel logs to the /proc/kmsg file (mostly hardware-related messages) and the /dev/log socket. Anywhere else? Are other applications also able to send messages to /proc/kmsg or /dev/log ? Last but not least, am I correct that it is the syslog daemon ( rsyslog , syslog-ng ) that reads messages from those two places and then distributes them to various files like /var/log/messages or /var/log/kern.log , or even to a central syslog server?
Simplified, it goes more or less like this: The kernel logs messages (using the printk() function) to a ring buffer in kernel space. These messages are made available to user-space applications in two ways: via the /proc/kmsg file (provided that /proc is mounted), and via the sys_syslog syscall. There are two main applications that read (and, to some extent, can control) the kernel's ring buffer: dmesg(1) and klogd(8) . The former is intended to be run on demand by users, to print the contents of the ring buffer. The latter is a daemon that reads the messages from /proc/kmsg (or calls sys_syslog , if /proc is not mounted) and sends them to syslogd(8) , or to the console. That covers the kernel side. In user space, there's syslogd(8) . This is a daemon that listens on a number of UNIX domain sockets (mainly /dev/log , but others can be configured too), and optionally to the UDP port 514 for messages. It also receives messages from klogd(8) ( syslogd(8) doesn't care about /proc/kmsg ). It then writes these messages to some files in /var/log , or to named pipes, or sends them to some remote hosts (via the syslog protocol, on UDP port 514), as configured in /etc/syslog.conf . User-space applications normally use the libc function syslog(3) to log messages. libc sends these messages to the UNIX domain socket /dev/log (where they are read by syslogd(8) ), but if an application is chroot(2) -ed the messages might end up being written to other sockets, f.i. to /var/named/dev/log . It is, of course, essential for the applications sending these logs and syslogd(8) to agree on the location of these sockets. For this reason syslogd(8) can be configured to listen to additional sockets aside from the standard /dev/log . Finally, the syslog protocol is just a datagram protocol. Nothing stops an application from sending syslog datagrams to any UNIX domain socket (provided that its credentials allow it to open the socket), bypassing the syslog(3) function in libc completely. If the datagrams are correctly formatted, syslogd(8) can use them as if the messages were sent through syslog(3) . Of course, the above covers only the "classic" logging theory. Other daemons (such as rsyslog and syslog-ng , as you mention) can replace the plain syslogd(8) , and do all sorts of nifty things, like send messages to remote hosts via encrypted TCP connections, provide high-resolution timestamps, and so on. And there's also systemd , which is slowly phagocytosing the UNIX part of Linux. systemd has its own logging mechanisms, but that story would have to be told by somebody else. :) Differences with the *BSD world: On *BSD there is no klogd(8) , and /proc either doesn't exist (on OpenBSD) or is mostly obsolete (on FreeBSD and NetBSD). syslogd(8) reads kernel messages from the character device /dev/klog , and dmesg(1) uses /dev/kmem to decode kernel names. Only OpenBSD has a /dev/log . FreeBSD uses two UNIX domain sockets /var/run/log and /var/run/logpriv instead, and NetBSD has a /var/run/log .
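A quick way to watch this path end to end (assuming a traditional syslogd/rsyslog setup; the exact log file depends on your configuration, commonly /var/log/syslog on Debian-like systems or /var/log/messages elsewhere):

logger -p user.info "hello from userspace"   # sends a datagram to /dev/log via syslog(3)
tail -n 1 /var/log/syslog                    # or /var/log/messages, depending on the distro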
{ "source": [ "https://unix.stackexchange.com/questions/205883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
206,309
I have a path on a Linux machine (Debian 8) which I want to share with Samba 4 to Windows computers (Win7 and 8 in a domain). In my smb.conf I did the following: [myshare] path = /path/to/share writeable = yes browseable = yes guest ok = yes public = yes I have perfect read access from Windows. But in order to have write access from Windows, I need to do chmod -R 777 /path/to/share . What I want is write access from Windows after I provide the Linux credentials of the Linux owner of /path/to/share . I already tried: [myshare] path = /path/to/share writeable = yes browseable = yes Then Windows asks for credentials, but no matter what I enter, it's always denied. What is the correct way to gain write access to Samba shares from a Windows domain computer without granting 777 permissions?
I recommend to create a dedicated user for that share and specify it in force user (see docs) . Create a user ( shareuser for example) and set the owner of everything in the share folder to that user: adduser --system shareuser chown -R shareuser /path/to/share Then add force user and permission mask settings in smb.conf : [myshare] path = /path/to/share writeable = yes browseable = yes public = yes create mask = 0644 directory mask = 0755 force user = shareuser Note that guest ok is a synonym for public .
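After editing smb.conf , you can check it for syntax errors with Samba's own validator and then restart the service (the service name varies by distribution: smbd on Debian/Ubuntu, smb on CentOS):

testparm -s
systemctl restart smbd    # or: systemctl restart smb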
{ "source": [ "https://unix.stackexchange.com/questions/206309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50666/" ] }
206,315
Previously, all the unit files were in /etc/systemd/system/ , but now some are showing up in /usr/lib/systemd/system (on CentOS; /lib/systemd/system on Debian/Ubuntu). What is the difference between these folders?
This question is already answered in man 7 file-hierarchy , which comes with systemd (there is also an online version ): /etc System-specific configuration. (…) VENDOR-SUPPLIED OPERATING SYSTEM RESOURCES /usr Vendor-supplied operating system resources. Usually read-only, but this is not required. Possibly shared between multiple hosts. This directory should not be modified by the administrator, except when installing or removing vendor-supplied packages. Basically, files that ship in packages downloaded from the distribution repository go into /usr/lib/systemd/ . Modifications made by the system administrator (user) go into /etc/systemd/system/ . System-specific units override units supplied by vendors. Using drop-ins, you can override only specific parts of unit files, leaving the rest to the vendor (drop-ins have been available since the very beginning of systemd, but were properly documented only in v219; see man systemd.unit ).
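A convenient way to create such a drop-in (assuming a systemd recent enough to ship systemctl edit , v218+) is:

systemctl edit foo.service          # opens an editor on /etc/systemd/system/foo.service.d/override.conf
systemctl edit --full foo.service   # copies the whole vendor unit into /etc/systemd/system/ for editing

Either way, the vendor file under /usr/lib/systemd/system/ stays untouched, so package upgrades won't clobber your changes.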
{ "source": [ "https://unix.stackexchange.com/questions/206315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109720/" ] }
206,322
I have a Netgear wireless router, a single web server, and 100 clients on a 192.168.0.0/24 network. I have no Internet connection and I am not connected to the outside world. My goal is to give the server's IP a name by installing and configuring BIND on that same server. This means a single server acting as both DNS server and web server. Observe the scenario: my server gets its IP and every other setting from the router, so the server's IP always changes dynamically. In this type of situation, how can I configure BIND on that server with the dynamic IP I am getting from the router? Is it possible for the server's IP and the primary DNS to have the same address? If yes, how will the router generate this particular configuration for the server? Will the router assign a configuration like this to the server? IP: 192.168.0.101 broadcast: 192.168.0.255 primary DNS: 192.168.0.101 default route: 192.168.0.1
{ "source": [ "https://unix.stackexchange.com/questions/206322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89716/" ] }
206,323
I have a list that I've copied and I want to paste it into the shell to give me just the repeated lines. 1 1 3 2 Reading about bash commands, I've made this: cat > /tmp/sortme ; diff <(sort /tmp/sortme) <(sort -u /tmp/sortme) When I write the above command, I paste my list and press CTRL+Z to stop cat , and it shows me the repeated lines. I don't want to compare files, just pasted input of several rows. Now to the question: is there any way to turn that command into a script? Because when I try to run it as a script, CTRL+Z stops it. PS: Please don't laugh. This is my first time trying. Until now, just reading. :)
{ "source": [ "https://unix.stackexchange.com/questions/206323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114652/" ] }
206,350
In shell scripts, one specifies the language interpreter on the shebang ( #! ) line. As far as I know, it is recommended to use #!/usr/bin/env bash because env is always located in the /usr/bin directory, while the location of bash may vary from system to system. However, are there any technical differences if bash is started directly via /bin/bash or through the env utility? In addition, am I correct that if I do not specify any variables for env , bash is started in an unmodified environment?
In one sense, using env could be considered "portable" in that the path to bash is not relevant ( /bin/bash , /usr/bin/bash , /usr/local/bin/bash , ~/bin/bash , or whatever path) because it is specified in the environment. In this way, a script author could make his script easier to run on many different systems. In another sense, using env to find bash or any other shell or command interpreter is considered a security risk because an unknown binary (malware) might be used to execute the script. In these environments, and sometimes by managerial policy, the path is specified explicitly with a full path: #!/bin/bash . In general, use env unless you know you are writing in one of these environments that scrutinize the minute details of risk. When Ubuntu first started using dash , some time in 2011, many scripts were broken by that action. There was discussion about it on askubuntu.com. Most scripts were written #!/bin/sh which was a link to /bin/bash . The consensus was this: the script writer is responsible for specifying the interpreter. Therefore, if your script should always be invoked with BASH, specify it from the environment. This saves you having to guess the path, which is different on various Unix/Linux systems. In addition, it will work if tomorrow /bin/sh becomes a link to some other shell like /bin/newsh . Another difference is that the env method won't allow the passing of arguments to the interpreter.
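To illustrate that last limitation (a sketch of the behavior on Linux, where everything after the interpreter path is passed as a single argument):

#!/bin/bash -e            # works: bash receives -e as an option
#!/usr/bin/env bash -e    # fails on Linux: env looks for a program literally named "bash -e"

If you need interpreter options with the env form, set them inside the script instead, e.g. with set -e as the first line of the body.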
{ "source": [ "https://unix.stackexchange.com/questions/206350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
206,386
When I run netstat --protocol unix or lsof -U , I see that some unix socket paths are prefixed with an @ symbol, for example @/tmp/dbus-qj8V39Yrpa . But when I run ls -l /tmp , I don't see a file named dbus-qj8V39Yrpa there. The first question is: what does that prepended @ symbol denote? And the second, related question: where can I actually find that unix socket file ( @/tmp/dbus-qj8V39Yrpa ) on the filesystem?
The @ probably indicates a socket held in an abstract namespace which doesn't belong to a file in the filesystem. Quoting from The Linux Programming Interface by Michael Kerrisk : 57.6 The Linux Abstract Socket Namespace The so-called abstract namespace is a Linux-specific feature that allows us to bind a UNIX domain socket to a name without that name being created in the file system. This provides a few potential advantages: We don’t need to worry about possible collisions with existing names in the file system. It is not necessary to unlink the socket pathname when we have finished using the socket. The abstract name is automatically removed when the socket is closed. We don’t need to create a file-system pathname for the socket. This may be useful in a chroot environment, or if we don’t have write access to a file system. To create an abstract binding, we specify the first byte of the sun_path field as a null byte (\0). [...] Displaying a leading null byte would be awkward, which is probably why such sockets are shown with a leading @ sign instead.
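If you want to experiment with one (assuming you have socat installed; its ABSTRACT-LISTEN address type creates abstract-namespace sockets on Linux, and the socket name here is a placeholder):

socat ABSTRACT-LISTEN:demo-socket,fork /dev/null &   # create an abstract socket named "demo-socket"
ss -lx | grep demo-socket                            # listed with a leading @ and no file on disk

And indeed, ls will find no corresponding entry anywhere in the filesystem.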
{ "source": [ "https://unix.stackexchange.com/questions/206386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58428/" ] }
206,415
I was wondering if there is a way to use Samba to send items to a client machine via the command line (I need to send the files from the Samba server). I know I could always use scp , but first I wanted to see if there is a way to do it with Samba. Thanks!
Use smbclient , a program that comes with Samba: $ smbclient //server/share -c 'cd c:/remote/path ; put local-file' There are many flags, such as -U to allow the remote user name to be different from the local one. On systems that split Samba into multiple binary packages, you may have the Samba servers installed yet still be missing smbclient . In such a case, check your package repository for a package named smbclient , samba-client , or similar.
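For example, to upload as a specific remote user (you will be prompted for that user's password):

smbclient //server/share -U remoteuser -c 'cd c:/remote/path ; put local-file'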
{ "source": [ "https://unix.stackexchange.com/questions/206415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96115/" ] }
206,513
I was running a Python script that malfunctioned and used sudo to create a file named > . How can I get rid of this file? Of course, when I try sudo rm > , I get the error bash: syntax error near unexpected token 'newline' , because it thinks I'm trying to redirect the output of rm . Its permissions are -rw-r--r-- .
Any of these should work: sudo rm \> sudo rm '>' sudo rm ">" sudo find . -name '>' -delete sudo find . -name '>' -exec rm {} + Note that the last two commands, those using find , will find all files or directories named > in the current folder and all its subfolders. To avoid that, use GNU find: sudo find . -maxdepth 1 -name '>' -delete sudo find . -maxdepth 1 -name '>' -exec rm {} +
{ "source": [ "https://unix.stackexchange.com/questions/206513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117416/" ] }
206,540
I built Alpine Linux in a Docker container with the following Dockerfile: FROM alpine:3.2 RUN apk add --update jq curl && rm -rf /var/cache/apk/* The build ran successfully: $ docker build -t collector . Sending build context to Docker daemon 2.048 kB Sending build context to Docker daemon Step 0 : FROM alpine:3.2 3.2: Pulling from alpine 8697b6cc1f48: Already exists alpine:3.2: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security. Digest: sha256:eb84cc74347e4d7c484d566dec8a5eef82bab1b78308b92cda559bcff29c27cc Status: Downloaded newer image for alpine:3.2 ---> 8697b6cc1f48 Step 1 : RUN apk add --update jq curl && rm -rf /var/cache/apk/* ---> Running in 888571296e79 fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz (1/11) Installing run-parts (4.4-r0) (2/11) Installing openssl (1.0.2a-r1) (3/11) Installing lua5.2-libs (5.2.4-r0) (4/11) Installing lua5.2 (5.2.4-r0) (5/11) Installing ncurses-terminfo-base (5.9-r3) (6/11) Installing ncurses-widec-libs (5.9-r3) (7/11) Installing lua5.2-posix (33.3.1-r2) (8/11) Installing ca-certificates (20141019-r2) (9/11) Installing libssh2 (1.5.0-r0) (10/11) Installing curl (7.42.1-r0) (11/11) Installing jq (1.4-r0) Executing busybox-1.23.2-r0.trigger Executing ca-certificates-20141019-r2.trigger OK: 9 MiB in 26 packages ---> 7625779b773d Removing intermediate container 888571296e79 Successfully built 7625779b773d Anyway, when I run date -d it fails: $ docker run -i -t collector sh / # date -d yesterday date: invalid date 'yesterday' / # date -d now date: invalid date 'now' / # date -d next-month date: invalid date 'next-month' while the rest of the options seem to run OK: / # date Sat May 30 18:57:24 UTC 2015 / # date +"%A" Saturday / # date +"%Y-%m-%dT%H:%M:%SZ" 2015-05-30T19:00:38Z
The BusyBox/Alpine version of date doesn't support the -d option, even though the help text is exactly the same as in the Ubuntu version and in other, fatter distros. The "containerization" isn't missing anything here either. To work with the -d option you just need to add the coreutils package: $ cat Dockerfile.alpine-coreutils FROM alpine:3.2 RUN apk add --update coreutils && rm -rf /var/cache/apk/* $ docker build -t alpine-coreutils - < Dockerfile.alpine-coreutils Sending build context to Docker daemon 2.048 kB Sending build context to Docker daemon Step 0 : FROM alpine:3.2 3.2: Pulling from alpine 8697b6cc1f48: Already exists alpine:3.2: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security. Digest: sha256:eb84cc74347e4d7c484d566dec8a5eef82bab1b78308b92cda559bcff29c27cc Status: Downloaded newer image for alpine:3.2 ---> 8697b6cc1f48 Step 1 : RUN apk add --update coreutils && rm -rf /var/cache/apk/* ---> Running in 694fa5cb271c fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz (1/3) Installing libattr (2.4.47-r3) (2/3) Installing libacl (2.2.52-r2) (3/3) Installing coreutils (8.23-r0) Executing busybox-1.23.2-r0.trigger OK: 12 MiB in 18 packages ---> a7d9116a00ee Removing intermediate container 694fa5cb271c Successfully built a7d9116a00ee $ docker run -i -t alpine-coreutils sh / # date -d last-week Sun May 24 09:19:34 UTC 2015 / # date -d yesterday Sat May 30 09:19:46 UTC 2015 / # date Sun May 31 09:19:50 UTC 2015 The image size will double, but is still 11.47 MB, more than an order of magnitude less than standard Debian: $ docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE alpine-coreutils latest a7d9116a00ee 2 minutes ago 11.47 MB alpine 3.2 8697b6cc1f48 2 days ago 5.242 MB debian latest df2a0347c9d0 11 days ago 125.2 MB Thanks to Andy Shinn: https://github.com/gliderlabs/docker-alpine/issues/40#issuecomment-107122371 And to Christopher Horrell: https://github.com/docker-library/official-images/issues/771#issuecomment-107101595
{ "source": [ "https://unix.stackexchange.com/questions/206540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8947/" ] }
206,594
I have a directory exam with 2 files in it. I need to delete the files, but permission is denied. Even the rm -rf command can't delete these files, and I am logged in as the root user.
As the root user, check the attributes of the files: # lsattr If you notice i (immutable) or a (append-only), remove those attributes: # man chattr # chattr -i [filename] # chattr -a [filename]
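An illustrative session (the exact lsattr output format varies between e2fsprogs versions, and file1 is a placeholder name):

# lsattr
----i----------- ./file1
# chattr -i ./file1
# rm ./file1

Once the immutable/append-only flags are gone, a normal rm as root should succeed.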
{ "source": [ "https://unix.stackexchange.com/questions/206594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
206,886
I want to print the previous line every time a match is found. I know about the grep -A and -B options, but my Solaris 5.10 machine doesn't support them. I want a solution using only sed . Foo.txt : Name is : sara age is : 10 Name is : john age is : 20 Name is : Ron age is : 10 Name is : peggy age is : 30 Out.txt : Name is : sara Name is : Ron The pattern I am trying to match is age is : 10 . My environment is Solaris 5.10.
$ sed -n '/age is : 10/{x;p;d;}; x' Foo.txt Name is : sara Name is : Ron The above was tested on GNU sed. If Solaris' sed does not support chaining commands together with semicolons, try: $ sed -n -e '/age is : 10/{x;p;d;}' -e x Foo.txt Name is : sara Name is : Ron How it works sed has a hold space and a pattern space. Newlines are read into the pattern space. The idea of this script is that the previous line is saved in the hold space. /age is : 10/{x;p;d;} If the current line contains age is : 10 , then do: x : swap the pattern and hold space so that the prior line is in the pattern space p : print the prior line d : delete the pattern space and start processing next line x This is executed only on lines which do not contain age is : 10 . In this case, it saves the current line in the hold space. Doing the opposite Suppose that we want to print the names for people whose age is not 10: $ sed -n -e '/age is : 10/{x;d}' -e '/age is :/{x;p;d;}' -e x Foo.txt Name is : john Name is : peggy The above adds a command to the beginning, /age is : 10/{x;d} , to ignore any age-10 people. The command which follows, /age is :/{x;p;d;} , now accepts all the remaining ages.
{ "source": [ "https://unix.stackexchange.com/questions/206886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115560/" ] }
206,903
I have a text file: deiauk 1611516 afsdf 765 minkra 18415151 asdsf 4152 linkra sfsfdsfs sdfss 4555 deiauk1 sdfsfdsfs 1561 51 deiauk2 115151 5454 4 deiauk 1611516 afsdf ddfgfgd luktol1 4545 4 9 luktol 1 and I want to match exactly deiauk . When I do this: grep "deiauk" file.txt I get this result: deiauk 1611516 afsdf 765 deiauk1 sdfsfdsfs 1561 51 deiauk2 115151 5454 4 but I only need this: deiauk 1611516 afsdf 765 deiauk 1611516 afsdf ddfgfgd I know there's a -w option, but then my string has to match the whole line.
Try one of: grep -w "deiauk" textfile grep "\<deiauk\>" textfile
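Both forms work because they anchor the match at word boundaries: -w requires the matched text to be delimited by non-word characters (anything other than letters, digits, or underscore), and \< / \> are GNU word-boundary anchors that do the same thing explicitly. That is why deiauk followed by a space matches, while deiauk1 and deiauk2 do not. (Unlike -x , neither form requires matching the whole line.)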
{ "source": [ "https://unix.stackexchange.com/questions/206903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111696/" ] }
206,922
I'm looking for a way to execute a replacement only when the last character is a newline, using sed . For instance: lettersAtEndOfLine is replaced, but this is not: lettersWithCharacterAfter& Since sed does not work well with newlines, it is not as simple as $ sed -E "s/[a-zA-Z]*\n/replace/" file.txt How can this be accomplished?
With standard sed , you will never see a newline in the text read from a file. This is because sed reads line by line, and there is therefore no newline at the end of the text of the current line in sed 's pattern space. In other words, sed reads newline-delimited data, and the delimiters are not part of what a sed script sees. Regular expressions can be anchored at the end of the line using $ (or at the beginning, using ^ ). Anchoring an expression at the start/end of a line forces it to match exactly there, and not just anywhere on the line. If you want to replace anything matching the pattern [A-Za-z]* at the end of the line with something, then anchor the pattern like this: [A-Za-z]*$ ...will force it to match at the end of the line and nowhere else. However, since [A-Za-z]*$ also matches nothing (for example, the empty string present at the end of every line), you need to force the matching of something , e.g. by specifying [A-Za-z][A-Za-z]*$ or [A-Za-z]\{1,\}$ So, your sed command line will thus be $ sed 's/[A-Za-z]\{1,\}$/replace/' file.txt I did not use the non-standard -E option here because it's not strictly needed. With it, you could have written $ sed -E 's/[A-Za-z]+$/replace/' file.txt It's a matter of taste.
{ "source": [ "https://unix.stackexchange.com/questions/206922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45038/" ] }
207,195
I want to remove files, more specifically symbolic links in /usr/include , that are newer than 2 Jun 22:27. How can I do this?
You might want to use find -newermt . Make sure to review files to be removed first: find /usr/include -type l -newermt "Jun 2 22:27" Use -delete to perform actual removes. find /usr/include -type l -newermt "Jun 2 22:27" -delete
{ "source": [ "https://unix.stackexchange.com/questions/207195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112677/" ] }
207,210
pwd gives me /data/users/me/some/random/folder Is there an easy way of obtaining ~/some/random/folder from pwd ?
If you're using bash, then the dirs builtin has the desired behavior: dirs +0 ~/some/random/folder (Note +0 , not -0 .) With zsh : dirs ~/some/random/folder To be exact, we first need to clear the directory stack, else dirs would print all of its contents: dirs -c; dirs Or with zsh 's print builtin: print -rD $PWD or print -P %~ (that one turns prompt expansion on; %~ in $PS1 expands to the current directory with $HOME replaced by ~ , but it also handles other named directories, like the home directories of other users or named directories that you define yourself).
{ "source": [ "https://unix.stackexchange.com/questions/207210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71888/" ] }
207,276
I would like to know how file types are known if filenames don't have suffixes. For example, a file named myfile could be binary or text to start with; how does the system know whether the file is binary or text?
The file utility determines the filetype in 3 ways: First, the filesystem tests : within those tests, one of the stat family of system calls is invoked on the file. This returns the different unix file types : regular file, directory, link, character device, block device, named pipe or a socket. Depending on that, the magic tests are made. The magic tests are a bit more complex. File types are guessed from a database of patterns called the magic file . Some file types can be determined by reading a bit or number in a particular place within the file (binaries for example). The magic file contains " magic numbers " that the file is tested against, along with the text info that should be printed for each. Those " magic numbers " can be 1-4 byte values, strings, dates or even regular expressions. With further tests, additional information can be found. In the case of an executable, additional information would be whether it's dynamically linked or not, stripped or not, or the architecture. Sometimes multiple tests must pass before the file type can be truly identified. But anyway, it doesn't matter how many tests are performed; it's always just a good guess . Here are the first 8 bytes of some common filetypes, which can help us get a feeling for what these magic numbers look like:

     Hexadecimal                 ASCII
PNG  89 50 4E 47|0D 0A 1A 0A    ‰PNG|....
JPG  FF D8 FF E1|1D 16 45 78    ÿØÿá|..Ex
JPG  FF D8 FF E0|00 10 4A 46    ÿØÿà|..JF
ZIP  50 4B 03 04|0A 00 00 00    PK..|....
PDF  25 50 44 46|2D 31 2E 35    %PDF|-1.5

If the file type can't be found via the magic tests, the file seems to be a text file and file looks for the encoding of the contents. The encoding is distinguished by the different ranges and sequences of bytes that constitute printable text in each set. The line breaks are also investigated, depending on their HEX values:

0A ( \n ) classifies a Un*x/Linux/BSD/OSX terminated file
0D 0A ( \r\n ) indicates a file from Microsoft operating systems
0D ( \r ) would be Mac OS until version 9
15 ( \025 ) would be IBM's AIX

Now the language tests start. If it appears to be a text file, the file is searched for particular strings to find out which language it contains (C, Perl, Bash). Some script languages can also be identified via the hashbang ( #!/bin/interpreter ) in the first line of the script. If nothing applies to the file, the file type can't be determined and file just prints "data". So, you see, there is no need for a suffix. A suffix could even confuse things, if set wrong.
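For example (output wording varies between file versions and systems; -i is the GNU file flag for MIME types):

$ file myfile
myfile: ASCII text
$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, dynamically linked, stripped
$ file -i myfile
myfile: text/plain; charset=us-ascii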
{ "source": [ "https://unix.stackexchange.com/questions/207276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
207,294
I want to take down data in /path/to/data/folder/month/date/hour/minute/file and symlink it to /path/to/recent/file and do this automatically every time a file is created. Assuming I will not know ahead of time if /path/to/recent/file exists, how can I go about creating it (if it doesn't exist) or replacing it (if it does exist)? I am sure I can just check if it exists and then do a delete, symlink, but I'm wondering if there is a simple command which will do what I want in one step.
This is the purpose of ln 's -f option: it removes existing destination files, if any, before creating the link. ln -sf /path/to/data/folder/month/date/hour/minute/file /path/to/recent/file will create the symlink /path/to/recent/file pointing to /path/to/data/folder/month/date/hour/minute/file , replacing any existing file or symlink to a file if necessary (and working fine if nothing exists there already). If a directory, or symlink to a directory, already exists with the target name, the symlink will be created inside it (so you'd end up with /path/to/recent/file/file in the example above). The -n option, available in some versions of ln , will take care of symlinks to directories for you, replacing them as necessary: ln -sfn /path/to/data/folder/month/date/hour/minute/file /path/to/recent/file POSIX ln doesn’t specify -n so you can’t rely on it generally. Much of ln ’s behaviour is implementation-defined so you really need to check the specifics of the system you’re using. If you’re using GNU ln , you can use the -t and -T options too, to make its behaviour fully predictable in the presence of directories ( i.e. fail instead of creating the link inside the existing directory with the same name).
{ "source": [ "https://unix.stackexchange.com/questions/207294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32234/" ] }
207,365
I am using Open SSH (OpenSSH_6.6.1p1, OpenSSL 1.0.1i 6 Aug 2014) in Windows 8.1. X11 Forwarding does not appear to be working. The DISPLAY environment variable does not appear to be set. For example, if I use BitVise or Putty to connect, and run env, I see: [marko@vm:~]$ env XDG_SESSION_ID=6 TERM=xterm SHELL=/bin/bash SSH_CLIENT=192.168.1.174 61102 22 SSH_TTY=/dev/pts/0 USER=marko MAIL=/var/mail/marko PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games PWD=/home/marko LANG=en_CA.UTF-8 NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript SHLVL=1 HOME=/home/marko LANGUAGE=en_CA:en LOGNAME=marko SSH_CONNECTION=192.168.1.174 61102 192.168.1.64 22 XDG_RUNTIME_DIR=/run/user/1000 DISPLAY=localhost:10.0 _=/usr/bin/env If I instead use OpenSSH (ssh -X marko@vm): [marko@vm:~]$ env XDG_SESSION_ID=8 TERM=cygwin SHELL=/bin/bash SSH_CLIENT=192.168.1.174 61150 22 SSH_TTY=/dev/pts/1 USER=marko MAIL=/var/mail/marko PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games PWD=/home/marko LANG=en_CA.UTF-8 NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript SHLVL=1 HOME=/home/marko LANGUAGE=en_CA:en LOGNAME=marko SSH_CONNECTION=192.168.1.174 61150 192.168.1.64 22 XDG_RUNTIME_DIR=/run/user/1000 _=/usr/bin/env
Have you set the DISPLAY environment variable on the client? I'm not sure which shell you are using, but with a Bourne shell derivative (like bash), please try: export DISPLAY=127.0.0.1:0 ssh -X marko@vm Or if you're using cmd.exe: set DISPLAY=127.0.0.1:0 ssh -X marko@vm Or if you're using powershell.exe: $env:DISPLAY = '127.0.0.1:0' ssh -X marko@vm
{ "source": [ "https://unix.stackexchange.com/questions/207365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50909/" ] }
207,469
I've got a problem with this (shortened) systemd service file: [Unit] Description=control FOO daemon After=syslog.target network.target [Service] Type=forking User=FOOd Group=FOO ExecStartPre=/bin/mkdir -p /var/run/FOOd/ ExecStartPre=/bin/chown -R FOOd:FOO /var/run/FOOd/ ExecStart=/usr/local/bin/FOOd -P /var/run/FOOd/FOOd.pid PIDFile=/var/run/FOOd/FOOd.pid [Install] WantedBy=multi-user.target Let FOOd be the user name and FOO the group name, which already exist for my daemon /usr/local/bin/FOOd . I need to create the directory /var/run/FOOd/ before starting the daemon process /usr/local/bin/FOOd via # systemctl start FOOd.service . This fails, because mkdir can't create the directory due to permissions: ... Jun 03 16:18:49 PC0515546 mkdir[2469]: /bin/mkdir: cannot create directory /var/run/FOOd/: permission denied Jun 03 16:18:49 PC0515546 systemd[1]: FOOd.service: control process exited, code=exited status=1 ... Why does mkdir fail at ExecStartPre and how can I fix it? (And no, I can't use sudo for mkdir...)
You need to add PermissionsStartOnly=true to [Service] . Your user FOOd is of course not authorized to create a directory in /var/run . To cite the man page: Takes a boolean argument. If true, the permission-related execution options, as configured with User= and similar options (see systemd.exec(5) for more information), are only applied to the process started with ExecStart=, and not to the various other ExecStartPre=, ExecStartPost=, ExecReload=, ExecStop=, and ExecStopPost= commands. If false, the setting is applied to all configured commands the same way. Defaults to false.
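Applied to the unit from the question, the [Service] section would look something like this (a sketch; only the added PermissionsStartOnly line differs from the original):

[Service]
Type=forking
PermissionsStartOnly=true
User=FOOd
Group=FOO
ExecStartPre=/bin/mkdir -p /var/run/FOOd/
ExecStartPre=/bin/chown -R FOOd:FOO /var/run/FOOd/
ExecStart=/usr/local/bin/FOOd -P /var/run/FOOd/FOOd.pid
PIDFile=/var/run/FOOd/FOOd.pid

Now the ExecStartPre= commands run as root, while ExecStart= still runs as FOOd.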
{ "source": [ "https://unix.stackexchange.com/questions/207469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118045/" ] }
207,504
I would like to find files whose name has only 4 characters. For example, there are three files under /tmp : $ ls /tmp txt file linux The output should only show file , because it is the only one with 4 characters.
Use the ? wildcard for file globbing: ls -d /tmp/???? This will print all files and directories whose filename is 4-char long. As suggested by @roaima, the -d flag will prevent ls to display the content of subdirectories that match the pattern.
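An alternative (assuming GNU find) that also matches hidden four-character names like .abc , which the shell glob skips:

find /tmp -mindepth 1 -maxdepth 1 -name '????'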
{ "source": [ "https://unix.stackexchange.com/questions/207504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118063/" ] }
207,591
I just installed NodeJS & NPM on Debian Jessie using the recommended approach: apt-get install curl curl -sL https://deb.nodesource.com/setup | bash - apt-get install -y nodejs However, it's a pretty old version (node v0.10.38 & npm 1.4.28). Any suggestions on the easiest way to install newer versions, e.g., currently node is v0.12.4 and npm is 2.7.4? Is installing from source my only approach?
There is a setup script available for Node.js (see the installation instructions ): # Adapt version number to the version you want curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash - sudo apt-get install -y nodejs A little comment: In my humble opinion, it's a very bad idea to curl | sudo bash . You are running a script you did not check with root privileges. It's always better to download the script, read through it, check for malicious commands, and only after that run it. But that's just my two cents. The installation can be achieved manually in a few steps following the manual installation procedure : Remove the old PPA (if applicable) Add the node repo ssh key Add the node repo to sources.list Update the package list and install using your favorite apt tool
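A safer pattern along those lines (same URL as above; inspect the script before you execute it):

curl -sLo setup_0.12.sh https://deb.nodesource.com/setup_0.12   # download first
less setup_0.12.sh                                              # read through it
sudo bash setup_0.12.sh                                         # run it only once you trust it
sudo apt-get install -y nodejs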
{ "source": [ "https://unix.stackexchange.com/questions/207591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118108/" ] }
207,617
I'm not sure if this gets the same day of the previous month or if it just subtracts 30 or 31 days. E.g. if the current date is March 28th , 1 month ago must be February 28th , but what happens when it's March 30th ? Scenario: I want to back up some files each day; the script will save these files with the current date in $(date +%Y%m%d) format, like 20150603_bckp.tar.gz . Then, when the next month arrives, remove all those files from 1 month ago except the 1st's and the 15th's files, so this is my condition: past_month=$(date -d "-1 month" +%Y%m%d) day=$(date +%d) if [ "$day" != 01 ] && [ "$day" != 15 ] then rm /path/of/files/${past_month}_bckp.tar.gz echo "Depuration done" else echo "Keep file" fi But I want to know: what will happen when the date is the 30th or 31st, or even in the February example? Will it keep those files, or remove the 1st's files? When it's the 31st the depuration will execute, so if the past month only had 30 days, will this remove the 1st's file? I hope that's clear.
- 1 month will subtract one from the month number, and then if the resulting date is not valid ( February 30 , for example), adjust it so that it is valid. So December 31 - 1 month is December 1 , not a day in November, and March 31 - 1 month is March 3 (unless executed in a leap year). Here's quote from the info page for Gnu date (which is the date version which implements this syntax), which includes a good suggestion to make the arithmetic more robust: The fuzz in units can cause problems with relative items. For example, 2003-07-31 -1 month might evaluate to 2003-07-01, because 2003-06-31 is an invalid date. To determine the previous month more reliably, you can ask for the month before the 15th of the current month. For example: $ date -R Thu, 31 Jul 2003 13:02:39 -0700 $ date --date='-1 month' +'Last month was %B?' Last month was July? $ date --date="$(date +%Y-%m-15) -1 month" +'Last month was %B!' Last month was June! Another warning, also quoted from the info page: Also, take care when manipulating dates around clock changes such as daylight saving leaps. In a few cases these have added or subtracted as much as 24 hours from the clock, so it is often wise to adopt universal time by setting the TZ environment variable to UTC0 before embarking on calendrical calculations.
{ "source": [ "https://unix.stackexchange.com/questions/207617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68382/" ] }
207,782
When I view the size of my terminal emulator with stty size then it is 271 columns wide and 71 lines tall. When I log into another server over SSH and execute stty size , then it is also 271 columns wide and 71 lines tall. I can even log into some Cisco IOS device and the terminal is still 271 columns wide and 71 lines tall: C1841#show terminal | i Len|Wid Length: 71 lines, Width: 271 columns C1841# Now if I resize my terminal emulator (Gnome Terminal) window on the local machine, both stty size on the remote server and "show terminal" in IOS show the new width and number of lines. How are the terminal width and height forwarded over SSH and telnet?
The telnet protocol, described in RFC 854 , includes a way to send in-band commands, consisting of the IAC character , '\255' , followed by several more bytes. These commands can do things like send an interrupt to the remote, but typically they're used to send options . A detailed look at an exchange that sends the terminal type option can be found in Microsoft Q231866 . The window size option is described in RFC 1073 . The client first sends its willingness to send an NAWS option. If the server replies DO NAWS , the client can then send the NAWS option data, which is comprised of two 16-bit values. Example session, on a 47 row 80 column terminal: telnet> set options Will show option processing. telnet> open localhost Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. SENT WILL NAWS RCVD DO NAWS SENT IAC SB NAWS 0 80 (80) 0 47 (47) The ssh protocol is described in RFC 4254 . It consists of a stream of messages. One such message is "pty-req" , which requests a pseudo-terminal, and its parameters include the terminal height and width. byte SSH_MSG_CHANNEL_REQUEST uint32 recipient channel string "pty-req" boolean want_reply string TERM environment variable value (e.g., vt100) uint32 terminal width, characters (e.g., 80) uint32 terminal height, rows (e.g., 24) uint32 terminal width, pixels (e.g., 640) uint32 terminal height, pixels (e.g., 480) string encoded terminal modes The telnet and ssh clients will catch the SIGWINCH signal, so if you resize a terminal window during a session, they will send an appropriate message to the server with the new size. Ssh sends the Window Dimension Change Message: byte SSH_MSG_CHANNEL_REQUEST uint32 recipient channel string "window-change" boolean FALSE uint32 terminal width, columns uint32 terminal height, rows uint32 terminal width, pixels uint32 terminal height, pixels
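You can watch the resize signal arrive yourself. In an interactive bash session, a trap on WINCH prints the new size whenever the window is resized (the trap body runs when bash next gets a chance, for example right before the next prompt):

trap 'echo "resized to: $(stty size)"' WINCH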
{ "source": [ "https://unix.stackexchange.com/questions/207782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
207,919
> brew install moreutils ==> Downloading https://homebrew.bintray.com/bottles/moreutils-0.55.yosemite.bottle.tar.gz ######################################################################## 100.0% ==> Pouring moreutils-0.55.yosemite.bottle.tar.gz /usr/local/Cellar/moreutils/0.55: 67 files, 740K sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing the output file. This allows constructing pipelines that read from and write to the same file. I don't understand. Please give me some useful examples. What does soaks up mean?
Assume that you have a file named input , you want to remove all lines starting with # in input . You can get all lines that don't start with # using: grep -v '^#' input But how do you make changes to input ? With the standard POSIX toolchest, you need to use a temporary file, something like: grep -v '^#' input >/tmp/input.tmp mv /tmp/input.tmp ./input With shell redirection: grep -v '^#' input >input will truncate input before you read from it. With sponge , you can: grep -v '^#' input | sponge input
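sponge also has an append mode. Assuming your moreutils build supports the -a flag, you can append to the very file a pipeline is reading from (hypothetical filename, just to illustrate):

grep '^TODO' notes.txt | sponge -a notes.txt   # append matching lines to the same file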
{ "source": [ "https://unix.stackexchange.com/questions/207919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26612/" ] }
207,957
#!/bin/bash function0() { local t1=$(exit 1) echo $t1 } function0 The echo prints an empty value. I expected: 1 Why isn't the variable t1 assigned the exit command's return value, 1 ?
local t1=$(exit 1) tells the shell to: run exit 1 in a subshell; store its output (as in, the text it outputs to standard output) in a variable t1 , local to the function. It's thus normal that t1 ends up being empty. ( $() is known as command substitution .) The exit code is always assigned to $? , so you can do function0() { (exit 1) echo "$?" } to get the effect you're looking for. You can of course assign $? to another variable: function0() { (exit 1) local t1=$? echo "$t1" }
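One related gotcha worth knowing (behaviour you can verify in bash): if you combine the declaration and the command substitution, $? reflects local itself, which succeeds, so the failing status is lost. Splitting the two restores it (hypothetical function names):

f_bad() {
    local t1=$(exit 1)
    echo "$?"    # prints 0: this is local's exit status, not the subshell's
}
f_good() {
    local t1
    t1=$(exit 1)
    echo "$?"    # prints 1: the assignment's status is the subshell's
}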
{ "source": [ "https://unix.stackexchange.com/questions/207957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
208,095
Bash code to print all folders: for f in ~/*; do if [ $f == '/home/sk/.' -o $f == '/home/sk/..' ]; then true else echo "$f" fi done It works on bash. When I ran the code on z shell, it threw an error: = not found Then I converted [ into [[ , ] into ]] to avoid this error in z shell and ran it on z shell. It threw the next error: condition expected: $f With [[ and ]] , bash also throws an error: syntax error in conditional expression syntax error near `-o' Is there a POSIX-standard way to do string comparison in the shell that works across shells?
There are various issues here. First, == is not standard, the POSIX way is = . Same goes for the -o . (The = not found message comes from zsh's =command filename expansion: in zsh an unquoted word starting with = , like == , is expanded to the path of the command named after it, and there is no command named = .) This one will work on both bash and zsh: for f in ~/*; do if [ "$f" = '/home/sk/.' ] || [ "$f" = '/home/sk/..' ]; then true else echo "$f" fi done Note that your if is unneeded, dotfiles are ignored by default in both bash and zsh. You can simply write: for f in ~/*; do echo "$f"; done Or even printf "%s\n" ~/*
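If you want a comparison idiom that is portable even beyond test , a case statement avoids [ entirely; a sketch with the paths from the question:

for f in ~/*; do
  case $f in
    /home/sk/.|/home/sk/..) ;;   # skip these
    *) echo "$f" ;;
  esac
done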
{ "source": [ "https://unix.stackexchange.com/questions/208095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
208,140
I am trying to delete all the files with a space in their names. I am using the following command, but it is giving me an error. Command : ls | egrep '. ' | xargs rm If I use only the ls | egrep '. ' command, it gives me all the file names with spaces in them. But when I try to pass the output to rm, the spaces (leading or trailing) get stripped, so my command is not executed properly. Any pointers on how to delete the files having at least one space in their names?
You can use standard globbing on the rm command: rm -- *\ * This will delete any file whose name contains a space; the space is escaped so the shell doesn't interpret it as a separator. Adding -- will avoid problems with filenames starting with dashes (they won’t be interpreted as arguments by rm ). If you want to confirm each file before it’s deleted, add the -i option: rm -i -- *\ *
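An alternative that sidesteps the ls | xargs parsing problem entirely is find. With GNU find you could restrict it to the current directory (the -maxdepth and -delete options are GNU extensions); a portable variant hands the names to rm:

find . -maxdepth 1 -type f -name '* *' -delete
# portable form: prune so find doesn't descend, then delete the matches
find . ! -name . -prune -type f -name '* *' -exec rm -- {} +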
{ "source": [ "https://unix.stackexchange.com/questions/208140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118444/" ] }
208,184
Sometimes I use rm -rf $PROJECT_HOME/* to delete all files in the project. When the environment variable PROJECT_HOME is not set (because I did su and the new user doesn't have this environment variable set), it starts deleting all files from the root folder. This is apocalyptic. How can I configure bash to throw an error when I use an undefined environment variable in the shell?
In POSIX shell, you can use set -u : #!/bin/sh set -u : "${UNSET_VAR}" or using Parameter Expansion : : "${UNSET_VAR?Unset variable}" In your case, you should use :? instead of ? to also fail on set but empty variables: rm -rf -- "${PROJECT_HOME:?PROJECT_HOME empty or unset}"/*
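To see what that expansion does in practice, here is roughly how the failure looks in bash (the message text is whatever you put after :? , and the rm never runs):

$ unset PROJECT_HOME
$ rm -rf -- "${PROJECT_HOME:?PROJECT_HOME empty or unset}"/*
bash: PROJECT_HOME: PROJECT_HOME empty or unset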
{ "source": [ "https://unix.stackexchange.com/questions/208184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
208,407
I often hear people refer to the Linux kernel as the Linux kernel image and I can't seem to find an answer on any search engine as to why it's called an image. When I think of an image I can only think of two things: either a copy of a disk or a photo. It sure as hell isn't a photo image, so why is it referred to as an image?
The Unix boot process has (had) only limited capabilities of intelligently loading a program (relocating it, loading libraries etc). Therefore the initial program was an exact image, stored on disc, of what needed to be loaded into memory and "called" to get the kernel going. Only much later things like (de-)compression were added and although more powerful bootloaders are now in place, the image name has stuck.
{ "source": [ "https://unix.stackexchange.com/questions/208407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118378/" ] }
208,568
I want to compile as fast as possible. Go figure. And would like to automate the choice of the number following the -j option. How can I programmatically choose that value, e.g. in a shell script? Is the output of nproc equivalent to the number of threads I have available to compile with? make -j1 make -j16
nproc gives the number of CPU cores/threads available, e.g. 8 on a quad-core CPU supporting two-way SMT. The number of jobs you can run in parallel with make using the -j option depends on a number of factors: the amount of available memory the amount of memory used by each make job the extent to which make jobs are I/O- or CPU-bound make -j$(nproc) is a decent place to start, but you can usually use higher values, as long as you don't exhaust your available memory and start thrashing. For really fast builds, if you have enough memory, I recommend using a tmpfs , that way most jobs will be CPU-bound and make -j$(nproc) will work as fast as possible.
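GNU make can also throttle itself by system load rather than only by a fixed job count; combining both caps the parallelism while keeping the machine responsive:

make -j"$(nproc)" -l"$(nproc)"   # -l/--load-average: don't start new jobs above this load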
{ "source": [ "https://unix.stackexchange.com/questions/208568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32951/" ] }
208,588
I'm currently doing this to sort and uniq the output of two different commands: tshark -r sample.pcap -T fields -e eth.src -e ip.src > hello tshark -r sample.pcap -T fields -e eth.dst -e ip.dst >> hello sort < hello | uniq > hello_uniq In a nutshell, I'm outputting source MAC addresses and IPs into a file. I'm then appending destination MAC addresses and IPs to that same file. I then sort the file and input that into uniq to end up with a list of unique MAC to IP address mapping. Is there a way to do this in one line? (Note: the use of tshark is not really relevant here, my question applies to any two sources of output like that)
You can do it in one pipeline by grouping the two commands so their combined output goes through a single sort: { tshark -r sample.pcap -T fields -e eth.src -e ip.src; tshark -r sample.pcap -T fields -e eth.dst -e ip.dst; } | sort -u > hello_uniq (the spaces around the braces and the trailing ; are required). In shells with process substitution (bash, zsh, ksh93) you can equivalently write: sort -u <(tshark -r sample.pcap -T fields -e eth.src -e ip.src) <(tshark -r sample.pcap -T fields -e eth.dst -e ip.dst) > hello_uniq Either way, sort -u replaces the separate sort | uniq pair, and no intermediate hello file is needed.
{ "source": [ "https://unix.stackexchange.com/questions/208588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89986/" ] }
208,597
I have the following snippet shown as my top command output. One quick question: in what units are the Mem values shown? Are they bytes? Mem: 8191488k total, 4277448k used, 3914040k free, 292356k buffers Swap: 0k total, 0k used, 0k free, 3382180k cached I'm asking because the free -m command gives the output as total used free shared buffers cached Mem: 7999 4177 3822 0 285 3302 -/+ buffers/cache: 588 7410 Swap: 0 0 0
The k suffix means kibibytes (KiB, i.e. units of 1024 bytes), which is what top uses for its summary lines by default. Your two outputs are consistent: 8191488 KiB / 1024 = 7999.5, which free -m (mebibytes) rounds down to the 7999 you see. The same conversion lines up for the other columns, e.g. 4277448 KiB used ≈ 4177 MiB and 3382180 KiB cached ≈ 3302 MiB. Depending on your version of top (procps-ng), you can also press E while it is running to cycle the summary-area units between KiB, MiB, GiB and so on.
{ "source": [ "https://unix.stackexchange.com/questions/208597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118741/" ] }
208,615
When I use the type command to find out if cat is a shell built-in or an external program I get the output below: -$ type cat cat is hashed (/bin/cat) -$ Does this mean that cat is an external program which is /bin/cat ? I got confused, because when I checked the output below for echo I saw that it is a built-in but that there is also a program /bin/echo -$ type echo echo is a shell builtin -$ which echo /bin/echo -$ So I could not use the logic that /bin/cat necessarily means an external program, because echo was /bin/echo but still a built-in. So how do I know what cat is? Built-in or external?
type tells you what the shell would use. For example: $ type echo echo is a shell builtin $ type /bin/echo /bin/echo is /bin/echo That means that if, at the bash prompt, you type echo , you will get the built-in. If you specify the path, as in /bin/echo , you will get the external command. which , by contrast is an external program that has no special knowledge of what the shell will do. On debian-like systems, which is a shell script which searches the PATH for the executable. Thus, it will give you the name of the external executable even if the shell would use a built-in. If a command is only available as a built-in, which will return nothing: $ type help help is a shell builtin $ which help $ Now, let's look at cat : $ type cat cat is hashed (/bin/cat) $ which cat /bin/cat cat is an external executable, not a shell builtin.
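As for the hashed wording in the question: bash remembers the full paths of commands it has already run in a hash table, so cat is hashed (/bin/cat) just means this shell has executed cat before and cached where it lives. You can inspect and reset that cache with the hash builtin:

$ hash        # list remembered command locations
$ hash -r     # forget them all
$ type cat
cat is /bin/cat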
{ "source": [ "https://unix.stackexchange.com/questions/208615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109220/" ] }
208,819
Ansible variables come from a variety of sources. It is for example possible to provide host_vars and group_vars by creating YAML files in a subfolder named host_vars and group_vars respectively of the folder containing the inventory file. How can I list all of the variables Ansible would know about a group or host inside a playbook? Note: I tried ansible -m debug -e 'var=hostvars' host and ansible -m debug -e '- debug: var=hostvars' to no avail. Hint: ansible <group|host> -m setup is not the correct answer as it does not include all the variables that come from other sources (it only contains { "ansible_facts" : { ... } } . In fact it does not even include variables provided by a dynamic inventory script (via _meta and so on). Ansible version: 1.9.1.
ansible <host pattern> -m debug -a "var=hostvars[inventory_hostname]" seems to work. Replace <host pattern> by any valid host pattern . Valid variable sources ( host_vars , group_vars , _meta in a dynamic inventory, etc.) are all taken into account. With dynamic inventory script hosts.sh : #!/bin/sh if test "$1" = "--host"; then echo {} else cat <<EOF { "ungrouped": [ "x.example.com", "y.example.com" ], "group1": [ "a.example.com" ], "group2": [ "b.example.com" ], "groups": { "children": [ "group1", "group2" ], "vars": { "ansible_ssh_user": "user" } }, "_meta": { "hostvars": { "a.example.com": { "ansible_ssh_host": "10.0.0.1" }, "b.example.com": { "ansible_ssh_host": "10.0.0.2" } } } } EOF fi You can get: $ chmod +x hosts.sh $ ansible -i hosts.sh a.example.com -m debug -a "var=hostvars[inventory_hostname]" a.example.com | success >> { "var": { "hostvars": { "ansible_ssh_host": "10.0.0.1", "ansible_ssh_user": "user", "group_names": [ "group1", "groups" ], "groups": { "all": [ "x.example.com", "y.example.com", "a.example.com", "b.example.com" ], "group1": [ "a.example.com" ], "group2": [ "b.example.com" ], "groups": [ "a.example.com", "b.example.com" ], "ungrouped": [ "x.example.com", "y.example.com" ] }, "inventory_hostname": "a.example.com", "inventory_hostname_short": "a" } } }
{ "source": [ "https://unix.stackexchange.com/questions/208819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5462/" ] }
208,870
We use the && operator to run a command only after the previous one succeeds. update-system-configuration && restart-service But how do I run a command only if the previous command is unsuccessful ? For example, if I have to update the system configuration and it fails, I need to send a mail to the system admin? Edit: Come on, this is not a duplicate of the control-operators question. This will help users who are searching specifically for this. I understand that the answer about control operators will answer this question too, but people searching specifically for how to handle unsuccessful commands won't reach there directly, otherwise I would have gotten there before asking this question.
&& executes the command which follows only if the command which precedes it succeeds. || does the opposite: update-system-configuration || echo "Update failed" | mail -s "Help Me" admin@host Documentation From man bash : AND and OR lists are sequences of one or more pipelines separated by the && and || control operators, respectively. AND and OR lists are executed with left associativity. An AND list has the form command1 && command2 command2 is executed if, and only if, command1 returns an exit status of zero. An OR list has the form command1 || command2 command2 is executed if and only if command1 returns a non-zero exit status. The return status of AND and OR lists is the exit status of the last command executed in the list.
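One caveat if you chain both operators on one line: a && b || c runs c when either a or b fails, so it is not a true if/else. For anything non-trivial, an explicit if is safer:

if update-system-configuration; then
    restart-service
else
    echo "Update failed" | mail -s "Help Me" admin@host
fi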
{ "source": [ "https://unix.stackexchange.com/questions/208870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64355/" ] }
208,908
When I scanned my $HOME directory with baobab (Disk Usage Analyzer), I found that ~/.cache is consuming about half a GB. I also tried restarting and checking the size again, but there was no difference. So, I am planning to rm -rf ~/.cache . Let me know: is it safe to clear ~/.cache ?
It is safe to clear ~/.cache/ , new user accounts start with an empty directory anyway. You might want to log out after doing this though since programs might still use this directory. These programs can be found with this command: find ~/.cache -print0 | xargs -0 lsof -n In my case I would most likely be fine with just closing Firefox before removal.
{ "source": [ "https://unix.stackexchange.com/questions/208908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
208,960
I have an Ubuntu server on DigitalOcean and I want to give someone a folder for their domain on my server. My problem is, I don't want that user to see my folders or files, or to be able to move out of their folder. How can I restrict this user to their folder and not allow them to move out and see other files/directories?
I solved my problem this way: Create a new group $ sudo addgroup exchangefiles Create the chroot directory $ sudo mkdir /var/www/GroupFolder/ $ sudo chmod g+rx /var/www/GroupFolder/ Create the group-writable directory $ sudo mkdir -p /var/www/GroupFolder/files/ $ sudo chmod g+rwx /var/www/GroupFolder/files/ Give them both to the new group $ sudo chgrp -R exchangefiles /var/www/GroupFolder/ After that I went to /etc/ssh/sshd_config and added to the end of the file: Match Group exchangefiles # Force the connection to use SFTP and chroot to the required directory. ForceCommand internal-sftp ChrootDirectory /var/www/GroupFolder/ # Disable tunneling, authentication agent, TCP and X11 forwarding. PermitTunnel no AllowAgentForwarding no AllowTcpForwarding no X11Forwarding no Now I'm going to add a new user named obama to my group: $ sudo adduser --ingroup exchangefiles obama Now everything is done, so we need to restart the ssh service: $ sudo service ssh restart Note: the user now can't do anything outside the files directory; all of their files must be inside the files folder.
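To verify the jail from the client side, an SFTP session for that user should land inside the chroot and show / as its root (a sketch; the hostname is a placeholder):

$ sftp obama@your-server
sftp> pwd
Remote working directory: /
sftp> cd files    # the only group-writable directory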
{ "source": [ "https://unix.stackexchange.com/questions/208960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118981/" ] }
209,053
Is there a way to save the changes I made to my vim buffer as a patch file for the original file, without saving it as a separate file and using diff?
It's possible to do this without a plugin using the w command, so the buffer contents can be used in a shell command: :w !diff -au "%" - > changes.patch ( % is substituted with the path of the file being edited, - reads the buffer from stdin)
{ "source": [ "https://unix.stackexchange.com/questions/209053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119031/" ] }
209,068
I have a file named Element_query containing the result of a query: SQL> select count (*) from element; [Output of the query which I want to keep in my file] SQL> spool off; I want to delete the first line and the last line using a shell command.
Using GNU sed : sed -i '1d;$d' Element_query How it works: the -i option edits the file itself. You could also remove that option and redirect the output to a new file or another command if you want. 1d deletes the first line ( 1 to only act on the first line, d to delete it) $d deletes the last line ( $ to only act on the last line, d to delete it) Going further: You can also delete a range. For example, 1,5d would delete the first 5 lines. You can also delete every line that begins with SQL> using the statement /^SQL> /d You could delete every blank line with /^$/d Finally, you can combine any of the statements by separating them with a semi-colon ( statement1;statement2;statement3;... ) or by specifying them separately on the command line ( -e 'statement1' -e 'statement 2' ... )
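One portability note: the -i shown above is the GNU sed form. BSD/macOS sed requires an argument (possibly empty) to -i , so there you would write:

sed -i '' '1d;$d' Element_query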
{ "source": [ "https://unix.stackexchange.com/questions/209068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118311/" ] }
209,123
I obviously understand that one can assign a value to the internal field separator variable. For example: $ IFS=blah $ echo "$IFS" blah $ I also understand that read -r line will save data from stdin to a variable named line : $ read -r line <<< blah $ echo "$line" blah $ However, how can a command line combine both, as in IFS= read -r line ? And does it first store data from stdin in the variable line and then give the value of line to IFS ?
In POSIX shells, read , without any option doesn't read a line , it reads words from a (possibly backslash-continued) line, where words are $IFS delimited and backslash can be used to escape the delimiters (or continue lines). The generic syntax is: read word1 word2... remaining_words read reads stdin one byte at a time¹ until it finds an unescaped newline character (or end-of-input), splits that according to complex rules and stores the result of that splitting into $word1 , $word2 ... $remaining_words . For instance on an input like: <tab> foo bar\ baz bl\ah blah\ whatever whatever and with the default value of $IFS , read a b c would assign: $a ⇐ foo $b ⇐ bar baz $c ⇐ blah blahwhatever whatever Now if passed only one argument, that doesn't become read line . It's still read remaining_words . Backslash processing is still done, IFS whitespace characters² are still removed from the beginning and end. The -r option removes the backslash processing. So that same command above with -r would instead assign $a ⇐ foo $b ⇐ bar\ $c ⇐ baz bl\ah blah\ Now, for the splitting part, it's important to realise that there are two classes of characters for $IFS : the IFS whitespace characters² (including space and tab (and newline, though here that doesn't matter unless you use -d), which also happen to be in the default value of $IFS ) and the others. The treatment for those two classes of characters is different. With IFS=: ( : being not an IFS whitespace character), an input like :foo::bar:: would be split into "" , "foo" , "" , bar and "" (and an extra "" with some implementations though that doesn't matter except for read -a ). While if we replace that : with space, the splitting is done into only foo and bar . That is, leading and trailing ones are ignored, and sequences of them are treated like one. There are additional rules when whitespace and non-whitespace characters are combined in $IFS . Some implementations can add/remove the special treatment by doubling the characters in IFS ( IFS=:: or IFS=' ' ). So here, if we don't want the leading and trailing unescaped whitespace characters to be stripped, we need to remove those IFS whitespace characters from IFS. Even with IFS-non-whitespace characters, if the input line contains one (and only one) of those characters and it's the last character in the line (like IFS=: read -r word on an input like foo: ) with POSIX shells (not zsh nor some pdksh versions), that input is considered as one foo word because in those shells, the characters $IFS are considered as terminators , so word will contain foo , not foo: . So, the canonical way to read one line of input with the read builtin is: IFS= read -r line (note that for most read implementations, that only works for text lines as the NUL character is not supported except in zsh ). Using var=value cmd syntax makes sure IFS is only set differently for the duration of that cmd command. History note The read builtin was introduced by the Bourne shell and was already meant to read words , not lines. There are a few important differences with modern POSIX shells. The Bourne shell's read didn't support a -r option (which was introduced by the Korn shell), so there's no way to disable backslash processing other than pre-processing the input with something like sed 's/\\/&&/g' there. The Bourne shell didn't have that notion of two classes of characters (which again was introduced by ksh). 
In the Bourne shell all characters undergo the same treatment as IFS whitespace characters do in ksh, that is IFS=: read a b c on an input like foo::bar would assign bar to $b , not the empty string. In the Bourne shell, with: var=value cmd If cmd is a built-in (like read is), var remains set to value after cmd has finished. That's particularly critical with $IFS because in the Bourne shell, $IFS is used to split everything, not only the expansions. Also, if you remove the space character from $IFS in the Bourne shell, "$@" no longer works. In the Bourne shell, redirecting a compound command causes it to run in a subshell (in the earliest versions, even things like read var < file or exec 3< file; read var <&3 didn't work), so it was rare in the Bourne shell to use read for anything but user input on the terminal (where that line continuation handling made sense) Some Unices (like HP/UX, there's also one in util-linux ) still have a line command to read one line of input (that used to be a standard UNIX command up until the Single UNIX Specification version 2 ). That's basically the same as head -n 1 except that it reads one byte at a time to make sure it doesn't read more than one line. On those systems, you can do: line=`line` Of course, that means spawning a new process, execute a command and read its output through a pipe, so a lot less efficient than ksh's IFS= read -r line , but still a lot more intuitive. ¹ though on seekable input, some implementations can revert to reading by blocks and seek-back afterwards as an optimisation. ksh93 goes even further and remembers what was read and uses it for the next read invocation, though that's currently broken ² IFS whitespace characters , per POSIX being the characters classified as [:space:] in the locale and that happen to be in $IFS though in ksh88 (on which the POSIX specification is based) and in most shells, that's still limited to SPC, TAB and NL. The only POSIX compliant shell in that regard I found was yash . ksh93 and bash (since 5.0) also include other whitespace (such as CR, FF, VT...), but limited to the single-byte ones (beware on some systems like Solaris, that includes the non-breaking-space which is single byte in some locales)
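To see the practical difference between a bare read and the canonical IFS= read -r line , here is a quick bash demo (results shown on the following lines):

$ printf '  foo  bar  \n' | { read -r line; printf '<%s>\n' "$line"; }
<foo  bar>
$ printf '  foo  bar  \n' | { IFS= read -r line; printf '<%s>\n' "$line"; }
<  foo  bar  >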
{ "source": [ "https://unix.stackexchange.com/questions/209123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
209,183
I have 5 million files which take up about 1TB of storage space. I need to transfer these files to a third party. What's the best way to do this? I have tried reducing the size using .tar.gz, but even though my computer has 8GB RAM, I get an "out of system memory" error. Is the best solution to snail-mail the files over?
Additional information provided in the comments reveals that the OP is using a GUI method to create the .tar.gz file. GUI software often includes a lot more bloat than the equivalent command-line software, or performs additional unnecessary tasks for the sake of some "extra" feature such as a progress bar. It wouldn't surprise me if the GUI software is trying to collect a list of all the filenames in memory. It's unnecessary to do that in order to create an archive. The dedicated tools tar and gzip are definitely designed to work with streaming input and output which means that they can deal with input and output a lot bigger than memory. If you avoid the GUI program, you can most likely generate this archive using a completely normal everyday tar invocation like this: tar czf foo.tar.gz foo where foo is the directory that contains all your 5 million files. The other answers to this question give you a couple of additional alternative tar commands to try in case you want to split the result into multiple pieces, etc...
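If the resulting archive is too large to transfer in one piece, tar's output can be cut up on the fly; a sketch using split (the -b size suffix is GNU split syntax):

tar czf - foo | split -b 100G - foo.tar.gz.part-
# reassemble on the receiving end:
cat foo.tar.gz.part-* | tar xzf -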
{ "source": [ "https://unix.stackexchange.com/questions/209183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4430/" ] }
209,249
HP-UX ***** B.11.23 U ia64 **** unlimited-user license find . -type d -name *log* | xargs ls -la gives me the directory names (the ones which contain log in the directory name) followed by all files within that directory. The directories /var/opt/SID/application_a/log/ , /var/opt/SID/application_b/log/ , /var/opt/SID/application_c/log/ and so on contain log files. I want only the two latest logfiles to be listed by the ls command, which I usually find using ls -latr | tail -2 . The output has to be something like this.. /var/opt/SID/application_a/log/ -rw-rw-rw- 1 user1 user1 59698 Jun 11 2013 log1 -rw-rw-rw- 1 user1 user1 59698 Jun 10 2013 log2 /var/opt/SID/application_b/log/ -rw-rw-rw- 1 user1 user1 59698 Jun 11 2013 log1 -rw-rw-rw- 1 user1 user1 59698 Jun 10 2013 log2 /var/opt/SID/application_c/log/ -rw-rw-rw- 1 user1 user1 59698 Jun 11 2013 log1 -rw-rw-rw- 1 user1 user1 59698 Jun 10 2013 log2 find . -type d -name *log* | xargs ls -la | tail -2 does not give me the above result. What I get is a list of last two files of find . -type d -name *log* | xargs ls -la command. So can I pipe commands after a piped xargs ? How else do I query, to get the resultant list of files in the above format? find . -type d -name *log* | xargs sh -c "ls -ltr | tail -10" gives me a list of ten directory names inside the current directory which happens to be /var/opt/SID and that is also not what I want.
You are almost there. In your last command, you can use -I to do the ls correctly -I replace-str Replace occurrences of replace-str in the initial-arguments with names read from standard input.  Also, unquoted blanks do not terminate input items; instead the separator is the newline character.  Implies -x and -L 1 . So, with find . -type d -name "*log*" | xargs -I {} sh -c "echo {}; ls -la {} | tail -2" you will echo the dir found by find , then do the ls | tail on it.
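A small robustness note: substituting {} into the sh -c string breaks on directory names containing quotes or $ . Passing the name as a positional parameter instead is safer, something like:

find . -type d -name "*log*" -exec sh -c 'echo "$1"; ls -la "$1" | tail -2' sh {} \;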
{ "source": [ "https://unix.stackexchange.com/questions/209249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102998/" ] }
209,566
I created an img file via the following command: dd if=/dev/zero bs=2M count=200 > binary.img It's just a file with zeroes, but I can use it in fdisk and create a partition table: # fdisk binary.img Device does not contain a recognized partition table. Created a new DOS disklabel with disk identifier 0x51707f21. Command (m for help): p Disk binary.img: 400 MiB, 419430400 bytes, 819200 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x51707f21 and, let's say, one partition: Command (m for help): n Partition type p primary (0 primary, 0 extended, 4 free) e extended (container for logical partitions) Select (default p): p Partition number (1-4, default 1): First sector (2048-819199, default 2048): Last sector, +sectors or +size{K,M,G,T,P} (2048-819199, default 819199): Created a new partition 1 of type 'Linux' and of size 399 MiB. Command (m for help): w The partition table has been altered. Syncing disks. When I check the partition table, I get the following result: Command (m for help): p Disk binary.img: 400 MiB, 419430400 bytes, 819200 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x7f3a8a6a Device Boot Start End Sectors Size Id Type binary.img1 2048 819199 817152 399M 83 Linux So the partition exists. When I try to format this partition via gparted, I get the following error: I don't know why it looks for binary.img1 , and I have no idea how to format the partition from the command line. Does anyone know how to format it using an ext4 filesystem?
You can access the disk image and its individual partitions via the loopback feature. You have already discovered that some disk utilities will operate (reasonably) happily on disk images. However, mkfs is not one of them (but strangely mount is). Here is output from fdisk -lu binary.img : Disk binary.img: 400 MiB, 419430400 bytes, 819200 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes ... Device Boot Start End Sectors Size Id Type binary.img1 2048 819199 817152 399M 83 Linux To access the partition you've created you have a couple of choices The explicit route losetup --offset $((512*2048)) --sizelimit $((512*817152)) --show --find binary.img /dev/loop0 The output /dev/loop0 is the name of the loop device that has been allocated. The --offset parameter is just the partition's offset ( Start ) multiplied by the sector size ( 512 ). Whereas --sizelimit is the size of the partition, and you can calculate it in the following way: End-Start+1, which is 819199-2048+1=817152 , and that number also has to be multiplied by the sector size. You can then use /dev/loop0 as your reference to the partition: mkfs -t ext4 -L img1 /dev/loop0 mkdir -p /mnt/img1 mount /dev/loop0 /mnt/img1 ... umount /mnt/img1 losetup -d /dev/loop0 The implicit route losetup --partscan --show --find binary.img /dev/loop0 The output /dev/loop0 is the name of the primary loop device that has been allocated. In addition, the --partscan option tells the kernel to scan the device for a partition table and assign subsidiary loop devices automatically. In your case with the one partition you also get /dev/loop0p1 , which you can then use as your reference to the partition: mkfs -t ext4 -L img1 /dev/loop0p1 mkdir -p /mnt/img1 mount /dev/loop0p1 /mnt/img1 ... umount /mnt/img1 losetup -d /dev/loop0
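If the kpartx tool (from multipath-tools) happens to be installed, it automates the same loop-and-map steps in one go; a sketch (the exact /dev/mapper name comes from kpartx's own output):

kpartx -av binary.img          # maps each partition, e.g. /dev/mapper/loop0p1
mkfs -t ext4 /dev/mapper/loop0p1
...
kpartx -d binary.img           # tear the mappings down again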
{ "source": [ "https://unix.stackexchange.com/questions/209566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
209,579
I created an environment variable in one terminal window and tried to echo it in another terminal window. That displayed nothing. $TEST=hello After that I exported it and tried again to echo it in a different terminal window. The result was the same as before. export TEST but if I execute the same code at login (appending the code to the ~/.profile file) the variable can be used in any terminal window. What is happening here? What is the difference between executing code in a terminal and executing it at login?
export makes a variable something that will be included in child process environments. It does not affect other already existing environments. In general there isn't a way to set a variable in one terminal and have it automatically appear in another terminal, the environment is established for each process on its own. Adding it to your .profile makes it so that your environment will be setup to include that new variable each time you log in though. So it's not being exported from one shell to another, but instead is instructing a new shell to include it when it sets up the initial environment.
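You can see the inheritance rule in a single terminal; only the exported variable survives into a child process:

$ TEST=hello
$ bash -c 'echo "[$TEST]"'   # prints []      - not exported yet
$ export TEST
$ bash -c 'echo "[$TEST]"'   # prints [hello]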
{ "source": [ "https://unix.stackexchange.com/questions/209579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110279/" ] }
209,746
I am trying to use an alternate user (non-admin) to execute graphical software on my system. This alternate user has been named and given a UID and GID to match a remote system user of the same name. The UID is 500 so I believe that makes the user a 'non-login' user. Starting from Ubuntu logged into my main account, I open a terminal and su to the alternate user. I then attempt to execute the command to start the application and receive 'No protocol specified'. Is this because of the UID<1000, because of the su or because of the non-admin of the user? How can I get this user to execute the application with a GUI?
The problem is not occurring because of the UID of the user. 500 is just fine as a UID, and that UID doesn't make it a 'non-login' user except in the eyes of the default settings of some few display managers. The error message No protocol specified sounds like an application-specific error message, and an unhelpful one at that, but I am going to guess that the error is that the application is unable to contact your X11 display because it does not have permission to do so because it's running as a different user. Applications need a "magic cookie" (secret token) in order to talk to the X11 server so that other processes on the system running under other users cannot intrude on your display, create windows, and snoop your keystrokes. The other system user does not have access to this magic cookie because the permissions are set so that it is only accessible to the user who started the desktop environment (which is as it should be). Try this, running as your original user, to copy the X11 cookie to the other account: su - <otheruser> -c "unset XAUTHORITY; xauth add $(xauth list)" then run your application. You may also need to unset XAUTHORITY in that shell too. That command extracts the magic cookie ( xauth list ) from your main user and adds it ( xauth add ) to where the other user can get it.
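An alternative to copying the cookie is to grant that one local account access at the server level with xhost, run as your main user (this avoids touching XAUTHORITY, but the grant only lasts for the current session):

xhost +SI:localuser:otheruser    # replace 'otheruser' with the actual account name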
{ "source": [ "https://unix.stackexchange.com/questions/209746", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96932/" ] }
209,820
I accidentally deleted /etc/redhat-release file. How can I restore or create a new one? I have CentOS Linux release 7.0.1406 (Core).
You can use RPM to see what RPM that file belongs to: $ rpm -qf /etc/redhat-release centos-release-7-0.1406.el7.centos.2.5.x86_64 You can then fix it using yum : $ yum reinstall centos-release Might not work If the RPM that was used to do this install is no longer available then the above will not work: $ yum reinstall centos-release-7-0.1406.el7.centos.2.5.x86_64 ... Installed package centos-release-7-0.1406.el7.centos.2.5.x86_64 (from updates) not available. In this case you can look for that RPM in the CentOS Vault (I searched via Google for it), for example. NOTE: The specific package you want is here . You can then download the RPM directly and do the re-install using rpm or yum . $ wget http://vault.centos.org/centos/7.0.1406/updates/x86_64/Packages/centos-release-7-0.1406.el7.centos.2.5.x86_64.rpm Using RPM $ sudo rpm -Uvh --replacepkgs centos-release-7-0.1406.el7.centos.2.5.x86_64.rpm Preparing... ################################# [100%] Updating / installing... 1:centos-release-7-0.1406.el7.cento################################# [100%] Using YUM $ sudo yum reinstall centos-release-7-0.1406.el7.centos.2.5.x86_64.rpm Loaded plugins: dellsysid, fastestmirror, langpacks Examining centos-release-7-0.1406.el7.centos.2.5.x86_64.rpm: centos-release-7-0.1406.el7.centos.2.5.x86_64 Resolving Dependencies --> Running transaction check ---> Package centos-release.x86_64 0:7-0.1406.el7.centos.2.5 will be reinstalled --> Finished Dependency Resolution Dependencies Resolved ======================================================================================================================================================== Package Arch Version Repository Size ======================================================================================================================================================== Reinstalling: centos-release x86_64 7-0.1406.el7.centos.2.5 /centos-release-7-0.1406.el7.centos.2.5.x86_64 31 k Transaction Summary ======================================================================================================================================================== Reinstall 1 Package Total size: 31 k Installed size: 31 k Is this ok [y/d/N]: y Downloading packages: Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : centos-release-7-0.1406.el7.centos.2.5.x86_64 1/1 Verifying : centos-release-7-0.1406.el7.centos.2.5.x86_64 1/1 Installed: centos-release.x86_64 0:7-0.1406.el7.centos.2.5 Complete! Why didn't reinstall work? This is a snafu that was created when the individualized RPMs to specific versions of CentOS were deprecated. This directory (and version of CentOS) is deprecated. For normal users, you should use /7/ and not /7.0.1406/ in your path. Please see this FAQ concerning the CentOS release scheme: https://wiki.centos.org/FAQ/General If you know what you are doing, and absolutely want to remain at the 7.0.1406 level, go to http://vault.centos.org/ for packages. Please keep in mind that 7.0.1406 no longer gets any updates, nor any security fix's. --- Source: http://mirror.centos.org/centos/7.0.1406/readme So you typically have to reach into the CentOS Vault for packages that fall into this state.
{ "source": [ "https://unix.stackexchange.com/questions/209820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119499/" ] }
209,833
It appears that $[expr] performs arithmetic expansion just like $((expr)) . But I can't find any mention of $[ in the bash manual. This command gives no results: gunzip -c /usr/share/man/man1/bash.1.gz | grep -E '\$\[' What is this operator and is its behavior standardized anywhere? My bash version: GNU bash, version 3.2.51(1)-release (x86_64-apple-darwin13)
You can find old bash source here . In particular I downloaded bash-1.14.7.tar.gz . In the documentation/bash.txt you will find: Arithmetic Expansion Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. There are two formats for arithmetic expansion: $[expression] $((expression)) The references to $[ are gone in doc/bash.html from the bash-doc-2.0.tar.gz download and the NEWS file mentions that: The $[...] arithmetic expansion syntax is no longer supported, in favor of $((...)) . $((...)) is also the standard syntax for an arithmetic expansion, but may have been added to the standard later than the original Bash implementation. However, $[...] does still seem to work in Bash 5.0, so it's not completely removed.
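In versions that still accept the old syntax, the two forms behave identically, as a quick check shows:

$ echo $[3*4] $((3*4))
12 12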
{ "source": [ "https://unix.stackexchange.com/questions/209833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67823/" ] }
209,971
Suppose I've set the variable site and it needs to be printed by echo or printf ; but if I use single quotes to write something and want to use the variable, then how? Example: $ site=unix.stackexchange.com $ echo "visit:$site" visit:unix.stackexchange.com But if I use single quotes: $ echo 'visit:$site' visit:$site Then we know that '' is a strong quote and will not expand the variable. I've tried something: $ echo 'visit:"$site"' visit:"$site" but did not succeed. So, I am looking for a way to print the value inside a variable while using single quotes.
You can't expand variables in single quotes. You can end single quotes and start double quotes, though: echo 'visit:"'"$site"'"' Or, you can backslash double quotes inside of double quotes: echo "visit:\"$site\""
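A third option is printf , which keeps the static text in one quoted format string and passes the variable separately:

printf 'visit:"%s"\n' "$site"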
{ "source": [ "https://unix.stackexchange.com/questions/209971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
210,158
The Bash interpreter itself has options. For example, those mentioned on lines 22-23 of Bash's man page : OPTIONS All of the single-character shell options documented in the description of the set builtin command can be used as options when the shell is invoked. In addition, bash interprets the following options when it is invoked: -c ... -i ... -l ... -r ... I've used a few search patterns in Bash's man page like: /^\s*set /list Is it possible to print a list of these settings that are applied to the current shell?
printf %s\\n "$-" Will list the single letter options in a single string. That parameter can also be used like: set -f -- ${-:+"-$-"} echo *don\'t* *glob* *this* set +f "$@" To first disable shell -f ilename expansion while simultaneously saving a value for $- - if any - in $1 . Next, no globs occur, and last +f ilename expansion is once again enabled, and possibly also disabled. For example, if -f ilename expansion was already disabled when the value for $- was first saved, then its saved value would be (at least) : f And so when set is run again, it works out to: set +f -f Which just puts you right back where you started. set +o Will list all set table shell options (see Jason's answer for the shopt able - is that a word? - options) in a form that is safe for shell reentry. In that way, you can also do: state=$(set +o) set -some -crazy -options eval "$state" To save, change, and restore the shell options' state respectively. To handle shopt ions and set table options in one go: state=$(set +o; shopt -p) #do what you want with options here eval "$state" You can also call set without any arguments to add a list of all of the shell's currently set variables - also quoted for reentry to the shell. And you can - in bash - additionally add the command typeset -fp to also include all currently declared shell functions. You can lump it all together and eval when ready. You can even call alias without arguments for more of the same. That... might cover it, though. I guess there is "$@" - which you'd have to put in a bash array first, I suppose, before doing set . Nope, there's also trap . This one's a little funny. Usually: trap 'echo this is my trap' 0 (echo this is my subshell; trap) ...will just print this is my subshell because the subshell is a new process and gets its own set of trap s - and so doesn't inherit any trap s but those which its parent has explicitly ignored - (like trap '' INT ) . However: trap 'echo this is my trap' 0 save_traps=$(trap) trap behaves specially when it is the first and only command run in a command substitution subshell in that it will reproduce a list of the parent shell's currently set traps in a format which is quoted for safe reentry to the shell. And so you can do the save_traps , then set without arguments - and all of the rest already mentioned - to pretty much get a lock on all shell state. You might want to explicitly add export -p and readonly -p to restore original shell var attributes, though. Anyway, that's enough.
{ "source": [ "https://unix.stackexchange.com/questions/210158", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
210,171
File1: .tid.setnr := 1123 .tid.setnr := 3345 .tid.setnr := 5431 .tid.setnr := 89323 File2: .tid.info := 12 .tid.info := 3 .tid.info := 44 .tid.info := 60 Output file: .tid.info := 12 .tid.setnr := 1123 .tid.info := 3 .tid.setnr := 3345 .tid.info := 44 .tid.setnr := 5431 .tid.info := 60 .tid.setnr := 89323
Using paste : paste -d \\n file2 file1 Setting the delimiter to a newline ( -d \\n ) makes paste join line 1 of file2 with line 1 of file1 on separate lines, then line 2 with line 2, and so on, which produces exactly the interleaving shown in the expected output.
{ "source": [ "https://unix.stackexchange.com/questions/210171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118311/" ] }
210,174
Is it possible to change the background of the active (current) tmux tab? I'm using tmux 1.9 on Ubuntu 15.04. $ tmux -V tmux 1.9 I tried to do: set-option -g pane-active-border-fg red But the result was not changed: I expected the 3-bash* to have a red background.
You haven't set the window's active background colour; you've only set the active pane border colour. Try: set-window-option -g window-status-current-bg red
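For reference, if you later upgrade: tmux 2.9 and newer replaced these per-attribute options with combined -style options, so the equivalent there would be something like:

set-window-option -g window-status-current-style bg=red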
{ "source": [ "https://unix.stackexchange.com/questions/210174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45370/" ] }
210,202
I accidentally created a file with the name - (eg, seq 10 > - ). Then I tried to use less to view it, but it just hangs. I understand that this is happening because less - expects input from stdin , so it does not interpret the - as a file name. I tried less \- but it does not work either. So, is there any way to indicate less that - is a file and not stdin? The best I could get is: find -name '-' -exec less {} +
Just prefix it with ./ : less ./- Or use redirection: less < - Note that since - (as opposed to -x or --foo-- for instance) is considered a special filename rather than an option, the following doesn't work: less -- - # THIS DOES NOT WORK
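The same trick applies to getting rid of the file once you're done with it (and here -- does work, since rm does not treat - specially as an operand):

rm ./-
# or
rm -- -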
{ "source": [ "https://unix.stackexchange.com/questions/210202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40596/" ] }
210,228
I want to add a user to Red Hat Linux that will not use a password for logging in, but instead use a public key for ssh. This would be on the command line.
Start with creating a user: useradd -m -d /home/username -s /bin/bash username Create a key pair from the client which you will use to ssh from: ssh-keygen -t rsa Copy the public key /home/username/.ssh/id_rsa.pub onto the RedHat host into /home/username/.ssh/authorized_keys Set correct permissions on the files on the RedHat host: chown -R username:username /home/username/.ssh chmod 700 /home/username/.ssh chmod 600 /home/username/.ssh/authorized_keys Ensure that Public Key authentication is enabled on the RedHat host: grep PubkeyAuthentication /etc/ssh/sshd_config #should output: PubkeyAuthentication yes If not, change that directive to yes and restart the sshd service on the RedHat host. From the client start an ssh connection: ssh username@redhathost It should automatically look for the key id_rsa in ~/.ssh/ . You can also specify an identity file using: ssh -i ~/.ssh/id_rsa username@redhathost
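Steps 2 to 4 can usually be collapsed into one command run from the client, since ssh-copy-id creates the directory, appends the key and fixes the permissions for you:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@redhathost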
{ "source": [ "https://unix.stackexchange.com/questions/210228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119776/" ] }
210,448
I have a VM running CentOS 7 that I have not used for a long time. Today I launched it and tried to update the CentOS system to the latest version using yum update , but I got a lot of errors: Loaded plugins: fastestmirror, langpacks http//bay.uchicago.edu/centos/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http//mirror.cs.pitt.edu/centos/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http//mirror.anl.gov/pub/centos/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden Trying other mirror. http//mirror.pac-12.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http//centos.expedientevirtual.com/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. (Many other similar errors are omitted ...) Trying other mirror. Loading mirror speeds from cached hostfile * base: bay.uchicago.edu * epel: csc.mcs.sdsmt.edu * extras: mirror.ancl.hawaii.edu * nux-dextop: li.nux.ro * updates: centos-distro.cavecreek.net No packages marked for update I deleted the colon after http in the above error messages to avoid warnings. I think these errors might come from the CentOS version I am using: 7.0.1406 -- since current latest version is a new one, say, 7.0.1588 or something, the corresponding path does not exist and hence the HTTP error 404. But how to have my current CentOS automatically adjust the path name to the latest version and be able to update from the correct URL? Thanks.
Run the following command to clean the metadata: yum clean all This will clean all yum caches including cached mirrors of your yum repositories. On the next run it will get a new list of mirrors.
{ "source": [ "https://unix.stackexchange.com/questions/210448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98928/" ] }
210,528
Here are commands on a random file from pastebin : wget -qO - http://pastebin.com/0cSPs9LR | wc -l 350 wget -qO - http://pastebin.com/0cSPs9LR | sort -u | wc -l 287 wget -qO - http://pastebin.com/0cSPs9LR | sort | uniq | wc -l 287 wget -qO - http://pastebin.com/0cSPs9LR | sort | uniq -u | wc -l 258 The man pages are not clear on what the -u flag is doing. Any advice?
uniq with -u skips any lines that have duplicates. Thus: $ printf "%s\n" 1 1 2 3 | uniq 1 2 3 $ printf "%s\n" 1 1 2 3 | uniq -u 2 3 Usually, uniq prints lines at most once (assuming sorted input). This option actually prints lines which are truly unique (having not appeared again).
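Its counterpart -d prints only the lines that do have duplicates (once each), so -u and -d together partition sorted input:

$ printf "%s\n" 1 1 2 3 | uniq -d
1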
{ "source": [ "https://unix.stackexchange.com/questions/210528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119938/" ] }
210,615
To solve this problem I always have to use scp or rsync to copy the file to my local computer, open it, and simply copy the contents of the text file into my local clipboard. I was just wondering if there is a more clever way to do this without needing to copy the file.
Of course you have to read the file, but you could </dev/null ssh USER@REMOTE "cat file" | xclip -i though that still means to open a ssh connection and copy the contents of the file. But finally you don't see anything of it anymore ;) And if you are connecting from an OS X computer you use pbcopy instead: </dev/null ssh USER@REMOTE "cat file" | pbcopy PS: Instead of </dev/null you can use ssh -n but I don't like expressing things in terms of software options, where I can use the system to get the same. PPS: The </dev/null pattern for ssh is extremely useful for loops printf %s\\n '-l user host1' '-l user host2' | while read c do </dev/null ssh $c "ip address; hostname; id" done
{ "source": [ "https://unix.stackexchange.com/questions/210615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91570/" ] }
210,620
The goal: to be able to get an "infobox" to open in a terminal after some time, alarm-clock style (on a Debian-derived Linux box). However: > at now + 3 min dialog --infobox "Time to attend to matters\!" 6 33 produces no output, and a system email that says "Error opening terminal: unknown". So we prefix the dialog with some environment-variable stuff which did the trick in the past, so that the command after "at" now looks like this: TERM=linux DISPLAY=":0.0" dialog --infobox "Seek ye the truth\!" 6 33 Now the only thing produced is a system email filled with escape sequences, which I'll guess is the output of dialog itself? How can one get dialog to play well with "at"? (thankee!)
Jobs run by at have no controlling terminal, which is why you first got "Error opening terminal" and, after setting TERM, an email full of escape sequences: dialog's curses drawing went to the job's stdout, which at captures and mails to you. To make this work you have to point the job's input and output back at a real terminal device. One sketch, assuming the terminal you schedule from is still open when the alarm fires ( tty tells you its device, e.g. /dev/pts/3): T=$(tty) echo "TERM=$TERM dialog --infobox 'Time to attend to matters!' 6 33 <$T >$T" | at now + 3 min If you are in a graphical session anyway, a simpler and more robust route is an X11 popup, which needs no terminal at all, for example: echo 'DISPLAY=:0 xmessage "Time to attend to matters!"' | at now + 3 min ( zenity --info would work similarly if installed).
{ "source": [ "https://unix.stackexchange.com/questions/210620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98896/" ] }
210,948
I tried to use the ls command and got an error: bash: /bin/ls: cannot execute binary file What can I use instead of this command?
You can use the echo or find commands instead of ls : echo * or: find -printf "%M\t%u\t%g\t%p\n"
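Both workarounds dodge the broken binary: echo (and printf ) are bash builtins, so nothing external needs to execute, and find is a separate program; printf gives a tidier one-entry-per-line listing:

printf '%s\n' *

The error itself ( cannot execute binary file ) usually means /bin/ls is corrupted or built for a different architecture, so once you can look around, reinstalling coreutils is worth considering.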
{ "source": [ "https://unix.stackexchange.com/questions/210948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
210,982
Question: How do I launch a program while ensuring that its network access is bound via a specific network interface? Case: I want to access two distinct machines with the same IP (192.168.1.1), but accessible via two different network interfaces (eth1 and eth2). Example: net-bind -D eth1 -exec {Program 192.168.1.1} net-bind -D eth2 -exec {Program 192.168.1.1} The above is an approximation of what I'd like, inspired by the hardware binding done via primusrun and optirun . Challenge: As suggested in a related thread , the interfaces used are not chosen by the program, but rather by the kernel (Hence the pre-binding syntax in the above example). I've found some related solutions, which are unsatisfactory. They are based on binding network interfaces via user-specific network blacklisting; i.e., running the process as a user which can only access a single specific network interface.
For Linux, this has already been answered on Superuser - How to use different network interfaces for different processes? . The most popular answer uses an LD_PRELOAD trick to change the network binding for a program, but modern kernels support a much more flexible feature called 'network namespaces' which is exposed through the ip program. This answer shows how to use this. From my own experiments I have done the following (as root): # Add a new namespace called test_ns ip netns add test_ns # Set test to use eth0, after this point eth0 is not usable by programs # outside the namespace ip link set eth0 netns test_ns # Bring up eth0 inside test_ns ip netns exec test_ns ip link set eth0 up # Use dhcp to get an ipv4 address for eth0 ip netns exec test_ns dhclient eth0 # Ping google from inside the namespace ip netns exec test_ns ping www.google.co.uk It is also possible to manage network namespaces to some extent with the unshare and nsenter commands. This allows you to also create separate spaces for PIDs, users and mount points. For some more information see: Reliable way to jail child processes using `nsenter:` Namespaces in operation
{ "source": [ "https://unix.stackexchange.com/questions/210982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108221/" ] }
211,817
How do I copy the contents of a file in UNIX without displaying them? I don't want to use cat or vi to see the contents. I want to copy them to the clipboard so that I can paste them back into my Windows notepad. I can't copy the file from that server to another due to access restrictions.
X11 If using X11 (the most common GUI on traditional Unix or Linux based systems), to copy the content of a file to the X11 CLIPBOARD selection without displaying it, you can use the xclip or xsel utility. xclip -sel c < file Or: xsel -b < file to store the content of file as the CLIPBOARD X11 selection. To store the output of a command: mycommand | xclip -sel c mycommand | xsel -b Note that it should be stored using a UTF-8 encoding, otherwise pasting won't work properly. If the file is encoded using another character set, you should convert to UTF-8 first, like: <file iconv -f latin1 -t utf8 | xclip -sel c for a file encoded in latin1 / iso8859-1 . xsel doesn't work with binary data (it doesn't accept null bytes), but xclip does. To store it as a CUT_BUFFER (those are still queried by some applications like xterm when nothing claims the CLIPBOARD or PRIMARY X selections and don't need to have a process running to serve it like for selections), though you probably won't want or need to use that nowadays: xprop -root -format CUT_BUFFER0 8s -set CUT_BUFFER0 "$(cat file)" (removes the trailing newline characters from file ). GNU screen GNU screen has the readbuf command to slurp the content of a file into its own copy-paste buffer (which you paste with ^A] ). So: screen -X readbuf file Apple OS/X Though Apple OS/X can use X11, it doesn't by default unless you run an X11 application. You would be able to use xclip or xsel there as OS/X should synchronise the X11 CLIPBOARD selection with OS/X pasteboard buffers, but it would be a bit of a waste to start the X11 server just for that. On OS/X, you can use the pbcopy command to store arbitrary content into pasteboard buffers: pbcopy < file (the file's character encoding is expected to be the locale's one). To store the output of a command: mycommand | pbcopy Shells Most shells have their own copy-paste buffers. In emacs mode, cut and copy operations store the copied/cut text onto a stack which you yank/paste with Ctrl-Y , and cycle through with Alt+Y zsh CUTBUFFER/killring In zsh , the stack is stored in the $killring array and the top of the stack in the $CUTBUFFER variable, though those variables are only available from Zsh Line Editor (zle) widgets, and a few specialised widgets are the preferred way to manipulate those. Because those are only available via the ZLE, doing it with commands is a bit convoluted: zmodload zsh/mapfile zle-line-init() { if [ -n "$FILE_TO_COPY" ]; then zle copy-region-as-kill $mapfile[$FILE_TO_COPY] unset FILE_TO_COPY fi } zle -N zle-line-init file-copy() FILE_TO_COPY=$1:A The zle-line-init special widget is executed once at the start of each new command prompt. What that means is that the file will only be copied at the next prompt. For instance, if you do: file-copy file; sleep 2 The file will only be copied after those 2 seconds.
{ "source": [ "https://unix.stackexchange.com/questions/211817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77967/" ] }
211,834
I want to use sed to change a slash into a backslash and a slash, i.e. / -> \/ . But it does not work. Here is a small example: #!/bin/bash TEST=/etc/hallo echo $TEST echo $TEST | sed "s/hallo/bello/g" echo $TEST | sed "s/\//\\\//g" The output of the first three lines is as expected. But the last one does not work. Why? How to correct the last part?
Use single quotes for the expression you used: sed 's/\//\\\//g' In double quotes, \ has a special meaning, so you have to backslash it: sed "s/\//\\\\\//g" But it's cleaner to change the delimiter: sed 's=/=\\/=g' sed "s=/=\\\/=g"
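For example, running the delimiter-swapped version on the path from the question shows each slash picking up a leading backslash:
$ echo /etc/hallo | sed 's=/=\\/=g'
\/etc\/hallo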
{ "source": [ "https://unix.stackexchange.com/questions/211834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116283/" ] }
212,127
How can I override the file exists: warning from zsh? > echo > newfile.txt > echo > newfile.txt zsh: file exists: newfile.txt In these cases I prefer my shell to not complain and simply overwrite the file, like bash. Likewise, how to override the following: $ ls >> /tmp/testfile.txt zsh: no such file or directory: /tmp/testfile.txt
Does your setopt output mention noclobber ? If so, that's it, just setopt clobber The documentation for the option is at http://zsh.sourceforge.net/Doc/Release/Options.html#index-file-clobbering_002c-allowing
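If you'd rather keep noclobber set globally, zsh also lets you override it for a single redirection with >| (or >! ), and >>| (or >>! ) covers the append case from the question:
echo >| newfile.txt        # clobbers even with noclobber set
ls >>| /tmp/testfile.txt   # creates the file for appending if it doesn't exist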
{ "source": [ "https://unix.stackexchange.com/questions/212127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38047/" ] }
212,183
I need to check a variable's existence in an if statement. Something to the effect of: if [ -v $somevar ] then echo "Variable somevar exists!" else echo "Variable somevar does not exist!" And the closest question to that was this , which doesn't actually answer my question.
In modern bash (version 4.2 and above): [[ -v name_of_var ]] From help test : -v VAR, True if the shell variable VAR is set
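On bash older than 4.2 (or in plain sh), a portable sketch is the ${parameter+word} expansion, which yields x only when the variable is set, even to the empty string:
if [ -n "${somevar+x}" ]; then
    echo "Variable somevar exists!"
else
    echo "Variable somevar does not exist!"
fi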
{ "source": [ "https://unix.stackexchange.com/questions/212183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
212,329
How can I hide a password in shell scripts? There are a number of scripts that access the database. If someone opens a script, they also become aware of the username and password. So if anyone knows how to hide them, please let me know. I have one way: place the password in a file and make the file hidden so that no one can access it (change the permissions, and use the file in the script when accessing the database).
First , as several people have already said, keeping the credentials separate from the script is essential. (In addition to increased security, it also means that you can re-use the same script for several systems with different credentials.) Second , you should consider not only the security of the credentials but also the impact if/when those credentials are compromised. You shouldn't have just one password for all access to the database, you should have different credentials with different levels of access. You could, for instance, have one DB user that has the ability to perform a search in the database - that user should have read-only access. Another user may have permission to insert new records, but not to delete them. A third one may have permission to delete records. In addition to restricting the permissions for each account, you should also have restriction on where each account can be used from. For instance, the account used by your web server should not be allowed to connect from any other IP address than that of the webserver. An account with full root permissions to the database should be very restricted indeed in terms of where it may connect from and should never be used other than interactively. Also, consider using stored procedures in the database to restrict exactly what can be done by each account. These restrictions need to be implemented on the DB-server side of the system so that even if the client-side is compromised, the restrictions cannot be altered from it. (And, obviously, the DB server needs to be protected with firewalls etc in addition to the DB configuration...) In the case of a DB account that is only permitted limited read-only access, and only from a particular IP address, you might not need any further credentials than that, depending on the sensitivity of the data and the security of the host the script is being run from. One example may be a search form on your web site, which can be run with a user that is only allowed to use a stored procedure which extracts only the information that will be presented on the web page. In this case, adding a password does not really confer any extra security, since that information is already meant to be public, and the user can't access any other data that would be more sensitive. Also, make sure that the connection to the database is made using TLS, or anybody listening on the network can get your credentials. Third , consider what kind of credentials to use. Passwords are just one form, and not the most secure. You could instead use some form of public/private key pair, or AD/PAM or the like. Fourth , consider the conditions under which the script will be run: If it is run interactively, then you should enter the password, or the password to the private key, or the private key, or be logged in with a valid Kerberos ticket, when you run it - in other words, the script should get its credentials directly from you at the time that you run it, instead of reading them from some file. If it is run from a webserver, consider setting up the credentials at the time when you start the webserver. A good example here is SSL certificates - they have a public certificate and a private key, and the private key has a password. You may store the private key on the web server, but you still need to enter the password to it when you start Apache. You could also have the credentials on some kind of hardware, such as a physical card or an HSM, that can be removed or locked once the server is started. 
(Of course, the downside to this method is that the server can't restart on its own if something happens. I would prefer this to the risk of having my system compromised, but your mileage may vary...) If the script is being run from cron, this is the hard part. You don't want to have the credentials lying around anywhere on your system where someone can access them - but you do want to have them lying around so that your script can access them, right? Well, not quite right. Consider exactly what the script is doing. What permissions does it need on the database? Can it be restricted so that it doesn't matter if the wrong person connects with those permissions? Can you instead run the script directly on the DB server that nobody else has access to, instead of from the server that does have other users? If, for some reason that I can't think of, you absolutely must have the script running on an insecure server and it must be able to do something dangerous/destructive... now is a good time to re-think your architecture. Fifth , if you value the security of your database, you should not be running these scripts on servers that other people have access to. If someone is logged in on your system, then they will have the possibility to get at your credentials. For instance, in the case of a web server with an SSL certificate, there is at least a theoretical possibility of someone being able to gain root and access the httpd process's memory area and extract the credentials. There has been at least one exploit in recent times where this could be done over SSL, not even requiring the attacker to be logged in. Also, consider using SELinux or AppArmor or whatever is available for your system to restrict which users can do what. They will make it possible for you to disallow users from even trying to connect to the database, even if they do manage to gain access to the credentials. If all this sounds like overkill to you , and you can't afford or don't have the time to do it - then, in my (arrogant and elitist) opinion, you should not be storing anything important or sensitive in your database. And if you're not storing anything important or sensitive, then where you store your credentials is also not important - in which case, why use a password at all? Lastly , if you absolutely cannot avoid storing some kind of credentials, you could have the credentials read-only and owned by root and root could grant ownership on an exceedingly temporary basis when requested to do so by a script (because your script should not be run as root unless absolutely necessary, and connecting to a database does not make it necessary). But it's still not a good idea.
{ "source": [ "https://unix.stackexchange.com/questions/212329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120858/" ] }
212,360
I'm using URxvt 9.20 on debian jessie and I'm looking for a way to copy & paste text like I was used to with the gnome-terminal ( Ctrl + Insert for copying, Shift + Insert for pasting). It works within different urxvt consoles, it does not work between e.g. urxvt and iceweasel though. I tried according to the manual on archlinux , but it won't work (even though I actually don't want to use Shift + Ctrl + C / V it was worth a try). .Xresources: ! ****************** ! urxvt config ! ****************** ! Disable Perl extension ! If you do not use the Perl extension features, you can improve the security ! and speed by disabling Perl extensions completely. URxvt.perl-ext: URxvt.perl-ext-common: ! Font spacing ! By default the distance between characters can feel too wide. It's controlled ! by this entry: ! URxvt.letterSpace: -1 ! -- Fonts -- ! URxvt.font:xft:Monospace:pixelsize=13 URxvt.boldfont:xft:Monospace-Bold:pixelsize=13 !URxvt*font: -xos4-terminus-medium-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:pixelsize:12 !URxvt*boldFont: -xos4-terminus-bold-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:bold:pixelsize:12 !URxvt*italicFont: xft:Bitstream Vera Sans Mono:italic:autohint=true:pixelsize=12 !URxvt*boldItalicFont: xft:Bitstream Vera Sans Mono:bold:italic:autohint=true:pixelsize=12 ! Disable scrollbar !URxvt*scrollBar: false ! Scrollbar style - rxvt (default), plain (most compact), next, or xterm URxvt.scrollstyle: plain ! Background color !URxvt*background: black URxvt*background: #1B1B1B ! Font color !URxvt*foreground: white URxvt*foreground: #00FF00 ! Other colors URxvt*color0: black !URxvt*color1: red3 URxvt*color1: #CD0000 URxvt*color2: green3 !URxvt*color3: yellow3 URxvt*color3: #C4A000 URxvt*color4: blue2 !URxvt*color4: #3465A4 URxvt*color5: magenta3 URxvt*color6: cyan3 URxvt*color7: gray90 URxvt*color8: grey50 URxvt*color9: red URxvt*color10: green URxvt*color11: yellow !URxvt*color12: blue URxvt*color12: #3465A4 URxvt*color13: magenta URxvt*color14: cyan URxvt*color15: white ! ****************** ! /urxvt config ! ******************
Unfortunately, the X window system has several different copy-paste mechanisms . Rxvt, like most old-school X applications, uses the primary selection. Generally, when you select something with the mouse, it's automatically copied to the primary selection, and when you middle-click to paste, that pastes the primary selection. Ctrl + C and Ctrl + V (or other key bindings) in applications using modern GUI toolkits, such as Gnome-terminal and Firefox, copy/paste from the clipboard. There are tools to facilitate working with the selections. In particular, if you just want to have a single selection that's copied to whether you select with the mouse or press Ctrl + C , you can run autocutsel (start it from your .xinitrc or from your desktop environment's startup programs), which detects when something is copied to one of the selections and automatically copies it to the other.
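A sketch of that setup, assuming autocutsel is installed (these are its usual flags; check the man page on your system). Started from ~/.xinitrc or your session's autostart, the two instances keep the CLIPBOARD and PRIMARY selections in sync via the cut buffer:
autocutsel -fork
autocutsel -selection PRIMARY -fork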
{ "source": [ "https://unix.stackexchange.com/questions/212360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121057/" ] }
212,438
I'm wondering how to stop all units that are grouped together by the same target. My setup is as follows. I have several unit config files that read: [Unit] ... [Service] ... [Install] WantedBy=mycustom.target When I run # systemctl start mycustom.target the units that "are wanted by" mycustom.target start correctly. Now, I would also like to be able to stop all units that are wanted by mycustom.target . I tried: # systemctl stop mycustom.target This doesn't do anything though. Is there a way to make this work without having to stop all units that are (explicitly) wanted by the same target?
Use the PartOf= directive. Configures dependencies similar to Requires=, but limited to stopping and restarting of units. When systemd stops or restarts the units listed here, the action is propagated to this unit. Note that this is a one-way dependency — changes to this unit do not affect the listed units. PartOf=mycustom.target
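Applied to the units from the question, each one would gain a PartOf= line alongside its existing WantedBy= , a sketch:
[Unit]
...
PartOf=mycustom.target

[Service]
...

[Install]
WantedBy=mycustom.target
With that in place, systemctl stop mycustom.target propagates the stop to every unit that declares itself part of the target.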
{ "source": [ "https://unix.stackexchange.com/questions/212438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67525/" ] }
212,688
It's a serious question. I test some awk scripts and I need files with a newline in their names. Is it possible to add a newline into a filename with mv ? I know I can do this with touch : touch "foo bar" With touch I added the newline character via copy and paste. But I can't write foo Return bar in my shell. How can I rename a file to have a newline in the filename? Edit 2015/06/28; 07:08 pm To add a newline in zsh I can use Alt + Return
It is a bad idea (to have strange characters in file names) but you could do mv somefile.txt "foo bar" (you could also have done mv somefile.txt "$(printf "foo\nbar")" or mv somefile.txt foo$'\n'bar , etc... details are specific to your shell. I'm using zsh ) Read more about globbing , e.g. glob(7) . Details could be shell-specific. But understand that /bin/mv is given (by your shell), via execve(2) , an expanded array of arguments: argument expansion and globbing are the responsibility of the invoking shell. And you could even code a tiny C program to do the same: #include <stdio.h> #include <stdlib.h> int main() { if (rename ("somefile.txt", "foo\nbar")) { perror("rename somefile.txt"); exit(EXIT_FAILURE); } return 0; } Save the above program in foo.c , compile it with gcc -Wall foo.c -o foo then run ./foo Likewise, you could code a similar script in Perl, Ruby, Python, Ocaml, etc.... But that is a bad idea. Avoid newlines in filenames (they will confuse the user, and they could break many scripts). Actually, I even recommend using only non-accentuated letters, digits, and +-/._% characters (with / being the directory separator) in file paths. "Hidden" files (starting with . ) should be used with caution and parsimony. I believe using any kind of space in a file name is a mistake. Use an underscore instead (e.g. foo/bar_bee1.txt ) or a minus (e.g. foo/bar-bee1.txt )
{ "source": [ "https://unix.stackexchange.com/questions/212688", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107084/" ] }
212,872
I want to see the last N commands in my history . I thought history | tail -n 5 would do it, but I noticed that a multiline command counts for as many lines as it has. $ echo "hello how are you" $ history | tail -2 how are you" 1051 history | tail -2 So my question is: do I have to parse the output of the command to accomplish this?
I found it! history [n] An argument of n lists only the last n lines. $ echo "hello how are you" $ history 2 1060 echo "hello how are you" 1061 history 2
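The fc builtin offers the same thing if you prefer it, e.g. to list the last five entries:
fc -l -5    # list the last 5 history entries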
{ "source": [ "https://unix.stackexchange.com/questions/212872", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40596/" ] }
212,894
I've encountered both http_proxy and HTTP_PROXY . Are both forms equivalent? Does one of them take precedence over the other?
There is no central authority who assigns an official meaning to environment variables before applications can use them. POSIX defines the meaning of some variables ( PATH , TERM , …) and lists several more in a non-normative way as being in common use, all of them in uppercase. http_proxy and friends aren't among them. Unlike basically all conventional environment variables used by many applications, http_proxy , https_proxy , ftp_proxy and no_proxy are commonly lowercase. I don't recall any program that only understands them in uppercase; I can't even find one that tries them in uppercase. Many programs use the lowercase variant only, including lynx, wget, curl, perl LWP, perl WWW::Search, python urllib/urllib2, etc. So for these variables, the right form is the lowercase one. The lowercase name dates back at least to CERN libwww 2.15 in March 1994 (thanks to Stéphane Chazelas for locating this). I don't know what motivated the choice of lowercase, which would have been unusual even then.
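In practice that means exporting the lowercase forms; for example (the proxy host and port here are placeholders):
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1
curl http://www.example.org/    # curl, wget, lynx, etc. pick these up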
{ "source": [ "https://unix.stackexchange.com/questions/212894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79271/" ] }
213,185
What's the right approach to handle restarting a service in case one of its dependencies fails on startup (but succeeds after retry). Here's a contrived repro to make the problem clearer. a.service (simulates failure on first try and success on second try) [Unit] Description=A [Service] ExecStartPre=/bin/sh -x -c "[ -f /tmp/success ] || (touch /tmp/success && sleep 10)" ExecStart=/bin/true TimeoutStartSec=5 Restart=on-failure RestartSec=5 RemainAfterExit=yes b.service (trivially succeeds after A starts) [Unit] Description=B After=a.service Requires=a.service [Service] ExecStart=/bin/true RemainAfterExit=yes Restart=on-failure RestartSec=5 Let's start b: # systemctl start b A dependency job for b.service failed. See 'journalctl -xe' for details. Logs: Jun 30 21:34:54 debug systemd[1]: Starting A... Jun 30 21:34:54 debug sh[1308]: + '[' -f /tmp/success ']' Jun 30 21:34:54 debug sh[1308]: + touch /tmp/success Jun 30 21:34:54 debug sh[1308]: + sleep 10 Jun 30 21:34:59 debug systemd[1]: a.service start-pre operation timed out. Terminating. Jun 30 21:34:59 debug systemd[1]: Failed to start A. Jun 30 21:34:59 debug systemd[1]: Dependency failed for B. Jun 30 21:34:59 debug systemd[1]: Job b.service/start failed with result 'dependency'. Jun 30 21:34:59 debug systemd[1]: Unit a.service entered failed state. Jun 30 21:34:59 debug systemd[1]: a.service failed. Jun 30 21:35:04 debug systemd[1]: a.service holdoff time over, scheduling restart. Jun 30 21:35:04 debug systemd[1]: Starting A... Jun 30 21:35:04 debug systemd[1]: Started A. Jun 30 21:35:04 debug sh[1314]: + '[' -f /tmp/success ']' A has been successfully started but B is left in a failed state and won't retry. EDIT I added the following to both services and now B successfully starts when A starts, but I can't explain why. [Install] WantedBy=multi-user.target Why would this affect the relationship between A and B? EDIT2 Above "fix" doesn't work in systemd 220. systemd 219 debug logs systemd219 systemd[1]: Trying to enqueue job b.service/start/replace systemd219 systemd[1]: Installed new job b.service/start as 3454 systemd219 systemd[1]: Installed new job a.service/start as 3455 systemd219 systemd[1]: Enqueued job b.service/start as 3454 systemd219 systemd[1]: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch oldcoreos systemd219 systemd[1]: Forked /bin/sh as 1502 systemd219 systemd[1]: a.service changed dead -> start-pre systemd219 systemd[1]: Starting A... systemd219 systemd[1502]: Executing: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmpoldcoreos systemd219 sh[1502]: + '[' -f /tmp/success ']' systemd219 sh[1502]: + touch /tmp/success systemd219 sh[1502]: + sleep 10 systemd219 systemd[1]: a.service start-pre operation timed out. Terminating. systemd219 systemd[1]: a.service changed start-pre -> final-sigterm systemd219 systemd[1]: Child 1502 belongs to a.service systemd219 systemd[1]: a.service: control process exited, code=killed status=15 systemd219 systemd[1]: a.service got final SIGCHLD for state final-sigterm systemd219 systemd[1]: a.service changed final-sigterm -> failed systemd219 systemd[1]: Job a.service/start finished, result=failed systemd219 systemd[1]: Failed to start A. systemd219 systemd[1]: Job b.service/start finished, result=dependency systemd219 systemd[1]: Dependency failed for B. systemd219 systemd[1]: Job b.service/start failed with result 'dependency'. systemd219 systemd[1]: Unit a.service entered failed state. systemd219 systemd[1]: a.service failed. 
systemd219 systemd[1]: a.service changed failed -> auto-restart systemd219 systemd[1]: a.service: cgroup is empty systemd219 systemd[1]: a.service: cgroup is empty systemd219 systemd[1]: a.service holdoff time over, scheduling restart. systemd219 systemd[1]: Trying to enqueue job a.service/restart/fail systemd219 systemd[1]: Installed new job a.service/restart as 3718 systemd219 systemd[1]: Installed new job b.service/restart as 3803 systemd219 systemd[1]: Enqueued job a.service/restart as 3718 systemd219 systemd[1]: a.service scheduled restart job. systemd219 systemd[1]: Job b.service/restart finished, result=done systemd219 systemd[1]: Converting job b.service/restart -> b.service/start systemd219 systemd[1]: a.service changed auto-restart -> dead systemd219 systemd[1]: Job a.service/restart finished, result=done systemd219 systemd[1]: Converting job a.service/restart -> a.service/start systemd219 systemd[1]: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch oldcoreos systemd219 systemd[1]: Forked /bin/sh as 1558 systemd219 systemd[1]: a.service changed dead -> start-pre systemd219 systemd[1]: Starting A... systemd219 systemd[1]: Child 1558 belongs to a.service systemd219 systemd[1]: a.service: control process exited, code=exited status=0 systemd219 systemd[1]: a.service got final SIGCHLD for state start-pre systemd219 systemd[1]: About to execute: /bin/true systemd219 systemd[1]: Forked /bin/true as 1561 systemd219 systemd[1]: a.service changed start-pre -> running systemd219 systemd[1]: Job a.service/start finished, result=done systemd219 systemd[1]: Started A. systemd219 systemd[1]: Child 1561 belongs to a.service systemd219 systemd[1]: a.service: main process exited, code=exited, status=0/SUCCESS systemd219 systemd[1]: a.service changed running -> exited systemd219 systemd[1]: a.service: cgroup is empty systemd219 systemd[1]: About to execute: /bin/true systemd219 systemd[1]: Forked /bin/true as 1563 systemd219 systemd[1]: b.service changed dead -> running systemd219 systemd[1]: Job b.service/start finished, result=done systemd219 systemd[1]: Started B. systemd219 systemd[1]: Starting B... systemd219 systemd[1]: Child 1563 belongs to b.service systemd219 systemd[1]: b.service: main process exited, code=exited, status=0/SUCCESS systemd219 systemd[1]: b.service changed running -> exited systemd219 systemd[1]: b.service: cgroup is empty systemd219 sh[1558]: + '[' -f /tmp/success ']' systemd 220 debug logs systemd220 systemd[1]: b.service: Trying to enqueue job b.service/start/replace systemd220 systemd[1]: a.service: Installed new job a.service/start as 4846 systemd220 systemd[1]: b.service: Installed new job b.service/start as 4761 systemd220 systemd[1]: b.service: Enqueued job b.service/start as 4761 systemd220 systemd[1]: a.service: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmp/success && sleep 10)' systemd220 systemd[1]: a.service: Forked /bin/sh as 2032 systemd220 systemd[1]: a.service: Changed dead -> start-pre systemd220 systemd[1]: Starting A... systemd220 systemd[2032]: a.service: Executing: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmp/success && sleep 10)' systemd220 sh[2032]: + '[' -f /tmp/success ']' systemd220 sh[2032]: + touch /tmp/success systemd220 sh[2032]: + sleep 10 systemd220 systemd[1]: a.service: Start-pre operation timed out. Terminating. 
systemd220 systemd[1]: a.service: Changed start-pre -> final-sigterm systemd220 systemd[1]: a.service: Child 2032 belongs to a.service systemd220 systemd[1]: a.service: Control process exited, code=killed status=15 systemd220 systemd[1]: a.service: Got final SIGCHLD for state final-sigterm. systemd220 systemd[1]: a.service: Changed final-sigterm -> failed systemd220 systemd[1]: a.service: Job a.service/start finished, result=failed systemd220 systemd[1]: Failed to start A. systemd220 systemd[1]: b.service: Job b.service/start finished, result=dependency systemd220 systemd[1]: Dependency failed for B. systemd220 systemd[1]: b.service: Job b.service/start failed with result 'dependency'. systemd220 systemd[1]: a.service: Unit entered failed state. systemd220 systemd[1]: a.service: Failed with result 'timeout'. systemd220 systemd[1]: a.service: Changed failed -> auto-restart systemd220 systemd[1]: a.service: cgroup is empty systemd220 systemd[1]: a.service: Failed to send unit change signal for a.service: Transport endpoint is not connected systemd220 systemd[1]: a.service: Service hold-off time over, scheduling restart. systemd220 systemd[1]: a.service: Trying to enqueue job a.service/restart/fail systemd220 systemd[1]: a.service: Installed new job a.service/restart as 5190 systemd220 systemd[1]: a.service: Enqueued job a.service/restart as 5190 systemd220 systemd[1]: a.service: Scheduled restart job. systemd220 systemd[1]: a.service: Changed auto-restart -> dead systemd220 systemd[1]: a.service: Job a.service/restart finished, result=done systemd220 systemd[1]: a.service: Converting job a.service/restart -> a.service/start systemd220 systemd[1]: a.service: About to execute: /bin/sh -x -c '[ -f /tmp/success ] || (touch /tmp/success && sleep 10)' systemd220 systemd[1]: a.service: Forked /bin/sh as 2132 systemd220 systemd[1]: a.service: Changed dead -> start-pre systemd220 systemd[1]: Starting A... systemd220 systemd[1]: a.service: Child 2132 belongs to a.service systemd220 systemd[1]: a.service: Control process exited, code=exited status=0 systemd220 systemd[1]: a.service: Got final SIGCHLD for state start-pre. systemd220 systemd[1]: a.service: About to execute: /bin/true systemd220 systemd[1]: a.service: Forked /bin/true as 2136 systemd220 systemd[1]: a.service: Changed start-pre -> running systemd220 systemd[1]: a.service: Job a.service/start finished, result=done systemd220 systemd[1]: Started A. systemd220 systemd[1]: a.service: Child 2136 belongs to a.service systemd220 systemd[1]: a.service: Main process exited, code=exited, status=0/SUCCESS systemd220 systemd[1]: a.service: Changed running -> exited systemd220 systemd[1]: a.service: cgroup is empty systemd220 systemd[1]: a.service: cgroup is empty systemd220 systemd[1]: a.service: cgroup is empty systemd220 systemd[1]: a.service: cgroup is empty systemd220 sh[2132]: + '[' -f /tmp/success ']'
I'll try to summarize my findings for this issue in case someone comes across this, as information on this topic is scant. Restart=on-failure only applies to process failures (it does not apply to failures due to dependency failures) The fact that dependent failed units get restarted under certain conditions when a dependency successfully restarts was a bug in systemd < 220: http://lists.freedesktop.org/archives/systemd-devel/2015-July/033513.html If there's even a small chance that a dependency might fail on start and you care about resiliency, don't use Before / After and instead perform a check on some artifact that the dependency produces e.g. ExecStartPre=/usr/bin/test -f /some/thing Restart=on-failure RestartSec=5s You could even use systemctl is-active <dependency> . Very hacky, but I haven't found any better options. In my opinion, not having a way to handle dependency failures is a flaw in systemd.
{ "source": [ "https://unix.stackexchange.com/questions/213185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121599/" ] }
213,303
In the bash shell, ls can use logical-OR functionality through extended globbing (of course I could also do ls name1 name2 , but my real examples are more complicated): ls @(name1|name2) Is there a way to do this using find ? My naive implementation: find . -maxdepth 1 -name @("name1"|"name2") doesn't work (it just outputs nothing)
You can use -o for logical OR . Beware however that all find predicates have logical values, so you'll usually need to group OR ed things together with parens. And since parens also have a meaning to the shell, you'll also need to escape them: find /some/dir -maxdepth 1 \( -name '*.c' -o -name '*.h' \) -print
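Applied to the names from the question, that becomes:
find . -maxdepth 1 \( -name "name1" -o -name "name2" \)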
{ "source": [ "https://unix.stackexchange.com/questions/213303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90937/" ] }
213,311
Where does the name of nm come from? The IEEE standard defines nm as: nm - write the name list of an object file Is nm an abbreviated form of the word name / names ? Or does it have a completely different origin?
nm is short for "name": it prints the name list, i.e. the symbol table, of an object file, and the earliest Unix manuals already described it exactly that way ("nm - print name list"). So yes, it is an abbreviated form of the word name / names , kept to two letters in the same terse style as cc , ld and as ; as far as I know there is no other hidden origin.
{ "source": [ "https://unix.stackexchange.com/questions/213311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109924/" ] }
213,330
tail -f x.log I use this command to see a growing log file in the command prompt. I am interested only in seeing the log lines that are written to the file after running tail -f , and not interested in the logs that were written to the file before doing tail -f . But the tail -f command, on start, takes the last 10 lines and displays them. This confuses me: at times I cannot tell whether these logs are freshly generated or old. So, how can I customize tail -f to output only the new entries?
You can try: tail -n0 -f x.log From man page : -n, --lines= K output the last K lines, instead of the last 10; or use -n +K to output lines starting with the Kth
{ "source": [ "https://unix.stackexchange.com/questions/213330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
213,364
I want to set up an Apache Spark Cluster but I am not able to communicate from the worker machine to the master machine at port 7077 (where the Spark Master is running). So I tried to telnet to the master from the worker machine and this is what I am seeing: root@worker:~# telnet spark 7077 Trying 10.xx.xx.xx... Connected to spark. Escape character is '^]'. Connection closed by foreign host. The command terminated with "Connection closed by foreign host" immediately. It does not time out or anything. I verified that the host is listening on the port and since telnet output shows "Connected to spark." — this also means that the connection is successful. What could be the reason for such behavior? I am wondering if this closing of the connection could be the reason why I am not able to communicate from my worker machine to the master.
The process that is listening for connections on port 7077 is accepting the connection and then immediately closing the connection. The problem lies somewhere in that application's code or configuration, not in the system itself.
{ "source": [ "https://unix.stackexchange.com/questions/213364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121553/" ] }
213,530
I am currently trying to make a script that creates bytes that will be piped as input to netcat. Here is the idea of the script: (perl -e "print \"$BYTES\x00\"; cat file; perl -e "print \"More bytes\"x16 . \"\r\n\"";) | netcat ip port I tried both using a subshell and using command substitution (ex. with $()) to execute the commands. However I fail to understand why the output of the script when using command substitution is wrong. I suspect that command substitution incorrectly pipes its output when executing multiple commands. Can someone explain to me why this is so? EDIT Here is the variant that used command substitution: $(perl -e "print \"$BYTES\x00\"; cat file; perl -e "print \"More bytes\"x16 . \"\r\n\"";) | netcat ip port
Okay, let's break this down. A subshell executes its contents in a chain (i.e., it groups them). This actually makes intuitive sense as a subshell is created simply by surrounding the chain of commands with () . But, aside from the contents of the subshell being grouped together in execution, you can still use a subshell as if it were a single command. That is, a subshell still has an stdin , stdout and stderr so you can pipe things to and from a subshell. On the other hand, command substitution is not the same thing as simply chaining commands together. Rather, command substitution is meant to act a bit like a variable access but with a function call. Variables, unlike commands, do not have the standard file descriptors so you cannot pipe anything to or from a variable (generally speaking), and the same is true of command substitutions. To try to make this more clear, what follows are a set of maybe-unclear (but accurate) examples and a set of, what I think may be, more easily-understood examples. Let's say the date -u command gives the following: Thu Jul 2 13:42:27 UTC 2015 But, we want to manipulate the output of this command. So, let's pipe it into something like sed : user@host~> date -u | sed -e 's/ / /g' Thu Jul 2 13:42:27 UTC 2015 Wow, that was fun! The following is completely equivalent to above (barring some environment differences that you can read about in the man pages about your shell): user@host~> (date -u) | sed -e 's/ / /g' Thu Jul 2 13:42:27 UTC 2015 That should be no surprise since all we did was group date -u . However, if we do the following, we are going to get something that may seem a bit odd at first: user@host~> $(date -u) | sed -e 's/ / /g' command not found: Thu This is because $(date -u) is equivalent to typing out exactly what date -u outputs. So the above is equivalent to the following: user@host~> Thu Jul 2 13:42:27 UTC 2015 | sed -e 's/ / /g' Which will, of course, error out because Thu is not a command (at least not one I know of); and it certainly doesn't pipe anything to stdout (so sed will never get any input). But, since we know that command substitutions act like variables, we can easily fix this problem because we know how to pipe the value of a variable into another command: user@host~> echo $(date -u) | sed -e 's/ / /g' Thu Jul 2 13:42:27 UTC 2015 But, as with any variable in bash , you should probably quote command substitutions with "" . Now, for the perhaps-simpler example; consider the following: user@host~> pwd /home/hypothetical user@host~> echo pwd pwd user@host~> echo "$(pwd)" /home/hypothetical user@host~> echo "$HOME" /home/hypothetical user@host~> echo (pwd) error: your shell will tell you something weird that roughly means “Whoa! you tried to have me echo something that isn't text!” user@host~> (pwd) /home/hypothetical I am not sure how to describe it any simpler than that. The command substitution works just like a variable access where the subshell still operates like a command.
{ "source": [ "https://unix.stackexchange.com/questions/213530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84969/" ] }
213,799
Is it possible in an interactive bash shell to enter a command that outputs some text so that it appears at the next command prompt, as if the user had typed in that text at that prompt ? I want to be able to source a script that will generate a command-line and output it so that it appears when the prompt returns after the script ends so that the user can optionally edit it before pressing enter to execute it. This can be achieved with xdotool but that only works when the terminal is in an X window and only if it's installed. [me@mybox] 100 $ xdotool type "ls -l" [me@mybox] 101 $ ls -l <--- cursor appears here! Can this be done using bash only?
With zsh , you can use print -z to place some text into the line editor buffer for the next prompt: print -z echo test would prime the line editor with echo test which you can edit at the next prompt. I don't think bash has a similar feature; however, on many systems, you can prime the terminal device input buffer with the TIOCSTI ioctl() : perl -e 'require "sys/ioctl.ph"; ioctl(STDIN, &TIOCSTI, $_) for split "", join " ", @ARGV' echo test would insert echo test into the terminal device input buffer, as if received from the terminal. A more portable variation on @mike's Terminology approach, one that doesn't sacrifice security, would be to send the terminal emulator a fairly standard query status report escape sequence: <ESC>[5n to which terminals invariably reply (as input) with <ESC>[0n , and bind that to the string you want to insert: bind '"\e[0n": "echo test"'; printf '\e[5n' If within GNU screen , you can also do: screen -X stuff 'echo test' Now, except for the TIOCSTI ioctl approach, we're asking the terminal emulator to send us some string as if typed. If that string comes before readline ( bash 's line editor) has disabled terminal local echo, then that string will be displayed, but not at the shell prompt, messing up the display slightly. To work around that, you could delay the sending of the request to the terminal slightly to make sure the response arrives when the echo has been disabled by readline: bind '"\e[0n": "echo test"'; ((sleep 0.05; printf '\e[5n') &) (here assuming your sleep supports sub-second resolution). Ideally you'd want to do something like: bind '"\e[0n": "echo test"' stty -echo printf '\e[5n' wait-until-the-response-arrives stty echo However bash (contrary to zsh ) doesn't have support for such a wait-until-the-response-arrives that doesn't read the response. However it has a has-the-response-arrived-yet feature with read -t0 : bind '"\e[0n": "echo test"' saved_settings=$(stty -g) stty -echo -icanon min 1 time 0 printf '\e[5n' until read -t0; do sleep 0.02 done stty "$saved_settings" Further reading See @starfry's answer , which expands on the two solutions given by @mikeserv and myself with more detailed information.
{ "source": [ "https://unix.stackexchange.com/questions/213799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9259/" ] }
214,141
I am not a Linux guy, but I am stuck on a script which I have to read for my project. Can anyone help me understand what this command is doing? shift $(($optind - 1))
shift $((OPTIND-1)) (note OPTIND is upper case) is normally found immediately after a getopts while loop. $OPTIND is the number of options found by getopts . As pauljohn32 mentions in the comments, strictly speaking, OPTIND gives the position of the next command line argument. From the GNU Bash Reference Manual : getopts optstring name [args] getopts is used by shell scripts to parse positional parameters. optstring contains the option characters to be recognized; if a character is followed by a colon, the option is expected to have an argument, which should be separated from it by whitespace. The colon (‘:’) and question mark (‘?’) may not be used as option characters. Each time it is invoked, getopts places the next option in the shell variable name, initializing name if it does not exist, and the index of the next argument to be processed into the variable OPTIND . OPTIND is initialized to 1 each time the shell or a shell script is invoked. When an option requires an argument, getopts places that argument into the variable OPTARG . The shell does not reset OPTIND automatically; it must be manually reset between multiple calls to getopts within the same shell invocation if a new set of parameters is to be used. When the end of options is encountered, getopts exits with a return value greater than zero. OPTIND is set to the index of the first non-option argument, and name is set to ‘?’. getopts normally parses the positional parameters, but if more arguments are given in args , getopts parses those instead. shift n removes n strings from the positional parameters list. Thus shift $((OPTIND-1)) removes all the options that have been parsed by getopts from the parameters list, and so after that point, $1 will refer to the first non-option argument passed to the script. Update As mikeserv mentions in the comment, shift $((OPTIND-1)) can be unsafe. To prevent unwanted word-splitting etc, all parameter expansions should be double-quoted. So the safe form for the command is shift "$((OPTIND-1))"
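For context, a minimal sketch of the getopts loop that such a line usually follows (the option letters here are made up):
#!/bin/bash
while getopts "f:v" opt; do
    case $opt in
        f) file=$OPTARG ;;
        v) verbose=1 ;;
    esac
done
shift "$((OPTIND-1))"
echo "first non-option argument: $1"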
{ "source": [ "https://unix.stackexchange.com/questions/214141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122226/" ] }
214,228
I was reading Practical Unix and Internet Security , when I came across the following lines which I couldn't comprehend. If you are using the wu archive server, you can configure it in such a way that uploaded files are uploaded in mode 004 , so they cannot be downloaded by another client . This provides better protection than simply making the directory unreadable , as it prevents people from uploading files and then telling their friends the exact filename to download. A permission of 004 corresponds to -------r-- . Can't a file be downloaded if it has read access? Also why is it considered better than simply making the directory non-readable? What does this imply? Note: This is with regard to unauthorised users leaving illegal and copyrighted material on servers using anonymous FTP. The above solution was suggested to prevent this along with a script which deletes the directory contents after a period of time.
The permissions 004 (------r--) means that the file can only be read by processes that are not running as the same user or as the same group as the FTP server. This is rather unusual: usually the user has more rights than the group, and the group has more rights than others. Normally the user can change the permissions, so it's pointless to give more restrictive permissions to the user. It makes sense here because the FTP server (presumably) doesn't have a command to change permissions, so the files will retain their permissions until something else changes them. Since the user that the FTP server is running as can't read the files, people won't be able to download the file. That makes it impossible to use the FTP server to share file. Presumably some process running as a different user and group reads the file at some point, verifies that it complies to some policy, copies the data if it does, and deletes the uploaded file. It would have made more sense to me to give the file permissions 040 (readable by the group only) and have the consumer process run as the same group as the FTP server, but a different user.
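A quick demonstration of the effect: with mode 004 the owning account (standing in here for the FTP daemon's user) is refused, while an unrelated account can still read the file, because the owner's own permission bits are checked first (assuming the directory itself is traversable by that other account):
$ echo data > upload
$ chmod 004 upload
$ cat upload
cat: upload: Permission denied
$ sudo -u nobody cat upload
data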
{ "source": [ "https://unix.stackexchange.com/questions/214228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122271/" ] }
214,234
I have Ubuntu 14.04 running on two VMs. I have permissive SELinux enabled on both. On system1, all of my files + linked directories in /var/www/html are marked as var_t and the symlinked directory (to home/../Documents ) is RED and appears not to work. On system2, all of my files + linked directories in /var/www/html are marked as file_t and the symlinked directory (to home/../chipweb ) is NOT RED and is OK to use. Why are my file SELinux types different in these two identical directories? I am confused! Thanks!
The difference is almost certainly labeling, not the directories themselves. file_t is the type SELinux gives to files that carry no label at all under the loaded policy; it usually means the filesystem (or at least those files) was never relabeled after SELinux was enabled. var_t , on the other hand, is a real (if generic) label assigned by the policy for /var . So system1 has been relabeled at some point and system2 hasn't. Running restorecon -Rv /var/www/html on both machines (or touching /.autorelabel and rebooting) should make them consistent; note that properly labeled web content would normally end up as httpd_sys_content_t rather than either of the types you're seeing. The RED symlink is a separate issue: ls colors a symlink red when its target can't be resolved, which has nothing to do with SELinux, so check that the home/.../Documents path actually exists and that your web server can traverse your home directory. Also keep in mind that Ubuntu ships with AppArmor by default and its SELinux support is much less polished than, say, Fedora's, so half-labeled systems like this are common there.
{ "source": [ "https://unix.stackexchange.com/questions/214234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122279/" ] }
214,274
What is the purpose of Linux permissions such as 111 or 333 (i.e. the user can execute , but cannot read the file), if the ability to execute does not automatically imply the ability to read?
I played with it and apparently, exec permissions do not imply read permissions. Binaries can be executable without being readable: $ echo 'int main(){ puts("hello world"); }' > hw.c $ make hw $ ./hw hello world $ chmod 111 hw $ ./hw hello world $ cat hw /bin/cat: hw: Permission denied I can't execute scripts though, unless they have both read and exec permission bits on: $ cat > hw.sh #!/bin/bash echo hello world from bash ^D $ chmod +x ./hw.sh $ ./hw.sh hello world from bash $ chmod 111 ./hw.sh $ ./hw.sh /bin/bash: ./hw.sh: Permission denied
{ "source": [ "https://unix.stackexchange.com/questions/214274", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122305/" ] }
214,445
I used this command to display the first file listed in my directory. ls | head -n 1 My simple question is: how can I modify this command to display, say, the nth result? Thanks!
You could use sed to select a single line, for example line 12: ls | sed -n 12p Option -n asks sed not to print every line (which is what it normally does), and 12p asks to print the pattern space when the address is 12.
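awk can do the same selection by line number, and a head / tail pair works too:
ls | awk 'NR == 12'
ls | head -n 12 | tail -n 1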
{ "source": [ "https://unix.stackexchange.com/questions/214445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122428/" ] }
214,471
Introduction: I have created a bash function that is able to check whether a port is available, and increments it by 1 if not, up to a certain maximum port number. E.g., if port 500 is unavailable then the availability of 501 will be checked, up to 550. Aim: In order to test this bash function I need to create a range of ports that are in LISTEN state. Attempts: On Windows it is possible to create a LISTEN port using these PowerShell commands : PS C:\Users\u> netstat -nat | grep 1234 PS C:\Users\u> $listener = [System.Net.Sockets.TcpListener]1234 PS C:\Users\u> $listener.Start(); PS C:\Users\u> netstat -nat | grep 1234 TCP 0.0.0.0:1234 0.0.0.0:0 LISTENING InHost PS C:\Users\u> $listener.Stop(); PS C:\Users\u> netstat -nat | grep 1234 PS C:\Users\u> Based on this I was trying to think of a command that could do the same on CentOS, but I could not come up with one, and I started to Google without finding a solution that solves this issue. Expected answer : I will accept and upvote the answer that contains a command that is able to create a LISTEN port, and once the command has been run the port should stay in LISTEN state, i.e.: [user@host ~]$ ss -nat | grep 500 LISTEN 0 128 *:500 *:*
You could use nc -l as a method to do what you are looking for. Some implementations of nc have a -L option which allows the connections to persist. If you only need them for a little while, you could run this command in a for loop and have a bunch of ports opened that way. If you need them open longer, you can use one of the super servers to create a daemon.
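A sketch of the loop, with the caveat that nc flags vary between implementations: traditional netcat wants -p for the port, while OpenBSD netcat takes a bare port argument and offers -k to keep listening after the first connection:
for port in $(seq 500 550); do
    nc -l -p "$port" &    # OpenBSD variant: nc -kl "$port" &
done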
{ "source": [ "https://unix.stackexchange.com/questions/214471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65367/" ] }
214,472
I run Linux Mint 17.1. When I log in I get a Cinnamon notification saying: Problems during Cinnamon startup Cinnamon started successfully, but one or more applets, desklets or extension failed to load. Check your system log and the Cinnamon LookingGlass log for any issues. You can disable the offending extension(s) in Cinnamon Settings to prevent this message from occurring. Please contact the developer. Sure enough, when I checked the Desklets settings, there was an extension marked with "error". Out of curiosity, I tried to look for the log message mentioned in the notification to no avail. There were no relevant messages in /var/log/syslog and I could not find the LookingGlass log. This is what I've tried: dmesg | grep -i cinnamon grep -i cinnamon /var/log/syslog find /var/log -iname "*cinnamon*" find /var/log -iname "*glass*" find /var/log -iname "*looking*"
It's in ~/.xsession-errors Prior to Cinnamon 3.8.8 it was ~/.cinnamon/glass.log
{ "source": [ "https://unix.stackexchange.com/questions/214472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55706/" ] }
214,657
zstyle seems like it's just a central place to store and retrieve data, like an alternative to export -ing shell parameters. Is that true, or is there more to it?
zstyle handles the obvious style control for the completion system, but it seems to cover more than just that. E.g., the vcs_info module relies on it for display of git status in your prompt. You can start by looking at the few explanatory paragraphs in man zshmodules in the zstyle section. You can simply invoke it to see what settings are in effect. This can be instructive. The Zsh Book has a nice chapter treatment on zstyle , also, explaining in detail its various fields. You could grep around in the .../Completion/ directory on your system to see how some of those files make use of zstyle . A common location is near /usr/share/zsh/functions/Completion/* . I see it used in 100+ files on my system there. Users often have zstyle sprinkled around their ~/.zshrc , too. Here are some nice ones to add some color and descriptions to your completing: # Do menu-driven completion. zstyle ':completion:*' menu select # Color completion for some things. # http://linuxshellaccount.blogspot.com/2008/12/color-completion-using-zsh-modules-on.html zstyle ':completion:*' list-colors ${(s.:.)LS_COLORS} # formatting and messages # http://www.masterzen.fr/2009/04/19/in-love-with-zsh-part-one/ zstyle ':completion:*' verbose yes zstyle ':completion:*:descriptions' format "$fg[yellow]%B--- %d%b" zstyle ':completion:*:messages' format '%d' zstyle ':completion:*:warnings' format "$fg[red]No matches for:$reset_color %d" zstyle ':completion:*:corrections' format '%B%d (errors: %e)%b' zstyle ':completion:*' group-name '' # Completers for my own scripts zstyle ':completion:*:*:sstrans*:*' file-patterns '*.(lst|clst)' zstyle ':completion:*:*:ssnorm*:*' file-patterns '*.tsv' # ... The completion system makes most of the fields clear if you play around with it. Try typing zstyle :«tab» and you see some options. Tab-complete to the next colon and you’ll see the next set of options, etc.
{ "source": [ "https://unix.stackexchange.com/questions/214657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73256/" ] }
214,796
What is the reason for using such uninformative system call names as time and creat instead of getCurrentTimeSecs and createFile or, maybe more suitably on Unix, get_current_time_secs and create_file ? Which brings me to the next point: why would someone want something like cfsetospeed without camel case or at least underscores to make it readable? Of course the calls would have more characters, but we all know that readability of code is more important, right?
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names. Related questions: Why is 'umount' not spelled 'unmount'? What did Ken Thompson mean when he said, "I'd spell creat with an 'e'."? What, if any, naming convention was used for the standard Unix commands? https://stackoverflow.com/questions/682719/what-does-the-9th-commandment-mean
{ "source": [ "https://unix.stackexchange.com/questions/214796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122656/" ] }
214,820
I have read this quote (below) several times, most recently here , and am continually puzzled at how dd can be used to patch anything, let alone a compiler: The Unix system I used at school, 30 years ago, was very limited in RAM and Disk space. Especially, the /usr/tmp file system was very small, which led to problems when someone tried to compile a large program. Of course, students weren't supposed to write "large programs" anyway; large programs were typically source codes copied from "somewhere". Many of us copied /usr/bin/cc to /home/<myname>/cc , and used dd to patch the binary to use /tmp instead of /usr/tmp , which was bigger. Of course, this just made the problem worse - the disk space occupied by these copies did matter those days, and now /tmp filled up regularly, preventing other users from even editing their files. After they found out what happened, the sysadmins did a chmod go-r /bin/* /usr/bin/* which "fixed" the problem, and deleted all our copies of the C compiler. (Emphasis mine) The dd man-page says nothing about patching, and I don't think it could be re-purposed to do this anyway. Could binaries really be patched with dd ? Is there any historical significance to this?
Let's try it. Here's a trivial C program:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        puts("/usr/tmp");
    }

We'll build that into test:

    $ cc -o test test.c

If we run it, it prints "/usr/tmp". Let's find out where "/usr/tmp" is in the binary:

    $ strings -t d test | grep /usr/tmp
    1460 /usr/tmp

-t d prints the offset, in decimal, into the file of each string it finds. Now let's make a temporary file with just "/tmp\0" in it:

    $ printf "/tmp\x00" > tmp

So now we have the binary, we know where the string we want to change is, and we have a file with the replacement string in it. Now we can use dd:

    $ dd if=tmp of=test obs=1 seek=1460 conv=notrunc

This reads data from tmp (our "/tmp\0" file), writing it into our binary, using an output block size of 1 byte, skipping to the offset we found earlier before it writes anything, and explicitly not truncating the file when it's done.

We can run the patched executable:

    $ ./test
    /tmp

The string literal the program prints out has been changed, so it now contains "/tmp\0tmp\0", but the string functions stop as soon as they see the first null byte. This patching only allows making the string shorter or the same length, and not longer, but it's adequate for these purposes.

So not only can we patch things using dd, we've just done it.
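If you want to double-check the patch beyond just running the program (this verification step is my addition, not part of the demonstration above), you can dump the raw bytes at the offset with od:

    # Skip 1460 bytes into the file and show the next 12 as characters;
    # you should see /, t, m, p, \0 followed by the leftover "t m p \0" tail.
    $ od -c -j 1460 -N 12 test

The offset naturally depends on your own compiler and binary layout, so reuse whatever offset strings reported for you rather than the 1460 shown here.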
{ "source": [ "https://unix.stackexchange.com/questions/214820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122678/" ] }
214,879
In a git repository, I have set up my .gitmodules file to reference a GitHub repository:

    [submodule "src/repo"]
        path = src/repo
        url = repourl

When I run git status on this repo, it shows:

    On branch master
    Your branch is up-to-date with 'origin/master'.
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)

            modified:   src/repo (new commits)

If I cd into src/repo and run git status there, it says that there is nothing to commit. Why is my top-level git repo complaining?
It's because Git records which commit (not a branch or a tag, exactly one commit represented as a SHA-1 hash) should be checked out for each submodule. If you change something in the submodule dir, Git will detect it and urge you to commit those changes in the top-level repository.

Run git diff in the top-level repository to show what Git thinks has actually changed. If you've already made some commits in your submodule (thus "clean" in the submodule), it reports the submodule's hash change:

    $ git diff
    diff --git a/src/repo b/src/repo
    index b0c86e2..a893d84 160000
    --- a/src/repo
    +++ b/src/repo
    @@ -1 +1 @@
    -Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea
    +Subproject commit a893d84d323cf411eadf19569d90779610b10280

Otherwise it shows a -dirty hash change, which you cannot stage or commit in the top-level repository. git status also claims the submodule has untracked/modified content:

    $ git diff
    diff --git a/src/repo b/src/repo
    --- a/src/repo
    +++ b/src/repo
    @@ -1 +1 @@
    -Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea
    +Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea-dirty

    $ git status
    On branch master
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)
      (commit or discard the untracked or modified content in submodules)

            modified:   src/repo (untracked content)

    no changes added to commit (use "git add" and/or "git commit -a")

To update which commit should be checked out for the submodule, you need to commit the submodule change in the top-level repository in addition to committing the changes inside the submodule:

    git add src/repo
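As an aside, if the dirty-submodule noise in the top-level status is expected and you'd rather not see it, git can be told to ignore it when reporting. These options are documented git behaviour, though the submodule name below is just this question's example:

    # Only for this one status invocation
    git status --ignore-submodules=dirty

    # Or persistently, per submodule, in .git/config or .gitmodules
    git config submodule.src/repo.ignore dirty

Note this only hides the report; the commit the top-level repository records for the submodule is still whatever you last committed with git add src/repo.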
{ "source": [ "https://unix.stackexchange.com/questions/214879", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122721/" ] }
215,015
This question, I would think, would be far easier to answer by Googling it, as it is so simple, but alas I am left to ask it here. What I would like to do is remove launchers I no longer need from the KDE 4 panel in Sabayon. Here's a screenshot; at the bottom left you will see icons (which represent launchers) for Google Chrome, Terminator and Konsole, in that order. I would like to remove the Konsole launcher.

The only solution I have managed to find on my own is removing the entire panel, creating a new panel, and then adding the launchers I want while leaving out the launchers I don't want. As my list of launchers continues to grow, this solution will only get more and more tedious, hence I would prefer a simpler solution if anyone has one. The most natural solution to me would be to right-click on the unwanted launcher and find an option to remove it, but this is the menu I get from right-clicking the Konsole launcher (screenshot omitted). Clicking "Icon Settings" just gives me the options for the desktop configuration file used for the Konsole launcher (screenshot omitted), which to my knowledge has nothing to do with removing the launcher from the KDE panel.
At least on my KDE4 desktop I can remove a launcher like this:

1. Right-click on the right-most side of the panel and select Unlock Widgets in the popup menu.
2. Right-click again on the right-most side of the panel and select Panel Settings, now displayed in the popup menu.
3. Move the mouse onto the desired launcher icon and click on the X in its popup to remove the launcher (you can also click and drag it elsewhere if you want to).
4. Right-click on the right-most side of the panel and select Lock Widgets in the popup menu (to prevent accidental panel changes).
{ "source": [ "https://unix.stackexchange.com/questions/215015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
215,234
    find /tmp -printf '%s %p\n' | sort -n -r | head

This command works fine, but what are the %s and %p options used here? Are there any other options that can be used?
What are the %s %p options used here?

From the man page:

    %s     File's size in bytes.
    %p     File's name.

Scroll down on that page beyond all the regular letters for printf and read the parts which come prefixed with a %:

    %n     Number of hard links to file.
    %p     File's name.
    %P     File's name with the name of the starting-point under which it was found removed.
    %s     File's size in bytes.
    %t     File's last modification time in the format returned by the C `ctime' function.

Are there any other options that can be used?

There are. See the link to the man page.
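To illustrate a couple of the other directives (the directory is just an example, but the format specifiers are documented GNU find behaviour), %T combined with a strftime-style character prints modification timestamps alongside size and name:

    # Largest-first listing with modification date and time in front
    find /tmp -printf '%TY-%Tm-%Td %TH:%TM %s %p\n' | sort -k3,3nr | head

Note that -printf is a GNU find extension; on BSD and other finds you'd need something like -exec stat instead.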
{ "source": [ "https://unix.stackexchange.com/questions/215234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30355/" ] }