source_id: int64 (1 to 4.64M)
question: string (length 0 to 28.4k)
response: string (length 0 to 28.8k)
metadata: dict
145,857
I have 3 panes in my tmux window:

--------------------------
|        |      2        |
|        |               |
|   1    |---------------|
|        |      3        |
|        |               |
--------------------------

Panes 1 and 2 have vim running; pane 3 runs a CLI I am developing. Sometimes I want to compare panes 1 and 2, so I want to hide pane 3:

--------------------------
|            |           |
|            |           |
|     1      |     2     |
|            |           |
|            |           |
--------------------------

and then bring pane 3 back again. I don't want to kill pane 3, as I have set up some things there and don't want to go through setting them up again. Is there something similar to PREFIX + z which can zoom pane 2, but without touching pane 1? Or is there a way to hide pane 3 quickly and bring it back up when needed?
Use the break-pane and join-pane commands. Refer to man tmux for details, options and usage.

Hide pane 3: Select pane 3 and enter Prefix : break-pane -dP. tmux will send pane 3 to a window in the background (the -d flag) and print some information about it in pane 2 (the -P flag). By default you'll see something like 1:2.0 (meaning: session:window.pane). Hit q to continue working. With some practice you will be able to drop the -P flag, since you can predict the session:window.pane triplet: session defaults to the current session and pane defaults to 0, while window will be the next free window identifier.

Get pane 3 back: To get pane 3 and the layout back, select pane 2 and issue Prefix : join-pane -vs 1:2.0, telling tmux to split pane 2 vertically (-v) and to join the (source) pane (-s) with identifier 1:2.0. Optionally, you can drop either the session or the pane identifier.

Note also that tmux stores a command line history, conveniently accessible with Prefix : Up or Prefix : Ctrl-p. You'll probably need some time to get the hang of it, but once you do, you'll surely be able to come up with custom key bindings that are convenient for you. This question contains some useful information and tricks that might improve your workflow.
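As a starting point for such bindings, here is a minimal sketch for ~/.tmux.conf; it assumes the broken-out pane always comes back as window 2, pane 0 of the current session, which holds for the layout above but is not guaranteed in general:

bind-key h break-pane -d    # hide the active pane in a background window
bind-key j join-pane -vs :2.0    # bring it back below the active pane

With this, Prefix h hides the active pane and Prefix j restores it.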
{ "source": [ "https://unix.stackexchange.com/questions/145857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78242/" ] }
145,929
This is my /etc/sysconfig/iptables. It has two ports open: 80 for Apache and 22 for SSH.

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

For port 22 (SSH) I want to ensure no one can connect to this port except from a specific IP address, for example 1.2.3.4. Please disregard any oversight/concerns regarding what happens if my IP changes and I cannot SSH to my server any more.
If I understand the question correctly, you want your server to be reachable on port 22 only from a specific IP address. You can update iptables for this:

iptables -A INPUT -p tcp -s YourIP --dport 22 -j ACCEPT

In that case, you are opening the SSH port only to YourIP. If you also need to open DNS for your internal network:

iptables -A INPUT -p udp -s YourIP --dport 53 -j ACCEPT
iptables -A INPUT -p tcp -s YourIP --dport 53 -j ACCEPT

Once you have them added and opened for those IPs, you need to close the door for the rest of the IPs:

iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP
iptables -A INPUT -p udp -s 0.0.0.0/0 --dport 53 -j DROP
iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 53 -j DROP

(Make sure to set the rules in the correct position in your ruleset: iptables -A INPUT appends the rules to the end of the INPUT chain as it currently is, so the ACCEPT rules must come before the DROP rules.)

Or, as Joel said, you can add one rule instead:

iptables -A INPUT -p tcp ! -s <permittedIP> --dport 22 -j DROP

Or you can just set the default policy on the firewall with:

iptables -P INPUT DROP

In brief, as presented in this question on SO:

iptables -A INPUT -p tcp --dport 22 -s YourIP -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
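Applied to the poster's /etc/sysconfig/iptables, a minimal sketch (assuming the permitted address is 1.2.3.4) is to add the source match to the existing port-22 rule, so the trailing REJECT rule already handles everything else:

-A INPUT -m state --state NEW -m tcp -p tcp -s 1.2.3.4 --dport 22 -j ACCEPT

After editing the file, reload the rules with service iptables restart.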
{ "source": [ "https://unix.stackexchange.com/questions/145929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
145,930
Is there any tiling window manager like XMonad, but written in Python? I think Haskell is too hard for me, but I know Python a little.
{ "source": [ "https://unix.stackexchange.com/questions/145930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78268/" ] }
145,978
I have a file, f1.txt:

ID      Name
1       a
2       b
3       g
6       f

The number of spaces is not fixed. What is the best way to replace all the whitespace with one space, using only tr? This is what I have so far:

cat f1.txt | tr -d " "

But the output is:

IDName
1a
2b
3g
6f

But I want it to look like this:

ID Name
1 a
2 b
3 g
6 f

Please try and avoid sed.
With tr, use the squeeze-repeats option (-s):

$ tr -s " " < file
ID Name
1 a
2 b
3 g
6 f

Or you can use an awk solution:

$ awk '{$2=$2};1' file
ID Name
1 a
2 b
3 g
6 f

When you change a field in a record, awk rebuilds $0: it takes all the fields and concatenates them together, separated by OFS, which is a space by default. That will squeeze sequences of spaces and tabs (and possibly other blank characters, depending on the locale and implementation of awk) into one space, but it will also remove the leading and trailing blanks from each line.
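A quick illustration of that last difference, using a made-up input line that has leading blanks:

$ printf '  a   b\n' | tr -s ' '
 a b
$ printf '  a   b\n' | awk '{$2=$2};1'
a b

tr only squeezes the runs, so the (now single) leading space survives; awk rebuilds the line from its fields and drops it.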
{ "source": [ "https://unix.stackexchange.com/questions/145978", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68738/" ] }
145,997
I have set up a VM of Ubuntu server, have installed OpenSSH, and am now trying to connect to it using Putty. Within Putty, under "Host name", I put "Ubuntu", given this is what I thought it was called when I set up the VM. However, I just get the error: "Connection Timed Out". I also tried putting "127.0.0.1" into the host name within Putty and just get "Connection Refused". Note that I have done the port forwarding for SSH and HTTP within Oracle VM, so I am at a loss as to how to get it running.
VirtualBox will create a private network (10.0.2.x) which will be connected to your host network using NAT (unless configured otherwise). This means that you cannot directly access any host on the private network from the host network. To do so, you need some port forwarding. In the network preferences of your VM you can, for example, configure VirtualBox to open port 22 on 127.0.1.1 (a loopback address of your host) and forward any traffic to port 22 of 10.0.2.15 (the default internal address of your VM). This way, you can point PuTTY to port 22 of 127.0.1.1, and VirtualBox will redirect this connection to your VM, where its ssh daemon will answer it, allowing you to log in.
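The same forwarding can also be set up from the command line; a sketch, assuming the VM is named "UbuntuServer" and using host port 2222 so no privileged port is needed:

VBoxManage modifyvm "UbuntuServer" --natpf1 "guestssh,tcp,127.0.0.1,2222,,22"

Then point PuTTY at host 127.0.0.1, port 2222. Leaving the guest IP field empty makes VirtualBox forward to the guest's default NAT address.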
{ "source": [ "https://unix.stackexchange.com/questions/145997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70247/" ] }
146,009
I am running Fedora with the stock kernel and I'd like to enable the BFQ disk I/O scheduler, and ideally BFS. I have built my own kernel and that works, though it is a royal pain dealing with the Nvidia drivers. Can I enable BFQ and BFS without building my own kernel, such as by adding kernel args to grub? If not, is there a kernel package available that supports this?
{ "source": [ "https://unix.stackexchange.com/questions/146009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34855/" ] }
146,051
While I was learning about CPU load, I came to know that it depends on the number of cores: if I have 2 cores, then a load of 2 will give 100% CPU utilization. So I tried to find out how many cores I have. (I already know that the system has 2 cores and 4 threads, so 2 virtual cores; check here about the processor.) So I ran cat /proc/cpuinfo, which gave me:

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 774.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 1600.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 800.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
stepping : 1
microcode : 0x17
cpu MHz : 774.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 3591.40
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

Now I am totally confused: it shows 4 processors, with 2 CPU cores. Can anyone explain this output?

Once, my CPU load was 3.70: is this the maximum load? Still, at that time the CPU was at <50%. What about turbo boost: are all cores turbo boosted, or only the physical ones? Is there any method in Ubuntu to get the current CPU frequency, to see whether the processor is in turbo boost or not?

A load of 3.70 is about 100%, but CPU usage wasn't 100% because of I/O response time. This does not mean that the I/O device will be at maximum speed, but the I/O device will be 100% busy, which sometimes affects applications using I/O (e.g. music may break up).
The words “CPU”, “processor” and “core” are used in somewhat confusing ways. They refer to the processor architecture. A core is the smallest independent unit that implements a general-purpose processor; a processor is an assemblage of cores (on some ARM systems, a processor is an assemblage of clusters which themselves are assemblages of cores). A chip can contain one or more processors (x86 chips contain a single processor, in this sense of the word processor). Hyperthreading means that some parts of a core are duplicated. A core with hyperthreading is sometimes presented as an assemblage of two “virtual cores” — meaning not that each core is virtual, but that the plural is virtual, because these are not actually separate cores and they will sometimes have to wait while the other core is making use of a shared part.

As far as software is concerned, there is only one concept that's useful almost everywhere: the notion of parallel threads of execution. So in most software manuals, the terms CPU and processor are used to mean any one piece of hardware that executes program code. In hardware terms, this means one core, or one virtual core with hyperthreading.

Thus top shows you 4 CPUs, because you can have 4 threads executing at the same time. /proc/cpuinfo has 4 entries, one for each CPU (in that sense). The processor numbers (which match the NUMBER in the cpuNUMBER entries in /sys/devices/system/cpu) correspond to these 4 threads. /proc/cpuinfo is one of the few places where you get information about what hardware implements these threads of execution:

physical id : 0
siblings : 4
core id : 0
cpu cores : 2

means that cpu0 is one of 4 threads inside physical component (processor) number 0, and it sits in core 0 of the 2 in this processor.
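To see this mapping at a glance, a couple of stock commands (the lscpu output is shown as it would look for the poster's i7-4500U):

$ grep -c '^processor' /proc/cpuinfo    # number of threads of execution ("CPUs")
4
$ lscpu | grep -E '^(Socket|Core|Thread)'
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1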
{ "source": [ "https://unix.stackexchange.com/questions/146051", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72565/" ] }
146,060
The following happens on different Linuxes: when I'm in a virtual console and hold Alt and press ← or →, the virtual ttys cycle. This is really annoying, as I'm using the fish shell, which also uses this key combo. I could remap fish's shortcuts, but I don't want to. Instead I want to disable the Linux function or remap it. How can I disable or change the tty-cycling key combo?
Here's a one-off fix:

sudo sh -c 'dumpkeys | grep -v cr_Console | loadkeys'
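What this does: dumpkeys prints the current console keymap, grep -v cr_Console strips the Incr_Console/Decr_Console actions bound to Alt+arrow, and loadkeys loads the filtered map back. It only lasts until reboot; one way to persist it (an assumption, as the right hook varies by distro) is to run the same pipeline from a boot script such as /etc/rc.local:

dumpkeys | grep -v cr_Console | loadkeys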
{ "source": [ "https://unix.stackexchange.com/questions/146060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23598/" ] }
146,085
I am using a Freescale i.MX6 Quad processor. I want to know whether the top command lists the CPU usage of all 4 cores or of a single core. I am seeing an application's CPU usage being the same with 4 cores and with a single core. I was guessing the CPU usage of the application would increase on a single core and decrease on 4 cores, but it has not changed.
I'm not entirely sure what you're asking here. Yes, top shows CPU usage as a percentage of a single CPU by default. That's why you can have percentages that are >100: on a system with 4 cores, you can see up to 400% CPU usage.

You can change this behavior by pressing I (that's Shift + i, which toggles "Irix mode") while top is running. That will cause it to show the percentage of available CPU power being used. As explained in man top:

1. %CPU -- CPU Usage
The task's share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. In a true SMP environment, if 'Irix mode' is Off, top will operate in 'Solaris mode' where a task's cpu usage will be divided by the total number of CPUs. You toggle 'Irix/Solaris' modes with the 'I' interactive command.

Alternatively, you can press 1, which will show you a breakdown of CPU usage per CPU:

top - 13:12:58 up 21:11, 17 users,  load average: 0.69, 0.50, 0.43
Tasks: 248 total,   3 running, 244 sleeping,   0 stopped,   1 zombie
%Cpu0  : 33.3 us, 33.3 sy,  0.0 ni, 33.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 16.7 us,  0.0 sy,  0.0 ni, 83.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  : 60.0 us,  0.0 sy,  0.0 ni, 40.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   8186416 total,  6267232 used,  1919184 free,   298832 buffers
KiB Swap:  8191996 total,        0 used,  8191996 free,  2833308 cached
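To reproduce the single-core versus multi-core comparison cleanly, one option (a sketch; ./myapp is a placeholder for the application) is to pin the process with taskset and watch the difference in top:

$ taskset -c 0 ./myapp      # restrict the process to CPU 0
$ taskset -c 0-3 ./myapp    # allow it to run on all 4 cores

Note that a single-threaded application will show roughly the same %CPU either way, which may explain why the usage did not change.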
{ "source": [ "https://unix.stackexchange.com/questions/146085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77442/" ] }
146,100
Let's say I have a really big text file (about 10,000,000 lines). I need to grep it starting from the end and save the result to a file. What's the most efficient way to accomplish the task?
tac/grep Solution:

tac file | grep whatever

Or a bit more effective:

grep whatever < <(tac file)

Time with a 500MB file:

real 0m1.225s
user 0m1.164s
sys 0m0.516s

sed/grep Solution:

sed '1!G;h;$!d' file | grep whatever

Time with a 500MB file: aborted after 10+ minutes.

awk/grep Solution:

awk '{x[NR]=$0}END{while (NR) print x[NR--]}' file | grep whatever

Time with a 500MB file:

real 0m5.626s
user 0m4.964s
sys 0m1.420s

perl/grep Solution:

perl -e 'print reverse <>' file | grep whatever

Time with a 500MB file:

real 0m3.551s
user 0m3.104s
sys 0m1.036s
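To also satisfy the "save the result to a file" part of the question, just redirect the fastest variant:

$ tac file | grep whatever > matches.txt

tac wins here because it simply reads a seekable file backwards in blocks, while the sed/awk/perl variants must hold the whole file in memory before printing anything.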
{ "source": [ "https://unix.stackexchange.com/questions/146100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
146,190
I'm using the following CentOS:

$ cat /etc/centos-release
CentOS Linux release 7.0.1406 (Core)

The commands nmap, netstat and lsof are not found on CentOS 7. Why?

$ type -a nmap
bash: type: nmap: not found
$ type -a netstat
bash: type: netstat: not found
$ type -a lsof
bash: type: lsof: not found

What should I do to make them work?
The package net-tools (which provides netstat) was deprecated in CentOS 7 in favour of the iproute2 suite. You may either install it manually or check out this blog post for replacement commands: http://dougvitale.wordpress.com/2011/12/21/deprecated-linux-networking-commands-and-their-replacements/

EDIT: Here is the URL to Red Hat's Bugzilla for RHEL 7 that covers the deprecation of netstat in more detail: https://bugzilla.redhat.com/show_bug.cgi?id=1119297

Excerpt: "As stated before, net-tools are deprecated thus shouldn't be used unless necessary. Behaviour in RHEL 7 is the same as in Fedora - net-tools is missing from minimal install, but is in @base (~= @standard in Fedora) which is installed in all non-minimal configurations."

There are also other tickets that deal with this, such as IDs 682308 and 687920. Note that they are assigned to the Fedora project and are quite old.
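Note that only netstat comes from net-tools; nmap and lsof are separate packages that simply aren't part of a minimal install. So, assuming you do want all three tools back:

$ sudo yum install net-tools nmap lsof

Otherwise, the iproute2 replacement for the most common netstat use is ss, e.g.:

$ ss -tlnp    # listening TCP sockets with owning processes, roughly netstat -tlnp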
{ "source": [ "https://unix.stackexchange.com/questions/146190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78340/" ] }
146,193
When I type cd .ssh in the terminal, it returns -bash: cd: .ssh/: Permission denied. Now I cannot add my SSH keys to ssh. When I type ssh-add ~/.ssh/idname, it says /Users/Dan/.ssh/idname: Permission denied. I think it has to do with me typing ls -d, because it worked before I typed this into the terminal?
Since you have "Permission denied" on a directory, it is likely that the directory does not have execute permissions. Similarly, to traverse a directory tree to get at a file, you would need execute permissions on each directory in between the root and the file (hence the same error for the other command). Try setting the execute permissions on the directory chmod u+xr,go-rwx ~/.ssh Then see if you can run those statements again.
{ "source": [ "https://unix.stackexchange.com/questions/146193", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78477/" ] }
146,206
I tried a majority of the formats (gzip, etc.) to extract a zip file with tar , and when I became frustrated enough to Google for it, I found no way to extract a zip file with tar and only recommendations to use zip or unzip . As a matter of fact, my Linux system doesn't even have a zip utility, but only unzip (leaving me to wonder why this is the main recommended option). Of course unzip worked, solving my problem, but why can't tar extract zip files? Perhaps I should instead be asking, what is the difference between zip and the compression methods supported by tar ?
The UNIX philosophy is to have small tools: each tool does exactly one thing, and does it especially well. The tar tool just combines several files into a single file, without any compression. The gzip tool just compresses a single file. If you want both, you combine the two tools, resulting in a .tar.gz file.

The zip tool is a completely different thing: it takes a bunch of files and combines them into a single compressed file, with its own archive format, in which each file is compressed individually and indexed rather than the whole stream being compressed at once. That layout is not something tar understands, which is why tar cannot unpack it.

If you want one tool to rule them all, use atool. It will support a whole bunch of different formats, simply by detecting the format and calling the correct tool.
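For illustration, the corresponding commands side by side (archive names are placeholders):

$ tar -czf archive.tar.gz dir/    # pack and compress with tar + gzip
$ tar -xzf archive.tar.gz         # unpack it again
$ unzip archive.zip               # zip needs its own extractor
$ atool -x archive.zip            # or let atool pick the right tool

(As an aside, bsdtar, the tar from libarchive that is the default on BSD and macOS, can read zip archives; it is GNU tar that can't.)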
{ "source": [ "https://unix.stackexchange.com/questions/146206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68081/" ] }
146,283
I want to install libpq-dev on my Vagrant machine. I install it with:

$ apt-get install -y libpq-dev

During installation a prompt appears which asks if it's allowed to restart some services automatically. This prompt breaks my Vagrant provision. How can I disable this prompt?

Text: "There are services installed on your system which need to be restarted when certain libraries, such as libpam, libc, and libssl, are upgraded. Since these restarts may cause interruptions of service for the system, you will normally be prompted on each upgrade for the list of services you wish to restart. You can choose this option to avoid being prompted; instead, all necessary restarts will be done for you automatically so you can avoid being asked questions on each library upgrade."

EDIT: Thanks to Patrick's answer and this question, I fixed it. Now my Vagrantfile contains:

sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libpq-dev
Set the environment variable DEBIAN_FRONTEND=noninteractive. For example:

export DEBIAN_FRONTEND=noninteractive
apt-get install -y libpq-dev

This will make apt-get select the default options.
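If you'd rather not export the variable for the whole shell session, it can be scoped to the single command, which is exactly the form the questioner ended up using in the Vagrantfile:

sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libpq-dev

(With sudo, the variable must appear after sudo as above, or be allowed through env_keep, since sudo strips the caller's environment by default.)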
{ "source": [ "https://unix.stackexchange.com/questions/146283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26992/" ] }
146,402
I am trying to upgrade Apache 2.2.15 to 2.2.27. While running config.nice, taken from apache2.2.15/build, I am getting the following error:

checking whether the C compiler works... no
configure: error: in `/home/vkuser/httpd-2.2.27/srclib/apr':
configure: error: C compiler cannot create executables

I have tried to search online, but no luck. I have also tested the C compiler by running a small test.c script, and it runs fine. There were a few solutions given online, like installing the 'kernel-devel' package, but it did not resolve the issue. How can I get this to work? The following is the config.log generated:

This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by configure, which was generated by GNU Autoconf 2.67. Invocation command line was

$ ./configure --prefix=/opt/myapp/apache2.2 --with-mpm=worker --enable-static-support --enable-ssl=static --enable-modules=most --disable-authndbd --disable-authn-dbm --disable-dbd --enable-static-logresolve --enable-static-rotatelogs --enable-proxy=static --enable-proxyconnect=static --enable-proxy-ftp=static --enable-proxy-http=static --enable-rewrite=static --enable-so=static --with-ssl=/opt/myapp/apache2.2/openssl --host=x86_32-unknown-linux-gnu host_alias=x86_32-unknown-linux-gnu CFLAGS=-m32 LDFLAGS=-m32 --with-included-apr

## --------- ##
## Platform. ##
## --------- ##

hostname = dmcpq-000
uname -m = x86_64
uname -r = 2.6.18-348.12.1.el5
uname -s = Linux
uname -v = #1 SMP Mon Jul 1 17:54:12 EDT 2013
/usr/bin/uname -p = unknown
/bin/uname -X = unknown
/bin/arch = x86_64
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown

PATH: /opt/myapp/Entrust/GetAccess/Runtime/Apache22/bin
PATH: /usr/kerberos/sbin
PATH: /usr/kerberos/bin
PATH: /usr/local/sbin
PATH: /usr/local/bin
PATH: /sbin
PATH: /bin
PATH: /usr/sbin
PATH: /usr/bin
PATH: /root/bin

## ----------- ##
## Core tests. ##
## ----------- ##

configure:2793: checking for chosen layout
configure:2795: result: Apache
configure:3598: checking for working mkdir -p
configure:3614: result: yes
configure:3629: checking build system type
configure:3643: result: x86_64-unknown-linux-gnu
configure:3663: checking host system type
configure:3676: result: x86_32-unknown-linux-gnu
configure:3696: checking target system type
configure:3709: result: x86_32-unknown-linux-gnu

## ---------------- ##
## Cache variables. ##
## ---------------- ##

ac_cv_build=x86_64-unknown-linux-gnu
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=set
ac_cv_env_CFLAGS_value=-m32
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_LDFLAGS_set=set
ac_cv_env_LDFLAGS_value=-m32
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=set
ac_cv_env_host_alias_value=x86_32-unknown-linux-gnu
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_host=x86_32-unknown-linux-gnu
ac_cv_mkdir_p=yes
ac_cv_target=x86_32-unknown-linux-gnu

## ----------------- ##
## Output variables.
## ## ----------------- ## APACHECTL_ULIMIT='' APR_BINDIR='' APR_CONFIG='' APR_INCLUDEDIR='' APR_VERSION='' APU_BINDIR='' APU_CONFIG='' APU_INCLUDEDIR='' APU_VERSION='' AP_BUILD_SRCLIB_DIRS='' AP_CLEAN_SRCLIB_DIRS='' AP_LIBS='' AWK='' BUILTIN_LIBS='' CC='' CFLAGS='-m32' CORE_IMPLIB='' CORE_IMPLIB_FILE='' CPP='' CPPFLAGS='' CRYPT_LIBS='' CXX='' CXXFLAGS='' DEFS='' DSO_MODULES='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='' EXEEXT='' EXTRA_CFLAGS='' EXTRA_CPPFLAGS='' EXTRA_CXXFLAGS='' EXTRA_INCLUDES='' EXTRA_LDFLAGS='' EXTRA_LIBS='' GREP='' HTTPD_LDFLAGS='' HTTPD_VERSION='' INCLUDES='' INSTALL='' INSTALL_DSO='' INSTALL_PROG_FLAGS='' LDFLAGS='-m32' LIBOBJS='' LIBS='' LIBTOOL='' LN_S='' LTCFLAGS='' LTFLAGS='' LTLIBOBJS='' LT_LDFLAGS='' LYNX_PATH='' MKDEP='' MKINSTALLDIRS='' MK_IMPLIB='' MODULE_CLEANDIRS='' MODULE_DIRS='' MOD_ACTIONS_LDADD='' MOD_ALIAS_LDADD='' MOD_ASIS_LDADD='' MOD_AUTHNZ_LDAP_LDADD='' MOD_AUTHN_ALIAS_LDADD='' MOD_AUTHN_ANON_LDADD='' MOD_AUTHN_DBD_LDADD='' MOD_AUTHN_DBM_LDADD='' MOD_AUTHN_DEFAULT_LDADD='' MOD_AUTHN_FILE_LDADD='' MOD_AUTHZ_DBM_LDADD='' MOD_AUTHZ_DEFAULT_LDADD='' MOD_AUTHZ_GROUPFILE_LDADD='' MOD_AUTHZ_HOST_LDADD='' MOD_AUTHZ_OWNER_LDADD='' MOD_AUTHZ_USER_LDADD='' MOD_AUTH_BASIC_LDADD='' MOD_AUTH_DIGEST_LDADD='' MOD_AUTOINDEX_LDADD='' MOD_BUCKETEER_LDADD='' MOD_CACHE_LDADD='' MOD_CASE_FILTER_IN_LDADD='' MOD_CASE_FILTER_LDADD='' MOD_CERN_META_LDADD='' MOD_CGID_LDADD='' MOD_CGI_LDADD='' MOD_CHARSET_LITE_LDADD='' MOD_DAV_FS_LDADD='' MOD_DAV_LDADD='' MOD_DAV_LOCK_LDADD='' MOD_DBD_LDADD='' MOD_DEFLATE_LDADD='' MOD_DIR_LDADD='' MOD_DISK_CACHE_LDADD='' MOD_DUMPIO_LDADD='' MOD_ECHO_LDADD='' MOD_ENV_LDADD='' MOD_EXAMPLE_LDADD='' MOD_EXPIRES_LDADD='' MOD_EXT_FILTER_LDADD='' MOD_FILE_CACHE_LDADD='' MOD_FILTER_LDADD='' MOD_HEADERS_LDADD='' MOD_HTTP_LDADD='' MOD_IDENT_LDADD='' MOD_IMAGEMAP_LDADD='' MOD_INCLUDE_LDADD='' MOD_INFO_LDADD='' MOD_ISAPI_LDADD='' MOD_LDAP_LDADD='' MOD_LOGIO_LDADD='' MOD_LOG_CONFIG_LDADD='' MOD_LOG_FORENSIC_LDADD='' MOD_MEM_CACHE_LDADD='' MOD_MIME_LDADD='' MOD_MIME_MAGIC_LDADD='' MOD_NEGOTIATION_LDADD='' MOD_OPTIONAL_FN_EXPORT_LDADD='' MOD_OPTIONAL_FN_IMPORT_LDADD='' MOD_OPTIONAL_HOOK_EXPORT_LDADD='' MOD_OPTIONAL_HOOK_IMPORT_LDADD='' MOD_PROXY_AJP_LDADD='' MOD_PROXY_BALANCER_LDADD='' MOD_PROXY_CONNECT_LDADD='' MOD_PROXY_FTP_LDADD='' MOD_PROXY_HTTP_LDADD='' MOD_PROXY_LDADD='' MOD_PROXY_SCGI_LDADD='' MOD_REQTIMEOUT_LDADD='' MOD_REWRITE_LDADD='' MOD_SETENVIF_LDADD='' MOD_SO_LDADD='' MOD_SPELING_LDADD='' MOD_SSL_LDADD='' MOD_STATUS_LDADD='' MOD_SUBSTITUTE_LDADD='' MOD_SUEXEC_LDADD='' MOD_UNIQUE_ID_LDADD='' MOD_USERDIR_LDADD='' MOD_USERTRACK_LDADD='' MOD_VERSION_LDADD='' MOD_VHOST_ALIAS_LDADD='' MPM_LIB='' MPM_NAME='' MPM_SUBDIR_NAME='' NONPORTABLE_SUPPORT='' NOTEST_CFLAGS='' NOTEST_CPPFLAGS='' NOTEST_CXXFLAGS='' NOTEST_LDFLAGS='' NOTEST_LIBS='' OBJEXT='' OS='' OS_DIR='' OS_SPECIFIC_VARS='' PACKAGE_BUGREPORT='' PACKAGE_NAME='' PACKAGE_STRING='' PACKAGE_TARNAME='' PACKAGE_URL='' PACKAGE_VERSION='' PATH_SEPARATOR=':' PCRE_CONFIG='' PICFLAGS='' PILDFLAGS='' PKGCONFIG='' PORT='' POST_SHARED_CMDS='' PRE_SHARED_CMDS='' RANLIB='' RM='' RSYNC='' SHELL='/bin/sh' SHLIBPATH_VAR='' SHLTCFLAGS='' SH_LDFLAGS='' SH_LIBS='' SH_LIBTOOL='' SSLPORT='' SSL_LIBS='' UTIL_LDFLAGS='' ab_LTFLAGS='' abs_srcdir='' ac_ct_CC='' ap_make_delimiter='' ap_make_include='' bindir='${exec_prefix}/bin' build='x86_64-unknown-linux-gnu' build_alias='' build_cpu='x86_64' build_os='linux-gnu' build_vendor='unknown' cgidir='${datadir}/cgi-bin' checkgid_LTFLAGS='' datadir='${prefix}' 
datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE}' dvidir='${docdir}' errordir='${datadir}/error' exec_prefix='${prefix}' exp_bindir='/opt/myapp/apache2.2/bin' exp_cgidir='/opt/myapp/apache2.2/cgi-bin' exp_datadir='/opt/myapp/apache2.2' exp_errordir='/opt/myapp/apache2.2/error' exp_exec_prefix='/opt/myapp/apache2.2' exp_htdocsdir='/opt/myapp/apache2.2/htdocs' exp_iconsdir='/opt/myapp/apache2.2/icons' exp_includedir='/opt/myapp/apache2.2/include' exp_installbuilddir='/opt/myapp/apache2.2/build' exp_libdir='/opt/myapp/apache2.2/lib' exp_libexecdir='/opt/myapp/apache2.2/modules' exp_localstatedir='/opt/myapp/apache2.2' exp_logfiledir='/opt/myapp/apache2.2/logs' exp_mandir='/opt/myapp/apache2.2/man' exp_manualdir='/opt/myapp/apache2.2/manual' exp_proxycachedir='/opt/myapp/apache2.2/proxy' exp_runtimedir='/opt/myapp/apache2.2/logs' exp_sbindir='/opt/myapp/apache2.2/bin' exp_sysconfdir='/opt/myapp/apache2.2/conf' host='x86_32-unknown-linux-gnu' host_alias='x86_32-unknown-linux-gnu' host_cpu='x86_32' host_os='linux-gnu' host_vendor='unknown' htcacheclean_LTFLAGS='' htdbm_LTFLAGS='' htdigest_LTFLAGS='' htdocsdir='${datadir}/htdocs' htmldir='${docdir}' htpasswd_LTFLAGS='' httxt2dbm_LTFLAGS='' iconsdir='${datadir}/icons' includedir='${prefix}/include' infodir='${datarootdir}/info' installbuilddir='${datadir}/build' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/modules' localedir='${datarootdir}/locale' localstatedir='${prefix}' logfiledir='${localstatedir}/logs' logresolve_LTFLAGS='' mandir='${prefix}/man' manualdir='${datadir}/manual' nonssl_listen_stmt_1='' nonssl_listen_stmt_2='' oldincludedir='/usr/include' other_targets='' pdfdir='${docdir}' perlbin='' prefix='/opt/myapp/apache2.2' progname='' program_transform_name='s,x,x,' proxycachedir='${localstatedir}/proxy' psdir='${docdir}' rel_bindir='bin' rel_cgidir='cgi-bin' rel_datadir='' rel_errordir='error' rel_exec_prefix='' rel_htdocsdir='htdocs' rel_iconsdir='icons' rel_includedir='include' rel_installbuilddir='build' rel_libdir='lib' rel_libexecdir='modules' rel_localstatedir='' rel_logfiledir='logs' rel_mandir='man' rel_manualdir='manual' rel_proxycachedir='proxy' rel_runtimedir='logs' rel_sbindir='bin' rel_sysconfdir='conf' rotatelogs_LTFLAGS='' runtimedir='${localstatedir}/logs' sbindir='${exec_prefix}/bin' shared_build='' sharedstatedir='${prefix}/com' sysconfdir='${prefix}/conf' target='x86_32-unknown-linux-gnu' target_alias='' target_cpu='x86_32' target_os='linux-gnu' target_vendor='unknown' configure: exit 1
From the output you've given, you are trying to compile a 32-bit build of Apache on a 64-bit system. This is from the input to configure here:

--host=x86_32-unknown-linux-gnu host_alias=x86_32-unknown-linux-gnu CFLAGS=-m32 LDFLAGS=-m32

Also see the output lines confirming this:

configure:3629: checking build system type
configure:3643: result: x86_64-unknown-linux-gnu
configure:3663: checking host system type
configure:3676: result: x86_32-unknown-linux-gnu
configure:3696: checking target system type
configure:3709: result: x86_32-unknown-linux-gnu

Here it is using a 64-bit build system, but a 32-bit host/target. Further down we see:

ac_cv_env_CFLAGS_set=set
ac_cv_env_CFLAGS_value=-m32

This flag tells gcc to produce 32-bit objects. Your error that the C compiler cannot produce executables is likely caused by not having a 32-bit toolchain present.

Testing your ability to compile 32-bit objects: You can test this by compiling a small C example with the -m32 flag.

// Minimal C example
#include <stdio.h>

int main()
{
    printf("This works\n");
    return 0;
}

Compiling:

gcc -m32 -o m32test m32test.c

If this command fails, then you have a problem with your compiler being able to build 32-bit objects. The error messages emitted from the compiler may be helpful in remedying this.

Remedies: build for a 64-bit target (by removing the configure options forcing a 32-bit build), or install a 32-bit compiler toolchain.
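On the poster's platform (an el5-era Red Hat system, judging from uname -r in the log), installing the 32-bit support libraries would look roughly like this; the package names are an assumption and vary by release (newer releases use .i686 instead of .i386):

$ sudo yum install glibc-devel.i386 libgcc.i386

With those present, the gcc -m32 test above should succeed.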
{ "source": [ "https://unix.stackexchange.com/questions/146402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63194/" ] }
146,441
I need to copy and overwrite a large number of files. I've used the following command:

# cp -Rf * ../

But then, whenever a file with the same name exists in the destination folder, I get this question:

cp: overwrite `../ibdata1'?

The problem is that I have about 200 files which are going to be overwritten, and I don't think that pressing Y then Enter 200 times is the right way to do it. So, what is the right way to do that?
You can do:

yes | cp -Rf * ../

Or, if you are doing it as root: your .bashrc or .profile most likely aliases cp to cp -i; most modern systems do that for root profiles. You can temporarily bypass an alias and use the non-aliased version of a command by prefixing it with \, e.g.:

\cp -Rf * ../
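Two more equivalent ways to sidestep the alias, for completeness (both standard shell behavior, not specific to any distro):

$ command cp -Rf * ../    # 'command' skips alias and function lookup
$ /bin/cp -Rf * ../       # calling the binary by path also bypasses the alias

Note that yes | cp works because cp reads the overwrite answers from stdin, and yes supplies an endless stream of "y" lines.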
{ "source": [ "https://unix.stackexchange.com/questions/146441", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36757/" ] }
146,521
I am trying to understand the Linux diff command on two files whose lines are just permutations of each other, but I am not able to grok the output that it generates. Consider the three commands below:

[myPrompt]$ cat file1
apples
oranges
[myPrompt]$ cat file2
oranges
apples
[myPrompt]$ diff file1 file2
1d0
< apples
2a2
> apples

Can someone explain the above cryptic output from diff? Why is there no mention of "oranges" at all in the output? What do 1d0 and 2a2 mean?

I understand from this answer that "<" means the line is missing in file2 and ">" means the line is missing in file1, BUT that doesn't explain why oranges is missing in the output.
To understand the report, remember that diff is prescriptive: it describes what changes need to be made to the first file (file1) to make it the same as the second file (file2). Specifically, the d in 1d0 means delete and the a in 2a2 means add (append). Thus:

1d0 means line 1 must be deleted in file1 (apples). The 0 in 1d0 means that line 0 is where it would have appeared in file2 had it not been deleted; in other words, when changing file2 back into file1, append line 1 of file1 after line 0 of file2.

2a2 means append the second line of file2 (apples) after what is now the second line of file1 (after deleting line 1 of file1, oranges moved up to line 1).

oranges is never mentioned because it is common to both files; diff only reports the lines that have to change.
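The same change is often easier to read in unified format (header timestamps omitted here):

$ diff -u file1 file2
--- file1
+++ file2
@@ -1,2 +1,2 @@
-apples
 oranges
+apples

Here the context line (oranges) is shown unchanged, with apples removed from the top and re-added at the bottom.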
{ "source": [ "https://unix.stackexchange.com/questions/146521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28032/" ] }
146,551
I am working on a Linux Mint 16 computer, and since recently, every time I want to install something via apt-get install, the log message says that the packages couldn't be authenticated. I go ahead and try to install them without authentication, and it turns out most of the packages are not found. At the end of the process, the console message suggests that I use apt-get update or --fix-missing. So that's what I do:

sudo apt-get update

and immediately after, I try again to install with:

sudo apt-get install nginx

but I still get the same error message. What is the problem? Am I missing something?

Note: I would have copy/pasted the logs, but they are in Spanish, so they probably wouldn't have been of much help to most.

UPDATE: I managed to get the logs in English thanks to @Flup. Here they are:

For apt-get install:

ricardo@toshi ~$ sudo apt-get install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libnet-daemon-perl libplrpc-perl
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  nginx-common nginx-full
The following NEW packages will be installed:
  nginx nginx-common nginx-full
0 upgraded, 3 newly installed, 0 to remove and 17 not upgraded.
Need to get 404 kB of archives.
After this operation, 1246 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
WARNING: The following packages cannot be authenticated!
  nginx-common nginx-full nginx
Install these packages without verification [y/N]? y
Err http://archive.ubuntu.com/ubuntu/ raring-updates/universe nginx-common all 1.2.6-1ubuntu3.3
  404 Not Found [IP: 91.189.88.153 80]
Err http://security.ubuntu.com/ubuntu/ raring-security/universe nginx-common all 1.2.6-1ubuntu3.3
  404 Not Found [IP: 91.189.92.200 80]
Err http://security.ubuntu.com/ubuntu/ raring-security/universe nginx-full amd64 1.2.6-1ubuntu3.3
  404 Not Found [IP: 91.189.92.200 80]
Err http://security.ubuntu.com/ubuntu/ raring-security/universe nginx all 1.2.6-1ubuntu3.3
  404 Not Found [IP: 91.189.92.200 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx-common_1.2.6-1ubuntu3.3_all.deb 404 Not Found [IP: 91.189.92.200 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx-full_1.2.6-1ubuntu3.3_amd64.deb 404 Not Found [IP: 91.189.92.200 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx_1.2.6-1ubuntu3.3_all.deb 404 Not Found [IP: 91.189.92.200 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
For apt-get update : ricardo@toshi ~$ sudo apt-get update Hit http://dl.google.com stable Release.gpg Ign http://es.archive.ubuntu.com raring Release.gpg Hit http://archive.canonical.com raring Release.gpg Hit http://dl.google.com stable Release.gpg Hit http://ppa.launchpad.net raring Release.gpg Ign http://es.archive.ubuntu.com raring Release.gpg Ign http://archive.ubuntu.com raring Release.gpg Hit http://dl.google.com stable Release Hit http://archive.canonical.com raring Release Ign http://es.archive.ubuntu.com raring Release Hit http://ppa.launchpad.net raring Release.gpg Ign http://archive.ubuntu.com raring-updates Release.gpg Hit http://dl.google.com stable Release Ign http://security.ubuntu.com raring-security Release.gpg Ign http://es.archive.ubuntu.com raring Release Hit http://archive.canonical.com raring/partner amd64 Packages Ign http://archive.ubuntu.com raring Release Hit http://ppa.launchpad.net raring Release Hit http://downloads-distro.mongodb.org dist Release.gpg Hit http://dl.google.com stable/main amd64 Packages Get:1 http://packages.linuxmint.com olivia Release.gpg [198 B] Ign http://archive.ubuntu.com raring-updates Release Hit http://archive.canonical.com raring/partner i386 Packages Hit http://dl.google.com stable/main i386 Packages Hit http://ppa.launchpad.net raring Release Ign http://security.ubuntu.com raring-security Release Ign http://archive.ubuntu.com raring/main amd64 Packages/DiffIndex Hit http://ppa.launchpad.net raring/main Sources Ign http://archive.ubuntu.com raring/restricted amd64 Packages/DiffIndex Get:2 http://packages.linuxmint.com olivia Release [18.5 kB] Hit http://ppa.launchpad.net raring/main amd64 Packages Ign http://archive.ubuntu.com raring/universe amd64 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/main amd64 Packages/DiffIndex Hit http://dl.google.com stable/main amd64 Packages Hit http://downloads-distro.mongodb.org dist Release Ign http://archive.ubuntu.com raring/multiverse amd64 Packages/DiffIndex Hit http://ppa.launchpad.net raring/main i386 Packages Hit http://dl.google.com stable/main i386 Packages Ign http://archive.ubuntu.com raring/main i386 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/restricted amd64 Packages/DiffIndex Ign http://archive.ubuntu.com raring/restricted i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring/universe i386 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/universe amd64 Packages/DiffIndex Ign http://archive.ubuntu.com raring/multiverse i386 Packages/DiffIndex Hit http://toolbelt.heroku.com ./ Release.gpg Hit http://ppa.launchpad.net raring/main Sources Get:3 http://packages.linuxmint.com olivia/main amd64 Packages [23.5 kB] Hit http://downloads-distro.mongodb.org dist/10gen amd64 Packages Hit https://get.docker.io docker Release.gpg Hit http://ppa.launchpad.net raring/main amd64 Packages Ign http://security.ubuntu.com raring-security/multiverse amd64 Packages/DiffIndex Ign http://archive.canonical.com raring/partner Translation-en Hit http://ppa.launchpad.net raring/main i386 Packages Hit https://get.docker.io docker Release Ign http://archive.canonical.com raring/partner Translation-es Ign http://security.ubuntu.com raring-security/main i386 Packages/DiffIndex Hit http://toolbelt.heroku.com ./ Release Hit http://downloads-distro.mongodb.org dist/10gen i386 Packages Hit https://get.docker.io docker/main amd64 Packages Get:4 http://packages.linuxmint.com olivia/upstream amd64 Packages [9249 B] Ign http://security.ubuntu.com 
raring-security/restricted i386 Packages/DiffIndex Hit https://get.docker.io docker/main i386 Packages Get:5 http://packages.linuxmint.com olivia/import amd64 Packages [39.2 kB] Ign http://security.ubuntu.com raring-security/universe i386 Packages/DiffIndex Hit http://toolbelt.heroku.com ./ Packages Ign http://archive.ubuntu.com raring-updates/main amd64 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/multiverse i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/restricted amd64 Packages/DiffIndex Ign http://dl.google.com stable/main Translation-en Ign http://archive.ubuntu.com raring-updates/universe amd64 Packages/DiffIndex Ign http://dl.google.com stable/main Translation-es Ign http://archive.ubuntu.com raring-updates/multiverse amd64 Packages/DiffIndex Ign http://dl.google.com stable/main Translation-en Ign http://archive.ubuntu.com raring-updates/main i386 Packages/DiffIndex Get:6 http://packages.linuxmint.com olivia/main i386 Packages [23.5 kB] Ign http://dl.google.com stable/main Translation-es Ign http://archive.ubuntu.com raring-updates/restricted i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/universe i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/multiverse i386 Packages/DiffIndex Get:7 http://packages.linuxmint.com olivia/upstream i386 Packages [9237 B] Ign http://ppa.launchpad.net raring/main Translation-en Get:8 http://packages.linuxmint.com olivia/import i386 Packages [40.1 kB] Ign http://ppa.launchpad.net raring/main Translation-es Ign http://ppa.launchpad.net raring/main Translation-en Ign http://ppa.launchpad.net raring/main Translation-es Ign http://toolbelt.heroku.com ./ Translation-en Ign http://toolbelt.heroku.com ./ Translation-es Ign http://downloads-distro.mongodb.org dist/10gen Translation-en Ign http://downloads-distro.mongodb.org dist/10gen Translation-es Err http://es.archive.ubuntu.com raring/main amd64 Packages 404 Not Found [IP: 91.189.92.201 80] Err http://es.archive.ubuntu.com raring/universe amd64 Packages 404 Not Found [IP: 91.189.92.201 80] Err http://es.archive.ubuntu.com raring/main i386 Packages 404 Not Found [IP: 91.189.92.201 80] Err http://es.archive.ubuntu.com raring/universe i386 Packages 404 Not Found [IP: 91.189.92.201 80] Ign http://es.archive.ubuntu.com raring/main Translation-en Ign http://es.archive.ubuntu.com raring/main Translation-es Ign http://es.archive.ubuntu.com raring/universe Translation-en Ign http://es.archive.ubuntu.com raring/universe Translation-es Err http://es.archive.ubuntu.com raring/main amd64 Packages 404 Not Found [IP: 91.189.92.201 80] Err http://es.archive.ubuntu.com raring/universe amd64 Packages 404 Not Found [IP: 91.189.92.201 80] Err http://es.archive.ubuntu.com raring/main i386 Packages 404 Not Found [IP: 91.189.92.201 80] Err http://es.archive.ubuntu.com raring/universe i386 Packages 404 Not Found [IP: 91.189.92.201 80] Ign http://es.archive.ubuntu.com raring/main Translation-en Ign http://es.archive.ubuntu.com raring/main Translation-es Ign http://es.archive.ubuntu.com raring/universe Translation-en Ign http://es.archive.ubuntu.com raring/universe Translation-es Ign http://packages.linuxmint.com olivia/import Translation-en Ign http://packages.linuxmint.com olivia/import Translation-es Ign https://get.docker.io docker/main Translation-en Ign http://packages.linuxmint.com olivia/main Translation-en Ign http://packages.linuxmint.com olivia/main Translation-es Ign http://packages.linuxmint.com olivia/upstream Translation-en Ign 
https://get.docker.io docker/main Translation-es Ign http://packages.linuxmint.com olivia/upstream Translation-es Ign http://archive.ubuntu.com raring/main Translation-en Ign http://archive.ubuntu.com raring/main Translation-es Ign http://archive.ubuntu.com raring/multiverse Translation-en Ign http://archive.ubuntu.com raring/multiverse Translation-es Ign http://archive.ubuntu.com raring/restricted Translation-en Ign http://archive.ubuntu.com raring/restricted Translation-es Ign http://archive.ubuntu.com raring/universe Translation-en Ign http://archive.ubuntu.com raring/universe Translation-es Ign http://archive.ubuntu.com raring-updates/main Translation-en Ign http://archive.ubuntu.com raring-updates/main Translation-es Ign http://archive.ubuntu.com raring-updates/multiverse Translation-en Ign http://archive.ubuntu.com raring-updates/multiverse Translation-es Ign http://archive.ubuntu.com raring-updates/restricted Translation-en Ign http://archive.ubuntu.com raring-updates/restricted Translation-es Ign http://security.ubuntu.com raring-security/main Translation-en Ign http://archive.ubuntu.com raring-updates/universe Translation-en Ign http://archive.ubuntu.com raring-updates/universe Translation-es Ign http://security.ubuntu.com raring-security/main Translation-es Err http://archive.ubuntu.com raring/main amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring/restricted amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Ign http://security.ubuntu.com raring-security/multiverse Translation-en Err http://archive.ubuntu.com raring/universe amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Ign http://security.ubuntu.com raring-security/multiverse Translation-es Err http://archive.ubuntu.com raring/multiverse amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring/main i386 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring/restricted i386 Packages 404 Not Found [IP: 91.189.92.200 80] Ign http://security.ubuntu.com raring-security/restricted Translation-en Err http://archive.ubuntu.com raring/universe i386 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring/multiverse i386 Packages 404 Not Found [IP: 91.189.92.200 80] Ign http://security.ubuntu.com raring-security/restricted Translation-es Err http://archive.ubuntu.com raring-updates/main amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring-updates/restricted amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Ign http://security.ubuntu.com raring-security/universe Translation-en Err http://archive.ubuntu.com raring-updates/universe amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring-updates/multiverse amd64 Packages 404 Not Found [IP: 91.189.92.200 80] Ign http://security.ubuntu.com raring-security/universe Translation-es Err http://archive.ubuntu.com raring-updates/main i386 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring-updates/restricted i386 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://archive.ubuntu.com raring-updates/universe i386 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://security.ubuntu.com raring-security/main amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://archive.ubuntu.com raring-updates/multiverse i386 Packages 404 Not Found [IP: 91.189.92.200 80] Err http://security.ubuntu.com raring-security/restricted amd64 Packages 404 Not Found [IP: 91.189.91.15 
80] Err http://security.ubuntu.com raring-security/universe amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com raring-security/multiverse amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com raring-security/main i386 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com raring-security/restricted i386 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com raring-security/universe i386 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com raring-security/multiverse i386 Packages 404 Not Found [IP: 91.189.91.15 80] Fetched 163 kB in 15s (10.9 kB/s) W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/universe/binary-amd64/Packages 404 Not Found [IP: 
91.189.91.15 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80] E: Some index files failed to download. They have been ignored, or old ones used instead.
The thing that helped me was: https://smyl.es/how-to-fix-ubuntudebian-apt-get-404-not-found-package-repository-errors-saucy-raring-quantal-oneiric-natty/

Basically, it means updating the source lists to use old-releases.ubuntu.com (Raring is end-of-life, so its packages were moved off the main mirrors):

sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list.d/official-package-repositories.list
sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list.d/official-source-repositories.list

Edit: As Meisam Mulla said in the comments, if the URLs in your /etc/apt/sources.list files are prefixed with something (ca. for example), you'll need to remove the prefixes manually, as ca.old-releases.ubuntu.com isn't a valid address.

Also, some of my error messages, for the Googlers:

W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/source/Sources 404 Not Found [IP: 91.189.88.153 80]
Err http://archive.ubuntu.com raring/main Sources 404 Not Found [IP: 91.189.92.201 80]
Err http://archive.ubuntu.com raring/restricted Sources 404 Not Found [IP: 91.189.92.201 80]
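After rewriting the lists, refresh the package indexes so apt picks up the new mirror:

$ sudo apt-get update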
{ "source": [ "https://unix.stackexchange.com/questions/146551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54616/" ] }
146,570
How can I create a menu in a shell script that displays 3 options, lets the user move the highlight with the arrow keys, and select one by pressing Enter?
Here is a pure bash script solution in the form of the select_option function, relying solely on ANSI escape sequences and the built-in read. Works on Bash 4.2.45 on OSX. The funky parts that might not work equally well in all environments, from all I know, are the get_cursor_row(), key_input() (to detect up/down keys) and the cursor_to() functions.

#!/usr/bin/env bash

# Renders a text based list of options that can be selected by the
# user using up, down and enter keys and returns the chosen option.
#
#   Arguments   : list of options, maximum of 256
#                 "opt1" "opt2" ...
#   Return value: selected index (0 for opt1, 1 for opt2 ...)
function select_option {

    # little helpers for terminal print control and key input
    ESC=$( printf "\033")
    cursor_blink_on()  { printf "$ESC[?25h"; }
    cursor_blink_off() { printf "$ESC[?25l"; }
    cursor_to()        { printf "$ESC[$1;${2:-1}H"; }
    print_option()     { printf " $1 "; }
    print_selected()   { printf " $ESC[7m $1 $ESC[27m"; }
    get_cursor_row()   { IFS=';' read -sdR -p $'\E[6n' ROW COL; echo ${ROW#*[}; }
    key_input()        { read -s -n3 key 2>/dev/null >&2
                         if [[ $key = $ESC[A ]]; then echo up;    fi
                         if [[ $key = $ESC[B ]]; then echo down;  fi
                         if [[ $key = ""     ]]; then echo enter; fi; }

    # initially print empty new lines (scroll down if at bottom of screen)
    for opt; do printf "\n"; done

    # determine current screen position for overwriting the options
    local lastrow=`get_cursor_row`
    local startrow=$(($lastrow - $#))

    # ensure cursor and input echoing back on upon a ctrl+c during read -s
    trap "cursor_blink_on; stty echo; printf '\n'; exit" 2
    cursor_blink_off

    local selected=0
    while true; do
        # print options by overwriting the last lines
        local idx=0
        for opt; do
            cursor_to $(($startrow + $idx))
            if [ $idx -eq $selected ]; then
                print_selected "$opt"
            else
                print_option "$opt"
            fi
            ((idx++))
        done

        # user key control
        case `key_input` in
            enter) break;;
            up)    ((selected--));
                   if [ $selected -lt 0 ]; then selected=$(($# - 1)); fi;;
            down)  ((selected++));
                   if [ $selected -ge $# ]; then selected=0; fi;;
        esac
    done

    # cursor position back to normal
    cursor_to $lastrow
    printf "\n"
    cursor_blink_on

    return $selected
}

Here is an example usage:

echo "Select one option using up/down keys and enter to confirm:"
echo

options=("one" "two" "three")

select_option "${options[@]}"
choice=$?

echo "Chosen index = $choice"
echo "       value = ${options[$choice]}"

Output looks like below, with the currently selected option highlighted using inverse ansi coloring (hard to convey here in markdown). This can be adapted in the print_selected() function if desired.

Select one option using up/down keys and enter to confirm:

 [one]
  two
  three

Update: Here is a little extension select_opt wrapping the above select_option function to make it easy to use in a case statement:

function select_opt {
    select_option "$@" 1>&2
    local result=$?
    echo $result
    return $result
}

Example usage with 3 literal options:

case `select_opt "Yes" "No" "Cancel"` in
    0) echo "selected Yes";;
    1) echo "selected No";;
    2) echo "selected Cancel";;
esac

You can also mix if there are some known entries (Yes and No in this case), and leverage the exit code $? for the wildcard case:

options=("Yes" "No" "${array[@]}") # join arrays to add some variable array
case `select_opt "${options[@]}"` in
    0) echo "selected Yes";;
    1) echo "selected No";;
    *) echo "selected ${options[$?]}";;
esac
{ "source": [ "https://unix.stackexchange.com/questions/146570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78702/" ] }
146,620
What is the difference between the sync and async mount options from the end-user point of view? Does a filesystem mounted with one of these options work faster than if mounted with the other? Which option is the default, if neither is set? man mount says that the sync option may reduce the lifetime of flash memory, but that may be obsolete conventional wisdom. Anyway, this concerns me a bit, because my primary hard drive, where the / and /home partitions are placed, is an SSD. The Ubuntu installer (14.04) specified neither the sync nor the async option for the / partition, but set async for /home via the option defaults. Here is my /etc/fstab; I added some additional lines (see comment), but didn't change anything in the lines made by the installer:

# / was on /dev/sda2 during installation
UUID=7e4f7654-3143-4fe7-8ced-445b0dc5b742 /         ext4 errors=remount-ro 0 1
# /home was on /dev/sda3 during installation
UUID=d29541fc-adfa-4637-936e-b5b9dbb0ba67 /home     ext4 defaults          0 2
# swap was on /dev/sda4 during installation
UUID=f9b53b49-94bc-4d8c-918d-809c9cefe79f none      swap sw                0 0
# here goes part written by me:
# /mnt/storage
UUID=4e04381d-8d01-4282-a56f-358ea299326e /mnt/storage ext4 defaults       0 2
# Windows C: /dev/sda1
UUID=2EF64975F6493DF9 /mnt/win_c ntfs auto,umask=0222,ro 0 0
# Windows D: /dev/sdb1
UUID=50C40C08C40BEED2 /mnt/win_d ntfs auto,umask=0222,ro 0 0

So if my /dev/sda is an SSD, should I - for the sake of reducing wear - add the async option for the / and /home file systems? Should I set the sync or async option for the additional partitions that I defined in my /etc/fstab? What is the recommended approach for SSD and HDD drives?
async is the opposite of sync, which is rarely used. async is the default; you don't need to specify it explicitly.

The option sync means that all changes to the filesystem in question are immediately flushed to disk; the respective write operations are waited for. For mechanical drives that means a huge slowdown since the system has to move the disk heads to the right position; with sync the userland process has to wait for the operation to complete. In contrast, with async the system buffers the write operation and optimizes the actual writes; meanwhile, instead of being blocked, the process in userland continues to run. (If something goes wrong, then close() returns -1 with errno = EIO.)

SSD: I don't know how fast SSD memory is compared to RAM memory, but certainly it is not faster, so sync is likely to give a performance penalty, although not as bad as with mechanical disk drives. As for the lifetime, the wisdom is still valid, since writing to an SSD a lot "wears" it out. The worst scenario would be a process that makes a lot of changes to the same place; with sync each of them hits the SSD, while with async (the default) the SSD won't see most of them due to the kernel buffering. At the end of the day, don't bother with sync; it's most likely that you're fine with async.
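If you ever do want synchronous writes for one particular mount, it is an ordinary mount option (the device and mountpoint below are placeholders):

mount -o sync /dev/sdb1 /mnt/usbstick

# or in /etc/fstab -- note that "defaults" already implies async:
# /dev/sdb1  /mnt/usbstick  ext4  defaults,sync  0  2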
{ "source": [ "https://unix.stackexchange.com/questions/146620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
146,633
I'm trying to create a new user on a Centos 6 system. First, I do:

useradd kevin

Then, I tried to run commands as that user:

su - kevin

However, I get the following error messages:

-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
[kevin@gazelle ~]$

And I can't do very much as that user. The permissions on /dev/null are as follows:

-rwxr-xr-x 1 root root 9 Jul 25 17:07 null

Roughly the same as they are on my Mac,

crw-rw-rw- 1 root wheel 3, 2 Jul 25 14:08 null

It's possible, but really unlikely, that I touched dev. As the root user, I tried adding kevin to the root group:

usermod -a -G root kevin

However I am still getting /dev/null permission denied errors. Why can't the new user write to /dev/null? What groups should the new user be a part of? Am I not impersonating the user correctly? Is there a beginners guide to setting up users/permissions on Linux?
Someone evidently moved a regular file to /dev/null. Rebooting will recreate it, or do:

rm -f /dev/null; mknod -m 666 /dev/null c 1 3

As @Flow has noted in a comment, you must be root to do this. 1 and 3 here are the device major and minor numbers on Linux-based OSes (the 3rd device handled by the mem driver; see /proc/devices, cat /sys/devices/virtual/mem/null/dev, readlink /sys/dev/char/1:3). It varies with the OS. For instance, it's 2, 2 on OpenBSD and AIX, and it may not always be the same on a given OS. Some OSes may supply a makedev / MAKEDEV command to help recreate them.
{ "source": [ "https://unix.stackexchange.com/questions/146633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9519/" ] }
146,671
I know this has probably been asked before, but I couldn't find it with Google. Given Linux Kernel No configurations that change $HOME bash Will ~ == $HOME be true?
What's important to understand is that ~ expansion is a feature of the shell (of some shells); it's not a magic character that means your home directory wherever it's used. It is expanded (by the shell, which is an application used to interpret command lines), like $var is expanded to its value under some conditions, when used in a shell command line before the command is executed.

That feature first appeared in the C-shell in the late 1970s (the Bourne shell didn't have it, nor did its predecessor the Thompson shell), and was later added to the Korn shell (a newer shell built upon the Bourne shell in the 80s). It was eventually standardized by POSIX and is now available in most shells including non-POSIX ones like fish.

Because it's in such widespread use in shells, some non-shell applications also recognise it as meaning the home directory. That's the case of many applications in their configuration files or their own command line (mutt, slrn, vim...).

bash specifically (which is the shell of the GNU project and widely used in many Linux-based operating systems), when invoked as sh, mostly follows the POSIX rules about ~ expansion, and in areas not specified by POSIX, behaves mostly like the Korn shell (of which it is a partial clone).

While $var is expanded in most places (except inside single quotes), ~ expansion, being an afterthought, is only expanded in a few specific conditions: when on its own as an argument in list contexts, or in contexts where a string is expected.

Here are a few examples of where it's expanded in bash:

cmd arg ~ other arg
var=~
var=x:~:x (required by POSIX, used for variables like PATH, MANPATH...)
for i in ~
[[ ~ = text ]]
[[ text = ~ ]] (the expansion of ~ being taken as a pattern in AT&T ksh but not bash since 4.0)
case ~ in ~) ...
${var#~} (though not in some other shells)
cmd foo=~ (though not when invoked as sh, and only when what's on the left of the = is shaped like an unquoted bash variable name)
cmd ~/x (required by POSIX obviously)
cmd ~:x (but not x:~:x or x-~-x)
a[~]=foo; echo "${a[~]} $((a[~]))" (not in some other shells)

Here are a few examples where it's not expanded:

echo "~" '~'
echo ~@ ~~ (also note that ~u is meant to expand to the home directory of user u)
echo @~
(( HOME == ~ )), $(( var + ~ ))
with extglob: case $var in @(~|other))... (though case $var in ~|other) is OK)
./configure --prefix=~ (as --prefix is not a valid variable name)
cmd "foo"=~ (in bash, because of the quotes)
when invoked as sh: export "foo"=~, env JAVA_HOME=~ cmd ...

As to what it expands to: ~ alone expands to the content of the HOME variable, or when it is not set, to the home directory of the current user in the account database (as an extension, since POSIX leaves that behaviour undefined). It should be noted that in ksh88 and bash versions prior to 4.0, tilde expansion underwent globbing (filename generation) in list contexts:

$ bash -c 'echo "$HOME"'
/home/***stephane***
$ bash -c 'echo ~'
/home/***stephane*** /home/stephane
$ bash -c 'echo "~"'
~

That should not be a problem in usual cases.

Note that because it's expanded, the same warning applies as with other forms of expansion:

cd ~

doesn't work if $HOME starts with - or contains .. components. So, even though it's very unlikely to ever make any difference, strictly speaking one should write:

cd -P -- ~

Or even:

case ~ in
  (/*) cd -P ~;;
  (*) d=~; cd -P "./$d";;
esac

(to cover for values of $HOME like -, +2...) or simply:

cd

(as cd takes you to your home directory without any argument).

Other shells have more advanced ~ expansions. For instance, in zsh, we have:

~4, ~-, ~-2 (with completion) used to expand the directories in your directory stack (the places you've cd to before).
dynamic named directories. You can define your own mechanism to decide how ~something is being expanded.
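To make the rules above concrete, here is a small, illustrative bash session (the paths are placeholders; the foo=~ line behaves as shown only in bash, not when it runs as sh):

$ echo ~
/home/you
$ echo "~" '~'
~ ~
$ echo ~/notes
/home/you/notes
$ echo foo=~        # assignment-shaped word, bash only
foo=/home/you
$ cd -P -- ~        # the defensive form discussed above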
{ "source": [ "https://unix.stackexchange.com/questions/146671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78754/" ] }
146,756
I have a Bash script, which looks similar to this: #!/bin/bash echo "Doing some initial work...."; /bin/start/main/server --nodaemon Now if the bash shell running the script receives a SIGTERM signal, it should also send a SIGTERM to the running server (which blocks, so no trap possible). Is that possible?
Try:

#!/bin/bash

_term() {
  echo "Caught SIGTERM signal!"
  kill -TERM "$child" 2>/dev/null
}

trap _term SIGTERM

echo "Doing some initial work...";

/bin/start/main/server --nodaemon &

child=$!
wait "$child"

Normally, bash will ignore any signals while a child process is executing. Starting the server with & will background it into the shell's job control system, with $! holding the server's PID (to be used with wait and kill). Calling wait will then wait for the job with the specified PID (the server) to finish, or for any signals to be fired. When the shell receives SIGTERM (or the server exits independently), the wait call will return (exiting with the server's exit code, or with the signal number + 128 in case a signal was received). Afterward, if the shell received SIGTERM, it will call the _term function specified as the SIGTERM trap handler before exiting (in which we do any cleanup and manually propagate the signal to the server process using kill).
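To see the trap in action, here is one way to exercise the wrapper (the script name is a placeholder):

./wrapper.sh &     # run the wrapper script above in the background
sleep 2            # give it time to start the server
kill -TERM $!      # should print "Caught SIGTERM signal!"
wait $!            # reap it; the server child is terminated too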
{ "source": [ "https://unix.stackexchange.com/questions/146756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70752/" ] }
146,784
I have had some rather bad experiences with GRUB2 . I could say (and have said) some nasty things about its design and development process. I especially dislike its means of update: for whatever reason it must semi-automatically update several scripts - one indirectly via another in a chain - for every kernel update - or many other minor (and seemingly unrelated) configuration alterations. This is directly contrasted by previous experiences I had with LILO - to which I am seriously considering reverting - as I never had any problems with it, and its configuration was pretty simple. For one thing, as I remember it, I had only to update (or, rather, it only ever updated) a single, simply-managed configuration text-file per kernel-update. So how does LILO work on modern hardware with today's kernels? How does GRUB? How do other bootloaders? Do I have to fulfill any preconditions, or is it just about writing the configuration file and running lilo command as I fondly remember it in the old days? Does the kernel package update (Debian/Ubuntu) update LILO as it does with GRUB2?
ELILO

Managing EFI Boot Loaders for Linux: Using ELILO

It's really difficult for me to decide which part of that to copy+paste because it's all really good, so I'll just ask you please to read it.

Rod Smith

Authored and maintains both gdisk and rEFInd. But before you do, I'd like to comment a little on it. The ELILO link above is to one of the many pages on UEFI booting you'll find on rodsbooks.com written by Rod Smith. He's an accomplished technical writer, and if you've ever googled the topic of UEFI booting and wound up not reading something of his, it was likely because you skipped the top several results.

Linux UEFI boot

Basically, the Linux kernel can be directly executed by the firmware. In the link above he mentions the Linux kernel's EFI stub loader - this is what you should be using, in my opinion, as it allows the linux kernel to be called directly by the firmware itself. Regardless of what you're doing, something is being executed by the firmware - and it sounds like that something is grub. If the firmware can directly load your os kernel, what good is a bootloader?

UEFI firmware mounts a FAT formatted GPT partition flagged esp by the partition table and executes a path there it has saved as a UEFI boot variable in an onboard flash memory module. So one thing you might do is put the linux kernel on that FAT partition and store its path in that boot variable. Suddenly the kernel is its own bootloader.

Bootloaders

On UEFI systems, bootloaders are redundant - ELILO included. The problem bootloaders were designed to solve was that BIOS systems only read in the first sector of the boot flagged partition and execute it. It's a little difficult to do anything meaningful with a 512 byte kernel, so the common thing to do was write a tiny utility that could mount a filesystem where you kept the actual kernel and chainload it. In fact, the 512 bytes was often not enough even for the bootloaders. grub, for instance, actually chainloads itself before ever chainloading your kernel, because it wedges its second stage in the empty space between the boot sector and the first sector of your filesystem. It's kind of a dirty hack - but it worked.

Bootmanagers

For the sake of easy configuration though, some go-between can be useful. What Rod Smith's rEFInd does is launch as an EFI application - this is a relatively new concept. It is a program that is executed from disk by - and that returns to - the firmware. What rEFInd does is allow you to manage boot menus, and it then returns your boot selection to the firmware to execute. It comes with UEFI filesystem drivers - so, for instance, you can use the kernel's EFI-stub loader on a non-FAT partition (such as your current /boot). It is dead simple to manage - if such a thing is necessary at all - and it adds the simplicity of an executable system kernel to the convenience of a configurable bootmanager.

Atomic Indirection

The kernel doesn't need symlinks - it can mount --bind. If there's any path on your / where you should disallow symlinking, it is /boot. An orphaned symlink in /boot is not the kind of problem you should ever have to troubleshoot. Still, it is a common enough practice for several distributions to set up elaborate indirections in /boot - even if it is a horrible idea - in order to handle in-place kernel updates and/or multiple kernel configurations.

This is a problem for EFI systems not configured to load filesystem drivers (such as are provided with the rEFInd package) because FAT is a fairly stupid filesystem overall, and it does not understand them. I don't personally use the UEFI filesystem drivers provided with rEFInd, though most distributions include a rEFInd package that can be installed via package manager and forgotten about, just using their own awful symlinked /boot config and rEFInd's packaged UEFI filesystem drivers.

My Config

I once wrote a set of instructions on it and posted it here, but it looks like:

% grep esp /etc/fstab &&
> ls /esp/EFI
LABEL=ESP /esp vfat defaults 0 1
/esp/EFI/arch_root /boot none bind,defaults 0 0
arch_root/ arch_sqsh/ arch_xbmc/ BOOT/ ipxe/

So I just put those two lines in my /etc/fstab pointing to a folder that I intend to contain the new linux installation's /boot and I'm almost done worrying about the whole thing. I also have to do:

cat /boot/refind_linux.conf
"Arch" "root=LABEL=data rootflags=subvol=arch_root,rw,ssd,compress-force=lzo,space_cache,relatime"

Apart from installing the refind-efi package via pacman for the first one, that is all that is required to set up as many separate installations/configurations as I desire. Note that the majority of that string above consists of btrfs-specific mount-options specified as kernel parameters. A more typical /boot/refind_linux.conf would probably look like:

"Menu Entry" "root=/dev/sda2"

And that's all it takes.

rodsbooks.com

If you still want ELILO then you can find installation instructions at the link above. If you want rEFInd you'll find links to it in the first paragraph there. Basically, if you want to do any UEFI boot configuration, read rodsbooks.com first.
146,825
When I press Ctrl + " (create a new pane) while in a pane, which has the PWD /tmp for example, the new pane starts as my home folder ~ . I looked at https://unix.stackexchange.com/a/109255/72471 and it helped me with the same issue concerning windows. However, I couldn't fix the split-window issue by inserting bind " split-window -c "#{pane_current_path}" into my ~/.tmux.conf . I am using tmux 1.9a and therefor don't want a rather messy solution for older versions stated here (it doesn't work in my case, anyway): bind '"' set default-path "" \; split-window -v \; set -u default-path How can I tell tmux to set the default directory as the current path of a pane, when creating a new pane?
Try specifying v for vertical or h for horizontal My .tmux.conf file has: bind \ split-window -h -c '#{pane_current_path}' # Split panes horizontal bind - split-window -v -c '#{pane_current_path}' # Split panes vertically (I use \ and - as one-finger pane splitters.) New panes open for me using my current directory, wherever I am. It's certainly a key feature for me! One other critical thing with tmux (this was the issue in this case) is that you have to apply changes with: tmux source-file ~/.tmux.conf Note that closing terminals, even logging off and restarting, will NOT apply tmux changes – you have to actually use that command (or use Ctrl + B :source-file ~/.tmux.conf ). You can see my full .tmux.conf file at https://github.com/durrantm/setups .
{ "source": [ "https://unix.stackexchange.com/questions/146825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72471/" ] }
146,843
When I cd a link, my current path is prefixed with the link's path, rather than the path of the directory the link links to. E.g. ~/dirlinks/maths$ ls -l logic lrwxrwxrwx 1 tim tim 71 Jul 27 10:24 logic -> /windows-d/academic discipline/study objects/areas/formal systems/logic ~/dirlinks/maths$ cd logic ~/dirlinks/maths/logic$ pwd /home/tim/dirlinks/maths/logic ~/dirlinks/maths/logic$ cd .. ~/dirlinks/maths$ I would like to have my current path changed to the path of the linked dir, so that I can work with the parent dirs of the linked dir as well. Besides ls the link to find out the linked dir, and then cd into it, what are some simpler ways to accomplish that? For example, after cd into a link, how do you change your current path to the path of the linked dir?
With POSIX shell, you can use -P option of cd builtin: cd -P <link> With bash , from man bash : The -P option says to use the physical directory structure instead of following symbolic links (see also the -P option to the set builtin command)
{ "source": [ "https://unix.stackexchange.com/questions/146843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
146,913
cd - can switch between current dir and previous dir. It seems that I have seen - used as arguments to other commands before, though I don't remember if - means the same as with cd . I found that - doesn't work with ls . Is - used only with cd?
- is defined in the POSIX Utility Syntax Guidelines as standard input:

Guideline 13: For utilities that use operands to represent files to be opened for either reading or writing, the '-' operand should be used to mean only standard input (or standard output when it is clear from context that an output file is being specified) or a file named -.

This definition applies to utilities which operate on files for reading or writing. cd does not belong to those utilities, so - with cd does not follow this guideline. Instead, POSIX defines a separate meaning for - as an operand of cd:

- When a <hyphen> is used as the operand, this shall be equivalent to the command:

cd "$OLDPWD" && pwd

which changes to the previous working directory and then writes its name.
{ "source": [ "https://unix.stackexchange.com/questions/146913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
146,922
I have a 1TB big file (disk-image from a damaged drive) and a 1.3MB small file (beginning of a disk-file). Using the contents of the small file, I want to overwrite portions of the big file. That is, I want to insert/overwrite the first 1.3MB of the 1TB-image using the small file. Using small temporary files for testing I was unable to overwrite parts of the files. Rather, dd overwrote the files completely. This is not what I want. Is dd able to do this?
If you use the conv=notrunc argument, you can replace just the first however many bytes. e.g.

dd conv=notrunc if=small.img of=large.img

root@debian:~/ddtest# dd if=/dev/zero of=file1.img bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.14556 s, 9.2 MB/s
root@debian:~/ddtest# dd if=/dev/urandom of=file2.img bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.207185 s, 5.1 MB/s
root@debian:~/ddtest# head file1.img

<< Blank space here as it's all Zeroes >>

root@debian:~/ddtest# dd conv=notrunc if=file2.img of=file1.img
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.00468016 s, 224 MB/s
root@debian:~/ddtest# head file1.img
^�v�y�ے!� E�91����
<< SNIP Random garbage >>
root@debian:~/ddtest#
{ "source": [ "https://unix.stackexchange.com/questions/146922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11865/" ] }
146,942
The following bash syntax verifies if param isn't empty: [[ ! -z $param ]] For example: param="" [[ ! -z $param ]] && echo "I am not zero" No output and its fine. But when param is empty except for one (or more) space characters, then the case is different: param=" " # one space [[ ! -z $param ]] && echo "I am not zero" "I am not zero" is output. How can I change the test to consider variables that contain only space characters as empty?
First, note that the -z test is explicitly for: the length of string is zero That is, a string containing only spaces should not be true under -z , because it has a non-zero length. What you want is to remove the spaces from the variable using the pattern replacement parameter expansion : [[ -z "${param// }" ]] This expands the param variable and replaces all matches of the pattern (a single space) with nothing, so a string that has only spaces in it will be expanded to an empty string. The nitty-gritty of how that works is that ${var/pattern/string} replaces the first longest match of pattern with string . When pattern starts with / (as above) then it replaces all the matches. Because the replacement is empty, we can omit the final / and the string value: ${parameter/pattern/string} The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string . If pattern begins with ‘/’, all matches of pattern are replaced with string . Normally only the first match is replaced. ... If string is null, matches of pattern are deleted and the / following pattern may be omitted. After all that, we end up with ${param// } to delete all spaces. Note that though present in ksh (where it originated), zsh and bash , that syntax is not POSIX and should not be used in sh scripts.
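Wrapped up as a tiny bash helper, with a variant that also ignores tabs and newlines:

is_blank() { [[ -z "${1// }" ]]; }               # spaces only, as above
# is_blank() { [[ -z "${1//[[:space:]]/}" ]]; }  # any whitespace instead

is_blank ""    && echo "empty"         # prints: empty
is_blank "   " && echo "only spaces"   # prints: only spaces
is_blank " x " || echo "has content"   # prints: has content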
{ "source": [ "https://unix.stackexchange.com/questions/146942", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
147,261
I did try to install mysql-server on my Vagrant Ubuntu 12.04 LTS virtual machine. When I did so, the setup auto-starts. I can see this in the Vagrant output: While not mandatory, it is highly recommended that you set a password ││ for the MySQL administrative "root" user.││││ If this field is left blank, the password will not be changed.││││ New password for the MySQL "root" user After that the output text goes haywire — ± ├⎺ ⎼␊⎻┌▒␌␊ ┌␋␉⎽─┌␋├␊3-0 3.7.9-2┤␉┤┼├┤1 (┤⎽␋┼± ... — but is rather lengthy and full of green and red colors, so I believe the rest of the install is completing. But I can confirm the lack of install after: sudo apt-get install --just-print mysql-server-5.5 ... The following NEW packages will be installed: mysql-server-5.5 How can I send the right signals through a shell script to configure the MYSQL server? Or if I cannot, how can I stop the automatic launching of the configuration or kill the setup once launched while still having the package installed?
You can set the MySQL root password in your bootstrap file by adding debconf-set-selections commands before running your apt-get install:

#!/usr/bin/env bash
debconf-set-selections <<< 'mysql-server mysql-server/root_password password MySuperPassword'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password MySuperPassword'
apt-get update
apt-get install -y mysql-server

I presume this works on any Debian-based system. I use it every day; the box is built completely automatically.
{ "source": [ "https://unix.stackexchange.com/questions/147261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79059/" ] }
147,420
What is $() in Linux Shell Commands? For example: chmod 777 $(pwd)
It's very similar to the backticks ``. It's called command substitution (posix specification) and it invokes a subshell. The command between the parentheses of $() or between the backticks (`…`) is executed in a subshell and the output is then placed in the original command. Unlike backticks, the $(...) form can be nested, so you can use command substitution inside another substitution. There are also differences in escaping characters within the substitution. I prefer the $(...) form.
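For instance, nesting is where $(...) pays off. Both lines below print the name of the current directory, but the backtick form needs escaping:

echo "current dir: $(basename "$(pwd)")"
echo "current dir: `basename \`pwd\``"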
{ "source": [ "https://unix.stackexchange.com/questions/147420", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73778/" ] }
147,443
Is it possible for less output to set the tab width to a number X as it is for cat ?
Yes, it is possible with less -x or less --tabs , e.g. less -x4 will set the tabwidth to 4. You can configure defaults with the LESS environment variable, e.g. LESS="-x4" .
{ "source": [ "https://unix.stackexchange.com/questions/147443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72471/" ] }
147,446
How do you use top with just showing the CMD name? I have used top with just showing the running process that I want; for example: $ top -p 19745 And if I want more than one PID, I would use: $ top -p 19745 -p 19746 -p 19747 I have Googled it, but they don't say how you can do it, even when I try looking at the help in top it still doesn't show you. Is there a way you can filter by the CMD name only? There are certain files that I am running through Apache2, and I want to monitor them only. afile1.abc afile2.abc afile3.abc afile4.abc Update I see this in the man top page: x: Command -- Command line or Program name Display the command line used to start a task or the name of the associated program. You toggle between command line and name with 'c', which is both a command-line option and an interactive command. When you've chosen to display command lines, processes without a command line (like kernel threads) will be shown with only the program name in parentheses, as in this example: ( mdrecoveryd ) Either form of display is subject to potential truncation if it's too long to fit in this field's current width. That width depends upon other fields selected, their order and the current screen width. Note: The 'Command' field/column is unique, in that it is not fixed-width. When displayed, this column will be allocated all remaining screen width (up to the maximum 512 characters) to provide for the potential growth of program names into command lines. Will that do anything for me?
{ "source": [ "https://unix.stackexchange.com/questions/147446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20752/" ] }
147,471
I find the output of the shell command top to be a simple and familiar way to get a rough idea of the health of a machine. I'd like to serve top 's output (or something very similar to it) from a tiny web server on a machine for crude monitoring purposes. Is there a way to get top to write its textual output exactly once , without formatting characters? I've tried this: (sleep 1; echo 'q') | top > output.txt This seems to be close to what I want, except that (1) there's no guarantee that I won't get more or less than one screenful of info and (2) I have to strip out all the terminal formatting characters. Or is there some other top -like command that lists both machine-wide and process-level memory/CPU usage/uptime info? (Ideally, I'd love a strategy that's portable to both Linux and Mac OS X, since our devs use Macs and our prod environment is Linux.)
In Linux, you can try this:

top -bn1 > output.txt

From man top:

-b : Batch-mode operation
    Starts top in 'Batch' mode, which could be useful for sending output from top to other programs or to a file. In this mode, top will not accept input and runs until the iterations limit you've set with the '-n' command-line option or until killed.
....
-n : Number-of-iterations limit as: -n number
    Specifies the maximum number of iterations, or frames, top should produce before ending.

With OS X, try:

top -l 1

From the top OSX manpage:

-l <samples>
    Use logging mode and display <samples> samples, even if standard output is a terminal. 0 is treated as infinity. Rather than redisplaying, output is periodically printed in raw form. Note that the first sample displayed will have an invalid %CPU displayed for each process, as it is calculated using the delta between samples.
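For the crude-monitoring use case in the question, one possible sketch (the output path is an assumption; point it wherever your web server serves files from):

while true; do
    top -bn1 > /var/www/html/top.txt   # Linux; use `top -l 1` on OS X
    sleep 5
done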
{ "source": [ "https://unix.stackexchange.com/questions/147471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28969/" ] }
147,560
Consider the following bash script: #!/bin/bash echo "${1##*.}" This script prints the extension of a file if the file name is supplied as the first command line argument, something like following: $ ./script.sh ./file.pdf In the above case pdf is printed. Please explain how the expression ${1##*.} is able to extract the extension of the file. (I understand what $0, $1, $2, $# do in bash and also understand about regular expressions to some extent)
Bash parameter expansion supports several modifications it can do to the value while expanding a variable. One of them is ## , which removes the longest prefix of the value matching a pattern (patterns are not regular expressions here). In this case the pattern is *. . That matches any zero or more characters followed by a . . ${x##*.} means to remove all of the string up to the last . character, and leave everything after that dot. ${1##*.} means to do that expansion using the value of the first positional parameter, the one you'd usually access with $1 . The final result of echo "${1##*.}" is then to print out the part of the first argument of the script that comes after the last . , which is the filename extension. If the pattern doesn't match at all, the full value of the variable is expanded, just as if you hadn't used the ## . In this case, if the argument you gave didn't have a . in it at all then you'd just get it back out again. Bash also supports a single # to take the shortest matching prefix off, and the same thing with % to match the end of the string instead.
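The # / ## / % / %% family is easiest to see side by side; for example:

f=archive.tar.gz
echo "${f##*.}"   # gz          (remove longest prefix matching *.)
echo "${f#*.}"    # tar.gz      (remove shortest prefix matching *.)
echo "${f%.*}"    # archive.tar (remove shortest suffix matching .*)
echo "${f%%.*}"   # archive     (remove longest suffix matching .*)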
{ "source": [ "https://unix.stackexchange.com/questions/147560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77842/" ] }
147,563
I know I can use Up to iterate through previous commands. Running the last command simply involves Up + Enter . However, I was thinking of buying the Happy Hacking Keyboard as I spend a lot of time in vim . This keyboard has no arrow keys, and the only way I know how to get this kind of behaviour is by pressing Ctrl + R and beginning to repeat my previous command. Is there an easy way to emulate Up + Enter in an UNIX terminal without the arrow keys?
With csh or any shell implementing csh-like history substitution (tcsh, bash, zsh):

!!

Then Enter. Or alternatively:

!-1

Then Enter. Or Ctrl + P, Enter.

Magic space

Also, note that !! and !-1 will not auto-expand for you until you execute them (when it might be too late). If using bash, you can put bind Space:magic-space into ~/.bashrc; then pressing Space after the command will auto-expand them inline, allowing you to inspect them before execution. This is particularly useful for history expansion from a command run a while ago, e.g. !echo will pull the last command run starting with echo. With magic space, you get to preview the command before it's run. That's the equivalent of doing bindkey ' ' magic-space in tcsh or zsh.
{ "source": [ "https://unix.stackexchange.com/questions/147563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
147,795
I am now under a directory with a very long path. To visit it more quickly in the future, I would like to create a link to it. I tried

ln -s . ~/mylink

but ~/mylink actually links to ~. So can I expand ~ into the absolute pathname, and then give it to ln?
A symlink actually stores the path you give literally, as a string¹. That means your link ~/mylink contains " . " (one character). When you access the link, that path is interpreted relative to where the link is, rather than where you were when you made the link. Instead, you can store the actual path you want in the link: ln -s "$(pwd)" ~/mylink using command substitution to put the output of pwd (the working directory name) into your command line. ln sees the full path and stores it into your symlink, which will then point to the right place. ¹ More or less.
{ "source": [ "https://unix.stackexchange.com/questions/147795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
147,885
Occasionally, I need to check resources on several machines throughout our data-centers for consolidation recommendations and the like. I prefer htop, mostly because of the interactive feel and the display. Is there a way to make some settings the default for htop on my setup? For example, one thing I'd like to always have shown is the average CPU load.

Important note: Setting this on specific boxes isn't feasible - I'm looking for a way to set this dynamically every time I ssh into the box. Is this possible at all?
htop has a setup screen, accessed via F2, that allows you to customize the top part of the display, including adding or removing a "Load average" field and setting its style (text, bar, etc.). These seem to be auto-saved in $HOME/.config/htop/htoprc, which warns:

# Beware! This file is rewritten by htop when settings are changed in the interface.
# The parser is also very primitive, and not human-friendly.

I.e., edit that at your own risk. However, you should be able to transfer it from one system to another (version differences might occasionally cause a bit of an issue). You could also set up a configuration, quit, and then copy the file, so that you could maintain a set of different configurations by swapping/symlinking whichever one you want to htoprc.
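So one workable approach for the many-machines case is to keep one htoprc you like and push it out before connecting. A hedged sketch (hostnames are placeholders, and the htop versions should roughly match):

for host in box1 box2 box3; do
    ssh "$host" 'mkdir -p ~/.config/htop'
    scp ~/.config/htop/htoprc "$host":.config/htop/htoprc
done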
{ "source": [ "https://unix.stackexchange.com/questions/147885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79419/" ] }
147,957
In awk, I can clear an array with a loop, making it an empty array, which is equivalent to deleting it. for (key in array) delete array[key]; Is there a simpler way? Can I completely delete an array, so that the variable name can be re-used for a scalar?
The syntax delete array is not in current versions of POSIX, but it is supported by virtually all existing implementations (including the original awk, GNU, mawk, and BusyBox). It will be added in a future version of POSIX (see defect 0000544). An alternate way to clear all array elements, which is both portable and standard-compliant, and which is an expression rather than a statement, is to rely on split deleting all existing elements:

split("", array)

All of these, including delete array, leave the variable marked as being an array variable in the original awk, in GNU awk and in mawk (but not in BusyBox awk). As far as I know, once a variable has been used as an array, there is no way to use it as a scalar variable.
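A quick self-contained illustration of the portable idiom:

awk 'BEGIN {
    a["x"] = 1; a["y"] = 2
    split("", a)            # clears all elements portably
    n = 0
    for (k in a) n++
    print n                 # prints 0
}'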
{ "source": [ "https://unix.stackexchange.com/questions/147957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
148,035
I always thought that the only benefit of using dash instead of bash was that dash was smaller, and therefore many instances of dash would start faster at boot time. But I have done some research, and found some people migrating all their scripts to dash in the hope they would run faster, and I also found this in the article DashAsBinSh in the Ubuntu Wiki:

The major reason to switch the default shell was efficiency. bash is an excellent full-featured shell appropriate for interactive use; indeed, it is still the default login shell. However, it is rather large and slow to start up and operate by comparison with dash.

Nowadays I've been using lots of bash scripts for many things on my system, and my problem is that I have a particular script that I'm running continuously 24/7, that spawns around 200 children, which together heat my computer 10°C more than in normal usage. It is a rather large script with lots of bashisms, so porting it to POSIX or some other shell would be very time consuming (and POSIX doesn't really matter for personal use), but it would be worthwhile if I could reduce some of this CPU usage. I know there are also other things to consider, like calling an external binary like sed for a simple bashism like ${foo/bar}, or grep instead of =~.

TL;DR: is bash really slower to start up and operate in comparison with dash? Are there other Unix shells which are more efficient than bash?
SHELL SEQ:

Probably a useful means of bench-marking a shell's performance is to do a lot of very small, simple evaluations repetitively. It is important, I think, not just to loop, but to loop over input, because a shell needs to read <&0.

I thought this would complement the tests @cuonglm already posted because it demonstrates a single shell process's performance once invoked, as opposed to his which demonstrates how quickly a shell process loads when invoked. In this way, between us, we cover both sides of the coin.

Here's a function to facilitate the demo:

sh_bench() ( #don't copy+paste comments
    o=-c sh=$(command -v "$1") ; shift #get shell $PATH; toss $1
    [ -z "${sh##*busybox}" ] && o='ash -c' #cause its weird
    set -- "$sh" $o "'$(cat <&3)'" -- "$@" #$@ = invoke $shell
    time env - "$sh" $o "while echo; do echo; done|$*" #time (env - sh|sh) AC/DC
) 3<<-\SCRIPT #Everything from here down is run by the different shells
    i="${2:-1}" l="${1:-100}" d="${3:- }"; set -- "\$((n=\$n\${n:++\$i}))\$d" #prep loop; prep eval
    set -- $1$1$1$1$1$1$1$1$1$1 #yup
    while read m #iterate on input
    do  [ $(($i*50+${n:=-$i})) -gt "$(($l-$i))" ] || #eval ok?
        eval echo -n \""$1$1$1$1$1"\" #yay!
        [ $((n=$i+$n)) -gt "$(($l-$i))" ] && #end game?
        echo "$n" && exit #and EXIT
        echo -n "$n$d" #damn - maybe next time
    done #done
#END
SCRIPT
#end heredoc

It either increments a variable once per newline read or, as a slight optimization, if it can, it increments 50 times per newline read. Every time the variable is incremented it is printed to stdout. It behaves a lot like a sort of seq cross nl.

And just to make it very clear what it does - here's some truncated set -x output after inserting it just before time in the function above:

time env - /usr/bin/busybox ash -c '
while echo; do echo; done |
/usr/bin/busybox ash -c '"'$( cat <&3 )'"' -- 20 5 busybox'

So each shell is first called like:

env - $shell -c "while echo; do echo; done |..."

...to generate the input that it will need to loop over when it reads in 3<<\SCRIPT - or when cat does, anyway. And on the other side of that |pipe it calls itself again like:

"...| $shell -c '$(cat <<\SCRIPT)' -- $args"

So aside from the initial call to env (because cat is actually called in the previous line), no other processes are invoked from the time it is called until it exits. At least, I hope that's true.

Before the numbers... I should make some notes on portability.

posh doesn't like $((n=n+1)) and insists on $((n=$n+1))

mksh doesn't have a printf builtin in most cases. Earlier tests had it lagging a great deal - it was invoking /usr/bin/printf for every run. Hence the echo -n above.

maybe more as I remember it...

Anyway, to the numbers:

for sh in dash busybox posh ksh mksh zsh bash
do
    sh_bench $sh 20 5 $sh 2>/dev/null
    sh_bench $sh 500000 | wc -l
    echo
done

That'll get 'em all in one go...

0dash5dash10dash15dash20
real    0m0.909s
user    0m0.897s
sys     0m0.070s
500001

0busybox5busybox10busybox15busybox20
real    0m1.809s
user    0m1.787s
sys     0m0.107s
500001

0posh5posh10posh15posh20
real    0m2.010s
user    0m2.060s
sys     0m0.067s
500001

0ksh5ksh10ksh15ksh20
real    0m2.019s
user    0m1.970s
sys     0m0.047s
500001

0mksh5mksh10mksh15mksh20
real    0m2.287s
user    0m2.340s
sys     0m0.073s
500001

0zsh5zsh10zsh15zsh20
real    0m2.648s
user    0m2.223s
sys     0m0.423s
500001

0bash5bash10bash15bash20
real    0m3.966s
user    0m3.907s
sys     0m0.213s
500001

ARBITRARY = MAYBE OK?

Still, this is a rather arbitrary test, but it does test reading input, arithmetic evaluation, and variable expansion. Maybe not comprehensive, but possibly near to there.

EDIT by Teresa e Junior: @mikeserv and I have done many other tests (see our chat for details), and we found the results could be summarized like this:

If you need speed, definitely go with dash; it is much faster than any other shell and about 4x faster than bash.

While busybox's shell can be much slower than dash, in some tests it could be faster, because it has many of its own userland utilities, like grep, sed, sort, etc., which don't have as many features as the commonly used GNU utilities, but can get the work done all the same.

If speed is not everything you care about, ksh (or ksh93) can be considered the best compromise between speed and features. Its speed compares to the smaller mksh, which is way faster than bash, and it also has some unique features, like floating point arithmetic.

Although bash is famous for its simplicity, stability, and functionality, it was the slowest of all shells in the majority of our tests, and by a large margin.
{ "source": [ "https://unix.stackexchange.com/questions/148035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9491/" ] }
148,041
Originally posted to AskUbuntu.com ... AskUbuntu has adopted a policy of closing questions about EOL (End Of Life) versions. There's a vocal contingent to remove them as well. To prevent possible loss of this popular question (342335 views to date), am placing a revised version here. --- docsalvager The "classic" system... Puppy Linux 5.2.8 (Lucid) based on Ubuntu 10.04 (Lucid Lynx) GRUB 2 boot loader GRUB 2 puts a number of *.mod files (kernel modules) in /boot/grub . Deleting these files (thinking they were misplaced sound files) resulted in failure on reboot and the prompt grub rescue> . How to recover in this situation?
This answer is for others out there that DocSalvager's answer doesn't work for. I followed DocSalvager's use of ls to find the correct hard drive partition. In my case it was (hd0,msdos5). Then I executed the following commands to get back to the normal grub boot loader screen.

grub rescue> set boot=(hd0,msdos5)
grub rescue> set prefix=(hd0,msdos5)/boot/grub
grub rescue> insmod normal
grub rescue> normal

After booting into Ubuntu I repaired the grub boot loader with the following command from the terminal.

sudo grub-install /dev/sda

Please reference this source for a visual walk-through of this process.
{ "source": [ "https://unix.stackexchange.com/questions/148041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27437/" ] }
148,043
It is said that on Unix and Linux in general, you should avoid having spaces in the name of a file (ordinary file, dir, link, device file, ...). But I do that all the time. For a filename with a space inside:

In Nautilus, the space character is shown as a space.

In a Bash terminal, I either use \ to represent a space, or enclose the filename within a pair of double quotes.

In some applications' files (Nautilus, not sure if the OS will also do so), the filename is written with the space replaced with %20.

Is a space really not allowed in a filename? How do you use or deal with a space in a filename correctly?
Spaces, and indeed every character except / and NUL, are allowed in filenames. The recommendation to not use spaces in filenames comes from the danger that they might be misinterpreted by software that poorly supports them. Arguably, such software is buggy. But also arguably, programming languages like shell scripting make it all too easy to write software that breaks when presented with filenames with spaces in them, and these bugs tend to slip through because shell scripts are not often tested by their developers using filenames with spaces in them. Spaces replaced with %20 is not often seen in filenames. That's mostly used for (web) URLs. Though it's true that %-encoding from URLs sometimes makes its way into filenames, often by accident.
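In practice that means quoting expansions and, when iterating over arbitrary names, using NUL delimiters. A small bash sketch (variable names are placeholders):

cp -- "$src" "$dest"   # quotes keep a name with spaces as one argument

# Handle any filename, including spaces and newlines (bash):
find . -type f -print0 |
while IFS= read -r -d '' f; do
    printf '%s\n' "$f"
done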
{ "source": [ "https://unix.stackexchange.com/questions/148043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
148,105
I installed CentOS 5.5 on my VMWare 8 recently and I am trying to add a new user on the system. I am unable to add the user unless I use su -. I believe it has something to do with the path not being set properly. I updated the path and here is what it looks like:

/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/uone/bin:/sbin

I believe the command is in the /sbin dir, which is already a part of the path. Can anyone suggest what else I might be missing?
Try adding /usr/sbin to your path. For example, to add it to the end of the path you would do something like this:

export PATH=$PATH:/the/file/path

(where /the/file/path here would be /usr/sbin).
{ "source": [ "https://unix.stackexchange.com/questions/148105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73029/" ] }
148,133
I had a problem running a script from crontab. After some research I understood the problem was that the PATH parameter doesn't include /sbin. I looked at what it does include in /etc/crontab:

PATH=/sbin:/bin:/usr/sbin:/usr/bin

As a test - a simple cron job to print the PATH variable:

* * * * * echo $PATH &> /root/TMP.log

the output is:

cat /root/TMP.log
/usr/bin:/bin

I don't understand this behaviour... How do I set the PATH variable? Or better - how do I add paths to it?
While they are similar, a user crontab (edited using crontab -e) is different from, and keeps a separate path from, the system crontab (edited by editing /etc/crontab). The system crontab has 7 fields, inserting a username before the command. The user crontab, on the other hand, has only 6 fields, going directly into the command immediately after the time fields. Likewise, the PATH in the system crontab normally includes the /sbin directories, whereas the PATH in the user crontab does not. If you want to set PATH for the user crontab, you need to define the PATH variable in the user crontab.

A simple workaround for adding your regular PATH to shell commands in cron is to have the cron job source your profile by running bash as a login shell. For example, instead of

* * * * * some command

you can instead run

* * * * * bash -lc 'some command'

That way, if your profile sets the PATH or other environment variables to something special, it also gets included in your command.
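As an alternative to the bash -lc trick, you can also set PATH at the top of the crontab itself; for the test job from the question that would look something like:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

* * * * * echo "$PATH" > /root/TMP.log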
{ "source": [ "https://unix.stackexchange.com/questions/148133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67047/" ] }
148,187
I am curious to know what trick you use to remember options for various commands? Looking up man pages all the time is time consuming and not so cool!
The trick is simple: You just don't. It's a waste of time and just not necessary. Memorizing command options isn't a particularly useful skill. It's much more important to understand how stuff works in general and to have a vague idea which tools exist in the first place and what you use them for. A very important skill here is to know how to find out about stuff you don't know yet. Man pages are time consuming? Not so. It's not like you have to read them - at least, not every time - there is a search function. So if I don't remember which cryptic option was the one for hdparm to disable idle timer on some WD disks, I do man hdparm and /idle3 and hey, it was -J . Looking stuff like that up is so quick I don't even remember doing it afterwards. Imagine someone actually memorizing all of the hdparm options. What a waste of time. It's fine if you just happen to remember options because you use them frequently. That happens automatically without even thinking about it. But actually consciously spending time on memorizing them... what's that supposed to be good for? A paper test?
{ "source": [ "https://unix.stackexchange.com/questions/148187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73029/" ] }
148,197
I have a weird problem where my laptop will wake when it's closed, generating a lot of heat and causing much frustration. Is there a way that I can tell if the laptop's lid is closed so that I can automatically suspend the computer (via a cron script) if it wakes itself while the lid is closed? Closing the lid does currently suspend the machine and opening it does wake it, so that works properly. It's a 2011 MacBook Pro running Ubuntu 12.04.
For my specific case, I can get the status of the lid with:

$ cat /proc/acpi/button/lid/LID0/state
state:      open

I can then just grep for open or closed to see if it's open or closed.
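Building on that, here is a minimal sketch of the cron-driven suspend the question asks for (the LID0 path and the suspend command are assumptions; both vary by machine and distribution):

#!/bin/sh
# Suspend again if the machine woke up with the lid still shut.
if grep -q closed /proc/acpi/button/lid/LID0/state; then
    pm-suspend
fi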
{ "source": [ "https://unix.stackexchange.com/questions/148197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
148,545
I am running Ubuntu 10.04 and I use upstart for daemon management. My enterprise application is run as a daemon and must be run as root because of various privileges. E.g.:

sudo start my-application-long-ID
sudo stop my-application-long-ID
etc.

I would like to introduce an alias to abbreviate these commands as something like:

alias startapp='sudo start my-application-long-ID'

and run it as startapp, and that works, but I would prefer to not have sudo in the alias.

alias startapp='start my-application-long-ID'

does not work when run using sudo startapp, returning sudo: startapp: command not found. However, when I added the alias:

alias sudo='sudo '

sudo startapp now works, but I am still curious why sudo ignores aliases.
The information below is from here.

When using sudo, use alias expansion (otherwise sudo ignores your aliases):

alias sudo='sudo '

The reason why it doesn't work is explained here. Bash only checks the first word of a command for an alias; any words after that are not checked. That means in a command like sudo ll, only the first word (sudo) is checked by bash for an alias; ll is ignored. We can tell bash to check the next word after the alias (i.e. sudo) by adding a space to the end of the alias value.
{ "source": [ "https://unix.stackexchange.com/questions/148545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23944/" ] }
148,563
Apparently you can rename file to ... . If I were insane, how would I rename file to .. or . ? Is such a filename even allowed? Backslash doesn't seem to disable dot's special meaning: $ mv test \. mv: `test' and `./test' are the same file
You can't rename a file to . or .. because all directories already contain entries for those two names. (Those entries point to directories, and you can't rename a file to a directory.) mv detects the case where the destination is an existing directory, and interprets it as a request to move the file into that directory (using its current name). Backslashes have nothing to do with this, because . is not a shell metacharacter. \. and . are the same to bash .
{ "source": [ "https://unix.stackexchange.com/questions/148563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53467/" ] }
148,890
I need to disable SELinux but cannot restart the machine. I followed this link, where I got the below command:

setenforce 0

But after running this command I checked with sestatus :

SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          disabled
Policy version:                 24
Policy from config file:        targeted

Is there any other option?
sestatus is showing the current mode as permissive . In permissive mode, SELinux will not block anything, but merely warns you. The line will show enforcing when it's actually blocking. I don't believe it's possible to completely disable SELinux without a reboot.
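Note that since your sestatus already shows "Mode from config file: disabled", SELinux should stay off after the next reboot. If it didn't, the usual approach (a sketch, assuming a RHEL-style layout where the config lives at /etc/selinux/config) is to set it there and reboot:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config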
{ "source": [ "https://unix.stackexchange.com/questions/148890", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64773/" ] }
148,929
I am trying to copy a file that has colons and periods in its name, e.g. with:

scp "test.json-2014-08-07T11:17:58.662378" remote:tmp/
scp test.json-2014-08-07T11\:17\:58.662378 remote:tmp/

and combinations with file: :

scp "file:///home/.../test.json-2014-08-07T11:17:58.662378" remote:tmp/

My guess is that scp tries to interpret parts of the filename as a server and/or port number. How do I avoid that? If I rename the file to test.json then scp test.json remote:tmp/ works OK, but not even scp test*62378 remote:tmp/ works.
Use ./ before your filename:

scp ./test.json-2014-08-07T11:17:58.662378 remote:tmp/

That makes scp treat it as a local file. Without it, scp takes everything before the colon for a hostname.
{ "source": [ "https://unix.stackexchange.com/questions/148929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79719/" ] }
148,975
I am confused about how md5sum --check is supposed to work:

$ man md5sum
-c, --check
       read MD5 sums from the FILEs and check them

I have a file; I can pipe it to md5sum :

$ cat file | md5sum
44693b9ef883e231cd9f90f737acd58f  -

When I want to check the integrity of the file tomorrow, how can I check if the md5sum is still 44693b9ef883e231cd9f90f737acd58f ? Note: cat file might be a stream, so I want to use the pipe as in my example, not md5sum file .
You do this:

cat file | md5sum > sumfile

And the next day you can do this:

cat file | md5sum --check sumfile

which prints

-: OK

if everything is alright.
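If you'd rather not keep a sum file around, a bash-specific sketch using process substitution also works; note the two spaces and the trailing - , which tells md5sum to read the data from stdin:

cat file | md5sum --check <(echo "44693b9ef883e231cd9f90f737acd58f  -")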
{ "source": [ "https://unix.stackexchange.com/questions/148975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
148,985
I have been asked to write a shell script to check if a URL is up/working or not for my project. I tried to find some hints over the internet, but whatever I got is about checking whether the URL exists or not. I first tried with wget :

wget -S --spider https://genesis-dev.webbank.ssmb.com:21589/gop-ui/app.jsp 2>&1 | awk '/^  /'
if [ $? -ne 0 ]
then
    echo "Server is UP"
else
    echo "Server is down"
fi

My next attempt was with curl :

curl -ivs https://genesis-dev.webbank.ssmb.com:21589/opconsole-sit/opconsole.html#
if [ $? -ne 0 ]
then
    echo "Server is UP"
else
    echo "Server is down"
fi

But both are checking the existence of the URL, not the response.
curl -Is http://www.yourURL.com | head -1

You can try this command to check any URL. Status code 200 OK means that the request has succeeded and the URL is reachable. You can also test URL availability and get the response code using the telnet command:

telnet www.yourURL.com 80

80 is the port number.
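To fix the inverted exit-status logic in the question's script, here is a sketch using curl's %{http_code} write-out variable (the URL is the one from the question):

code=$(curl -o /dev/null -s -w '%{http_code}' "https://genesis-dev.webbank.ssmb.com:21589/gop-ui/app.jsp")
if [ "$code" -eq 200 ]
then
    echo "Server is UP"
else
    echo "Server is down"
fi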
{ "source": [ "https://unix.stackexchange.com/questions/148985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80080/" ] }
149,017
I tried tailing two files using the option:

tail -0f file1.log -0f file2.log

In Linux I see the error "tail: can process only one file at a time". In AIX I see the error "invalid options". This works fine when I use

tail -f file1 -f file2

in Linux, but not in AIX. I want to be able to tail multiple files using -0f or -f in AIX/Linux. multitail is not available on either of these systems.
What about:

tail -f file1 & tail -f file2

Or prefixing each line with the name of the file:

tail -f file1 | sed 's/^/file1: /' &
tail -f file2 | sed 's/^/file2: /'

To follow all the files whose names match a pattern, you could implement the tail -f (which checks the file once per second, continuously) with a zsh script like:

#! /bin/zsh -
zmodload zsh/stat
zmodload zsh/zselect
zmodload zsh/system
set -o extendedglob

typeset -A tracked
typeset -F SECONDS=0
pattern=${1?}; shift

drain() {
  while sysread -s 65536 -i $1 -o 1; do
    continue
  done
}

for ((t = 1; ; t++)); do
  typeset -A still_there
  still_there=()
  for file in $^@/$~pattern(#q-.NoN); do
    stat -H stat -- $file || continue
    inode=$stat[device]:$stat[inode]
    if (($+tracked[$inode])) ||
       { exec {fd}< $file && tracked[$inode]=$fd; }
    then
      still_there[$inode]=
    fi
  done
  for inode fd in ${(kv)tracked}; do
    drain $fd
    if ! (($+still_there[$inode])); then
      exec {fd}<&-
      unset "tracked[$inode]"
    fi
  done
  ((t <= SECONDS)) || zselect -t $((((t - SECONDS) * 100) | 0))
done

Then for instance, to follow all the text files in the current directory recursively:

that-script '**/*.txt' .
{ "source": [ "https://unix.stackexchange.com/questions/149017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77967/" ] }
149,033
If a file tells the OS its file format, how does the OS choose which application to open it with by default? In Windows, is the association stored in the registry? How does Linux choose which application to open a file with? I used to use Nautilus a lot, but now I have switched to the terminal. Is it true that in the terminal we always have to explicitly specify which application to open a file with? Do the settings for which application opens a given file format by default belong to the file manager (e.g. Nautilus), so that this is not an issue when we are living in terminals?
There may be different mechanisms to handle these default settings. However, other answers tend to focus on complete desktop environments, each of them with its own mechanism. Yet, these are not always installed on a system (I use OpenBox a lot), and in this case, tools such as xdg-open may be used. Quoting the Arch Wiki : xdg-open is a desktop-independent tool for configuring the default applications of a user. Many applications invoke the xdg-open command internally. At this moment, I am using Ubuntu (12.04) and xdg-open is available. However, when you use a complete desktop environment such as GNOME, xdg-open acts as a simple forwarder, and relays the file requests to your DE, which is then free to handle it as it wants (see other answers for GNOME and Nautilus, for instance). Inside a desktop environment (e.g. GNOME, KDE, or Xfce), xdg-open simply passes the arguments to that desktop environment's file-opener application (gvfs-open, kde-open, or exo-open, respectively), which means that the associations are left up to the desktop environment. ... which brings you back to the other answers in that case. Still, since this is Unix & Linux, and not Ask Ubuntu: When no desktop environment is detected (for example when one runs a standalone window manager, e.g. Openbox), xdg-open will use its own configuration files. All in all:

                            |-- no desktop env. > handle directly.
User Request > xdg-open > --|
                            |-- desktop env.    > pass information to the DE.

In the first case, you'll need to configure xdg-open directly, using the xdg-mime command (which will also allow you to see which application is supposed to handle which file). In the second case...

                         |-- GNOME? > gvfs-open handles the request.
                         |
Info. from xdg-open > ---|-- KDE?   > kde-open handles the request.
                         |
                         |-- XFCE?  > exo-open handles the request.

... you'll need to configure the file-opener associated with your desktop environment. In some cases, configuration made through xdg-mime may be redirected to the proper configuration tool in your environment.
{ "source": [ "https://unix.stackexchange.com/questions/149033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
149,045
Does "shebang" mean "bang she"? Why not "hebang" as "bang he"?
Another interesting name derivation from here . Among UNIX shell (user interface) users, a shebang is a term for the "#!" characters that must begin the first line of a script. In musical notation, a "#" is called a sharp and an exclamation point - "!" - is sometimes referred to as a bang. Thus, shebang becomes a shortening of sharp-bang
{ "source": [ "https://unix.stackexchange.com/questions/149045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
149,074
I typed set -x in the terminal. Now the terminal keeps printing the last command run on top of my output, so the command

~]$ echo "this is what I see"

returns

+ echo 'this is what I see'
this is what I see

There is no man page for set ; how do I turn set -x off?
Use set +x . More information: $ type set set is a special shell builtin Since set is a shell builtin, it is documented in the documentation of your shell. Beware that some systems have man pages for shell builtins, but these man pages are only correct if you're using the default shell. On Linux, you may have man pages that present POSIX commands, which will turn up for shell builtins because there's no man page of a standalone utility to shadow them; these man pages are correct for all Bourne-style shells (dash, bash, *ksh, and even zsh) but typically incomplete. See Reading and searching long man pages for tips on searching for a builtin in a long shell man page. In this case, the answer is the same for all Bourne-style shells. If set - LETTER turns on an option, set + LETTER turns it off. Thus, set +x turns off traces. The last trace, for the set +x command itself, is not completely avoidable. You can suppress it with { set +x; } 2>/dev/null , but in some shells there's still a trace for the redirection itself. You can avoid a trace for set +x by not running it and instead letting the (sub)shell exit: if it's ok to run the traced command(s) in a subshell, you can use (set -x; command to trace; other command to trace); command that is not traced .
{ "source": [ "https://unix.stackexchange.com/questions/149074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79435/" ] }
149,163
When you enter an invalid command at a bash prompt you get the message -bash: {command}: command not found What does the - at the very beginning signify?
It means that it is a login shell. From man bash : A login shell is one whose first character of argument zero is a -, or one started with the --login option. (In bash terminology, the "zeroth" argument is the command name which, in your case, was bash .) bash uses this as a signal to do login activities such as executing .bash_profile , etc. One way that the dash may be added automatically is if the shell is started with exec . From the Bash manual :

exec [-cl] [-a name] [command [arguments]] [...]

If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command .

Example

Compare these two attempts to run the command nonexistent . First without -l :

$ exec bash
$ nonexistent
bash: nonexistent: command not found

And, second, with:

$ exec -l bash
$ nonexistent
-bash: nonexistent: command not found
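A quick way to check from inside a running shell, mirroring the man-page definition quoted above, is to look at the zeroth argument yourself:

$ echo $0
-bash     # the leading dash marks a login shell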
{ "source": [ "https://unix.stackexchange.com/questions/149163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40269/" ] }
149,209
I have a config-file that I keep open in vim, but that sometimes gets changed on disk, without these changes being reflected on the terminal. Can I refresh the content on the screen without closing and re-opening the file? If so, how?
You can use the :edit command, without specifying a file name, to reload the current file. If you have made modifications to the file, you can use :edit! to force the reload of the current file (you will lose your modifications). The command :edit can be abbreviated by :e . The force-edit can thus be done by :e!
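If you'd rather not retype the command, vim can also re-read changed files on its own: set the 'autoread' option and trigger a check (this only applies when the buffer has no unsaved changes):

:set autoread
:checktime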
{ "source": [ "https://unix.stackexchange.com/questions/149209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77711/" ] }
149,223
I have HTML number entities like &#x119; and want to convert them to real characters. I have emails, mostly from LinkedIn, that look like this: chcia&#x142;abym zapyta&#x107;, czy rozwa&#x17c;a Pan takze udzia&#x142; w nowych projektach w Warszawie ? Obecnie poszukujemy specjalisty javascript/architekta z bardzo dobr&#x105; znajomo&#x15b;ci&#x105; Angular.js do projektu, kt&#xf3;ry dotyczy systemu, s&#x142;u&#x17c;&#x105;cego do monitorowania i zarz&#x105;dzania flot&#x105; pojazd&#xf3;w. Zesp&#xf3;&#x142;, do kt&#xf3;rego poszukujemy I'm using Claws Mail; switching to HTML doesn't convert it to text. I've tried to copy and use xclip -o -sel clip | html2text | less but it didn't convert the entities. Is there a way to get that text using command line tools? The only way I can think of is to use data:text/html,<PASTE THE EMAIL> and open it in a browser, but I would prefer the command line.
With Free recode (formerly known as GNU recode ):

recode html < file

If you don't have recode or HTML::Entities and only need to decode &#x<hex>; entities, you could do it by hand with:

perl -Mopen=locale -pe 's/&#x([\da-f]+);/chr hex $1/gie'
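If the perl HTML::Entities module happens to be installed, it handles named entities (like &amp; ) as well as numeric ones:

perl -MHTML::Entities -pe 'decode_entities($_)' < file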
{ "source": [ "https://unix.stackexchange.com/questions/149223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
149,319
In UNIX, when a parent process disappears, I thought that all child processes get init as their new parent. Is this not correct all the time? Are there any exceptions?
Three answers written in 2014, all saying that in Unices and in Linux the process is reparented to process #1 without exception. Three wrong answers. ☺ As the SUS says, quoted in one of the other answers here so I won't quote it again, the parent process of orphaned children is set to an implementation-defined process. Cristian Ciupitu is right to consult the Linux documentation to see what the implementation defines. But he is being misled by that documentation, which is inconsistent and not up-to-date.

Two years before these three answers were written, and fast coming up to three years ago at the time of first writing this answer, the Linux kernel changed. The systemd developers added the ability for processes to set themselves up as "subreapers". From Linux 3.4 onwards processes can issue the prctl() system call with the PR_SET_CHILD_SUBREAPER option, and as a result they, not process #1, will become the parent of any of their orphaned descendant processes. The man page for prctl() is up-to-date, but other man pages have not been brought up to date and made consistent.

In version 10.2, FreeBSD gained the same ability, extending its existing procctl() system call with PROC_REAP_ACQUIRE and PROC_REAP_RELEASE options. It adopted this mechanism from DragonFly BSD, which gained it in version 4.2, originally named reapctl() but renamed during development to procctl() .

So there are exceptions, and fairly prominent ones: On Linux, FreeBSD/PC-BSD, and DragonFly BSD, the parent process of orphaned children is set to the nearest ancestor process of the child that is marked as a subreaper, or process #1 if there is no ancestor subreaper process. Various daemon supervision utilities — including systemd (the one whose developers put this into the Linux kernel in the first place), upstart, and the nosh service-manager — already make use of this. If such a daemon supervisor is not process #1, and it spawns a service such as an interactive login session, and in that session one does the (quite wrongheaded) trick of attempting to "daemonize" by double- fork() ing, then one's process will end up as a child of the daemon supervisor, not of process #1. Expecting to be able to directly spawn daemons from within login sessions is a fundamental mistake, of course. But that's another answer.

Further reading

Jonathan Corbet (2012-03-28). 3.4 Merge window part 2 . LWN.
"4. Various core changes" . Linux 3.4 . Kernel newbies. 2012.
Daemonizing and Upstart . Nekoconeko. 2014-11-12.
Lennart Poettering (2012-03-23). prctl: add PR_{SET,GET}_CHILD_SUBREAPER to allow simple process supervision . linux/kernel/git/torvalds/linux.git.
Matthew Dillon (2014). Add reapctl() system call for managing sub-processes (3) -> procctl() . dragonfly.git.
procctl() . DragonFly BSD Manual pages. § 2.
DragonFly BSD 4.2 Release Notes . 2015-07-29.
Konstantin Belousov (2014-12-01). Process reapers . freebsd-arch mailing list.
{ "source": [ "https://unix.stackexchange.com/questions/149319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79988/" ] }
149,342
I am not talking about recovering deleted files , but overwritten files. Namely by the following methods: # move mv new_file old_file # copy cp new_file old_file # edit vi existing_file > D > i new_content > :x Is it possible to retrieve anything if any of the above three actions is performed assuming no special programs are installed on the linux machine?
The answer is "Probably yes, but it depends on the filesystem type, and timing." None of those three examples will overwrite the physical data blocks of old_file or existing_file, except by chance. mv new_file old_file . This will unlink old_file. If there are additional hard links to old_file, the blocks will remain unchanged in those remaining links. Otherwise, the blocks will generally (it depends on the filesystem type) be placed on a free list. Then, if the mv requires copying (a opposed to just moving directory entries), new blocks will be allocated as mv writes. These newly-allocated blocks may or may not be the same ones that were just freed . On filesystems like UFS , blocks are allocated, if possible, from the same cylinder group as the directory the file was created in. So there's a chance that unlinking a file from a directory and creating a file in that same directory will re-use (and overwrite) some of the same blocks that were just freed. This is why the standard advice to people who accidentally remove a file is to not write any new data to files in their directory tree (and preferably not to the entire filesystem) until someone can attempt file recovery. cp new_file old_file will do the following (you can use strace to see the system calls): open("old_file", O_WRONLY|O_TRUNC) = 4 The O_TRUNC flag will cause all the data blocks to be freed, just like mv did above. And as above, they will generally be added to a free list, and may or may not get reused by the subsequent writes done by the cp command. vi existing_file . If vi is actually vim , the :x command does the following: unlink("existing_file~") = -1 ENOENT (No such file or directory) rename("existing_file", "existing_file~") = 0 open("existing_file", O_WRONLY|O_CREAT|O_TRUNC, 0664) = 3 So it doesn't even remove the old data; the data is preserved in a backup file. On FreeBSD, vi does open("existing_file",O_WRONLY|O_CREAT|O_TRUNC, 0664) , which will have the same semantics as cp , above. You can recover some or all of the data without special programs; all you need is grep and dd , and access to the raw device. For small text files, the single grep command in the answer from @Steven D in the question you linked to is the easiest way: grep -i -a -B100 -A100 'text in the deleted file' /dev/sda1 But for larger files that may be in multiple non-contiguous blocks, I do this: grep -a -b "text in the deleted file" /dev/sda1 13813610612:this is some text in the deleted file which will give you the offset in bytes of the matching line. Follow this with a series of dd commands, starting with dd if=/dev/sda1 count=1 skip=$(expr 13813610612 / 512) You'd also want to read some blocks before and after that block. On UFS, file blocks are usually 8KB and are usually allocated fairly contiguously, a single file's blocks being interleaved alternately with 8KB blocks from other files or free space. The tail of a file on UFS is up to 7 1KB fragments, which may or may not be contiguous. Of course, on file systems that compress or encrypt data, recovery might not be this straightforward. There are actually very few utilities in Unix that will overwrite an existing file's data blocks. One that comes to mind is dd conv=notrunc . Another is shred .
{ "source": [ "https://unix.stackexchange.com/questions/149342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24509/" ] }
149,359
I wish to install OpenVPN on OpenBSD 5.5 using the OpenVPN source tarball. According to the instructions here , I have to install lzo and add CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" directives to "configure", since gcc will not find them otherwise. I have googled extensively for a guide on how to do the above on OpenBSD but there is none. This is what I plan to do:

Untar the source tarball to a freshly created directory
Issue the command ./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
Issue the command make
Issue the command make install

Which of the following is the correct syntax?

./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"

or

./configure --CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"

or

./configure --CFLAGS="-I/usr/local/include" --LDFLAGS="-L/usr/local/lib"
The correct way is:

./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"

but this may not work with all configure scripts. It's probably better to set environment variables such as CPATH and LIBRARY_PATH (see the gcc man page). An example:

export CPATH=/usr/local/include
export LIBRARY_PATH=/usr/local/lib
export LD_LIBRARY_PATH=/usr/local/lib

in your .profile , for instance. LD_LIBRARY_PATH can be needed in the case of shared libraries if a run path is not used (this depends on the OS, the build tools and the options that are used, but it shouldn't hurt).
{ "source": [ "https://unix.stackexchange.com/questions/149359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66229/" ] }
149,419
I am running my below shell script on a machine on which a C++ application server is running on port 8080, and in the shell script I am executing a URL and storing the response coming from that URL in the DATA variable. But suppose the same app server is down; then it will not be able to execute the URL, so it will print out Retrying Again , sleep for 30 seconds, and then execute the same URL again.

#!/bin/bash

HOSTNAME=$hostname
DATA=""
RETRY=15

echo $HOSTNAME

while [ $RETRY -gt 0 ]
do
    DATA=$(wget -O - -q -t 1 http://$HOSTNAME:8080/beat)
    if [ $? -eq 0 ]
    then
        break
    else
        echo "Retrying Again" >&2

        # restart the server
        let RETRY-=1
        sleep 30
    fi
done

echo "Server is UP"

And here HOSTNAME is the local hostname of the server on which I am running my above shell script.

Problem statement: Now what I am trying to do is, if the server is down, it will print out Retrying Again ; after that I want to check whether port 8080 is open on $HOSTNAME or not. If not, then it means the server is down, so I want to restart the server by executing the command below and then sleep for 30 seconds as shown above in the shell script:

/opt/app/test/start_stop.sh start

Is this possible to do here in my above shell script? I am running this shell script on Ubuntu 12.04.
The program lsof allows you to check which processes are using which ressources, like files or ports. To show which processes are listening on port 8080: lsof -Pi :8080 -sTCP:LISTEN In your case, you want to test whether a process is listening on 8080 - the return value of this command tells you just that. It also prints the pid of the process. lsof -Pi :8080 -sTCP:LISTEN -t If you need just the test, with no output, redirect it to /dev/null : if lsof -Pi :8080 -sTCP:LISTEN -t >/dev/null ; then echo "running" else echo "not running" fi If you use multiple host names with multiple IP addresses locally, specify the hostname too like lsof -Pi @someLocalName:8080 -sTCP:LISTEN
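Tying this back to the question's script, the retry branch could become something like this sketch (the start_stop.sh path is the one given in the question):

if ! lsof -Pi :8080 -sTCP:LISTEN -t >/dev/null ; then
    # port 8080 not listening: restart the server
    /opt/app/test/start_stop.sh start
fi
sleep 30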
{ "source": [ "https://unix.stackexchange.com/questions/149419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64455/" ] }
149,451
How can I install a new version of R in my own directory, e.g., /local/data/project/behi .
The easiest way to do this is to install R from source :

$ wget http://cran.rstudio.com/src/base/R-3/R-3.4.1.tar.gz
$ tar xvf R-3.4.1.tar.gz
$ cd R-3.4.1
$ ./configure --prefix=$HOME/R
$ make && make install

The second-to-last step is the critical one. It configures R to be installed into a subdirectory of your own home directory. To run it on Linux, macOS and similar systems, add $HOME/R/bin to your PATH . Then, shell commands like R and Rscript will work. On macOS, you have another alternative: build R.app and install it into your user's private Applications folder. You need to have Xcode installed to do this. You might consider giving --prefix=$HOME instead. That installs R at the top level of your home directory, so that the R and Rscript binaries end up in $HOME/bin , which is likely already in your user's PATH . The downside is that it makes later uninstallation harder, since R would be intermingled among your other $HOME contents. (If this is the first thing you've installed to $HOME/bin , you might have to log out and back in to get this in your PATH , since it's often added conditionally only if $HOME/bin exists at login time.) This general pattern applies to a large amount of Unix software you can install from source code. If the software has a configure script, it probably understands the --prefix option, and if not, there is usually some alternative with the same effect. These features are common for a number of reasons. In decreasing order of likelihood, in my experience: The safe default ( /usr/local ) is not the right $prefix in all situations. Circumstances might dictate something else such as /usr , /opt/$PKGNAME , etc. Binary package building systems ( RPM , DEB , PKG , Cygport ...) typically build and install the package into a special staging directory, then pack that up in such a way that it expands into the desired installation location. Your case, where you can't get root to install the software into a typical location, so you install into $HOME instead.
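For example, assuming a Bourne-style login shell, the PATH addition could go in ~/.profile :

export PATH="$HOME/R/bin:$PATH"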
{ "source": [ "https://unix.stackexchange.com/questions/149451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80343/" ] }
149,452
I'm trying to find my java location within my Linux system and got this [980@b449 ~]$ which java /usr/bin/java [980@b449 ~]$ readlink -f $(which java) /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java what is the difference between the 2 commands?
Which two commands? /usr/bin/java is a soft (symbolic) link to /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java , so there is no difference: they are the same file. If you type something like

ls -l /usr/bin/java

you might get a result such as:

lrwxrwxrwx. 1 root root 22 Aug  5 17:01 /usr/bin/java -> /etc/alternatives/java

which would mean you can have several java versions on your system and use alternatives to change the default one. Otherwise you can simply add and remove links to change the default one manually. To create symbolic links use the command

ln -s /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java /usr/bin/java

or in general form

ln -s <original file> <link to file>

and use rm to delete the link as you would delete any other file.
{ "source": [ "https://unix.stackexchange.com/questions/149452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65920/" ] }
149,474
I need to start a cronjob every day, but an hour later each day. What I have so far works for the most part, except for 1 day of the year: 0 0 * * * sleep $((3600 * (10#$(date +\%j) \% 24))) && /usr/local/bin/myprog When the day of year is 365 the job will start at 5:00, but the next day (not counting a leap year) will have a day of year as 1, so the job will start at 1:00. How can I get rid of this corner case?
My preferred solution would be to start the job every hour but have the script itself check whether it's time to run or not and exit without doing anything 24 times out of 25.

crontab:

0 * * * * /usr/local/bin/myprog

at the top of myprog :

[ 0 -eq $(( $(date +%s) / 3600 % 25 )) ] || exit 0

If you don't want to make any changes to the script itself, you can also put the "time to run" check in the crontab entry but it makes for a long unsightly line:

0 * * * * [ 0 -eq $(( $(date +\%s) / 3600 \% 25 )) ] && /usr/local/bin/myprog
{ "source": [ "https://unix.stackexchange.com/questions/149474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80352/" ] }
149,494
I'm trying to create a tar.gz file using the following command:

sudo tar -vcfz dvr_rdk_v1.tar.gz dvr_rdk/

It then starts to create files (many files in the folder), but then I get the following error:

tar: dvr_rdk_v1.tar.gz: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors

I don't see any description of this error; what does it mean?
Remove - from the vcfz options. tar does not need a hyphen for options. With a hyphen, the argument for the -f option is z , so the command is in effect trying to archive dvr_rdk_v1.tar.gz and dvr_rdk into an archive called z . Without the hyphen, the semantics of the options change, so that the next argument on the command line, i.e. your archive's filename, becomes the argument to the f flag. Also check your write permission to the directory from which you are executing the command.
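So, for the command in the question, either of these should work (note that f comes last, right before the archive name):

sudo tar czvf dvr_rdk_v1.tar.gz dvr_rdk/
sudo tar -czvf dvr_rdk_v1.tar.gz dvr_rdk/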
{ "source": [ "https://unix.stackexchange.com/questions/149494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79842/" ] }
149,660
I can do df . to get some of the info on the mount that the current directory is in, and I can get all the info I want from mount . However I get to much info (info about other mounts). I can grep it down, but am wondering if there is a better way. Is there some command mountinfo such that mountinfo . gives info I want (like df . , but with the info that mount gives.) I am using Debian Gnu+Linux.
I think you want something like this:

findmnt -T .

When using the option -T, --target path : if the path is not a mountpoint file or directory, findmnt checks path elements in reverse order to get the mountpoint. You can print only certain fields via -o, --output [list] . See findmnt --help for the list of available fields. Alternatively, you could run:

(until findmnt . ; do cd .. ; done)

The problem you're running into is that all paths are relative to something or other, so you just have to walk the tree. Every time. findmnt is a member of the util-linux package and has been for a few years now. By now, regardless of your distro, it should already be installed on your Linux machine if you also have the mount tool.

man mount | grep findmnt -B1 -m1
       For more robust and customizable output use findmnt(8),
       especially in your scripts.

findmnt will print out all mounts' info without a mount-point argument, and only that for its argument with one. The -D is the emulate df option. Without -D its output is similar to mount 's - but far more configurable. Try findmnt --help and see for yourself. I stick it in a subshell so the current shell's current directory doesn't change. So:

mkdir -p /tmp/1/2/3/4/5/6 && cd $_
(until findmnt . ; do cd .. ; done && findmnt -D .) && pwd

OUTPUT

TARGET SOURCE FSTYPE OPTIONS
/tmp   tmpfs  tmpfs  rw
SOURCE FSTYPE   SIZE   USED AVAIL USE% TARGET
tmpfs  tmpfs  11.8G 839.7M   11G   7% /tmp
/tmp/1/2/3/4/5/6

If you do not have the -D option available to you (not in older versions of util-linux) then you need never fear - it is little more than a convenience switch in any case. Notice the column headings it produces for each call - you can include or exclude those for each invocation with the -o utput switch. I can get the same output as -D might provide like:

findmnt /tmp -o SOURCE,FSTYPE,SIZE,USED,AVAIL,USE%,TARGET

OUTPUT

SOURCE FSTYPE  SIZE USED  AVAIL USE% TARGET
tmpfs  tmpfs  11.8G 1.1G  10.6G  10% /tmp
{ "source": [ "https://unix.stackexchange.com/questions/149660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4778/" ] }
149,726
I'm looking to write a script that takes a .txt filename as an argument, reads the file line by line, and passes each line to a command. For example, it runs command --option "LINE 1" , then command --option "LINE 2" , etc. The output of the command is written to another file. How do I go about doing that? I don't know where to start.
Use a while read loop:

: > another_file   ## Truncate file.
while IFS= read -r line; do
    command --option "$line" >> another_file
done < file

Another is to redirect output by block:

while IFS= read -r line; do
    command --option "$line"
done < file > another_file

Last is to open the file:

exec 4> another_file
while IFS= read -r line; do
    command --option "$line" >&4
    echo xyz   ## Another optional command that sends output to stdout.
done < file

If one of the commands reads input, it would be a good idea to use another fd for input so the commands won't eat it (here assuming ksh , zsh or bash for -u 3 , use <&3 instead portably):

while IFS= read -ru 3 line; do
    ...
done 3< file

Finally to accept arguments, you can do:

#!/bin/bash
file=$1
another_file=$2
exec 4> "$another_file"
while IFS= read -ru 3 line; do
    command --option "$line" >&4
done 3< "$file"

Which one could run as:

bash script.sh file another_file

Extra idea. With bash , use readarray :

readarray -t lines < "$file"
for line in "${lines[@]}"; do
    ...
done

Note: IFS= can be omitted if you don't mind having line values trimmed of leading and trailing spaces.
{ "source": [ "https://unix.stackexchange.com/questions/149726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80515/" ] }
149,741
Given a shell process (e.g. sh ) and its child process (e.g. cat ), how can I simulate the behavior of Ctrl + C using the shell's process ID? This is what I've tried. Running sh and then cat :

[user@host ~]$ sh
sh-4.3$ cat
test
test

Sending SIGINT to cat from another terminal:

[user@host ~]$ kill -SIGINT $PID_OF_CAT

cat received the signal and terminated (as expected). Sending the signal to the parent process does not seem to work. Why is the signal not propagated to cat when sent to its parent process sh ? This does not work:

[user@host ~]$ kill -SIGINT $PID_OF_SH
How CTRL + C works

The first thing is to understand how CTRL + C works. When you press CTRL + C , your terminal emulator sends an ETX character (end-of-text / 0x03). The TTY is configured such that when it receives this character, it sends a SIGINT to the foreground process group of the terminal. This configuration can be viewed by doing stty -a and looking at intr = ^C; . The POSIX specification says that when INTR is received, it should send a SIGINT to the foreground process group of that terminal.

What is the foreground process group?

So, now the question is, how do you determine what the foreground process group is? The foreground process group is simply the group of processes which will receive any signals generated by the keyboard (SIGTSTP, SIGINT, etc). The simplest way to determine the process group ID is to use ps :

ps ax -O tpgid

The second column will be the process group ID.

How do I send a signal to the process group?

Now that we know what the process group ID is, we need to simulate the POSIX behavior of sending a signal to the entire group. This can be done with kill by putting a - in front of the group ID. For example, if your process group ID is 1234, you would use:

kill -INT -1234

Simulate CTRL + C using the terminal number

So the above covers how to simulate CTRL + C as a manual process. But what if you know the TTY number, and you want to simulate CTRL + C for that terminal? This becomes very easy. Let's assume $tty is the terminal you want to target (you can get this by running tty | sed 's#^/dev/##' in the terminal).

kill -INT -$(ps h -t $tty -o tpgid | uniq)

This will send a SIGINT to whatever the foreground process group of $tty is.
{ "source": [ "https://unix.stackexchange.com/questions/149741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80522/" ] }
149,823
expr does not seem to like parentheses (used in mathematics to make operator priority explicit):

expr 3 * (2 + 1)
bash: syntax error near unexpected token `('

How can I express operator priority in bash?
You can use the arithmetic expansion instead. echo "$(( 3 * ( 2 + 1 ) ))" 9 In my personal opinion, this looks a bit nicer than using expr . From man bash Arithmetic Expansion Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. The format for arithmetic expansion is: $((expression)) The expression is treated as if it were within double quotes, but a double quote inside the parentheses is not treated specially. All tokens in the expression undergo parameter expansion, string expansion, command substitution, and quote removal. Arithmetic expansions may be nested. The evaluation is performed according to the rules listed below under ARITHMETIC EVALUATION. If expression is invalid, bash prints a message indicating failure and no substitution occurs.
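For completeness, expr itself can also do it, at least with POSIX-conforming implementations, if you escape the * and the parentheses so each token reaches expr as a separate argument:

expr 3 \* \( 2 + 1 \)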
{ "source": [ "https://unix.stackexchange.com/questions/149823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2305/" ] }
149,959
Based on various sources I have cobbled together ~/.config/systemd/user/screenlock.service :

[Unit]
Description=Lock X session
Before=sleep.target

[Service]
Environment=DISPLAY=:0
ExecStart=/usr/bin/xautolock -locknow

[Install]
WantedBy=sleep.target

I've enabled it using systemctl --user enable screenlock.service . But after rebooting, logging in, suspending and resuming (tested both with systemctl suspend and by closing the lid) the screen is not locked and there is nothing in journalctl --user-unit screenlock.service . What am I doing wrong? Running DISPLAY=:0 /usr/bin/xautolock -locknow locks the screen as expected.

$ systemctl --version
systemd 215
+PAM -AUDIT -SELINUX -IMA -SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ +SECCOMP -APPARMOR

$ awesome --version
awesome v3.5.5 (Kansas City Shuffle)
 • Build: Apr 11 2014 09:36:33 for x86_64 by gcc version 4.8.2 (nobody@)
 • Compiled against Lua 5.2.3 (running with Lua 5.2)
 • D-Bus support: ✔

$ slim -v
slim version 1.3.6

If I run systemctl --user start screenlock.service the screen locks immediately and I get a log message in journalctl --user-unit screenlock.service , so ExecStart clearly is correct. Relevant .xinitrc section:

xautolock -locker slock &

Creating a system service with the same file works (that is, slock is active when resuming):

# ln -s "${HOME}/.config/systemd/user/screenlock.service" /usr/lib/systemd/system/screenlock.service
# systemctl enable screenlock.service
$ systemctl suspend

But I do not want to add a user-specific file outside $HOME for several reasons:

User services should be clearly separated from system services
User services should be controlled without using superuser privileges
Configuration should be easily version controlled
sleep.target is specific to system services. The reason is, sleep.target is not a magic target that automatically gets activated when going to sleep. It's just a regular target that puts the system to sleep – so the 'user' instances of course won't have an equivalent. (And unfortunately the 'user' instances currently have no way to depend on systemwide services.) (That, and there's the whole "hardcoding $DISPLAY" business. Every time you hardcode session parameters in an OS that's based on the heavily multi-user/multi-seat Unix, root kills a kitten.) So there are two good ways to do this (I suggest the 2nd one):

Method 1

Create a system service (or a systemd-sleep(8) hook) that makes systemd-logind broadcast the "lock all sessions" signal when the system goes to sleep:

ExecStart=/usr/bin/loginctl lock-sessions

Then, within your X11 session (i.e. from ~/.xinitrc), run something that reacts to the signal:

systemd-lock-handler slock &
xss-lock --ignore-sleep slock &

(GNOME, Cinnamon, KDE, Enlightenment already support this natively.)

Method 2

Within your X11 session, run something that directly watches for the system going to sleep, e.g. by hooking into systemd-logind's "inhibitors". The aforementioned xss-lock actually does exactly that, even without the explicit "lock all" signal, so it is enough to have it running:

xss-lock slock &

It will run slock as soon as it sees systemd-logind preparing to suspend the computer.
{ "source": [ "https://unix.stackexchange.com/questions/149959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
149,965
I have two directories, images and images2 , with this structure in Linux:

/images/ad
/images/fe
/images/foo
... and 4000 other folders

and the other is like:

/images2/ad
/images2/fe
/images2/foo
... and 4000 other folders

Each of these folders contains images, and the directory names under images and images2 are exactly the same, but their contents are different. How can I copy-merge the images of /images2/ad into /images/ad , the images of /images2/foo into /images/foo , and so on for all 4000 folders?
This is a job for rsync . There's no benefit to doing this manually with a shell loop unless you want to move the file rather than copy them. rsync -a /path/to/source/ /path/to/destination In your case: rsync -a /images2/ /images/ (Note trailing slash on images2 , otherwise it would copy to /images/images2 .) If images with the same name exist in both directories, the command above will overwrite /images/SOMEPATH/SOMEFILE with /images2/SOMEPATH/SOMEFILE . If you want to replace only older files, add the option -u . If you want to always keep the version in /images , add the option --ignore-existing . If you want to move the files from /images2 , with rsync, you can pass the option --remove-source-files . Then rsync copies all the files in turn, and removes each file when it's done. This is a lot slower than moving if the source and destination directories are on the same filesystem.
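If you want to preview what would be copied or overwritten before committing, rsync's dry-run mode is handy:

rsync -anv /images2/ /images/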
{ "source": [ "https://unix.stackexchange.com/questions/149965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80666/" ] }
150,121
I was analyzing some web heads looking at htop and noticed the following Uptime: 301 days(!), 23:47:39 What does the (!) mean?
From the htop source code, file UptimeMeter.c , you can see:

char daysbuf[15];
if (days > 100) {
   sprintf(daysbuf, "%d days(!), ", days);
} else if (days > 1) {
   sprintf(daysbuf, "%d days, ", days);
} else if (days == 1) {
   sprintf(daysbuf, "1 day, ");
} else {
   daysbuf[0] = '\0';
}

I think ! here is just a mark that the server has been up for more than 100 days.

Reference
http://sourceforge.net/p/htop/mailman/htop-general/?viewmonth=200707
{ "source": [ "https://unix.stackexchange.com/questions/150121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78937/" ] }
150,135
I like the image-preview in ranger, but I also like my terminal transparent. Is there really no way to get the image-preview work with w3m and transparent background? (I'm willing to change my terminal-emulator if that's necessary, currently urxvt)
{ "source": [ "https://unix.stackexchange.com/questions/150135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33165/" ] }
150,594
The program ed , a minimal text editor, cannot be exited by sending it an interrupt with Ctrl + C ; instead it prints the error message "?" to the console. Why doesn't ed just exit when it receives the interrupt? Surely there's no reason why a cryptic error message is more useful here than just exiting. This behavior leads many new users into the following sort of interaction:

$ ed
hello
?
help
?
exit
?
quit
?
^C
?
^C
?
?
?
^D
$ su
# rm -f /bin/ed

Such a tragic waste—easily avoidable if ed simply agreed to be interrupted. Another stubborn program exhibiting similar behavior is less , which also doesn't appear to have much reason to ignore C-c . Why don't these programs just take a hint?
Ctrl + C sends SIGINT . The conventional action for SIGINT is to return to a program's toplevel loop, cancelling the current command and entering a mode where the program waits for the next command. Only non-interactive programs are supposed to die from SIGINT. So it's natural that Ctrl + C doesn't kill ed, but causes it to return to its toplevel loop. Ctrl + C aborts the current input line and returns to the ed prompt. The same goes for less: Ctrl + C interrupts the current command and brings you back to its command prompt. For historical reasons, ed ignores SIGQUIT ( Ctrl + \ ). Normal applications should not catch this signal and allow themselves to be terminated, with a core dump if enabled.
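For the record, the intended way out: in ed , the command q quits; it answers with a single ? if the buffer has unsaved changes, and a second q (or a capital Q ) quits anyway. In less , a plain q quits as well.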
{ "source": [ "https://unix.stackexchange.com/questions/150594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44164/" ] }
150,644
In his answer to the question "mixed raid types" , HBruijn suggests using LVM to implement RAID vs the more standard MDRAID. After a little investigation, it seems LVM also supports RAID functionality. In the past, I have used LVM on top of MDRAID, and was not aware till now that LVM also supports RAID functionality. This seems to be a relatively recent development, but I have not found out exactly when this was implemented. So, these are alternative ways to implement software RAID on Linux. What are the pros and cons of these two different approaches? I'm looking for feature comparisons between the two approaches so people can decide which is better for them. Conclusions based on experimentation (as in, this feature doesn't work as well as this feature and here is why) are also Ok, provided you include your data in the answer. Some specific issues to address: Suppose I want to do sw RAID + LVM (a common scenario). Should I use LVM's support for sw RAID and thus use one utility instead of two? Does this more integrated approach have any advantages? Does LVMs support for sw RAID have significant deficiencies compared to the more mature MDADM? Specifically, how stable/bug-free is the LVM support for sw RAID? It seems this support only goes back to 2011 (see below), while MDADM is much older. Also, how does it compare in terms of feature set? Does it have significant feature deficiencies compared to MDADM? Conversely, does it have support for any sw RAID features that MDADM does not have? NOTES: There is a detailed discussion at http://www.olearycomputers.com/ll/linux_mirrors.html but I could not find out what date it was written on. Similar question on Serverfault: linux LVM mirror vs. MD mirror . However, this question was asked in 2010, and the answers may be out of date. The changelog entry for version 2.02.87 - 12th August 2011 has Add configure --with-raid for new segtype 'raid' for MD RAID 1/4/5/6 support So, it looks like RAID support in LVM is about 3 years old.
How mature and featureful is LVM RAID?

LVM-RAID is actually mdraid under the covers. It basically works by creating two logical volumes per RAID device (one for data, called "rimage"; one for metadata, called "rmeta"). It then passes those off to the existing mdraid drivers. So things like handling disk read errors, I/O load balancing, etc. should be fairly mature. That's the good news.

Tools

You can't use mdadm on it (at least, not in any easy way¹) and the LVM RAID tools are nowhere near as mature. For example, in Debian Wheezy, lvs can't tell you RAID5 sync status. I very much doubt repair and recovery (especially from "that should never happen!" situations) is anywhere near as good as mdadm (and I accidentally ran into one of those in my testing, and finally just gave up on recovering it—recovery with mdadm would have been easy). Especially if you're not using the newest versions of all the tools, it gets worse.

Missing Features

Current versions of LVM-RAID do not support shrinking ( lvreduce ) a RAID logical volume. Nor do they support changing the number of disks or RAID level ( lvconvert gives an error message saying not supported yet). lvextend does work, and can even grow RAID levels that mdraid only recently gained support for, such as RAID10. In my experience, extending LVs is much more common than reducing them, so that's actually reasonable. Some other mdraid features aren't present, and especially you can't customize all the options you can with mdadm. On older versions (as found in, for example, Debian Wheezy), LVM RAID does not support growing, either. For example, on Wheezy:

root@LVM-RAID:~# lvextend -L+1g vg0/root
  Extending logical volume root to 11.00 GiB
  Internal error: _alloc_init called for non-virtual segment with no disk space.

In general, you don't want to run the Wheezy versions. The above is once you get it installed. That is not a trivial process either.

Tool problems

Playing with my Jessie VM, I disconnected (virtually) one disk. That worked, the machine stayed running. lvs , though, gave no indication the arrays were degraded. I re-attached the disk, and removed a second. Stayed running (this is raid6). Re-attached, still no indication from lvs . I ran lvconvert --repair on the volume; it told me it was OK. Then I pulled a third disk... and the machine died. Re-inserted it, rebooted, and am now unsure how to fix. mdadm --force --assemble would fix this; neither vgchange nor lvchange appears to have that option ( lvchange accepts --force , but it doesn't seem to do anything). Even trying dmsetup to directly feed the mapping table to the kernel, I could not figure out how to recover it. Also, mdadm is a dedicated tool just for managing RAID. LVM does a lot more, but it feels (and I admit this is pretty subjective) like the RAID functionality has sort of been shoved in there; it doesn't quite fit.

How do you actually install a system with LVM RAID?

Here is a brief outline of getting it installed on Debian Jessie or Wheezy. Jessie is far easier; note if you're going to try this on Wheezy, read the whole thing first…

Use a full CD image to install, not a netinst image.
Proceed as normal, get to disk partitioning, set up your LVM physical volumes. You can put /boot on LVM-RAID (on Jessie, and on Wheezy with some work detailed below).
Create your volume group(s). Leave it in the LVM menu.
First bit of fun—the installer doesn't have the dm-raid.ko module loaded, or even available! So you get to grab it from the linux-image package that will be installed.
Switch to a console (e.g., Alt - F2 ) and:

cd /tmp
dpkg-deb --fsys-tarfile /cdrom/pool/main/l/linux/linux-image-*.deb | tar x
depmod -a -b /tmp
modprobe -d /tmp dm-raid

The installer doesn't know how to create LVM-RAID LVs, so you have to use the command line to do it. Note I didn't do any benchmarking; the stripe size ( -I ) below is entirely a guess for my VM setup:

lvcreate --type raid5 -i 4 -I 256 -L 10G -n root vg0

On Jessie, you can use RAID10 for swap. On Wheezy, RAID10 isn't supported. So instead you can use two swap partitions, each RAID1. But you must tell it exactly which physical volumes to put them on or it puts both halves of the mirror on the same disk . Yes. Seriously. Anyway, that looks like:

lvcreate --type raid1 -m1 -L 1G -n swap0 vg0 /dev/vda1 /dev/vdb1
lvcreate --type raid1 -m1 -L 1G -n swap1 vg0 /dev/vdc1 /dev/vdd1

Finally, switch back to the installer, and hit 'Finish' in the LVM menu. You'll now be presented with a lot of logical volumes showing. That's the installer not understanding what's going on; ignore everything with rimage or rmeta in their name (see the first paragraph way above for an explanation of what those are). Go ahead and create filesystems, swap partitions, etc. as normal. Install the base system, etc., until you get to the grub prompt. On Jessie, grub2 will work if installed to the MBR (or probably with EFI, but I haven't tested that). On Wheezy, install will fail, and the only solution is to backport Jessie's grub2. That is actually fairly easy, it compiles cleanly on Wheezy. Somehow, get your backported grub packages into /target (or do it in a second, after the chroot) then:

chroot /target /bin/bash
mount /sys
dpkg -i grub-pc_*.deb grub-pc-bin_*.deb grub-common_*.deb grub2-common_*.deb
grub-install /dev/vda … grub-install /dev/vdd   # for each disk
echo 'dm_raid' >> /etc/initramfs-tools/modules
update-initramfs -kall -u
update-grub   # should work, technically not quite tested²
umount /sys
exit

Actually, on my most recent Jessie VM grub-install hung. Switching to F2 and doing while kill $(pidof vgs); do sleep 0.25; done , followed by the same for lvs , got it through grub-install. It appeared to generate a valid config despite that, but just in case I did a chroot /target /bin/bash , made sure /proc and /sys were mounted, and did an update-grub . That time, it completed. I then did a dpkg-reconfigure grub-pc to select installing grub on all the virtual disks' MBRs. On Wheezy, after doing the above, select 'continue without a bootloader'. Finish the install. It'll boot. Probably.

Community Knowledge

There are a fair number of people who know about mdadm , and have a lot of deployment experience with it. Google is likely to answer most questions about it you have. You can generally expect a question about it here to get answers, probably within a day. The same can't be said for LVM RAID. It's hard to find guides. Most Google searches I've run instead find me stuff on using mdadm arrays as PVs. To be honest, this is probably largely because it's newer, and less commonly used. Somewhat, it feels unfair to hold this against it—but if something goes wrong, the much larger existing community around mdadm makes recovering my data more likely.

Conclusion

LVM-RAID is advancing fairly rapidly. On Wheezy, it isn't really usable (at least, without doing backports of LVM and the kernel). Earlier, in 2014, on Debian testing, it felt like an interesting, but unfinished idea.
Current testing, basically what will become Jessie, feels like something that you might actually use, if you frequently need to create small slices with different RAID configurations (something that is an administrative nightmare with mdadm ). If your needs are adequately served by a few large mdadm RAID arrays, sliced into partitions using LVM, I'd suggest continuing to use that. If instead you wind up having to create many arrays (or even arrays of logical volumes), consider switching to LVM-RAID instead. But keep good backups. A lot of the uses of LVM RAID (and even mdadm RAID) are being taken over by things like cluster storage/object systems, ZFS, and btrfs. I recommend also investigating those, they may better meet your needs.

Thank yous

I'd like to thank psusi for getting me to revisit the state of LVM-RAID and update this post.

Footnotes

1. I suspect you could use device mapper to glue the metadata and data together in such a way that mdadm --assemble will take it. Of course, you could just run mdadm on logical volumes just fine... and that'd be saner.

2. When doing the Wheezy install, I failed to do this first time, and wound up with no grub config. I had to boot the system by entering all the info at the grub prompt. Once booted, that worked, so I think it'll work just fine from the installer. If you wind up at the grub prompt, here are the magic lines to type:

linux /boot/vmlinuz-3.2.0-4-amd64 root=/dev/mapper/vg0-root
initrd /boot/initrd.image-3.2.0-4-amd64
boot

PS: It's been a while since I actually did the original experiments. I have made my original notes available. Note that I have now done more recent ones, covered in this answer, and not in those notes.
{ "source": [ "https://unix.stackexchange.com/questions/150644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
150,718
I've been using Linux for a while now and whenever I typed sudo I thought I was switching over to the root user for a command. Apparently this is not true because all I need is my user account's password. I'm guessing since I haven't worked with multiple users I haven't really noticed this in the real world. I am unsure how Ubuntu sets up my first account. Is there a root user? Am I root? I'm guessing I just created a new user upon installation but it gave me root privileges? Just a little confused here... So why am I allowed to run root commands with my user's password?
In detail it works the following way:

1. The /usr/bin/sudo executable file has the setuid bit set, so even when executed by another user, it runs with the file owner's user id (root in that case).

2. sudo checks in the /etc/sudoers file what privileges you have and whether you are permitted to run the command you are invoking. Put simply, /etc/sudoers is a file which defines which users can run which commands using the sudo mechanism. That's how that file looks on my Ubuntu:

# User privilege specification
root    ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin  ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

The third entry is what presumably interests you. It lets anybody in the "sudo" group execute any command as any user. When Ubuntu sets up the first account during installation it adds that account to the "sudo" group. You can check which groups users belong to with the groups command.

3. sudo asks you for a password. Regarding the fact that it needs the user's password, not root's, here is an excerpt from the sudoers manual:

Authentication and logging

The sudoers security policy requires that most users authenticate themselves before they can use sudo. A password is not required if the invoking user is root, if the target user is the same as the invoking user, or if the policy has disabled authentication for the user or command. Unlike su(1), when sudoers requires authentication, it validates the invoking user's credentials, not the target user's (or root's) credentials. This can be changed via the rootpw, targetpw and runaspw flags, described later.

However, in fact, sudo does not need your user password for anything. It asks for it just to ensure that you are really you and to provide you some kind of warning (or chance to stop) before invoking some potentially dangerous command. If you want to turn off password asking, change the sudoers entry to:

%sudo ALL=(ALL:ALL) NOPASSWD: ALL

4. After authentication sudo spawns a child process which runs the invoked command. The child inherits the root user id from its parent -- the sudo process.

So, answering your questions precisely:

I thought I was switching over to the root user for a command.

You were right. Each command preceded with sudo runs with the root user id.

Is there a root user?

Yes, there is a root user account, separate from your user account created during system installation. However, by default in Ubuntu you are not allowed to log in to an interactive terminal as the root user.

Am I root?

No, you are not root. You only have the privilege to run individual commands as root , using the sudo mechanism described above.

So why am I allowed to run root commands with my user's password?

You have to enter the user's password only due to sudo 's internal security mechanism. It can be easily turned off. You gain your root powers because of the setuid bit of /usr/bin/sudo , not because of any passwords you enter.
{ "source": [ "https://unix.stackexchange.com/questions/150718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22494/" ] }
150,786
I'm trying to run a simple script- clone a git repository into a certain directory, then cd to this directory in order to execute an installation script. This script is in a Makefile. But the cd seems not to be working. It doesn't find my installation script. I added a pwd after the cd in the script, and it shows me the directory from where I'm executing the script, not the directory where I cd into. What's the problem? git clone http://somerepo ~/some_dir cd ~/some_dir/ pwd python myscript.py install => pwd : /hereIsPathToDirectoryFromWhichIRunTheScript python: can't open file 'setup.py': [Errno 2] No such file or directory It also doesn't work with ./setup.py . If I enter the absolute path ~/some_dir/setup.py the script fails later because it's trying to access resources in the same folder.
You're using a makefile. Makefiles aren't scripts; each line is executed in a new shell. That means when you change the environment on one line (such as with cd ), the change is not propagated to the next line. The solution is that when you want to preserve the environment between commands, you run all the commands on the same line. All the commands will then be executed in the same shell, and the environment is preserved. For example: target: git clone http://somerepo ~/some_dir cd ~/some_dir/ && python myscript.py install
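A hedged alternative for GNU Make 3.82 and later (not from the original answer): the .ONESHELL special target runs every line of a recipe in a single shell, so the cd persists across lines — recipe lines must still begin with a tab:
.ONESHELL:
target:
	git clone http://somerepo ~/some_dir
	cd ~/some_dir/
	python myscript.py install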
{ "source": [ "https://unix.stackexchange.com/questions/150786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81167/" ] }
150,789
I have a Linux system in which we use /dev/devname device paths for running the system: proc /proc proc defaults 0 0 /dev/sda1 / ext3 barrier=1,errors=remount-ro 0 1 /dev/sda5 /opt ext3 barrier=1,defaults 0 22 /dev/sda2 /opt/vortex/dvss ext3 barrier=1,defaults 0 3 /dev/sda6 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0 We have had this system running without issues to date. But often, on some installed machines, we see that the system is not able to boot properly and suddenly drops into "grub rescue". When I mount the device as a secondary disk and run e2fsck, I see that the system can be restored. Now we are trying to address this boot failure due to the GRUB error. I noticed that some forums suggest setting up UUID-based booting in fstab. What are all the advantages we would have if it is set up through UUIDs? Is there a possibility that it would reduce my GRUB error?
The main advantage of using UUID= (or LABEL=) in /etc/fstab is that a UUID identifies the filesystem itself, not the position of the disk on the bus. Device names like /dev/sda1 are assigned in the order the kernel enumerates disks, so they can change when a drive is added or removed, when a controller or USB device probes in a different order, or after a kernel or driver update; a UUID stays with the filesystem until you reformat it, so mounts keep working regardless of enumeration order. However, this will not by itself fix a "grub rescue" prompt: /etc/fstab is only read after the kernel has booted, whereas "grub rescue" means GRUB failed before that point, typically because it could not find its own modules and grub.cfg. To make GRUB itself robust, make sure its configuration locates the boot filesystem by UUID (a search --no-floppy --fs-uuid --set=root line in grub.cfg) rather than by a fixed (hdX,Y) device, and pass root=UUID=... on the kernel command line. Also note that e2fsck repairing the disk suggests the real root cause is filesystem corruption (for example from power loss or unclean shutdowns), which is worth investigating separately.
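A hedged illustration (the UUID shown is made up — read the real one from blkid):
blkid /dev/sda1
# /dev/sda1: UUID="3e6be9de-8139-11d1-9106-a43f08d823a6" TYPE="ext3"
then reference it in /etc/fstab:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 / ext3 barrier=1,errors=remount-ro 0 1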
{ "source": [ "https://unix.stackexchange.com/questions/150789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52764/" ] }
151,118
From help compgen : $ help compgen compgen: compgen [-abcdefgjksuv] [-o option] [-A action] [-G globpat] [-W wordlist] [-F function] [-C command] [-X filterpat] [-P prefix] [-S suffix] [word] Display possible completions depending on the options. Intended to be used from within a shell function generating possible completions. If the optional WORD argument is supplied, matches against WORD are generated. Exit Status: Returns success unless an invalid option is supplied or an error occurs. What do options [-abcdefgjksuv] stand for? In other words, I want to know how to use all options.
Options for the compgen command are the same as for complete , except -p and -r . From the compgen man page: compgen compgen [option] [word] Generate possible completion matches for word according to the options, which may be any option accepted by the complete builtin with the exception of -p and -r, and write the matches to the standard output For options [abcdefgjksuv] : -a means Names of aliases -b means Names of shell builtins -c means Names of all commands -d means Names of directories -e means Names of exported shell variables -f means Names of files -g means Names of groups -j means Names of jobs -k means Names of shell reserved words -s means Names of services -u means Names of users -v means Names of shell variables You can see the complete man page here .
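A few hedged usage examples (standard bash; the matches printed will differ from system to system):
compgen -c pas   # commands starting with "pas", e.g. passwd, paste
compgen -d /tm   # directories, e.g. /tmp
compgen -u ro    # user names, e.g. root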
{ "source": [ "https://unix.stackexchange.com/questions/151118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
151,149
If I do a sudo cp /etc/foo.txt ~/foo.txt , the new file is created with root as the owner. Right now, I see no way around this other than using the last two commands ( ls to clarify use-case): belmin@server1$ ls /etc/foo.txt > -rw------- 1 root root 3848 Mar 6 20:35 /etc/foo.txt > belmin@server1$ sudo cp /etc/foo.txt ~/foo.txt belmin@server1$ sudo chown belmin: $_ I would prefer: Doing it in one sudo command. Not having to specify my current user (maybe using a variable?).
Use install instead of cp : sudo install -o belmin /etc/foo.txt ~/foo.txt
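A hedged extension using standard install options: -m sets the mode in the same step, and $USER (expanded by your shell before sudo runs) avoids typing your user name:
sudo install -o "$USER" -m 600 /etc/foo.txt ~/foo.txt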
{ "source": [ "https://unix.stackexchange.com/questions/151149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2372/" ] }
151,329
How can I efficiently reorder windows in tmux? For example, having this set of windows: 0:zsh 1:elinks 2:mutt 3:irssi 4:emacs 5:rss 6:htop What would I have to do to move rss to between elinks and mutt , ending up with: 0:zsh 1:elinks 2:rss 3:mutt 4:irssi 5:emacs 6:htop I know how to use move-window to move a window to a yet-unused index, and I could use a series of them to achieve this—but, obviously, this is very tedious.
swap-window can help you: swap-window -t -1 It moves the current window to the left by one position. From man tmux : swap-window [-d] [-s src-window] [-t dst-window] (alias: swapw) This is similar to link-window, except the source and destination windows are swapped. It is an error if no window exists at src-window. You can bind it to a key: bind-key -n S-Left swap-window -t -1 bind-key -n S-Right swap-window -t +1 Then you can use Shift+Left and Shift+Right to change the current window's position.
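A hedged follow-up (both features exist in reasonably recent tmux versions): after shuffling windows around you can close numbering gaps manually, or let tmux keep the numbers tidy automatically:
move-window -r                      # renumber all windows sequentially, once
set-option -g renumber-windows on   # keep them renumbered on every window close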
{ "source": [ "https://unix.stackexchange.com/questions/151329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2916/" ] }
151,390
How to check whether or not a particular directory is a mount point? For instance there is a folder named /test that exists, and I want to check if it is a mount point or not.
If you want to check it's the mount point of a file system, that's what the mountpoint command (on most Linux-based systems) is for: if mountpoint -q -- "$dir"; then printf '%s\n' "$dir is a mount point" fi It does that by checking whether . and .. have the same device number ( st_dev in stat() result). So if you don't have the mountpoint command, you could do: perl -le '$dir = shift; exit(1) unless (@a = stat "$dir/." and @b = stat "$dir/.." and ($a[0] != $b[0] || $a[1] == $b[1]))' "$dir" Like mountpoint , it will return true for / even if / is not a mount point (like when in a chroot jail), or false for a mount point of a bind mount of the same file system within itself. Contrary to mountpoint , for symbolic links, it will check whether the target of the symlink is a mountpoint.
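A hedged GNU-specific alternative (stat's %m format needs coreutils 8.6 or later): compare the mount point containing the directory against the directory itself:
if [ "$(stat -c %m -- "$dir")" = "$dir" ]; then
    printf '%s\n' "$dir is a mount point"
fi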
{ "source": [ "https://unix.stackexchange.com/questions/151390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81544/" ] }
151,437
When running echo abcd | wc -c it returns 5 . But the word abcd is only 4 characters long. Is echo printing some special character after the word abcd ? And can I prevent echo from printing that?
echo prints a newline ( \n ) at the end of the line echo abcd | xxd 0000000: 6162 6364 0a abcd. With some echo implementations, you can use -n : -n do not output the trailing newline and test: echo -n abcd | wc -c 4 With some others, you need the \c escape sequence: \c : Suppress the <newline> that otherwise follows the final argument in the output. All characters following the '\c' in the arguments shall be ignored. echo -e 'abcd\c' | wc -c 4 Portably, use printf : printf %s abcd | wc -c 4 (note that wc -c counts bytes, not characters (though in the case of abcd they are generally equivalent). Use wc -m to count characters).
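A hedged aside: if the string is already in a shell variable, POSIX parameter expansion gives you its length without any pipeline:
var=abcd
echo "${#var}"   # 4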
{ "source": [ "https://unix.stackexchange.com/questions/151437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78050/" ] }
151,547
How to change the system date in Linux ? I want to change: Only Year Only Month Only Date Any combination of above three
Use date -s : date -s '2014-12-25 12:34:56' Run that as root or under sudo . Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock. To change only part of the time, you can use command substitution in the date string: date -s "2014-12-25 $(date +%H:%M:%S)" will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y , %m , %d , %H , %M , and %S .
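A hedged sketch for the "change only one field" cases from the question, reusing the same substitution idiom (run as root):
date -s "2015-$(date +%m-%d) $(date +%H:%M:%S)"          # change only the year
date -s "$(date +%Y)-06-$(date +%d) $(date +%H:%M:%S)"   # change only the month
date -s "$(date +%Y-%m)-25 $(date +%H:%M:%S)"            # change only the day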
{ "source": [ "https://unix.stackexchange.com/questions/151547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4843/" ] }
151,654
I'm trying to check if an input is an integer and I've gone over it a hundred times but don't see the error in this. Alas it does not work, it triggers the if statement for all inputs (numbers/letters) read scale if ! [[ "$scale" =~ "^[0-9]+$" ]] then echo "Sorry integers only" fi I've played around with the quotes but either missed it or it did nothing. What do I do wrong? Is there an easier way to test if an input is just an INTEGER?
Remove the quotes around the pattern: if ! [[ "$scale" =~ ^[0-9]+$ ]] then echo "Sorry integers only" fi In bash, any quoted part of the right-hand side of =~ is matched as a literal string rather than as a regular expression, which is why your version rejected every input.
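A common hedged alternative that sidesteps the quoting question entirely: keep the regex in a variable and expand it unquoted:
re='^[0-9]+$'
if ! [[ $scale =~ $re ]]; then
    echo "Sorry integers only"
fi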
{ "source": [ "https://unix.stackexchange.com/questions/151654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79435/" ] }
151,689
If there are two (or more) versions of a given RPM available in a YUM repository, how can I instruct yum to install the version I want? Looking through the Koji build service I notice that there are several versions.
To see what particular versions are available to you via yum you can use the --showduplicates switch . It gives you a list like "package name.architecture     version": $ yum --showduplicates list httpd | expand Loaded plugins: fastestmirror, langpacks, refresh-packagekit Loading mirror speeds from cached hostfile * fedora: mirror.steadfast.net Available Packages httpd.x86_64 2.4.6-6.fc20 fedora httpd.x86_64 2.4.10-1.fc20 updates As far as installing a particular version? You can append the version info to the name of the package, removing the architecture name, like so: $ sudo yum install <package name>-<version info> For example in this case if I wanted to install the older version, 2.4.6-6 I'd do the following: $ sudo yum install httpd-2.4.6-6 You can also include the release info when specifying a package. In this case since I'm dealing with Fedora 20 (F20) the release info would be "fc20", and the architecture info too. $ sudo yum install httpd-2.4.6-6.fc20 $ sudo yum install httpd-2.4.6-6.fc20.x86_64 repoquery If you're ever unsure that you're constructing the arguments right you can consult with repoquery too. $ sudo yum install yum-utils # (to get `repoquery`) $ repoquery --show-duplicates httpd-2.4* httpd-0:2.4.6-6.fc20.x86_64 httpd-0:2.4.10-1.fc20.x86_64 downloading & installing You can also use one of the following options to download a particular RPM from the web, and then use yum to install it. $ yum --downloadonly <package> -or- $ yumdownloader <package> And then install it like so: $ sudo yum localinstall <path to rpm> What if I want to download everything that package X requires? $ yumdownloader --resolve <package> Example $ yumdownloader --resolve vim-X11 Loaded plugins: langpacks, presto, refresh-packagekit Adding en_US to language list --> Running transaction check ---> Package vim-X11.x86_64 2:7.3.315-1.fc14 set to be reinstalled --> Finished Dependency Resolution vim-X11-7.3.315-1.fc14.x86_64.rpm | 1.1 MB 00:01 Notice it's doing a dependency check, and then downloading the missing pieces. See my answer that covers it in more details here: How to download a file from repo, and install it later w/o internet connection? . References Get yum to install a specific package version
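A hedged follow-up (requires the versionlock plugin; the package name below is the usual one but may vary by distribution): once the older version is installed, you can pin it so a later yum update doesn't replace it:
sudo yum install yum-plugin-versionlock
sudo yum versionlock httpd-2.4.6-6.fc20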
{ "source": [ "https://unix.stackexchange.com/questions/151689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
151,763
I was running a script that iterated over all the files on my Linux system and created some metadata about them, and it threw an error when it hit a broken symbolic link. I am newish to *nix, but I get the main idea behind linking files and how broken links come to exist. As far as I know, they are like the equivalent of litter in the street. Things that a program I'm removing wasn't smart enough to tell the package manager existed, and belonged to it, or something that got left behind in an upgrade. At first, I started to tweak the script I'm running to skip them, then I thought, 'well we could always delete them while we're down here...' I'm running Ubuntu 14.04 (Trusty Tahr). I can't see any reason not to, but before I go ahead and run this over my development system, is there any reason this might actually be a terrible idea? Do broken symlinks serve some purpose I am not aware of?
There are many reasons for broken symbolic links: A link was created to a target which no longer exists. Resolution: remove the broken symlink. A link was created for a target which has been moved. Or it's a relative link that's been moved relative to its target. (Not to imply that relative symlinks are a bad idea — quite the opposite: absolute symlinks are more prone to going stale because their target moved.) Resolution: find the intended target and fix the link. There was a mistake when creating the link. Resolution: find the intended target and fix the link. The link is to a file which is on a removable disk, network filesystem or other storage area which is not currently mounted. Resolution: none, the link isn't broken all the time. The link will work when the storage area is mounted. The link is to a file which exists only some of the time, by design. For example, the file is the cached output of a process, which is deleted when the information goes stale but only re-created upon explicit request. Or the link is to an inbox which is deleted when empty. Or the link is to a device file which is only present when the corresponding peripheral is attached. Resolution: none, the link isn't broken all the time. The link is only valid in a different storage hierarchy. For example, it is valid only in a chroot jail, or it's exported by an NFS server and only valid on the server or on some of its clients. Resolution: none, the link isn't broken everywhere. The link is broken for you, because you lack the permission to traverse a directory to reach the target, but it isn't broken for users with appropriate privilege. Resolution: none, the link isn't broken for everybody. The link is used to store information, as in the Firefox lock example cited by vinc17 . One reason to do it this way is that it's easier to populate a symlink atomically — there's no other way, whereas populating a file atomically is more complex: you need to create the file content under a temporary name, then move it into place, and handle stale temporary files left behind by a crash. Another reason is that symlinks are typically stored directly inside their inode on some filesystems, which makes reading them faster than reading the content of a file. Resolution: none. In this case, removing the link would be detrimental. If you can determine that a symlink falls into the first category, then sure, go ahead and delete it. Otherwise, abstain. A program that traverses directories recursively and cares about file contents should usually ignore broken symbolic links.
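If you decide a tree only contains links from the first category, GNU find can list them for review before anything is deleted (a hedged sketch; -xtype is a GNU extension):
find /path/to/search -xtype l           # list broken symlinks
find /path/to/search -xtype l -delete   # remove them, only after reviewing the list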
{ "source": [ "https://unix.stackexchange.com/questions/151763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80706/" ] }
151,807
I'm trying to pass multiple arguments to a function, but one of them consists of two words and I want the shell function to treat it as one argument: args=("$@") function(){ echo ${args[0]} echo ${args[1]} echo ${args[2]} } when I call this command sh shell hi hello guys bye I get this hi hello guys But what I really want is: hi hello guys bye
You should just quote the second argument. myfunc(){ echo "$1" echo "$2" echo "$3" } myfunc hi "hello guys" bye
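A hedged extension of the same quoting rule: inside the function, "$@" expands to all arguments with their original word boundaries intact, so this works for any number of multi-word arguments:
myfunc(){
    for arg in "$@"; do
        echo "$arg"
    done
}
myfunc hi "hello guys" bye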
{ "source": [ "https://unix.stackexchange.com/questions/151807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78050/" ] }
151,850
According to this answer and my own understanding, the tilde expands to the home directory: $ echo ~ /home/braiam Now, whenever I want the shell expansion to work, i. e. using variable names such $FOO , and do not break due unexpected characters, such spaces, etc. one should use double quotes " : $ FOO="some string with spaces" $ BAR="echo $FOO" $ echo $BAR echo some string with spaces Why doesn't this expansion works with the tilde? $ echo ~/some/path /home/braiam/some/path $ echo "~/some/path" ~/some/path
The reason is that inside double quotes, the tilde ~ has no special meaning; it's treated as a literal. POSIX defines Double-Quotes as: Enclosing characters in double-quotes ( "" ) shall preserve the literal value of all characters within the double-quotes, with the exception of the characters dollar sign, backquote, and backslash, ... The application shall ensure that a double-quote is preceded by a backslash to be included within double-quotes. The parameter '@' has special meaning inside double-quotes Except for $ , ` , \ and @ , other characters are treated as literal inside double quotes.
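Two hedged workarounds, both standard shell behavior: leave the tilde outside the quoted part, or use $HOME, which does expand inside double quotes:
echo ~/"some path/with spaces"
echo "$HOME/some/path"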
{ "source": [ "https://unix.stackexchange.com/questions/151850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41104/" ] }
151,867
I installed Debian in VirtualBox (for various experiments which usually broke my system) and tried to launch the VirtualBox guest addon script. I logged in as root and tried to launch autorun.sh , but I got «Permission denied». ls -l shows that the script has executable rights. Sorry that I can't copy the output -- VirtualBox is of little use without the addon, as neither a shared directory nor a shared clipboard works. But just so you can be sure, I copied the permissions by hand: #ls -l ./autorun.sh -r-xr-xr-x 1 root root 6966 Mar 26 13:56 ./autorun.sh At first I thought that it might be something the script executes that gave the error. I tried to replace the #!/bin/sh shebang with something like #!/pathtorealsh/sh -xv , but I got no output — it seems the script can't even be executed. I don't even have an idea what could cause it.
Maybe your file system is mounted with noexec option set, so you can not run any executable files. From mount documentation: noexec Do not allow direct execution of any binaries on the mounted filesystem. (Until recently it was possible to run binaries anyway using a command like /lib/ld*.so /mnt/binary. This trick fails since Linux 2.4.25 / 2.6.0.) Try: mount | grep noexec Then check if your file system is listed in output. If yes, you can solve this problem, by re-mounting file system with exec option: mount -o remount,exec filesystem
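A hedged workaround if remounting isn't an option: noexec only blocks direct execution of the file, so you can still hand a script to its interpreter explicitly, because then it's the interpreter (living on an exec-able filesystem) that runs:
sh ./autorun.sh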
{ "source": [ "https://unix.stackexchange.com/questions/151867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59928/" ] }
151,883
I operate a Linux system which has a lot of users but sometimes an abuse occurs; where a user might run a single process that uses up more than 80% of the CPU/Memory. So is there a way to prevent this from happening by limiting the amount of CPU usage a process can use (to 10% for example)? I'm aware of cpulimit , but it unfortunately applies the limit to the processes I instruct it to limit (e.g single processes). So my question is, how can I apply the limit to all of the running processes and processes that will be run in the future without the need of providing their id/path for example?
While it can be an abuse for memory, it isn't for CPU: when a CPU is idle, a running process (by "running", I mean that the process isn't waiting for I/O or something else) will take 100% CPU time by default. And there's no reason to enforce a limit. Now, you can set up priorities thanks to nice . If you want them to apply to all processes for a given user, you just need to make sure that his login shell is run with nice : the child processes will inherit the nice value. This depends on how the users log in. See Prioritise ssh logins (nice) for instance. Alternatively, you can set up virtual machines. Indeed setting a per-process limit doesn't make much sense since the user can start many processes, abusing the system. With a virtual machine, all the limits will be global to the virtual machine. Another solution is to set /etc/security/limits.conf limits; see the limits.conf(5) man page. For instance, you can set the maximum CPU time per login and/or the maximum number of processes per login. You can also set maxlogins to 1 for each user.
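A hedged modern sketch for systemd-based systems (the slice name assumes a user with uid 1000, and property names vary with systemd version): limits can be attached to a user's slice so they cover every process that user starts:
systemctl set-property user-1000.slice CPUQuota=50% MemoryMax=2G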
{ "source": [ "https://unix.stackexchange.com/questions/151883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67864/" ] }
151,911
When I use the code below in an SSH terminal on CentOS, it works fine: paste <(printf "%s\n" "TOP") But if I place the same line of code in a shell script (test.sh) and run the script from the terminal, it throws an error like this ./test.sh: line 30: syntax error near unexpected token `(' ./test.sh: line 30: paste <(printf "%s\n" "TOP") How can I fix this problem?
Process substitution is not specified by POSIX, so not all POSIX shells support it, only some shells like bash , zsh , ksh88 , ksh93 . In CentOS system, /bin/sh is a symlink to /bin/bash . When bash is invoked with name sh , bash enters posix mode ( Bash Startup Files - Invoked with name sh ). In bash versions prior to 5.1, process substitution support was disabled when invoked in posix mode, causing a syntax error. The script should work if you call bash directly: bash test.sh . If not, maybe bash has entered posix mode. This can occur if you start bash with the --posix argument or if the variable POSIXLY_CORRECT is set when bash starts: $ bash --posix test.sh test.sh: line 54: syntax error near unexpected token `(' test.sh: line 54: `paste <(printf "%s\n" "TOP")' $ POSIXLY_CORRECT=1 bash test.sh test.sh: line 54: syntax error near unexpected token `(' test.sh: line 54: `paste <(printf "%s\n" "TOP") Or bash is built with --enable-strict-posix-default option. Here, you don't need process substitution, you can use standard shell pipes: printf "%s\n" "TOP" | paste - - is the standard way to tell paste to read the data from stdin. With some paste implementations, you can omit it though that's not standard. Where it would be useful is when pasting the output of more than one command like in: paste <(cmd1) <(cmd2) On systems that support /dev/fd/n , that can be done in sh with: { cmd1 4<&- | { cmd2 3<&- | paste /dev/fd/3 -; } 3<&0 <&4 4<&-; } 4<&0 (it's what <(...) does internally).
{ "source": [ "https://unix.stackexchange.com/questions/151911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81390/" ] }
151,916
I am trying to copy files over SSH , but cannot use scp due to not knowing the exact filename that I need. Although small binary files and text files transfer fine, large binary files get altered. Here is the file on the server: remote$ ls -la -rw-rw-r-- 1 user user 244970907 Aug 24 11:11 foo.gz remote$ md5sum foo.gz 9b5a44dad9d129bab52cbc6d806e7fda foo.gz Here is the file after I've moved it over: local$ time ssh [email protected] -t 'cat /path/to/foo.gz' > latest.gz real 1m52.098s user 0m2.608s sys 0m4.370s local$ md5sum latest.gz 76fae9d6a4711bad1560092b539d034b latest.gz local$ ls -la -rw-rw-r-- 1 dotancohen dotancohen 245849912 Aug 24 18:26 latest.gz Note that the downloaded file is bigger than the one on the server! However, if I do the same with a very small file, then everything works as expected: remote$ echo "Hello" | gzip -c > hello.txt.gz remote$ md5sum hello.txt.gz 08bf5080733d46a47d339520176b9211 hello.txt.gz local$ time ssh [email protected] -t 'cat /path/to/hello.txt.gz' > hi.txt.gz real 0m3.041s user 0m0.013s sys 0m0.005s local$ md5sum hi.txt.gz 08bf5080733d46a47d339520176b9211 hi.txt.gz Both file sizes are 26 bytes in this case. Why might small files transfer fine, but large files get some bytes added to them?
TL;DR Don't use -t . -t involves a pseudo-terminal on the remote host and should only be used to run visual applications from a terminal. Explanation The linefeed character (also known as newline or \n ) is the one that when sent to a terminal tells the terminal to move its cursor down. Yet, when you run seq 3 in a terminal, that is where seq writes 1\n2\n3\n to something like /dev/pts/0 , you don't see: 1 2 3 but 1 2 3 Why is that? Actually, when seq 3 (or ssh host seq 3 for that matters) writes 1\n2\n3\n , the terminal sees 1\r\n2\r\n3\r\n . That is, the line-feeds have been translated to carriage-return (upon which terminals move their cursor back to the left of the screen) and line-feed. That is done by the terminal device driver. More exactly, by the line-discipline of the terminal (or pseudo-terminal) device, a software module that resides in the kernel. You can control the behaviour of that line discipline with the stty command. The translation of LF -> CRLF is turned on with stty onlcr (which is generally enabled by default). You can turn it off with: stty -onlcr Or you can turn all output processing off with: stty -opost If you do that and run seq 3 , you'll then see: $ stty -onlcr; seq 3 1 2 3 as expected. Now, when you do: seq 3 > some-file seq is no longer writing to a terminal device, it's writing into a regular file, there's no translation being done. So some-file does contain 1\n2\n3\n . The translation is only done when writing to a terminal device. And it's only done for display. similarly, when you do: ssh host seq 3 ssh is writing 1\n2\n3\n regardless of what ssh 's output goes to. What actually happens is that the seq 3 command is run on host with its stdout redirected to a pipe. The ssh server on host reads the other end of the pipe and sends it over the encrypted channel to your ssh client and the ssh client writes it onto its stdout, in your case a pseudo-terminal device, where LF s are translated to CRLF for display. Many interactive applications behave differently when their stdout is not a terminal. For instance, if you run: ssh host vi vi doesn't like it, it doesn't like its output going to a pipe. It thinks it's not talking to a device that is able to understand cursor positioning escape sequences for instance. So ssh has the -t option for that. With that option, the ssh server on host creates a pseudo-terminal device and makes that the stdout (and stdin, and stderr) of vi . What vi writes on that terminal device goes through that remote pseudo-terminal line discipline and is read by the ssh server and sent over the encrypted channel to the ssh client. It's the same as before except that instead of using a pipe , the ssh server uses a pseudo-terminal . The other difference is that on the client side, the ssh client sets the terminal in raw mode (and disables local echo ). That means that no translation is done there ( opost is disabled and also other input-side behaviours). For instance, when you type Ctrl-C , instead of interrupting ssh , that ^C character is sent to the remote side, where the line discipline of the remote pseudo-terminal sends the interrupt to the remote command. When you do: ssh -t host seq 3 seq 3 writes 1\n2\n3\n to its stdout, which is a pseudo-terminal device. Because of onlcr , that gets translated on host to 1\r\n2\r\n3\r\n and sent to you over the encrypted channel. 
On your side there is no translation ( onlcr disabled), so 1\r\n2\r\n3\r\n is displayed untouched (because of the raw mode) and correctly on the screen of your terminal emulator. Now, if you do: ssh -t host seq 3 > some-file There's no difference from above. ssh will write the same thing: 1\r\n2\r\n3\r\n , but this time into some-file . So basically all the LF in the output of seq have been translated to CRLF into some-file . It's the same if you do: ssh -t host cat remote-file > local-file All the LF characters (0x0a bytes) are being translated into CRLF (0x0d 0x0a). That's probably the reason for the corruption in your file. In the case of the second smaller file, it just so happens that the file doesn't contain 0x0a bytes, so there is no corruption. Note that you could get different types of corruption with different tty settings. Another potential type of corruption associated with -t is if your startup files on host ( ~/.bashrc , ~/.ssh/rc ...) write things to their stderr, because with -t the stdout and stderr of the remote shell end up being merged into ssh 's stdout (they both go to the pseudo-terminal device). You don't want the remote cat to output to a terminal device there. You want: ssh host cat remote-file > local-file You could do: ssh -t host 'stty -opost; cat remote-file' > local-file That would work (except in the writing to stderr corruption case discussed above), but even that would be sub-optimal as you'd have that unnecessary pseudo-terminal layer running on host . Some more fun: $ ssh localhost echo | od -tx1 0000000 0a 0000001 OK. $ ssh -t localhost echo | od -tx1 0000000 0d 0a 0000002 LF translated to CRLF $ ssh -t localhost 'stty -opost; echo' | od -tx1 0000000 0a 0000001 OK again. $ ssh -t localhost 'stty olcuc; echo x' X That's another form of output post-processing that can be done by the terminal line discipline. $ echo x | ssh -t localhost 'stty -opost; echo' | od -tx1 Pseudo-terminal will not be allocated because stdin is not a terminal. stty: standard input: Inappropriate ioctl for device 0000000 0a 0000001 ssh refuses to tell the server to use a pseudo-terminal when its own input is not a terminal. You can force it with -tt though: $ echo x | ssh -tt localhost 'stty -opost; echo' | od -tx1 0000000 x \r \n \n 0000004 The line discipline does a lot more on the input side. Here, echo doesn't read its input nor was asked to output that x\r\n\n so where does that come from? That's the local echo of the remote pseudo-terminal ( stty echo ). The ssh server is feeding the x\n it read from the client to the master side of the remote pseudo-terminal. And the line discipline of that echoes it back (before stty opost is run which is why we see a CRLF and not LF ). That's independent from whether the remote application reads anything from stdin or not. $ (sleep 1; printf '\03') | ssh -tt localhost 'trap "echo ouch" INT; sleep 2' ^Couch The 0x3 character is echoed back as ^C ( ^ and C ) because of stty echoctl and the shell and sleep receive a SIGINT because stty isig . So while: ssh -t host cat remote-file > local-file is bad enough, but ssh -tt host 'cat > remote-file' < local-file to transfer files the other way across is a lot worse. You'll get some CR -> LF translation, but also problems with all the special characters ( ^C , ^Z , ^D , ^? , ^S ...) and also the remote cat will not see eof when the end of local-file is reached, only when ^D is sent after a \r , \n or another ^D like when doing cat > file in your terminal.
{ "source": [ "https://unix.stackexchange.com/questions/151916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
151,951
Assuming you know the target is a symbolic link and not a file, is there any difference between using rm and unlink to remove the link?
Anytime you have these types of questions it's best to conceive of a little test to see what's actually happening. For this you can use strace . unlink $ touch file1 $ strace -s 2000 -o unlink.log unlink file1 rm $ touch file1 $ strace -s 2000 -o rm.log rm file1 When you take a look at the 2 resulting log files you can "see" what each call is actually doing. Breakdown With unlink it's invoking the unlink() system call: .... mmap(NULL, 106070960, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f6d025cc000 close(3) = 0 unlink("file1") = 0 close(1) = 0 close(2) = 0 exit_group(0) = ? .... With rm it's a slightly different path: .... ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 newfstatat(AT_FDCWD, "file1", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0 geteuid() = 1000 newfstatat(AT_FDCWD, "file1", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0 faccessat(AT_FDCWD, "file1", W_OK) = 0 unlinkat(AT_FDCWD, "file1", 0) = 0 lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek) close(0) = 0 close(1) = 0 close(2) = 0 exit_group(0) = ? +++ exited with 0 +++ ... The system calls unlink() and unlinkat() are essentially the same except for the differences described in this man page: http://linux.die.net/man/2/unlinkat . excerpt The unlinkat() system call operates in exactly the same way as either unlink(2) or rmdir(2) (depending on whether or not flags includes the AT_REMOVEDIR flag) except for the differences described in this manual page. If the pathname given in pathname is relative, then it is interpreted relative to the directory referred to by the file descriptor dirfd (rather than relative to the current working directory of the calling process, as is done by unlink(2) and rmdir(2) for a relative pathname). If the pathname given in pathname is relative and dirfd is the special value AT_FDCWD, then pathname is interpreted relative to the current working directory of the calling process (like unlink(2) and rmdir(2)). If the pathname given in pathname is absolute, then dirfd is ignored.
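A hedged addendum on the symlink case from the question: both commands unlink the link itself and never touch its target, which a quick test confirms:
touch target
ln -s target link
rm link        # or: unlink link
ls target      # the target file is still there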
{ "source": [ "https://unix.stackexchange.com/questions/151951", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5769/" ] }
151,969
I have a recently-compiled Linux kernel image (vmlinuz file) and I want to boot into it. I am aware that this won't give me a familiar Linux system, but I am hoping to be able to at least run some basic "Hello world" program as the init process. Is this even possible, and if so, how? So far I have tried to do this by installing GRUB on a USB which had an ext2 filesystem with the vmlinuz file in /boot. It must have loaded the kernel image because it ended in a kernel panic message: "VFS: Unable to mount root fs on unknown-block(0,0)" Here is the entry in grub.cfg: menuentry 'linux' --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0)' search --no-floppy --fs-uuid --set=root <my USB drive's UUID> linux /boot/vmlinuz root=UUID=<my USB drive's UUID> ro $vt_handoff } Thanks for any help.
That panic means the kernel booted but could not mount anything as its root filesystem — which is expected with this setup, for two reasons. First, root=UUID=... is not something a bare kernel can resolve; filesystem-UUID lookup is normally done by an initramfs, which you are not loading. Second, the kernel you compiled may not have USB mass-storage and ext2 support built in (rather than as modules), so even a plain root=/dev/sdXN could fail. For the "hello world as init" experiment, the simplest route is to skip the on-disk root filesystem entirely and boot a minimal initramfs: statically compile your program as init (it runs as PID 1, so it must never exit or the kernel will panic again), pack it into a newc-format cpio archive, put the archive next to vmlinuz, and add an initrd line to the GRUB entry plus rdinit=/init on the kernel command line instead of root=... . The kernel then unpacks the archive into a RAM filesystem and executes /init. Alternatively, to keep booting from the USB stick's ext2 filesystem directly, rebuild the kernel with ext2 and USB storage compiled in and pass a plain device name such as root=/dev/sdb1 together with rootdelay=10 so the USB device has time to be detected.
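The whole sequence, as a hedged sketch (paths are assumptions to adapt):
cat > hello.c <<'EOF'
#include <stdio.h>
#include <unistd.h>
int main(void) {
    printf("Hello, world!\n");
    for (;;) pause();   /* PID 1 must never exit */
}
EOF
gcc -static -o init hello.c
echo init | cpio -o -H newc | gzip > /path/to/usb/boot/init.cpio.gz
and in the GRUB entry:
linux /boot/vmlinuz rdinit=/init
initrd /boot/init.cpio.gz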
{ "source": [ "https://unix.stackexchange.com/questions/151969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81868/" ] }
152,081
The mysql user cannot use ports below 1024 because these are reserved for the root user. Apache, on the other hand, can use port 80. Apache runs as root before it runs as Apache and thus it can use port 80. It can even listen to port 81 and any other port. However, when I tried to get Apache to listen on port 79, it did not work. I tried to listen on port 1 too, and that did not work either. When I change the Apache settings, Apache restarts just fine, but it doesn’t actually work on the web. Can I use port 1 on the web?
I'm going to use Firefox as an example, because its open source and easy to find the information for, but this applies (probably with slightly different lists of ports) to other browsers, too. In August 2001, CERT issued a vulnerability note about how a web browser could be used to send near-arbitrary data to TCP ports chosen by an attacker, on any arbitrary IP address. This could be used to, for example, send emails which would appear to come from the user running the web browser. In order to mitigate this, Mozilla (as well as many other vendors) blocked Firefox from accessing certain ports . The two ports you tried, 79 and 1, happen to be on the blocklist. The source contains the full list of blocked ports . You can (on your browser) override this list using the preferences network.security.ports.banned.override and network.security.ports.banned . This isn't useful on the Internet in general, as you'd have to convince everyone who might visit your site to go to about:config and change them. (Note: Current versions of Firefox will give an error message explaining that if you try to browse to a site on a blocked port.) In general, there is little reason to use additional HTTP ports, at least externally. If you have to, prefer traditional extra ports like 8080, 8000, etc. that are far less likely to be blocked or at least ones outside of the IANA-assigned system ports range (0-1023). See the IANA port registry for more details.
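A hedged note on the first half of the question (binding ports below 1024 as a non-root user): on Linux this is controlled by the CAP_NET_BIND_SERVICE capability, which can be granted to one specific binary instead of running it as root (the path below is a placeholder):
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/mydaemon
Apache takes the other route: it binds port 80 while still root, then drops privileges for its worker processes.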
{ "source": [ "https://unix.stackexchange.com/questions/152081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81947/" ] }
152,222
In Ubuntu (and I guess in Debian too) there is a system script named update-grub which automatically executes grub-mkconfig -o with the correct path for the GRUB configuration file. Is there a similar command for Red Hat-based distributions? If not, how does the system know where the GRUB configuration file is to update when a new kernel version is installed?
After analyzing the scripts in Fedora, I realize that the configuration file path is read from the symlink /etc/grub2.conf . The correct grub2-mkconfig line is thus: grub2-mkconfig -o "$(readlink -e /etc/grub2.conf)" As noted in comments, it might be /etc/grub2.cfg , or /etc/grub2-efi.cfg on a UEFI system. Actually, both links might be present at the same time and pointing to different locations . The -e flag to readlink will error out if the target file does not exist, but on my system both existed... Check your commands, I guess.
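A hedged aside on how the system itself keeps the config current: on Red Hat-based distributions the kernel packages' install scripts usually go through grubby (invoked by new-kernel-pkg), which knows where the bootloader configuration lives, so new kernels get entries without anyone running grub2-mkconfig by hand:
grubby --default-kernel   # show the kernel that boots by default
grubby --info=ALL         # list all configured boot entries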
{ "source": [ "https://unix.stackexchange.com/questions/152222", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22966/" ] }
152,264
I have added a local proxy for all my hosts in my .ssh config; however, I want to shell into my local VM without the proxy command. Output of my ssh attempt: debug1: /Users/bbarbour/.ssh/config line 1: Applying options for local.dev debug1: /Users/bbarbour/.ssh/config line 65: Applying options for * Given the following ssh config, how do I prevent the ProxyCommand from being applied to the local.dev entry? Host local.dev HostName dev.myserver.com User developer ... Host * ProxyCommand /usr/local/bin/corkscrew 127.0.0.1 8840 %h %p
You can exclude local.dev from ProxyCommand, using ! before it: Host * !local.dev ProxyCommand /usr/local/bin/corkscrew 127.0.0.1 8840 %h %p From ssh_config documentation: If more than one pattern is provided, they should be separated by whitespace. A pattern entry may be negated by prefixing it with an exclamation mark (`!') . If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. The documentation also said: For each parameter, the first obtained value will be used . The configuration files contain sections separated by ``Host'' specifications, and that section is only applied for hosts that match one of the patterns given in the specification. The matched host name is the one given on the command line. So, you can also disable ProxyCommand for local.dev by override value that you have defined in Host * : Host local.dev HostName dev.myserver.com User developer ProxyCommand none
{ "source": [ "https://unix.stackexchange.com/questions/152264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67441/" ] }