77,512
Is it possible to have a shebang that, instead of specifying a path to an interpreter, gives just the name of the interpreter and lets the shell find it through $PATH? If not, is there a reason why?
PATH lookup is a feature of the standard C library in userspace, as are environment variables in general. The kernel doesn't see environment variables except when it passes over an environment from the caller of execve to the new process. The kernel does not perform any interpretation on the path in execve (it's up to wrapper functions such as execvp to perform PATH lookup) or in a shebang (which more or less re-routes the execve call internally). So you need to put the absolute path in the shebang¹. The original shebang implementation was just a few lines of code, and it hasn't been significantly expanded since. In the first versions of Unix, the shell did the work of invoking itself when it noticed you were invoking a script. Shebang was added in the kernel for several reasons (summarizing the rationale by Dennis Ritchie): The caller doesn't have to worry whether a program to execute is a shell script or a native binary. The script itself specifies what interpreter to use, instead of the caller. The kernel uses the script name in logs. Pathless shebangs would require either augmenting the kernel to access environment variables and process PATH , or having the kernel execute a userspace program that performs the PATH lookup. The first method would add a disproportionate amount of complexity to the kernel. The second method is already possible with a #!/usr/bin/env shebang . ¹ If you put a relative path, it's interpreted relative to the current directory of the process (not the directory containing the script), which is hardly useful in a shebang.
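For example, a minimal sketch of the #!/usr/bin/env trick mentioned above (the script name and interpreter are illustrative, and note that env itself must still be given by absolute path):

$ cat myscript
#!/usr/bin/env bash
echo "running $(command -v bash) found via \$PATH"
$ ./myscript
running /bin/bash found via $PATH

The kernel executes /usr/bin/env with the argument bash ; env , being an ordinary userspace program, performs the $PATH lookup and then executes bash with the script as its argument.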
{ "source": [ "https://unix.stackexchange.com/questions/77512", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
77,514
I understand what GNU Info is and how to use it, but what is it for ? Why does it exist in parallel to the man pages? Why not write detailed man pages rather than provide a separate utility?
GNU Info was designed to offer documentation that was comprehensive, hyperlinked, and capable of being output to multiple formats. Man pages were available, and they were great at providing printed output. However, they were designed such that each man page had a reasonably small set of content. A man page might have the discussion of a single C function such as printf(3), or would describe the ls(1) command. That breaks down when you get into larger systems. How would you fit the documentation for Emacs into man pages? An example of the problem is the Perl man page, which lists 174 separate man pages you can read to get information. How do you browse through that, or do a search to find out what && means? As an improvement over man pages, Info gave us: The ability to have a single document for a large system, which contains all the information about that system. (versus 174 man pages) Ability to do full-text search across the entire document (v. man -k which only checks keywords) Hyperlinks to different parts of the same or different documents (v. the See Also section, which was made into hyperlinks by some, but not all, man page viewers) An index for the document, which could be browsed, or you could hit "i" and type in a term and it would search the index and take you to the right place (v. Nothing) Linear document browsing across concepts, allowing you to read the previous and next sections if you want to, either by mouse or keystroke (v. Nothing). Is it still relevant? Nowadays most people would say "This documentation doesn't belong in a manpage" and would put it in a PDF or would put it up in HTML. In fact, the help systems on several OSes are based on HTML. However, when GNU Info was created (1986), HTML didn't exist yet. Nowadays texinfo allows you to create PDF, Info, or other formats, so you can use those formats if you want. That's why GNU Info was invented.
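To illustrate the index and search features (a quick sketch; node and index-entry names vary by system):

$ info coreutils 'ls invocation'   # jump straight to one node of a large manual
$ info --apropos=ownership         # search the indices of every installed Info manual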
{ "source": [ "https://unix.stackexchange.com/questions/77514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38637/" ] }
77,588
Is it possible to use curl and post binary data without passing in a file name? For example, you can post a form using binary via --data-binary: curl -X POST --data-binary @myfile.bin http://foo.com However, this requires that a file exists. I was hoping to be able to log HTTP calls (such as to REST services) as the text of the curl command to reproduce the request. (This greatly assists debugging these services, for example.) However, logging curl commands that reference a file would not be useful, so I was hoping I could actually log the raw binary data, presumably base64 encoded, and yet still allow you to copy and paste the logged curl command and execute it. So, is it possible to use curl and post binary data without referencing a file? If so, how would that work? What would an example look like?
You can pass data into curl via STDIN like so: echo -e '...data...\n' | curl -X POST --data-binary @- http://foo.com The @- tells curl to pull in from STDIN. To pipe binary data to curl (for example): echo -e '\x03\xF1' | curl -X POST --data-binary @- http://foo.com
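For the base64 logging idea from the question, a sketch (the payload and URL are illustrative): log the body as base64, then decode it inline when replaying the command:

$ printf '\x03\xF1' | base64
A/E=
$ echo 'A/E=' | base64 -d | curl -X POST --data-binary @- http://foo.com

Note that printf avoids the trailing newline that echo -e would append to the raw bytes.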
{ "source": [ "https://unix.stackexchange.com/questions/77588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22030/" ] }
77,718
I'm confused. Running Fedora Linux, lscpu yields: Architecture: i686 CPU op-mode(s): 32-bit, 64-bit ... But when I try to install a 64-bit program (Chrome) I get an error like: Package /....x86_64.rpm has incompatible architecture x86_64. Valid architectures are ['i686', 'i586', 'i486', 'i386'] I'm less interested in being able to install Chrome and more interested in why lscpu says that my CPU can run in 64-bit mode; clearly this can't mean I can run 64-bit programs. Can anyone clarify?
lscpu is telling you that your architecture is i686 (an Intel 32-bit CPU), and that your CPU supports both 32-bit and 64-bit operating modes. You won't be able to install x64-built applications since they're built specifically for x64 architectures. Your particular CPU can handle either the i386 or i686 built packages. There are a number of ways to verify your architecture & OS preferences. lscpu As you're already aware, you can use the command lscpu. It works well at giving you a rough idea of what your CPU is capable of. $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit CPU(s): 4 Thread(s) per core: 2 Core(s) per socket: 2 CPU socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 37 Stepping: 5 CPU MHz: 1199.000 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 3072K NUMA node0 CPU(s): 0-3 /proc/cpuinfo This is actually the data provided by the kernel that most tools such as lscpu use for their display. I find this output nice in that it shows you some model number info about your particular CPU. It will also show you a section for each core that your CPU may have. Here's output for a single core: $ cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 37 model name : Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz stepping : 5 cpu MHz : 1466.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid bogomips : 5319.74 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: Here's what the first 3 lines of each section for a core look like: $ grep processor -A 3 /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 37 -- processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 37 -- processor : 2 vendor_id : GenuineIntel cpu family : 6 model : 37 -- processor : 3 vendor_id : GenuineIntel cpu family : 6 model : 37 The output from /proc/cpuinfo can also tell you the type of architecture your CPU is providing through the various flags that it shows. Notice these lines from the above command: $ grep flags /proc/cpuinfo | head -1 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid The lm flag tells you that your processor supports "long mode". Long mode is another name for 64-bit. uname This command can be used to determine what platform your kernel was built to support.
For example: 64-bit kernel $ uname -a Linux grinchy 2.6.35.14-106.fc14.x86_64 #1 SMP Wed Nov 23 13:07:52 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux 32-bit kernel $ uname -a Linux skinner.bubba.net 2.6.18-238.19.1.el5.centos.plus #1 SMP Mon Jul 18 10:07:01 EDT 2011 i686 i686 i386 GNU/Linux This output can be refined a bit further using the switches [-m|--machine] , [-p|--processor] , and [-i|--hardware-platform] . Here's that output for the same systems as above. 64-bit $ uname -m; uname -p; uname -i x86_64 x86_64 x86_64 32-bit $ uname -m; uname -p; uname -i i686 i686 i386 NOTE: There's also a short-form version of uname -m that you can run as a standalone command, arch . It returns exactly the same thing as uname -m . You can read more about the arch command in the coreutils documentation . excerpt arch prints the machine hardware name, and is equivalent to ‘uname -m’. hwinfo Probably the best tool for analyzing your hardware has got to be hwinfo . This package can show you pretty much anything that you'd want/need to know about any of your hardware, right from the terminal. It's saved me dozens of times when I needed some info off of a chip on a system's motherboard or needed to know the revision of a board in a PCI slot. You can query it against the different subsystems of a computer. In our case we'll be looking at the cpu subsystem. $ hwinfo --cpu 01: None 00.0: 10103 CPU [Created at cpu.301] Unique ID: rdCR.a2KaNXABdY4 Hardware Class: cpu Arch: X86-64 Vendor: "GenuineIntel" Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz" Features: fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,aes,lahf_lm,ida,arat,tpr_shadow,vnmi,flexpriority,ept,vpid Clock: 2666 MHz BogoMips: 5319.74 Cache: 3072 kb Units/Processor: 16 Config Status: cfg=new, avail=yes, need=no, active=unknown Again, similar to /proc/cpuinfo , this command shows you the makeup of each individual core in a multi-core system. Here's the first line from each section of a core, just to give you an idea. $ hwinfo --cpu | grep CPU 01: None 00.0: 10103 CPU Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz" 02: None 01.0: 10103 CPU Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz" 03: None 02.0: 10103 CPU Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz" 04: None 03.0: 10103 CPU Model: 6.37.5 "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz" getconf This is probably the most obvious way to tell what word size the OS is running in. Making use of getconf , you're querying the system variable LONG_BIT. This isn't an environment variable. # 64-bit system $ getconf LONG_BIT 64 # 32-bit system $ getconf LONG_BIT 32 lshw Yet another tool, similar in capabilities to hwinfo . You can query pretty much anything you want to know about the underlying hardware. For example: # 64-bit Kernel $ lshw -class cpu *-cpu description: CPU product: Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz vendor: Intel Corp.
physical id: 6 bus info: cpu@0 version: Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz slot: None size: 1199MHz capacity: 1199MHz width: 64 bits clock: 133MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp x86-64 constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=2 enabledcores=2 threads=4 # 32-bit Kernel $ lshw -class cpu *-cpu:0 description: CPU product: Intel(R) Core(TM)2 CPU 4300 @ 1.80GHz vendor: Intel Corp. physical id: 400 bus info: cpu@0 version: 6.15.2 serial: 0000-06F2-0000-0000-0000-0000 slot: Microprocessor size: 1800MHz width: 64 bits clock: 800MHz capabilities: boot fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe x86-64 constant_tsc pni monitor ds_cpl est tm2 ssse3 cx16 xtpr lahf_lm configuration: id=1 *-logicalcpu:0 description: Logical CPU physical id: 1.1 width: 64 bits capabilities: logical *-logicalcpu:1 description: Logical CPU physical id: 1.2 width: 64 bits capabilities: logical CPU op-mode(s)? Several of the commands report what looks to be a 32-bit CPU as supporting both 32-bit & 64-bit modes. This can be a little confusing and misleading, but if you understand the history of CPUs, Intel specifically, you'll know that they have a history of playing games with their products, where a CPU might have an instruction set that supports 16 bits but can address more RAM than 2^16. The same thing is going on with these CPUs. Most people know that a 32-bit CPU can address only 2^32 = 4GB of RAM. But there are versions of CPUs that can address more. These CPUs would often make use of a Linux kernel with the suffix PAE - Physical Address Extension . Using a PAE-enabled kernel along with this hardware would allow you to address up to 64GB on a 32-bit system. You might then wonder why you'd need a 64-bit architecture at all. The problem with these CPUs is that a single process's address space is limited to 2^32, so if you have a large simulation or computational program that needs more than 2^32 of addressable space in RAM, this wouldn't have helped you with that. Take a look at the wikipedia page on the P6 microarchitecture (i686) for more info. TL;DR - So what the heck is my CPU's architecture? In general it can get confusing because a number of the commands and methodologies above are using the term "architecture" loosely. If you're interested in whether the underlying OS is 32-bit or 64-bit, use these commands: lscpu getconf LONG_BIT uname If on the other hand you want to know the CPU's architecture, use these commands: /proc/cpuinfo hwinfo lshw Specifically you want to look for fields that say things like "width: 64" or "width: 32" if you're using a tool like lshw , or look for the flags: lm : Long Mode (x86-64: amd64, also known as Intel 64, i.e. 64-bit capable) lahf_lm : LAHF/SAHF in long mode The presence of these two flags tells you that the CPU is 64-bit. Their absence tells you that it's 32-bit. See these URLs for additional information on the CPU flags. What do the flags in /proc/cpuinfo mean?
CPU feature flags and their meanings References man pages lscpu man page /proc/cpuinfo reference page uname man page hwinfo man page getconf man page articles: Check if a machine runs on 64 bit or 32 bit Processor/Linux OS? Find out if processor is 32bit or 64 (Linux) Need Help : 32 bit / 64 bit check for Linux
{ "source": [ "https://unix.stackexchange.com/questions/77718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40194/" ] }
77,779
I'm trying to install Ruby in my home directory on a Linux server (without root access), which of course requires using gcc . The closest thing I can find is a directory by that name which (if you go deep enough) contains cc1 : >: find / -iname gcc 2> /dev/null /usr/libexec/gcc >: tree -if /usr/libexec/gcc /usr/libexec/gcc /usr/libexec/gcc/x86_64-redhat-linux /usr/libexec/gcc/x86_64-redhat-linux/4.1.1 /usr/libexec/gcc/x86_64-redhat-linux/4.1.1/cc1 /usr/libexec/gcc/x86_64-redhat-linux/4.1.2 -> 4.1.1 The fact that CC1 redirects to GCC on Wikipedia seems to imply something close to identity; however, there's no other mention of CC1 on the GCC page besides the note about redirection, Googling hasn't gotten me anything useful, and my attempts to use cc1 in place of gcc have failed. What exactly is the relationship between them? And does it offer me any hope of compiling Ruby on this machine?
GCC has a number of phases to its compilation, and it uses different internal commands to do each phase. C in particular is first preprocessed with cpp, then is compiled into assembly, assembled into machine language, and then linked together. cc1 is the internal command which takes preprocessed C-language files and converts them to assembly. It's the actual part that compiles C. For C++, there's cc1plus, and other internal commands for different languages. There is a book on Wikibooks that explains the process with pictures . Unfortunately, cc1 is an internal command and only one piece of the installation, and if that's all you have, you will not be able to compile things.
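You can watch the phases yourself (a quick sketch; the cc1 path shown is the one from the question):

$ gcc -save-temps -c hello.c    # keeps hello.i (preprocessed) and hello.s (assembly)
$ gcc -print-prog-name=cc1      # where the gcc driver expects its internal cc1
/usr/libexec/gcc/x86_64-redhat-linux/4.1.1/cc1

gcc is only a driver that runs cc1, the assembler and the linker in sequence, which is why having cc1 alone is not enough.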
{ "source": [ "https://unix.stackexchange.com/questions/77779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
77,796
How do I fetch the current terminal name? I mean to the name that ps shows in the TTY column, e.g.: root@dor-desktop:/home/dor/Documents/LAMP_setup/webs_install/do/install# ps aux | egrep 'mysql|(^USER)' USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND dor 2238 0.2 1.9 448052 79796 ? S 17:27 0:17 gedit /home/dor/Documents/LAMP_setup/webs_install/do/install/mysql.install /home/dor/Documents/LAMP_setup/webs_install/do/install/mysql.setup root 4975 0.1 0.5 324984 22876 ? S 18:12 0:04 gedit /usr/local/mysql/bin/mysqld_safe root 8160 0.0 0.0 4108 664 pts/2 S 19:08 0:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --skip-networking --skip-grant-tables --user=mysql --basedir=/usr/local/mysql --ledir=/usr/local/mysql/libexec mysql 8279 0.0 0.4 146552 19032 pts/2 Sl 19:08 0:00 /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --user=mysql --skip-networking --skip-grant-tables --log-error=/usr/local/mysql/var/dor-desktop.err --pid-file=/usr/local/mysql/var/dor-desktop.pid --socket=/usr/local/mysql/mysql.sock --port=3306 root 8342 0.0 0.0 7632 1024 pts/2 R+ 19:14 0:00 egrep --color=auto mysql|(^USER) In the above example, I need to fetch pts/2 which is probably the name for the current terminal that executed those commands.
tty Now I have to enter 30 characters where 3 would have been enough... :-)
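For example (an illustrative session):

$ tty
/dev/pts/2

ps prints the same name without the /dev/ prefix, i.e. pts/2 .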
{ "source": [ "https://unix.stackexchange.com/questions/77796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4313/" ] }
77,852
And now I am unable to chmod it back... or use any of my other system programs. Luckily this is on a VM I've been toying with, but is there any way to resolve this? The system is Ubuntu Server 12.10. I have attempted to restart into recovery mode; unfortunately I am now unable to boot into the system at all, because the broken permissions prevent some programs from running after init-bottom, and the system just hangs. This is what I see: Begin: Running /scripts/init-bottom ... done [ 37.062059] init: Failed to spawn friendly-recovery pre-start process: unable to execute: Permission denied [ 37.084744] init: Failed to spawn friendly-recovery post-stop process: unable to execute: Permission denied [ 37.101333] init: plymouth main process (220) killed by ABRT signal After this the computer hangs.
Boot another clean OS, mount the file system and fix the permissions. As your broken file system lives in a VM, you should have your host system available and working. Mount your broken file system there and fix it. In the case of QEMU/KVM you can, for example, mount the file system using nbd .
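A sketch of the nbd route on the host (the image name, partition and chmod repairs are illustrative; what exactly needs fixing depends on what was broken):

host$ sudo modprobe nbd max_part=8
host$ sudo qemu-nbd --connect=/dev/nbd0 guest-disk.qcow2
host$ sudo mount /dev/nbd0p1 /mnt
host$ sudo chmod 755 /mnt/bin /mnt/sbin /mnt/usr/bin   # repair permissions as needed
host$ sudo umount /mnt
host$ sudo qemu-nbd --disconnect /dev/nbd0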
{ "source": [ "https://unix.stackexchange.com/questions/77852", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23695/" ] }
77,885
I used to be able to connect to my Gnome 3 desktop from a Windows machine with a VNC client. But after an upgrade (on the Linux side) a while ago, it quit working. When I attempt to connect, all I can get is a message saying "No matching security types" or "No supported authentication methods!" (depending on which client I try). In Gnome 3, I've turned on Screen Sharing under Settings > Sharing. Under that, I have Remote View on, Remote Control on, Approve All Connections on, Require Password off. I'm running Arch Linux with vino 3.8.1. On the Windows side, I've tried TigerVNC 1.0.1 & 1.2.0 and UltraVNC 1.0.9.6.2. How can I get this working?
This is actually a known and currently open bug . However, there is a very easy workaround; just issue the following command: gsettings set org.gnome.Vino require-encryption false You will now be able to connect with most vnc viewers.
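To confirm the setting took effect:

$ gsettings get org.gnome.Vino require-encryption
false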
{ "source": [ "https://unix.stackexchange.com/questions/77885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2421/" ] }
77,917
I am setting up a VMWare cluster of CentOS nodes. Is it best practice to include a domain name after the machine name? What are the potential problems of leaving it out? Does a domain complicate configuration or simplify it? For example, if my node is at 192.168.1.93 , should I change /etc/hosts from 127.0.0.1 localhost.localdomain localhost to 127.0.0.1 localhost.cluster localhost 192.168.1.93 computenode1.cluster computenode1 or 127.0.0.1 localhost 192.168.1.93 computenode1 or #127.0.0.1 localhost 192.168.1.93 computenode1 or 192.168.1.93 localhost 192.168.1.93 computenode1
Putting the domain name in /etc/hosts is optional, and you can run a system without any ill effect at all. The only downside of leaving it out is that the system's fully qualified hostname won't show up properly, for example in the output of hostname -f . The way detection of the fully qualified hostname works: It first gets the hostname, or 'shortname'. This is the output of uname -n or hostname . It then gets the IP address for that hostname by consulting /etc/hosts (or DNS, depending on the lookup order configured in /etc/nsswitch.conf , falling back to the latter source if the name is not found in /etc/hosts ). Once it has the IP it then does a reverse lookup by again consulting /etc/hosts . Once it has a record in /etc/hosts , the first entry is used as the fully qualified hostname. In a nutshell, if you want the fully qualified hostname to work, you should do either: 127.0.0.1 fully.qualified.hostname hostname localhost.localdomain localhost or 127.0.0.1 localhost.localdomain localhost 1.2.3.4 fully.qualified.hostname hostname
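A quick illustration using the addresses from the question (hypothetical session):

$ cat /etc/hosts
127.0.0.1     localhost.localdomain localhost
192.168.1.93  computenode1.cluster computenode1
$ hostname
computenode1
$ hostname -f
computenode1.cluster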
{ "source": [ "https://unix.stackexchange.com/questions/77917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29268/" ] }
78,061
Is there a way I can see the environment variables of another user? I want to do that as root, so permissions won't be a problem I guess. For the user himself, I use echo $PATH or set | grep PATH (or just set when I don't remember the variable name). What would be a similar instruction for another user? For what it's worth, I'm on Ubuntu Server 13.04.
Another option is to use env . Run this as root or with sudo : sudo -Hiu $user env | grep $var For example sudo -Hiu terdon env | grep HOME HOME=/home/terdon
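Note that this shows the environment a fresh login shell for that user would get. To inspect the environment of one of the user's already-running processes instead (1234 is a placeholder PID):

$ sudo tr '\0' '\n' < /proc/1234/environ | grep '^PATH='

The entries in /proc/<pid>/environ are NUL-separated, hence the tr .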
{ "source": [ "https://unix.stackexchange.com/questions/78061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
78,262
I do a ton of file compression. Most of the stuff I am compressing is just code, so I need to use lossless compression. I wondered if there was anything that offers a better size reduction than 7zip. It doesn't matter how long it takes to compress or decompress; size is all that matters. Does anyone know how the various tools and compression algorithms available in Linux compare for compressing text? Or is 7zip the best for compressing source code?
7zip is more a compactor (like PKZIP) than a compressor. It's available for Linux, but it can only create compressed archives in regular files; it cannot compress a stream, for instance. It's not able to store most Unix file attributes like ownership, ACLs, extended attributes, hard links... On Linux, as a compressor , you've got xz , which uses the same compression algorithm as 7zip (LZMA2). You can use it to compress tar archives. As with gzip and bzip2 , there's a parallel variant, pixz , that can leverage several processors to speed up the compression ( xz can also do it natively since version 5.2.0 with the -T option). The pixz variant also supports indexing a compressed tar archive, which means it's able to extract a single file without having to uncompress the archive from the start. Footnote Compact is archive+compress (possibly with indexing, possibly members compressed separately); archiving doesn't imply compression. It is not a DOS thing, but possibly it was a French thing. Googling usenet archives, I seem to only come across articles of mine, so it could well have been my invention, though I strongly believe it's not.
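For example, a typical pipeline (directory name illustrative; -T0 requires xz 5.2.0 or later):

$ tar cf - project/ | xz -9e -T0 > project.tar.xz   # compress with all CPUs
$ xz -dc project.tar.xz | tar xf -                  # decompress

Because tar is what walks over the files and xz only sees a stream, the Unix file attributes are preserved by tar itself.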
{ "source": [ "https://unix.stackexchange.com/questions/78262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40513/" ] }
78,268
When my SSH session times out, the whole terminal freezes. Is there any way to break out of that connection? CTRL + C doesn't cut it.
You need to send the ssh escape sequence, which by default is ~ at the beginning of a line (in other words, preceded by a newline, or enter). Then send the disconnect character, which is . . E.g.: host1> ssh host2 Last login: Tue Jun 4 21:56:26 2013 from host1 host2> (host2 freezes, press enter ~ . ) host2> Connection to host2 closed. host1> See the ssh(1) manual: -e escape_char Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. The escape character followed by a dot (‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any escapes and makes the session fully transparent.
{ "source": [ "https://unix.stackexchange.com/questions/78268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40519/" ] }
78,295
I have Ubuntu 12.10, on which I installed VirtualBox, in which I set up CentOS 6.4 i386. On Ubuntu I have the 3.5.0-17-generic kernel running. In VirtualBox I installed CentOS 6.4 and everything works fine; however, when I boot into the VM, there is only a loopback interface. Nothing else. I tried to manually add an ifcfg-eth0 file but nothing happens. In the VM settings I have added NAT and Host-Only interfaces, but whenever I boot into CentOS there is just the lo interface. What should I do?
You need to enable networking during installation. You can do it post install as well. This is a change in the CentOS installation in the 6.x versions which throws a lot of people off, including myself. Starting with CentOS 6.x you can either opt to have NetworkManager manage your network setups or still do it the old-fashioned way and set them up manually using /etc/sysconfig/network-scripts/ifcfg-eth0 type files. Given this is a VM, I'd recommend using NetworkManager . To invoke the connection editor tool do the following: $ nm-connection-editor You'll be presented with a GUI (screenshots omitted) in which you need to edit the eth0 connection and set it to connect automatically. Without a GUI If you're attempting to do this on a headless server you can edit the following file and reboot to get your eth0 networking device to come up enabled and acquire an IP address from a DHCP server. Change these 2 options in the file /etc/sysconfig/network-scripts/ifcfg-eth0 : ONBOOT="yes" NM_CONTROLLED="no" References Set Static IP in CentOS 6.4?
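To apply the change without a reboot (standard CentOS 6 commands):

# service network restart   # or just: ifup eth0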
{ "source": [ "https://unix.stackexchange.com/questions/78295", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40538/" ] }
78,338
After downloading a file that has an md5 checksum available, I currently check it with md5 *file* | grep *given_checksum* e.g. md5 file.zip | grep -i B4460802B5853B7BB257FBF071EE4AE2 but it seemed funny to me to require grep and the pipe for what is surely a very common task. A stickler for doing things efficiently, I wondered if there is a better way of doing this.
md5sum has a -c option to check an existing set of sums, and its exit status indicates success/failure. Example: $ echo "ff9f75d4e7bda792fca1f30fc03a5303  package.deb" | md5sum -c - package.deb: OK
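To generate and later verify a whole set of sums (file names illustrative):

$ md5sum *.deb > checksums.md5
$ md5sum -c checksums.md5
package.deb: OK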
{ "source": [ "https://unix.stackexchange.com/questions/78338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5026/" ] }
78,351
I installed i3wm and X using the tutorials in the Arch wiki for a new Arch installation. I have Intel video, so I installed the xf86-video-intel package (which takes care of the driver). But when I run: startx I'm taken to i3 (which I've never used before), but the mouse won't move (I'm running Ubuntu dual boot on this machine and mouse works great there) and when I try to create a new tile ( alt-enter ), the cursor turns into what looks like a clock surrounded by a frame of sorts, and doesn't recover. When I try to exit ( alt-shift-e ), I'm presented with a tab asking me to be sure, which I have to click with my mouse, which isn't moving, so I can't exit without shutting down (i.e., pressing the i/o power button on the laptop). Here is a log from startup. I pressed new window a couple of times and then shut the computer down: 06/05/13 01:08:09 - i3 4.5.1 (2013-03-18, branch "tags/4.5.1") starting 06/05/13 01:08:09 - Parsing configfile /home/tjb1982/.i3/config deciding for version 4 due to this line: # i3 config file (v4) 06/05/13 01:08:09 - [libi3] libi3/font.c Using X font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1 06/05/13 01:08:09 - Used number 1 for workspace with name 1 06/05/13 01:08:09 - auto-starting i3-config-wizard 06/05/13 01:08:09 - startup id = i3/i3-config-wizard/723-0-arch_TIME0 06/05/13 01:08:09 - executing: i3-config-wizard 06/05/13 01:08:09 - Starting bar process: i3bar --bar_id=bar-hjnjco --socket="/run/user/1000/i3/ipc-socket.723" 06/05/13 01:08:09 - executing: i3bar --bar_id=bar-hjnjco --socket="/run/user/1000/i3/ipc-socket.723" 06/05/13 01:08:09 - Not a managed window, ignoring UnmapNotify event The config file "/home/tjb1982/.i3/config" already exists. Exiting. 06/05/13 01:08:09 - IPC: looking for config for bar ID "bar-hjnjco" 06/05/13 01:08:09 - workspace visible? fs = 0x1bdb0f0, ws = 0x1bdb0f0 06/05/13 01:08:09 - WM_CLASS changed to i3bar (instance), i3bar (class) 06/05/13 01:08:09 - WM_NAME changed to "i3bar for output LVDS1" 06/05/13 01:08:09 - Using legacy window title. 
Note that in order to get Unicode window titles in i3, the application has to set _NET_WM_NAME (UTF-8) 06/05/13 01:08:09 - This window is of type dock 06/05/13 01:08:09 - Checking window 0x00a00007 (class i3bar) 06/05/13 01:08:09 - dock status does not match 06/05/13 01:08:09 - Checking window 0x00a00007 (class i3bar) 06/05/13 01:08:09 - dock status matches 06/05/13 01:08:09 - ClientMessage for window 0x0000009e 06/05/13 01:08:26 - startup id = i3/i3-sensible-terminal/723-1-arch_TIME366852 06/05/13 01:08:26 - executing: i3-sensible-terminal 06/05/13 01:08:26 - Not a managed window, ignoring UnmapNotify event 06/05/13 01:08:28 - startup id = i3/i3-sensible-terminal/723-2-arch_TIME368252 06/05/13 01:08:28 - executing: i3-sensible-terminal 06/05/13 01:08:28 - Not a managed window, ignoring UnmapNotify event 06/05/13 01:08:30 - startup id = i3/i3-sensible-terminal/723-3-arch_TIME370387 06/05/13 01:08:30 - executing: i3-sensible-terminal 06/05/13 01:08:30 - Not a managed window, ignoring UnmapNotify event 06/05/13 01:08:31 - startup id = i3/i3-sensible-terminal/723-4-arch_TIME371331 06/05/13 01:08:31 - executing: i3-sensible-terminal 06/05/13 01:08:31 - Not a managed window, ignoring UnmapNotify event [libi3] libi3/font.c Using X font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1 Contents of ~/.i3/config (replaced # with ; for markdown legibility [which I didn't need to do, but I'm leaving it anyway): ; i3 config file (v4) ; ; Please see http://i3wm.org/docs/userguide.html for a complete reference! ; ; This config file uses keycodes (bindsym) and was written for the QWERTY ; layout. ; ; To get a config file with the same key positions, but for your current ; layout, use the i3-config-wizard ; ; Font for window titles. Will also be used by the bar unless a different font ; is used in the bar {} block below. ISO 10646 = Unicode font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1 ; The font above is very space-efficient, that is, it looks good, sharp and ; clear in small sizes. However, if you need a lot of unicode glyphs or ; right-to-left text rendering, you should instead use pango for rendering and ; chose a FreeType font, such as: ; font pango:DejaVu Sans Mono 10 ; use Mouse+Mod1 to drag floating windows to their wanted position floating_modifier Mod1 ; start a terminal bindsym Mod1+Return exec i3-sensible-terminal ; kill focused window bindsym Mod1+Shift+q kill ; start dmenu (a program launcher) bindsym Mod1+d exec dmenu_run ; There also is the (new) i3-dmenu-desktop which only displays applications ; shipping a .desktop file. It is a wrapper around dmenu, so you need that ; installed. 
; bindsym Mod1+d exec --no-startup-id i3-dmenu-desktop ; change focus bindsym Mod1+j focus left bindsym Mod1+k focus down bindsym Mod1+l focus up bindsym Mod1+semicolon focus right ; alternatively, you can use the cursor keys: bindsym Mod1+Left focus left bindsym Mod1+Down focus down bindsym Mod1+Up focus up bindsym Mod1+Right focus right ; move focused window bindsym Mod1+Shift+j move left bindsym Mod1+Shift+k move down bindsym Mod1+Shift+l move up bindsym Mod1+Shift+semicolon move right ; alternatively, you can use the cursor keys: bindsym Mod1+Shift+Left move left bindsym Mod1+Shift+Down move down bindsym Mod1+Shift+Up move up bindsym Mod1+Shift+Right move right ; split in horizontal orientation bindsym Mod1+h split h ; split in vertical orientation bindsym Mod1+v split v ; enter fullscreen mode for the focused container bindsym Mod1+f fullscreen ; change container layout (stacked, tabbed, toggle split) bindsym Mod1+s layout stacking bindsym Mod1+w layout tabbed bindsym Mod1+e layout toggle split ; toggle tiling / floating bindsym Mod1+Shift+space floating toggle ; change focus between tiling / floating windows bindsym Mod1+space focus mode_toggle ; focus the parent container bindsym Mod1+a focus parent ; focus the child container ;bindsym Mod1+d focus child ; switch to workspace bindsym Mod1+1 workspace 1 bindsym Mod1+2 workspace 2 bindsym Mod1+3 workspace 3 bindsym Mod1+4 workspace 4 bindsym Mod1+5 workspace 5 bindsym Mod1+6 workspace 6 bindsym Mod1+7 workspace 7 bindsym Mod1+8 workspace 8 bindsym Mod1+9 workspace 9 bindsym Mod1+0 workspace 10 ; move focused container to workspace bindsym Mod1+Shift+1 move container to workspace 1 bindsym Mod1+Shift+2 move container to workspace 2 bindsym Mod1+Shift+3 move container to workspace 3 bindsym Mod1+Shift+4 move container to workspace 4 bindsym Mod1+Shift+5 move container to workspace 5 bindsym Mod1+Shift+6 move container to workspace 6 bindsym Mod1+Shift+7 move container to workspace 7 bindsym Mod1+Shift+8 move container to workspace 8 bindsym Mod1+Shift+9 move container to workspace 9 bindsym Mod1+Shift+0 move container to workspace 10 ; reload the configuration file bindsym Mod1+Shift+c reload ; restart i3 inplace (preserves your layout/session, can be used to upgrade i3) bindsym Mod1+Shift+r restart ; exit i3 (logs you out of your X session) bindsym Mod1+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'" ; resize window (you can also use the mouse for that) mode "resize" { ; These bindings trigger as soon as you enter the resize mode ; Pressing left will shrink the window’s width. ; Pressing right will grow the window’s width. ; Pressing up will shrink the window’s height. ; Pressing down will grow the window’s height. 
bindsym j resize shrink width 10 px or 10 ppt bindsym k resize grow height 10 px or 10 ppt bindsym l resize shrink height 10 px or 10 ppt bindsym semicolon resize grow width 10 px or 10 ppt ; same bindings, but for the arrow keys bindsym Left resize shrink width 10 px or 10 ppt bindsym Down resize grow height 10 px or 10 ppt bindsym Up resize shrink height 10 px or 10 ppt bindsym Right resize grow width 10 px or 10 ppt ; back to normal: Enter or Escape bindsym Return mode "default" bindsym Escape mode "default" } bindsym Mod1+r mode "resize" ; Start i3bar to display a workspace bar (plus the system information i3status ; finds out, if available) bar { status_command i3status } ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; automatically start i3-config-wizard to offer the user to create a ; keysym-based config which used his favorite modifier (alt or windows) ; ; i3-config-wizard will not launch if there already is a config file ; in ~/.i3/config. ; ; Please remove the following exec line: ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; exec i3-config-wizard
{ "source": [ "https://unix.stackexchange.com/questions/78351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40407/" ] }
78,376
I know how to delete all txt files in the current directory with rm *.txt . Does anyone know how to delete all files in the current directory EXCEPT txt files?
You can use find : find . -type f ! -name '*.txt' -delete Or bash's extended globbing features: shopt -s extglob rm !(*.txt) (note that this pattern also matches directories, which rm will refuse to remove without -r , usually what you want) Or in zsh: setopt extendedglob rm *~*.txt(.) (here *~*.txt means "everything except names matching *.txt ", and the (.) glob qualifier restricts the match to plain files)
{ "source": [ "https://unix.stackexchange.com/questions/78376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26591/" ] }
78,408
I don't do terribly much shell scripting, so I was a little surprised when I was reading the documentation for git submodule and I saw the syntax they used in this documentation: A non-zero return from the command in any submodule causes the processing to terminate. This can be overridden by adding || : to the end of the command. I had to look up that || : was a shorthand for forcing a command to exit successfully. Anytime I've had to make a command exit successfully, I used || true . Is || : considered to be more idiomatic?
true was not built into the Bourne shell. : always was (it was the way to enter comments before # was introduced). That, and the fact that it's shorter to type, is probably the main reason people prefer : over true . Note another difference in POSIX shells (for bash , only in POSIX mode): while true is a regular builtin ( true doesn't even have to be builtin), : is a special builtin. That has a few implications, most of which are unlikely to have any impact in this particular case: If a : command fails, including because of a failed redirection, that causes the shell to exit. In practice, that probably won't make a difference unless you pass redirections to : $ sh -c ': > / ; echo HERE' sh: 1: cannot create /: Is a directory $ sh -c 'true > /; echo HERE' sh: 1: cannot create /: Is a directory HERE In var=value : , var remains set to value after : returns; not so in the case of true : $ var=1; var=2 : ; echo "$var" 2 $ var=1; var=2 true; echo "$var" 1 Also note that || true works in shells of the rc and csh families but not || : (though not as a way to cancel set -e in csh ). || : is not the same as : . It means or run : otherwise (that is, if the preceding pipeline fails). After set -e , a plain false would cause the shell to exit because of set -e (aka the errexit option), as false has a non-zero (failure) exit status. The set -e effect is cancelled if the command that returns a non-zero exit status is used as a condition, as in: if false; then ... while false; do ... false && : ... false || : ... false && : only cancels set -e . false || : cancels the effect of set -e and sets the exit status to 0, so it is the more idiomatic way to say that we want to ignore a failure exit code of the command. Most would argue that || true is more legible (it conveys the intention more clearly).
{ "source": [ "https://unix.stackexchange.com/questions/78408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40578/" ] }
78,512
I need to umount something in my script, but sometimes it unmounts before all of the data has finished being copied, which causes the umount to fail. I looked for a way to do a "blocking" umount, but I didn't find anything. So, I tried to write a script to loop until it could be unmounted, but it doesn't work. while [ `sudo umount mount` ] do sleep 0.1 done rmdir mount When run it outputs: umount: /home/evantandersen/mount: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) rmdir: failed to remove `mount': Device or resource busy Shouldn't it loop until the return value of sudo umount mount is 0, meaning it was successfully umounted?
The [ command is to evaluate conditional expressions. It's of no use here. Because umount doesn't output anything on its standard output (the errors go to stderr), `sudo umount mount` expands to nothing. So it's like: while [ ] do sleep 0.1 done The [ command, when not passed any argument beside [ and ] returns false (a non-zero exit status), so you will not enter the loop. Even if umount had output its errors on stdout, using the [ command would not have made sense, because the words resulting of that output would never have made up a valid conditional expression. Here you want: until sudo umount mount do sleep 0.1 done That is, you want to check the exit status of sudo/umount, not of a [ command. If you wanted to check if umount output any error or warning on its stderr, that's where the [ could have been useful. The -n "some-string" is a conditional expression recognised by the [ command to test whether "some-string" is empty or not, so something like: while [ -n "$(sudo umount mount 2>&1 > /dev/null)" ]; do sleep 0.1 done But looking for the presence of error or warning messages is generally a bad idea. The umount command tells us whether or not it succeeds with its exit code, that's much more reliable. It could succeed and still output some warning message. It could fail and not output an error (like when it's killed). In this particular case, note that umount might fail because the directory is not mounted, and you would loop forever in that case, so you could try another approach like: while mountpoint -q mount && ! sudo umount mount; do sleep 0.1 done Or if "mount" may be mounted several times and you want to unmount them all: while mountpoint -q mount; do sudo umount mount || sleep 0.1 done
{ "source": [ "https://unix.stackexchange.com/questions/78512", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40645/" ] }
78,583
How can I ask ps to display only user processes and not kernel threads? See this question to see what I mean...
This should do (under Linux): ps --ppid 2 -p 2 --deselect kthreadd (PID 2) has PPID 0 (on Linux 2.6+), but ps does not allow filtering for PPID 0, hence this workaround.
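You can add an output format to inspect what is left (a quick illustration):

$ ps --ppid 2 -p 2 --deselect -o pid,user,comm

--ppid 2 drops all children of kthreadd (i.e. every kernel thread) and -p 2 drops kthreadd itself, so only userland processes remain.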
{ "source": [ "https://unix.stackexchange.com/questions/78583", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30196/" ] }
78,625
I have a file with the following contents: <username><![CDATA[name]]></username> <password><![CDATA[password]]></password> <dbname><![CDATA[name]]></dbname> and I need to make a script that changes the "name" in the first line to "something", the "password" on the second line to "somethingelse", and the "name" in the third line to "somethingdifferent". I can't rely on the order of these occurring in the file, so I can't simply replace the first occurrence of "name" with "something" and the second occurrence of "name" with "somethingdifferent". I actually need to do a search for the surrounding strings to make sure I'm finding and replacing the correct thing. So far I have tried this command to find and replace the first "name" occurrence: sed -i "s/<username><![CDATA[name]]><\/username>/something/g" file.xml however it's not working so I'm thinking some of these characters might need escaping, etc. Ideally, I'd love to be able to use regex to just match the two "username" occurrences and replace only the "name". Something like this but with sed : <username>.+?(name).+?</username> and replace the contents in the brackets with "something". Is this possible?
sed -i -E "s/(<username>.+)name(.+<\/username>)/\1something\2/" file.xml This is, I think, what you're looking for. Explanation: parentheses in the first part define groups (strings in fact) that can be reused in the second part \1 , \2 , etc. in the second part are references to the i-th group captured in the first part (the numbering starts with 1) -E enables extended regular expressions (needed for + and grouping). -i enables "in-place" file edit mode
{ "source": [ "https://unix.stackexchange.com/questions/78625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40721/" ] }
78,631
I am trying to install MySQL-server-5.6.12-1.el6.i686.rpm on a Red Hat Enterprise 6.1 server. I receive the following error: rpm -Uvh MySQL-server-5.6.12-1.el6.i686.rpm error: Failed dependencies: libaio.so.1 is needed by MySQL-server-5.6.12-1.el6.i686 libaio.so.1(LIBAIO_0.1) is needed by MySQL-server-5.6.12-1.el6.i686 libaio.so.1(LIBAIO_0.4) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6 is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.0) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.1) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.1.2) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.1.3) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.10) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.2) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.2.3) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.3) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.3.3) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.3.4) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.4) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.7) is needed by MySQL-server-5.6.12-1.el6.i686 libc.so.6(GLIBC_2.8) is needed by MySQL-server-5.6.12-1.el6.i686 libcrypt.so.1 is needed by MySQL-server-5.6.12-1.el6.i686 libcrypt.so.1(GLIBC_2.0) is needed by MySQL-server-5.6.12-1.el6.i686 libdl.so.2 is needed by MySQL-server-5.6.12-1.el6.i686 libdl.so.2(GLIBC_2.0) is needed by MySQL-server-5.6.12-1.el6.i686 libdl.so.2(GLIBC_2.1) is needed by MySQL-server-5.6.12-1.el6.i686 libgcc_s.so.1 is needed by MySQL-server-5.6.12-1.el6.i686 libgcc_s.so.1(GCC_3.0) is needed by MySQL-server-5.6.12-1.el6.i686 libgcc_s.so.1(GLIBC_2.0) is needed by MySQL-server-5.6.12-1.el6.i686 libm.so.6 is needed by MySQL-server-5.6.12-1.el6.i686 libm.so.6(GLIBC_2.0) is needed by MySQL-server-5.6.12-1.el6.i686 libm.so.6(GLIBC_2.1) is needed by MySQL-server-5.6.12-1.el6.i686 libpthread.so.0 is needed by MySQL-server-5.6.12-1.el6.i686 libpthread.so.0(GLIBC_2.0) is needed by MySQL-server-5.6.12-1.el6.i686 libpthread.so.0(GLIBC_2.1) is needed by MySQL-server-5.6.12-1.el6.i686 libpthread.so.0(GLIBC_2.2) is needed by MySQL-server-5.6.12-1.el6.i686 libpthread.so.0(GLIBC_2.3.2) is needed by MySQL-server-5.6.12-1.el6.i686 librt.so.1 is needed by MySQL-server-5.6.12-1.el6.i686 librt.so.1(GLIBC_2.2) is needed by MySQL-server-5.6.12-1.el6.i686 libstdc++.so.6 is needed by MySQL-server-5.6.12-1.el6.i686 libstdc++.so.6(CXXABI_1.3) is needed by MySQL-server-5.6.12-1.el6.i686 libstdc++.so.6(GLIBCXX_3.4) is needed by MySQL-server-5.6.12-1.el6.i686 libstdc++.so.6(GLIBCXX_3.4.11) is needed by MySQL-server-5.6.12-1.el6.i686 I recognize that these are programs I need to install. My question is where I should look to download these programs, or whether I should just look for an older version of MySQL. EDIT : In the end it was actually a system architecture problem. System architecture should always be kept in mind when installing components in Linux. I was using an i686 rpm on an x86_64 system
sed -i -E "s/(<username>.+)name(.+<\/username>)/\1something\2/" file.xml This is, I think, what you're looking for. Explanation: parentheses in the first part define groups (strings in fact) that can be reused in the second part \1 , \2 , etc. in the second part are references to the i-th group captured in the first part (the numbering starts with 1) -E enables extended regular expressions (needed for + and grouping). -i enables "in-place" file edit mode
{ "source": [ "https://unix.stackexchange.com/questions/78631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40724/" ] }
78,639
How does the touch -t command work exactly, internally? (I tried to find its source code but couldn't.)
touch calls the utimes system call to set the file's modification time and its access time. On some systems, instead of utimes , it opens the file and then sets the file times through the descriptor, e.g. with utimensat under Linux. You can see how touch works on your system by looking at the system calls it makes. Under Linux, use strace , e.g. strace touch -d '1 hour ago' foo . Where to find the source code depends on your operating system. The GNU version is in coreutils , there's a version in the main source tree of any BSD, there's a version in BusyBox , in Minix , etc.
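For reference, the -t argument takes a [[CC]YY]MMDDhhmm[.ss] timestamp, and you can check the result with stat (illustrative; the timezone offset in the output will match your system):

$ touch -t 201306051200.30 foo
$ stat -c '%y' foo
2013-06-05 12:00:30.000000000 +0200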
{ "source": [ "https://unix.stackexchange.com/questions/78639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
78,689
I am trying to open Firefox in CentOS, but I'm getting the following message: Firefox is already running but is not responding and Firefox doesn't open. I tried this in command line: kill Firefox but it didn't work. Also, I don't know in which directory I must execute the right commands. How can I fix this?
First find the process id of firefox using the following command in any directory: pidof firefox Kill the firefox process using the following command in any directory: kill [firefox pid] Then start firefox again. Or you can do the same thing in just one command. As don_crissti said: kill $(pidof firefox)
{ "source": [ "https://unix.stackexchange.com/questions/78689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39485/" ] }
78,734
Why do people fear writing passwords in the command line? The history file is located in ~/.history , so it's available only to the user who executed the commands (and root).
Command lines are not just available in history. They are also available, for example, in the output of ps -o cmd or through the /proc filesystem ( /proc/<pid>/cmdline , which is where ps reads them). Also, users' home directories are often world- or group-readable; you can make the history file only user-readable, but that might not survive deletion and recreation.
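A hypothetical session showing the problem (user, command and password invented for illustration):

$ ps -eo user,args | grep [m]ysql
alice    mysql -u root -psecret mydb

Any local user can run this while alice's command is executing and read the password straight from the argument list.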
{ "source": [ "https://unix.stackexchange.com/questions/78734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4313/" ] }
78,776
I have a text file encoded as follows according to file: ISO-8859 text, with CRLF line terminators This file contains French text with accents. My shell is able to display accents and emacs in console mode is capable of correctly displaying these accents. My problem is that the more , cat and less tools don't display this file correctly. I guess that means that these tools don't support this character encoding. Is this true? Which character encodings are supported by these tools?
Your shell can display accents etc. because it is probably using UTF-8. Since the file in question is in a different encoding, less , more and cat try to read it as UTF-8 and fail. You can check your current encoding with echo $LANG You have two choices: you can either change your default encoding, or change the file to UTF-8. To change your encoding, open a terminal and type export LANG="fr_FR.ISO-8859" For example: $ echo $LANG en_US.UTF-8 $ cat foo.txt J'ai mal � la t�te, c'est chiant! $ export LANG="fr_FR.ISO-8859" $ xterm <-- open a new terminal $ cat foo.txt J'ai mal à la tête, c'est chiant! If you are using gnome-terminal or similar, you may need to activate the encoding in the terminal's menus, for example by right-clicking in terminator , or via the character encoding menu in gnome-terminal (screenshots omitted). Your other (better) option is to change the file's encoding: $ cat foo.txt J'ai mal � la t�te, c'est chiant! $ iconv -f ISO-8859-1 -t UTF-8 foo.txt > bar.txt $ cat bar.txt J'ai mal à la tête, c'est chiant!
{ "source": [ "https://unix.stackexchange.com/questions/78776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20996/" ] }
78,914
for i in $(xrandr); do echo "$i" ; done for i in "$(xrandr)"; do echo "$i"; done for i in "$(xrandr)"; do echo $i; done I understand why 1 differs from 2. But why does 3 give a different output from 2? Please explain the output too. How do quotes work on newlines?
An unquoted variable (as in $var ) or command substitution (as in $(cmd) or `cmd` ) is the split+glob operator in Bourne-like shells. That is, their content is split according to the current value of the $IFS special variable (which by default contains the space, tab and newline characters). Then each word resulting from that splitting is subject to filename generation (also known as globbing or filename expansion ), that is, it is considered as a pattern and expanded to the list of files that match that pattern. So in for i in $(xrandr) , the $(xrandr) , because it's not within quotes, is split on sequences of space, tab and newline characters. And each word resulting from that splitting is checked for matching file names (or left as is if it doesn't match any file), and the for loops over them all. In for i in "$(xrandr)" , we're not using the split+glob operator as the command substitution is quoted, so there's one pass in the loop on one value: the output of xrandr (without the trailing newline characters, which command substitution strips). However in echo $i , $i is unquoted again, so again the content of $i is split and subject to filename generation and those are passed as separate arguments to the echo command (and echo outputs its arguments separated by spaces). So lesson learnt: if you don't want word splitting or filename generation , always quote variable expansions and command substitutions if you do want word splitting or filename generation , leave them unquoted but set $IFS accordingly and/or enable or disable filename generation if needed ( set -f , set +f ). Typically, in your example above, if you want to loop over the blank separated list of words in the output of xrandr , you'd need to: leave $IFS at its default value (or unset it) to split on blanks Use set -f to disable filename generation unless you're sure that xrandr never outputs any * or ? or [ characters (which are wildcards used in filename generation patterns) And then only use the split+glob operator (only leave command substitution or variable expansion unquoted) in the in part of the for loop: set -f; unset -v IFS for i in $(xrandr); do whatever with "$i"; done If you want to loop over the (non-empty) lines of the xrandr output, you'd need to set $IFS to the newline character:
IFS='
'
{ "source": [ "https://unix.stackexchange.com/questions/78914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27330/" ] }
79,050
Is there a way, before starting an aptitude upgrade or apt-get upgrade , to set up something so that you can "easily" roll back your system to the "apt" state it was in before the actual upgrade, if something goes wrong? That is, for example, reinstall the old versions of the packages that were upgraded during the process. (EDIT) A few hints : I know that etckeeper for example uses some hook on apt so that it is notified whenever apt installs or uninstalls a package. I suppose there could be some kind of script that could save the list of newly installed packages and their previous version numbers, to be able to reinstall them from the apt cache ( /var/cache/apt/archives ). There is also checkinstall which can keep track of file modifications... Any details on how to achieve that properly?
I just now had to figure out an answer to this, because the last apt-get upgrade on a Debian server made it impossible to boot the most recent kernel beyond a busybox, failing to mount the zfs root partition. At least an older kernel could still boot, but was incompatible with other software. Thus the need for a rollback. The short answer - you could use the following command: $ apt-get -s install $(apt-history rollback | tr '\n' ' ') If it does what you want, remove the -s and run it again. Here are the steps I took to get this working properly: I temporarily trimmed my /var/log/dpkg.log to leave just today's upgrade I installed the tiny apt-history helper (reproduced below) into ~/.bashrc and ran $ apt-history rollback > rollback.txt ... libzfs2:amd64=0.6.4-4~wheezy zfsutils:amd64=0.6.4-4~wheezy zfs-initramfs:amd64=0.6.4-4~wheezy ... This provides a nicely formatted list of versioned packages to roll back to by feeding it into apt-get install . Trim this list as needed in a text editor and then run (with -s for a dry run first): $ apt-get -s install $(cat rollback.txt | tr '\n' ' ') $ apt-get install $(cat rollback.txt | tr '\n' ' ') Apt will warn about the downgrades, which is expected. To prevent this rollback from being overwritten by the next upgrade, the packages will need to be pinned until the original issue is resolved. For example with: apt-mark hold zfsutils libzfs2 ... For reference, here is the apt-history helper: function apt-history(){ case "$1" in install) cat /var/log/dpkg.log | grep 'install ' ;; upgrade|remove) cat /var/log/dpkg.log | grep "$1" ;; rollback) cat /var/log/dpkg.log | grep upgrade | \ grep "$2" -A10000000 | \ grep "$3" -B10000000 | \ awk '{print $4"="$5}' ;; *) cat /var/log/dpkg.log ;; esac }
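To pin everything you just rolled back in one go, a sketch that assumes rollback.txt still holds the pkg=version lines from above:

cut -d= -f1 rollback.txt | while read -r pkg; do
    apt-mark hold "$pkg"    # keep the downgraded version in place
done
# once the underlying bug is fixed:
# cut -d= -f1 rollback.txt | xargs apt-mark unhold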
{ "source": [ "https://unix.stackexchange.com/questions/79050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30196/" ] }
79,064
I have a tmp.txt file containing variables to be exported, for example: a=123 b="hello world" c="one more variable" How can I export all these variables using the export command, so that they can later be used by child processes?
source tmp.txt export a b c ./child ... Judging by your other question, you don't want to hardcode the variable names: source tmp.txt export $(cut -d= -f1 tmp.txt) test it: $ source tmp.txt $ echo "$a $b $c" 123 hello world one more variable $ perl -E 'say "@ENV{qw(a b c)}"' $ export $(cut -d= -f1 tmp.txt) $ perl -E 'say "@ENV{qw(a b c)}"' 123 hello world one more variable
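If tmp.txt may later grow comments or blank lines, filter for assignment lines before extracting the names (a sketch; the regex only keeps lines that start with a valid variable name followed by = ):

source tmp.txt
export $(grep -E '^[A-Za-z_][A-Za-z0-9_]*=' tmp.txt | cut -d= -f1)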
{ "source": [ "https://unix.stackexchange.com/questions/79064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23710/" ] }
79,068
set command displays all the local variables like below. How do I export these variables all at once? >set a=123 b="asd asd" c="hello world"
Run the following command, before setting the variables: set -a set -o allexport # self-documenting version man page : -a When this option is on, the export attribute shall be set for each variable to which an assignment is performed -o option-name Set the option corresponding to option-name : allexport Same as -a . To turn this option off, run set +a or set +o allexport afterwards. Example: set -a # or: set -o allexport . ./environment set +a Where environment contains: FOO=BAR BAS='quote when using spaces, (, >, $, ; etc'
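If you only need the variables for one child process and don't want allexport (or the variables themselves) lingering in your interactive shell, confine it all to a subshell ( ./child here stands for whatever program needs the environment):

( set -a; . ./environment; exec ./child )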
{ "source": [ "https://unix.stackexchange.com/questions/79068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23710/" ] }
79,112
How do I retrieve the date from the Internet and set my computer's clock, from the command line?
You can use sudo dpkg-reconfigure tzdata to configure your timezone . To update the time and date from the Internet, do the following: Install If ntpd is not installed, use one of the following commands to install it: For RPM based: yum install ntp For Debian based: sudo apt-get install ntp Configuration You should at least set the following parameter in the /etc/ntp.conf config file: server For example, open the /etc/ntp.conf file using the vi text editor: # vi /etc/ntp.conf Locate the server parameter and set it as follows: server pool.ntp.org Save the file and restart the ntpd service: # /etc/init.d/ntpd restart You can synchronize the system clock to an NTP server immediately with the following command: # ntpdate pool.ntp.org To set the time and date manually, use the following syntax: date --set="STRING" For example, to set the new date to 2 Oct 2006 18:00:00, type the following command as the root user: # date -s "2 OCT 2006 18:00:00" OR # date --set="2 OCT 2006 18:00:00" You can also simplify the format using the following syntax: # date +%Y%m%d -s "20081128" To set the time, use the following syntax: # date +%T -s "10:13:13" Where: 10 = hour (hh), 13 = minute (mm), 13 = second (ss). Use %p for the locale's equivalent of either AM or PM: # date +%T%p -s "6:10:30AM" # date +%T%p -s "12:10:30PM"
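To verify that ntpd is actually synchronizing after the restart ( ntpq ships with the ntp package):

# ntpq -p    # the peer marked with '*' is the one currently selected for synchronization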
{ "source": [ "https://unix.stackexchange.com/questions/79112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19718/" ] }
79,135
I have two log files with thousands of lines. After pre-processing, only some lines differ. These remaining lines are either real differences, or shuffled groups of lines. Unified diffs allow me to see the detailed differences, but it makes manual comparison with eyeballs hard. Side-by-side diffs seems more useful for comparison, but it also adds thousands of unchanged lines. Is there a way to get the advantage of both worlds? Note, these log files are generated by xscope which is a program that monitors Xorg protocol data. I am looking for general-purpose tools that can be applied to situations similar to the above, not specialized webserver access log analysis tools for example. Two example log files are available at http://lekensteyn.nl/files/qemu-sdl-debug/ ( log13 and log14 ). A pre-processor command can be found in the xscope-filter file which removes timestamps and other minor details.
The 2 diff tools I use the most would be meld and sdiff . meld Meld is a GUI but does a great job of showing diffs between files. It's geared more toward software development, with features such as the ability to move changes from one side to the other to merge them, but it can be used as just a straight side-by-side diffing tool. sdiff I've used this tool for years. I generally run it with the following switches: $ sdiff -bBWs file1 file2 -b Ignore changes in the amount of white space. -W Ignore all white space. -B Ignore changes whose lines are all blank. -s Do not output common lines. Often with log files you'll need wider columns; you can use -w <num> to widen the output. other tools that I use off and on diffc Diffc is a Python script which colorizes unified diff output. $ diffc [OPTION] FILE1 FILE2 vimdiff Vimdiff is probably as good as, if not better than, meld, and it can be run from a terminal. I always forget to use it though, which, to me, is a good indicator that I find the tool just a little too tough to use day to day. But YMMV.
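For the log-comparison use case in the question, all of these combine nicely with process substitution (a bash/zsh feature), so the pre-processing never has to touch disk. Here, filter is a placeholder for whatever pre-processing command you use, not a real tool:

sdiff -bBWs <(filter log13) <(filter log14)
vimdiff <(filter log13) <(filter log14)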
{ "source": [ "https://unix.stackexchange.com/questions/79135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8250/" ] }
79,269
My host is a freshly installed Ubuntu 2013.04, wireless network access worked out-of-the-box so I don't have any strange network configuration. In VirtualBox 4.2.10, with default (NAT) settings, I installed CentOS 6.4 minimal. Immediately after install, the first thing I did was ping 173.194.38.98 (google) and I a told connect: Network is unreachable . I tried running /etc/init.d/network start as root, no joy. I downloaded a VM image and tried it: exact same problem. When I installed Ubuntu and Windows VMs, they are able to access the Internet without any problem. What's wrong with this one? On the VM: On the host (values never change, except byte counts): eth0 Link encap:Ethernet HWaddr f0:de:f1:c0:ad:b3 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Memory:f3900000-f3920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:32272 errors:0 dropped:0 overruns:0 frame:0 TX packets:32272 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:4263162 (4.2 MB) TX bytes:4263162 (4.2 MB) wlan0 Link encap:Ethernet HWaddr 60:d8:19:c9:42:59 inet addr:192.168.0.67 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::62d8:19ff:fec9:4259/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1221151 errors:0 dropped:0 overruns:0 frame:0 TX packets:845193 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1438957835 (1.4 GB) TX bytes:133904229 (133.9 MB) Note: Similar to this question but switching from NAT to Bridge is not a solution I find acceptable.
To get CentOS networking up under VirtualBox, edit /etc/sysconfig/network-scripts/ifcfg-eth0 so that it contains: DEVICE=eth0 BOOTPROTO=dhcp ONBOOT=yes You might need to reboot.
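Instead of a full reboot, it is usually enough to bounce the interface or the network service:

ifdown eth0; ifup eth0    # or:
service network restart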
{ "source": [ "https://unix.stackexchange.com/questions/79269", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2305/" ] }
79,301
I am trying to write an if statement to test whether there are any files matching a certain pattern. If there is a text file in a directory it should run a given script. My code currently: if [ -f /*.txt ]; then ./script fi Please give some ideas; I only want to run the script if there is a .txt in the directory.
[ -f /*.txt ] would return true only if there's one (and only one) non-hidden file in / whose name ends in .txt and if that file is a regular file or a symlink to a regular file. That's because wildcards are expanded by the shell prior to being passed to the command (here [ ). So if there's a /a.txt and /b.txt , [ will be passed 5 arguments: [ , -f , /a.txt , /b.txt and ] . [ would then complain that -f is given too many arguments. If you want to check that the *.txt pattern expands to at least one non-hidden file (regular or not), in the bash shell: shopt -s nullglob set -- *.txt if [ "$#" -gt 0 ]; then ./script "$@" # call script with that list of files. fi # Or with bash arrays so you can keep the arguments: files=( *.txt ) # apply C-style boolean on member count (( ${#files[@]} )) && ./script "${files[@]}" shopt -s nullglob is bash specific ( shopt is, nullglob actually comes from zsh ), but shells like ksh93 , zsh , yash , tcsh have equivalent statements. With zsh , the test for "are there files matching a pattern" can be written using an anonymous function and the N (for nullglob ) and Y1 (to stop after the first find) glob qualifiers: if ()(($#)) *.txt(NY1); then do-something fi Note that these find the files by reading the contents of the directory; they don't try to access the files themselves at all, which makes this more efficient than solutions that call commands like ls or stat on the list of files computed by the shell. The standard sh equivalent would be: set -- [*].txt *.txt case "$1$2" in ('[*].txt*.txt') ;; (*) shift; script "$@" esac The problem is that with Bourne or POSIX shells, if a pattern doesn't match, it expands to itself. So if *.txt expands to *.txt , you don't know whether it's because there's no .txt file in the directory or because there's one file called *.txt . Using [*].txt *.txt allows you to discriminate between the two. Now, if you want to check that *.txt matches at least one regular file or symlink to a regular file (like your [ -f *.txt ] suggests you want to do), or that all the files that match *.txt are regular files (after symlink resolution), that's yet another matter. With zsh : if ()(($#)) *.txt(NY1-.); then echo "there is at least one regular .txt file" fi if ()(($#)) *.txt(NY1^-.); then echo "there is at least one non-regular .txt file" fi (remove the - if you want to do the test prior to symlink resolution, that is, consider symlinks as non-regular files whether they point to regular files or not).
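A compact idiom along the same lines, for when you only care whether the first match exists. The exists helper here is a sketch, not a standard command; swap -e for -f if you specifically want a regular file:

exists() { [ -e "$1" ]; }
if exists ./*.txt; then
    ./script
fi

This works even without nullglob: if nothing matches, $1 is the literal ./*.txt and -e on it fails (unless a file really is named *.txt , in which case succeeding is correct).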
{ "source": [ "https://unix.stackexchange.com/questions/79301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40952/" ] }
79,306
Recently I installed Mint Linux 15 (Olivia) 32 bit on my friends netbook. I am copy pasting the output of sudo lspci -vk 00:00.0 Host bridge: Intel Corporation Atom Processor D2xxx/N2xxx DRAM Controller (rev 03) Subsystem: Acer Incorporated [ALI] Device 061f Flags: bus master, fast devsel, latency 0 00:02.0 VGA compatible controller: Intel Corporation Atom Processor D2xxx/N2xxx Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller]) Subsystem: Acer Incorporated [ALI] Device 061f Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at 86000000 (32-bit, non-prefetchable) [size=1M] I/O ports at 50d0 [size=8] Expansion ROM at <unassigned> [disabled] Capabilities: [d0] Power Management version 2 Capabilities: [b0] Vendor Specific Information: Len=07 <?> Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit- Kernel driver in use: gma500 So the problem is whenever I try to boot into the system it pops out a notification (not the exact words) Running in software rendering mode. No Hardware acceleration. I have searched the Mint Linux forum and found [this thread] ( http://forums.linuxmint.com/viewtopic.php?f=49&t=135578&p=727654 ), but it did not help much. I am also attaching the output of inxi -Fxz Kernel: 3.8.0-19-generic i686 (32 bit, gcc: 4.7.3) Desktop: Gnome Distro: Linux Mint 15 Olivia Machine: System: Acer product: AOD270 version: V1.06 Mobo: Acer model: JE01_CT Bios: Insyde version: V1.06 date: 03/05/2012 CPU: Dual core Intel Atom CPU N2600 (-HT-MCP-) cache: 512 KB flags: (lm nx sse sse2 sse3 ssse3) bmips: 6383.8 Clock Speeds: 1: 1600.00 MHz 2: 1600.00 MHz 3: 1600.00 MHz 4: 1600.00 MHz Graphics: Card: Intel Atom Processor D2xxx/N2xxx Integrated Graphics Controller bus-ID: 00:02.0 X.Org: 1.13.3 drivers: vesa (unloaded: fbdev) Resolution: [email protected] GLX Renderer: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits) GLX Version: 2.1 Mesa 9.1.1 Direct Rendering: Yes The direct effect of disabled hardware video acceleration is that it is impossible to play video files and since the CPU is engaged with software acceleration, the system is damn too slow.
{ "source": [ "https://unix.stackexchange.com/questions/79306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40320/" ] }
79,334
If you fire up a terminal and call an executable (assuming one that's line oriented for simplicity) you get a reply to the command from the executable. How does this get printed to you (the user)? Does the terminal do something like pexpect ? (poll waiting for output) or what? How does it get notified of output to be printed out? And how does a terminal start a program? (Is it something akin to python's os.fork()? ) I'm puzzled how a terminal works, I've been playing with some terminal emulator and I still don't get how all this magic works. I'm looking at the source of konsole (kde) and yakuake (possibly uses konsole) an I can't get where all that magic happens.
Originally you had just dumb terminals - at first actually teletypewriters (similar to an electric typewriter, but with a roll of paper), hence /dev/tty , short for TeleTYpewriter - but later screen+keyboard combos - which just sent a key-code to the computer, and the computer sent back a command that wrote the letter on the terminal (i.e. the terminal was without local echo; the computer had to order the terminal to write what the user typed on it) - this is one of the reasons why so many important Unix commands are so short. Most terminals were connected by serial lines, but (at least) one was directly connected to the computer (often in the same room) - this was the console. Only a select few users were trusted to work on "the console" (this was often the only "terminal" available in single-user mode). Later there were also some graphical terminals (so-called "xterminals", not to be confused with the xterm program) with screen & graphical screen-card, keyboard, mouse and a simple processor, which could just run an X server. They did not do any computations themselves, so the X clients ran on the computer they were connected to. Some had hard disks, but they could also boot over the network. They were popular in the early 1990s, before PCs became so cheap and powerful. Later still, there were "smart" or "intelligent" terminals. Smart terminals have the ability to process user input (line-editing at the shell prompt like inserting characters, removing words with Ctrl-W , removing letters with Ctrl-H or Backspace ) without help from the computer. The earlier dumb terminals, on the other hand, could not perform such onsite line-editing. On a dumb terminal, when the user presses a key, the terminal sends/delegates the resulting key-code to the computer to handle. After handling it, the computer sends the result back to the dumb terminal to display (e.g. pressing Ctrl-W would send a key-code to the computer, the computer would interpret that to mean "delete the last word", so the computer would handle that text change, then simply give the dumb terminal the output it should display). A "terminal emulator" – the "terminal window" you open with programs such as xterm or konsole – tries to mimic the functionality of such smarter terminals. Also programs such as PuTTY (Windows) emulate these smart terminal emulators. With the PC, where "the console" (keyboard+screen) and "the computer" are more of a single unit, you got "virtual terminals" (on Linux, keys Alt+F1 through Alt+F6) instead, but these too mimic old-style terminals. Of course, with Unix/Linux becoming more of a desktop operating system often used by a single user, you now do most of your work "at the console", where users before used terminals connected by serial lines. It's of course the shell that starts programs. And it uses the fork system call to make a copy of itself (including all environment settings), then the exec system call is used to turn this copy into the command you wanted to run. The shell suspends (unless the command is run in the background) until the command completes. As the command inherits the settings for stdin , stdout and stderr from the shell, the command will write to the terminal's screen and receive input from the terminal's keyboard.
{ "source": [ "https://unix.stackexchange.com/questions/79334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29540/" ] }
79,343
I would like to write a bash script with unknown amount of arguments. How can I walk through these arguments and do something with them? A wrong attempt would look like this: #!/bin/bash for i in $args; do echo $i done
There's a special syntax for this: for i do printf '%s\n' "$i" done More generally, the list of parameters of the current script or function is available through the special variable $@ . for i in "$@"; do printf '%s\n' "$i" done Note that you need the double quotes around $@ , otherwise the parameters undergo wildcard expansion and field splitting. "$@" is magic: despite the double quotes, it expands into as many fields as there are parameters. print_arguments () { for i in "$@"; do printf '%s\n' "$i"; done } print_arguments 'hello world' '*' 'special !\characters' '-n' # prints 4 lines print_arguments '' # prints one empty line print_arguments # prints nothing
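As a runnable script, saved as print_args and made executable with chmod +x print_args :

#!/bin/sh
for arg do
    printf '%s\n' "$arg"
done

Calling ./print_args 'hello world' '*' prints hello world and * on separate lines; the quotes at the call site keep the calling shell from splitting or expanding the arguments before the script ever sees them.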
{ "source": [ "https://unix.stackexchange.com/questions/79343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
79,371
I made an alias of the date command to display date in the following format: 2013.06.14.12.10.02 using this command: alias date = date +"%Y.%m.%d.%H.%M.%S" Everything works great, except I want to remove the leading zeroes from the output. There is no way to make it happen by changing the format. I think it can be done only by piping the output to other commands like sed and awk . The OS I am running is Ubuntu 12.04.2 LTS.
As per the GNU date manpage: By default, date pads numeric fields with zeroes. The following optional flags may follow '%': - (hyphen) do not pad the field Therefore you can do alias date="date '+%Y.%-m.%-d.%-H.%-M.%-S'" and receive 2013.6.14.3.19.31
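You can check the effect on a fixed timestamp ( -d is a GNU date extension, so it's fine on Ubuntu):

$ date -d '2013-06-05 07:08:09' '+%Y.%m.%d.%H.%M.%S'
2013.06.05.07.08.09
$ date -d '2013-06-05 07:08:09' '+%Y.%-m.%-d.%-H.%-M.%-S'
2013.6.5.7.8.9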
{ "source": [ "https://unix.stackexchange.com/questions/79371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41056/" ] }
79,395
SUID The sticky bit applied to executable programs flags the system to keep an image of the program in memory after the program has finished running. But I don't know what exactly is stored in memory, or how I can see it, in this case.
This is probably one of the things that irks me the most, because people mess it up all the time. The SUID/GUID bit and the sticky bit are 2 completely different things. If you do a man chmod you can read about the SUID and sticky bits. The man page is available here as well. background excerpt The letters rwxXst select file mode bits for the affected users: read (r), write (w), execute (or search for directories) (x), execute/search only if the file is a directory or already has execute permission for some user (X), set user or group ID on execution (s) , restricted deletion flag or sticky bit (t) . SUID/GUID What the above man page is trying to say is that the position that the x bit takes in the rwxrwxrwx for the user octal (1st group of rwx) and the group octal (2nd group of rwx) can take an additional state where the x becomes an s. When this occurs, the file, when executed (if it's a program and not just a shell script), will run with the permissions of the owner or the group of the file. So if the file is owned by root and the SUID bit is turned on, the program will run as root. Even if you execute it as a regular user. The same thing applies to the GUID bit. excerpt SETUID AND SETGID BITS chmod clears the set-group-ID bit of a regular file if the file's group ID does not match the user's effective group ID or one of the user's supplementary group IDs, unless the user has appropriate privileges. Additional restrictions may cause the set-user-ID and set-group-ID bits of MODE or RFILE to be ignored. This behavior depends on the policy and functionality of the underlying chmod system call. When in doubt, check the underlying system behavior. chmod preserves a directory's set-user-ID and set-group-ID bits unless you explicitly specify otherwise. You can set or clear the bits with symbolic modes like u+s and g-s, and you can set (but not clear) the bits with a numeric mode. SUID/GUID examples no suid/guid - just the bits rwxr-xr-x are set. $ ls -lt b.pl -rwxr-xr-x 1 root root 179 Jan 9 01:01 b.pl suid & user's executable bit enabled (lowercase s) - the bits rwsr-xr-x are set. $ chmod u+s b.pl $ ls -lt b.pl -rwsr-xr-x 1 root root 179 Jan 9 01:01 b.pl suid enabled & executable bit disabled (uppercase S) - the bits rwSr-xr-x are set. $ chmod u-x b.pl $ ls -lt b.pl -rwSr-xr-x 1 root root 179 Jan 9 01:01 b.pl guid & group's executable bit enabled (lowercase s) - the bits rwxr-sr-x are set. $ chmod g+s b.pl $ ls -lt b.pl -rwxr-sr-x 1 root root 179 Jan 9 01:01 b.pl guid enabled & executable bit disabled (uppercase S) - the bits rwxr-Sr-x are set. $ chmod g-x b.pl $ ls -lt b.pl -rwxr-Sr-x 1 root root 179 Jan 9 01:01 b.pl sticky bit The sticky bit, on the other hand, is denoted as t , such as with the /tmp directory: $ ls -l /|grep tmp drwxrwxrwt. 168 root root 28672 Jun 14 08:36 tmp This bit should have always been called the "restricted deletion bit" given that's what it really connotes. When this mode bit is enabled, it makes a directory such that users can only delete files & directories within it that they are the owners of. excerpt RESTRICTED DELETION FLAG OR STICKY BIT The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp.
For regular files on some older systems, the bit saves the program's text image on the swap device so it will load more quickly when run; this is called the sticky bit.
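To audit where these bits are actually set on a system:

find / -xdev -perm -4000 -type f 2>/dev/null    # setuid executables
find / -xdev -perm -2000 -type f 2>/dev/null    # setgid executables
find / -xdev -perm -1000 -type d 2>/dev/null    # sticky directories (e.g. /tmp)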
{ "source": [ "https://unix.stackexchange.com/questions/79395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40333/" ] }
79,449
I have a remote host, previously I can ssh into it with root and password ssh root@remote_host or I ssh into it first using a regular user account ssh esolve@remote_host and then su root and input password but today, in both ways, my password is always incorrect, like [esolve@local esolve]$ ssh root@remote_host root@remote_host's password: Permission denied, please try again. or [esolve@remote_host ~]$ su root Password: su: incorrect password related information: I can use this password to log in with root if I operated locally on that remote machine today. I ssh into is via VPN Why did this happen? How can I solve this problem?
Do you have ssh as root disabled? Check your sshd configuration (possibly /etc/ssh/sshd_config ) and look for the line PermitRootLogin no . Change the no to yes and restart sshd (most likely either service ssh restart or service sshd restart ). Some distributions (e.g., Ubuntu) default to without-password for PermitRootLogin such that root login is allowed via public key authentication, but not with a password.
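A non-interactive sketch of the same change (the config path and service name vary slightly between distributions, so treat these as assumptions to verify):

sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service sshd restart    # or: sudo service ssh restart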
{ "source": [ "https://unix.stackexchange.com/questions/79449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38026/" ] }
79,571
I wrote a little bash script to see what happens when I keep following a symbolic link that points to the same directory. I was expecting it to either make a very long working directory, or to crash. But the result surprised me... mkdir a cd a ln -s ./. a for i in `seq 1 1000` do cd a pwd done Some of the output is ${HOME}/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a ${HOME}/a ${HOME}/a/a ${HOME}/a/a/a ${HOME}/a/a/a/a ${HOME}/a/a/a/a/a ${HOME}/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a ${HOME}/a/a/a/a/a/a/a/a what is happening here?
Patrice identified the source of the problem in his answer, but if you want to know how to get from there to why you get that, here's the long story. The current working directory of a process is nothing as complicated as you might think. It is an attribute of the process which is a handle to a file of type directory where relative paths (in system calls made by the process) start from. When resolving a relative path, the kernel doesn't need to know the full path to that current directory; it just reads the directory entries in that directory file to find the first component of the relative path (and .. is like any other file in that regard) and continues from there. Now, as a user, you sometimes like to know where that directory lies in the directory tree. With most Unices, the directory tree is a tree, with no loop. That is, there's only one path from the root of the tree ( / ) to any given file. That path is generally called the canonical path. To get the path of the current working directory, what a process has to do is just walk up (well, down if you like to see a tree with its root at the bottom) the tree back to the root, finding the names of the nodes on the way. For instance, a process trying to find out that its current directory is /a/b/c would open the .. directory (relative path, so .. is the entry in the current directory) and look for a file of type directory with the same inode number as . , find out that c matches, then open ../.. and so on until it finds / . There's no ambiguity there. That's what the getwd() or getcwd() C functions do, or at least used to do. On some systems like modern Linux, there's a system call to return the canonical path to the current directory which does that lookup in kernel space (and allows you to find your current directory even if you don't have read access to all its components), and that's what getcwd() calls there. On modern Linux, you can also find the path to the current directory via a readlink() on /proc/self/cwd . That's what most languages and early shells do when returning the path to the current directory. In your case, you can call cd a as many times as you want; because it's a symlink to . , the current directory doesn't change, so all of getcwd() , pwd -P , python -c 'import os; print os.getcwd()' , perl -MPOSIX -le 'print getcwd' would return your ${HOME} . Now, symlinks complicate all that. Symlinks allow jumps in the directory tree. In /a/b/c , if /a or /a/b or /a/b/c is a symlink, then the canonical path of /a/b/c would be something completely different. In particular, the .. entry in /a/b/c is not necessarily /a/b . In the Bourne shell, if you do: cd /a/b/c cd .. Or even: cd /a/b/c/.. There's no guarantee you'll end up in /a/b . Just like: vi /a/b/c/../d is not necessarily the same as: vi /a/b/d ksh introduced a concept of a logical current working directory to somehow work around that. People got used to it and POSIX ended up specifying that behaviour, which means most shells nowadays do it as well: For the cd and pwd builtin commands (and only for them, though also for popd / pushd in shells that have them), the shell maintains its own idea of the current working directory. It's stored in the $PWD special variable. When you do: cd c/d even if c or c/d are symlinks, while $PWD contains /a/b , it appends c/d to the end, so $PWD becomes /a/b/c/d . And when you do: cd ../e instead of doing chdir("../e") , it does chdir("/a/b/c/e") . And the pwd command only returns the content of the $PWD variable.
That's useful in interactive shells because pwd outputs a path to the current directory that gives information on how you got there, and as long as you only use .. in arguments to cd and not to other commands, it's less likely to surprise you, because cd a; cd .. or cd a/.. would generally get you back to where you were. Now, $PWD is not modified unless you do a cd . Until the next time you call cd or pwd , a lot of things could happen; any of the components of $PWD could be renamed. The current directory never changes (it's always the same inode, though it could be deleted), but its path in the directory tree could change completely. getcwd() computes the current directory each time it's called by walking down the directory tree, so its information is always accurate, but for the logical directory implemented by POSIX shells, the information in $PWD might become stale. So upon running cd or pwd , some shells may want to guard against that. In that particular instance, you see different behaviours with different shells. Some, like ksh93 , ignore the problem completely and so will return incorrect information even after you call cd (and you wouldn't see the behaviour that you're seeing with bash there). Some, like bash or zsh , do check that $PWD is still a path to the current directory upon cd , but not upon pwd . pdksh does check upon both pwd and cd (but upon pwd , does not update $PWD ). ash (at least the one found on Debian) does not check, and when you do cd a , it actually does cd "$PWD/a" , so if the current directory has changed and $PWD no longer points to the current directory, it will actually not change to the a directory in the current directory, but to the one in $PWD (and return an error if it doesn't exist). If you want to play with it, you can do: cd mkdir -p a/b cd a pwd mv ~/a ~/b pwd echo "$PWD" cd b pwd; echo "$PWD"; pwd -P # (and notice the bug in ksh93) in various shells. In your case, since you're using bash , after a cd a , bash checks that $PWD still points to the current directory. To do that, it calls stat() on the value of $PWD to check its inode number and compare it with that of . . But when looking up the $PWD path involves resolving too many symlinks, that stat() returns with an error, so the shell cannot check whether $PWD still corresponds to the current directory, and it computes it again with getcwd() and updates $PWD accordingly. Now, to clarify Patrice's answer, that check on the number of symlinks encountered while looking up a path is there to guard against symlink loops. The simplest loop can be made with: rm -f a b ln -s a b ln -s b a Without that safeguard, upon a cd a/x , the system would have to find where a links to, find that it's b , which is a symlink that links back to a , and that would go on indefinitely. The simplest way to guard against that is to give up after resolving more than an arbitrary number of symlinks. Now back to the logical current working directory and why it's not so good a feature. It's important to realise that it applies only to cd in the shell and not to other commands. For instance: cd -- "$dir" && vi -- "$file" is not always the same as: vi -- "$dir/$file" That's why you'll sometimes find that people recommend always using cd -P in scripts to avoid confusion (you don't want your software to handle an argument of ../x differently from other commands just because it's written in shell instead of another language).
The -P option disables the logical directory handling, so cd -P -- "$var" actually does call chdir() on the content of $var (at least as long as $CDPATH is not set, and except when $var is - (or possibly -2 , +3 ... in some shells), but that's another story). And after a cd -P , $PWD will contain a canonical path.
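A quick way to watch the logical/physical split in bash:

mkdir -p /tmp/d1/d2
ln -sfn /tmp/d1/d2 /tmp/link
cd /tmp/link
pwd        # /tmp/link    (logical, read from $PWD)
pwd -P     # /tmp/d1/d2   (physical, computed via getcwd())
cd ..      # logical "..": strips the last component of $PWD
pwd        # /tmp         (cd -P .. would have landed you in /tmp/d1)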
{ "source": [ "https://unix.stackexchange.com/questions/79571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6723/" ] }
79,648
I am using Ubuntu 12.04.2. I am trying to use "trap" command to capture abnormal or error in my shell script but I am also trying to manually trigger "Error" exit. I have tried exit 1, but it won't trigger "Error" signal. #!/bin/bash func() { exit 1 } trap "echo hi" INT TERM ERR func Not sure how to manually trigger "Error" exit signal?
The ERR trap is not to run code when the shell itself exits with a non-zero error code, but when any command run by that shell that is not part of a condition (like in if cmd... , or cmd || ... ) exits with a non-zero exit status (the same conditions as what causes set -e to exit the shell). If you want to run code upon exit of the shell with a non-zero exit status, you should add a trap on EXIT instead and check $? there: trap '[ "$?" -eq 0 ] || echo hi' EXIT Note however that upon a trapped signal, both the signal trap and the EXIT trap would be run, so you may want to do it like: unset killed_by trap 'killed_by=INT;exit' INT trap 'killed_by=TERM;exit' TERM trap ' ret=$? if [ -n "$killed_by" ]; then echo >&2 "Ouch! Killed by $killed_by" exit 1 elif [ "$ret" -ne 0 ]; then echo >&2 "Died with error code $ret" fi' EXIT Or to use an exit status like $((signum + 128)) upon signals: for sig in INT TERM HUP; do trap "exit $((128 + $(kill -l "$sig")))" "$sig" done trap ' ret=$? [ "$ret" -eq 0 ] || echo >&2 "Bye: $ret"' EXIT Note however that exiting normally upon SIGINT or SIGQUIT has potential annoying side effects when your parent process is a shell like bash that implements the wait and cooperative exit handling of terminal interrupts. So, you may want to make sure to kill yourself with the same signal instead, so as to report to your parent that you were indeed interrupted, and that it should consider exiting itself as well if it received a SIGINT/SIGQUIT (note that killed_by must be recorded before the exit , otherwise the assignment would never run): unset killed_by for sig in INT QUIT TERM HUP; do trap "killed_by=$sig; exit $((128 + $(kill -l "$sig")))" "$sig" done trap ' ret=$? [ "$ret" -eq 0 ] || echo >&2 "Bye: $ret" if [ -n "$killed_by" ]; then trap - "$killed_by" # reset handler # ulimit -c 0 # possibly disable core dumps kill -s "$killed_by" "$$" else exit "$ret" fi' EXIT If you want the ERR trap to fire, just run a command with a non-zero exit status like false or test .
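Putting the simple case together as a runnable script:

#!/bin/bash
trap 'ret=$?; [ "$ret" -eq 0 ] || echo >&2 "exited with status $ret"' EXIT
func() { exit 1; }
func    # the EXIT trap fires and prints: exited with status 1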
{ "source": [ "https://unix.stackexchange.com/questions/79648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16551/" ] }
79,658
When I run export $PATH in bash, I get the error not a valid identifier . Why?
Running export $PATH will try to export a variable with a name equal to the value of $PATH (after word splitting and filename generation). That is, it's equivalent to writing something like export /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin . And since /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin is not a valid variable name, it fails. What you want to do is export PATH . export (equivalent to declare -x when not called within a function) in Bash maps the shell variable to an environment variable, so it is passed to commands executed from now on (in child processes or otherwise). To print the value of a variable safely and readably, use printf '%q\n' "$PATH" or typeset -p PATH to print its definition.
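To see the difference between a shell variable and an environment variable:

$ foo=bar                       # shell variable only
$ sh -c 'echo "${foo-unset}"'   # a child process can't see it
unset
$ export foo                    # now part of the environment
$ sh -c 'echo "${foo-unset}"'
bar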
{ "source": [ "https://unix.stackexchange.com/questions/79658", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41291/" ] }
79,684
I'd tried command cat with an executable file: cat /bin/ls Now I can't read any word in this terminal (Linux console). How can I fix it?
Often when in a Unix/Linux terminal (Bash) you'll use commands like more , less or cat to view a file. When you do this on a file that isn't meant to be viewed (such as /bin/ls ) you'll get a screen full of garbage, and your terminal can be left in a state where everything it prints is unreadable. What's going on here is that you just tried to view a file that's a program. Executables aren't meant to be viewed with the standard viewers I mentioned above. method #1 - reset To fix this issue you can do the following: Hit Control + C a couple of times ( Ctrl + C ) Type the command reset and hit return This should usually put your terminal back into a more normal mode. I'll mention one more thing: when you do the steps above, you'll be typing them blind into your terminal, so just make sure you're typing them correctly. method #2 - stty sane As suggested in the comments by @sendmoreinfo, you might have better luck using the following commands instead if the above doesn't work: $ stty sane $ tput rs1 determining a file's type Incidentally, if you come across a file and aren't sure whether it's going to mess up your terminal, you can inspect it using the command file , which will report back the type of file it is. For example, with /bin/ls , file shows the following output: $ file /bin/ls /bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, stripped
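Safer ways to peek inside a suspect file without risking your terminal:

file /bin/ls                 # identify the type first
strings /bin/ls | head       # show only the printable strings
cat -v /bin/ls | head        # control bytes rendered as ^X / M-x escapes
hexdump -C /bin/ls | head    # raw byte dump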
{ "source": [ "https://unix.stackexchange.com/questions/79684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40333/" ] }
79,702
I need to periodically run a command that ensures that some text files are kept in Linux mode. Unfortunately dos2unix always modifies the file, which would mess up the files' and folders' timestamps and cause unnecessary writes. The script I am writing is in Bash, so I'd prefer answers based on Bash.
You can use dos2unix as a filter and compare its output to the original file: dos2unix < myfile.txt | cmp - myfile.txt
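In a Bash script you would test the exit status of cmp -s (silent) and only rewrite files that actually need it, which leaves the timestamps of clean files untouched:

#!/bin/bash
for f in ./*.txt; do
    [ -e "$f" ] || continue    # skip if nothing matched the glob
    if ! dos2unix < "$f" | cmp -s - "$f"; then
        dos2unix "$f"          # only modified when DOS line endings were found
    fi
done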
{ "source": [ "https://unix.stackexchange.com/questions/79702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17765/" ] }
79,751
I am a graduate student, and the group in which I work maintains a Linux cluster. Each node of the cluster has its own local disk, but these local disks are relatively small and are not equipped with automatic backup. So the group owns a fileserver with many TBs of storage space. I am a relative Linux novice, so I am not sure what the specs of the fileserver are in terms of speed, networking ability, etc. I do know from experience that the local disks are significantly faster than the fileserver in terms of I/O. About a dozen or so people use the fileserver. Using cp to copy a ~20 GB file from the fileserver to one of the local disks takes about 11.5 minutes in real time on average (according to time ). I know that this cp operation is not very efficient because (1) time tells me that the system time for such a copy is only ~45 seconds; and because (2) when I examine top during the copy, %CPU is quite low (by inspection, roughly 0-10% on average). Using cp to copy the same ~20 GB file from one folder on the local disk to another folder on the same local disk takes less time -- about 9 minutes in real time (~51 seconds in system time, according to time ). So apparently the fileserver is somewhat slower than the local disk, as expected, but perhaps not significantly slower. I am surprised that copying from local to same local is not faster than 9 minutes. I need to copy ~200 large files -- each ~20 GB -- from the fileserver to one of the local disks. So, my question is: Is there a faster alternative to cp for copying large files in Linux? (Or are there any flags within cp that I could use which would speed up copying?) Even if I could somehow shave a minute off this copying time, that would help immensely. I am sure that buying new, faster hardware disks would help, but I don't have access to such resources. I am also not a system administrator -- I am only a (novice) user -- so I don't have access to more detailed information on the load that is on the disks. I do know that while about a dozen people use the fileserver daily, I am the only person using this particular node/local disk.
%CPU should be low during a copy. The CPU tells the disk controller "grab data from sectors X–Y into memory buffer at Z". Then it goes and does something else (or sleeps, if there is nothing else). The hardware triggers an interrupt when the data is in memory. Then the CPU has to copy it a few times, and tells the network card "transmit packets at memory locations A, B, and C". Then it goes back to doing something else. You're pushing ~240 Mbit/s. On a gigabit LAN, you ought to be able to do at least 800 Mbit/s, but: That's shared among everyone using the file server (and possibly a connection between switches, etc.) That's limited by the speed at which the file server can handle the I/O, keeping in mind its disk bandwidth is shared by everyone using it. You didn't specify how you're accessing the file server (NFS, CIFS (Samba), AFS, etc.). You may need to tune your network mount, but on anything half-recent the defaults are usually pretty sane. For tracking down the bottleneck, iostat -kx 10 is going to be a useful command. It'll show you the utilization of your local hard disks. If you can run that on the file server, it'll tell you how busy the file server is. The general solution is going to be to speed up that bottleneck, which of course you don't have the budget for. But, there are a couple of special cases where you can find a faster approach: If the files are compressible, and you have a fast CPU, doing a minimal compress on-the-fly might be quicker. Something like lzop or maybe gzip --fast . If you are only changing a few bits here and there, and then sending the file back, only sending deltas will be much faster. Unfortunately, rsync won't really help here, as it will need to read the file on both sides to find the delta. Instead, you need something that keeps track of the delta as you change the file... Most approaches here are app-specific. But it's possible that you could rig something up with, e.g., device-mapper (see the brand new dm-era target ) or btrfs. If you're copying the same data to multiple machines, you can use something like udpcast to send it to all the machines at once. And, since you note you're not the sysadmin, I'm guessing that means you have a sysadmin. Or at least someone responsible for the file server & network. You should probably ask him/her/them; they should be much more familiar with the specifics of your setup. Your sysadmin(s) should at least be able to tell you what transfer rate you can reasonably expect.
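Concretely, during a copy run this on the client (and on the file server, if you can get a shell there; iostat comes with the sysstat package):

iostat -kx 10    # watch %util (near 100% = disk saturated) and await (per-I/O latency)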
{ "source": [ "https://unix.stackexchange.com/questions/79751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
79,766
I'm starting to write a few bash scripts for my project and they need some intermediate files or variables to be written. I want to know which folders can I be sure of having access to when someone runs my script? Is it a common practice to use /tmp/ ? Or should I used something else?
I find the Filesystem Hierarchy Standard document invaluable when looking for this stuff. There are a few options, /tmp - 'non-permanent' temporary files /var/tmp - 'permanent' temporary files /var/cache - 'application' transient data files It really depends on the kind of data you're storing.
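In a script, rather than hardcoding a name under /tmp, let mktemp pick a unique one and clean it up on exit:

#!/bin/bash
tmpfile=$(mktemp) || exit 1     # honours $TMPDIR, falls back to /tmp
trap 'rm -f "$tmpfile"' EXIT
echo "intermediate data" > "$tmpfile"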
{ "source": [ "https://unix.stackexchange.com/questions/79766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41323/" ] }
79,909
Here's my test bash script. I'm not able to get it to work. I'm seeing two errors: Use of uninitialized value $answer in chop at /usr/sbin/adduser line 589. Use of uninitialized value $answer in pattern match (m//) at /usr/sbin/adduser line 590. Here's my script: #!/bin/bash sudo adduser myuser << ENDX password password First Last Y ENDX exit 0 Here is the output: me@mycomputer$ ./adduser.sh Adding user `myuser' ... Adding new group `myuser' (1001) ... Adding new user `myuser' (1001) with group `myuser' ... Creating home directory `/home/myuser' ... Copying files from `/etc/skel' ... Enter new UNIX password: Retype new UNIX password: passwd: password updated successfully Changing the user information for myuser Enter the new value, or press ENTER for the default Full Name []: Room Number []: Work Phone []: Home Phone []: Other []: Use of uninitialized value $answer in chop at /usr/sbin/adduser line 589. Use of uninitialized value $answer in pattern match (m//) at /usr/sbin/adduser line 590. Is the information correct? [Y/n] me@mycomputer$ This is on Kubuntu 12.04 LTS $ bash --version bash --version GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) Here are the lines from adduser (system script - unmodified by me) with notations of the two relevant line numbers: for (;;) { my $chfn = &which('chfn'); &systemcall($chfn, $new_name); # Translators: [y/N] has to be replaced by values defined in your # locale. You can see by running "locale yesexpr" which regular # expression will be checked to find positive answer. print (gtx("Is the information correct? [Y/n] ")); chop (my $answer=<STDIN>); <-- LINE 589 last if ($answer !~ m/$noexpr/o); <-- LINE 590 }
Just use the command line parameters instead of stdin, and use chpasswd for the password. For example: sudo adduser myuser --gecos "First Last,RoomNumber,WorkPhone,HomePhone" --disabled-password echo "myuser:password" | sudo chpasswd
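Wrapped up as a script (the username and password here are placeholders):

#!/bin/bash
user=myuser
pass=password
sudo adduser "$user" --gecos "First Last,RoomNumber,WorkPhone,HomePhone" --disabled-password
echo "$user:$pass" | sudo chpasswd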
{ "source": [ "https://unix.stackexchange.com/questions/79909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
79,960
I want to disable requiretty so that I can sudo within scripts, but I'd rather only disable it for a single command rather than everything. Is that possible within the sudoers config?
You can override the default setting for options such as requiretty for a specific user or for a specific command (or for a specific run-as-user or host), but not for a specific command when executed as a specific user. For example, assuming that requiretty is set in the compile-default options, the following sudoers file allows both artbristol and bob to execute /path/to/program as root from a script. artbristol needs no password whereas bob must have to enter a password (presumably tty_tickets is off and bob entered his password on some terminal recently). artbristol ALL = (root) NOPASSWD: /path/to/program bob ALL = (root) /path/to/program Defaults!/path/to/program !requiretty If you want to change the setting for a command with specific arguments, you need to use a command alias (this is a syntax limitation). For example, the following fragment allows artbristol to run /path/to/program --option in a script, but not /path/to/program with other arguments. Cmnd_Alias MYPROGRAM = /path/to/program --option artbristol ALL = (root) /path/to/program artbristol ALL = (root) NOPASSWD: MYPROGRAM Defaults!MYPROGRAM !requiretty
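Whichever variant you use, validate it before logging out, since a sudoers syntax error can lock you out:

visudo -c                # syntax-check the whole sudoers configuration
sudo -l -U artbristol    # list what that user may run, including the Defaults in effect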
{ "source": [ "https://unix.stackexchange.com/questions/79960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11326/" ] }
80,017
I am looking for a command to count number of all words in a file. For instance if a file is like this, today is a good day then it should print 5 , since there are 5 words there.
The command wc aka. word count can do it: $ wc -w <file> example $ cat sample.txt today is a good day $ wc -w sample.txt 5 sample.txt # just the number (thanks to Stephane Chazelas' comment) $ wc -w < sample.txt 5
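The same works across several files or on a pipeline:

$ wc -w *.txt                        # per-file counts plus a total line
$ echo 'today is a good day' | wc -w
5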
{ "source": [ "https://unix.stackexchange.com/questions/80017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7607/" ] }
80,151
I want to show my PATH environment variable in a more human-readable way. $ echo $PATH /Users/arturo/.rvm/gems/ruby-1.9.3-p392/bin:/Users/arturo/.rvm/gems/ruby-1.9.3-p392@global/bin:/Users/arturo/.rvm/rubies/ruby-1.9.3-p392/bin:/Users/arturo/.rvm/bin:/usr/local/git/bin:/Users/arturo/.gvm/groovy/current/bin:/Users/arturo/.gvm/grails/current/bin:/Users/arturo/.gvm/griffon/current/bin:/Users/arturo/.gvm/gradle/current/bin:/Users/arturo/.gvm/lazybones/current/bin:/Users/arturo/.gvm/vertx/current/bin:/Users/arturo/.gvm/bin:/Users/arturo/.gvm/ext:/usr/local/git/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin I'm thinking in something like this: $ echo $PATH | some cut and awk magic /Users/arturo/.rvm/gems/ruby-1.9.3-p392/bin /Users/arturo/.rvm/gems/ruby-1.9.3-p392@global/bin /Users/arturo/.rvm/rubies/ruby-1.9.3-p392/bin /Users/arturo/.rvm/bin ...
You can use tr . $ tr ':' '\n' <<< "$PATH" /Users/arturo/.rvm/gems/ruby-1.9.3-p392/bin /Users/arturo/.rvm/gems/ruby-1.9.3-p392@global/bin /Users/arturo/.rvm/rubies/ruby-1.9.3-p392/bin ... You can also do this in some shells (tested in bash and zsh): echo -e ${PATH//:/\\n} In zsh, you can use the $path array variable to see your path with spaces instead of colons. $ echo $path /Users/arturo/.rvm/gems/ruby-1.9.3-p392/bin /Users/arturo/.rvm/gems/ruby-1.9.3-p392@global/bin /Users/arturo/.rvm/rubies/ruby-1.9.3-p392/bin /Users/arturo/.rvm/bin Which can be combined with printf or print . $ printf "%s\n" $path /Users/arturo/.rvm/gems/ruby-1.9.3-p392/bin /Users/arturo/.rvm/gems/ruby-1.9.3-p392@global/bin /Users/arturo/.rvm/rubies/ruby-1.9.3-p392/bin ... $ print -l $path /Users/arturo/.rvm/gems/ruby-1.9.3-p392/bin /Users/arturo/.rvm/gems/ruby-1.9.3-p392@global/bin /Users/arturo/.rvm/rubies/ruby-1.9.3-p392/bin ... The <<< operator is called a herestring. A herestring passes the word on its right to the standard input of the command on its left. $ cat <<< 'Hello there' Hello there If your shell doesn't support herestrings, use echo and a pipe. $ echo 'Hello there' | cat Hello there
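Once the entries are one per line, other line-oriented tools compose naturally, e.g. to spot duplicate entries or number them in lookup order:

tr ':' '\n' <<< "$PATH" | sort | uniq -d    # print any directory listed twice
tr ':' '\n' <<< "$PATH" | nl                # numbered, in lookup order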
{ "source": [ "https://unix.stackexchange.com/questions/80151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6219/" ] }
80,270
Is there a command to recover/undelete deleted files by rm ? $ rm -rf /path/to/myfile How can I recover myfile ? If there is such a tool how can I use it?
The link someone provided in the comments is likely your best chance. Linux debugfs Hack: Undelete Files That write-up, though it looks a little intimidating, is actually fairly straightforward to follow. In general the steps are as follows: Use debugfs to view the filesystem's journal $ debugfs -w /dev/mapper/wks01-root At the debugfs prompt debugfs: lsdel Sample output Inode Owner Mode Size Blocks Time deleted 23601299 0 120777 3 1/ 1 Tue Mar 13 16:17:30 2012 7536655 0 120777 3 1/ 1 Tue May 1 06:21:22 2012 2 deleted inodes found. Run logdump in debugfs for the deleted inode debugfs: logdump -i <7536655> Determine the file's data block from the output ... ... .... output truncated Fast_link_dest: bin Blocks: (0+1): 7235938 FS block 7536642 logged at sequence 38402086, journal block 26711 (inode block for inode 7536655): Inode: 7536655 Type: symlink Mode: 0777 Flags: 0x0 Generation: 3532221116 User: 0 Group: 0 Size: 3 File ACL: 0 Directory ACL: 0 Links: 0 Blockcount: 0 Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x4f9fc732 -- Tue May 1 06:21:22 2012 atime: 0x4f9fc730 -- Tue May 1 06:21:20 2012 mtime: 0x4f9fc72f -- Tue May 1 06:21:19 2012 dtime: 0x4f9fc732 -- Tue May 1 06:21:22 2012 Fast_link_dest: bin Blocks: (0+1): 7235938 No magic number at block 28053: end of journal. With the block number from the output, run the following commands # dd if=/dev/mapper/wks01-root of=recovered.file.001 bs=4096 count=1 skip=7235938 # file recovered.file.001 file: ASCII text, with very long lines The file has been recovered to recovered.file.001 . Other options If the above isn't for you, I've used tools such as photorec to recover files in the past, but it's geared toward image files only. I've written about this method extensively on my blog in this article titled: How to Recover Corrupt jpeg and mov Files from a Digital Camera's SDD Card on Fedora/CentOS/RHEL .
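On ext3/ext4 the tool extundelete automates the same journal search, if it is available on your distribution; the device and path below are placeholders, and the filesystem should be unmounted (or mounted read-only) first:

umount /dev/sda1                       # stop further writes as soon as possible
extundelete /dev/sda1 --restore-file path/to/myfile
extundelete /dev/sda1 --restore-all    # or recover everything it can find

Recovered files land in a RECOVERED_FILES/ directory under your current working directory.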
{ "source": [ "https://unix.stackexchange.com/questions/80270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28488/" ] }
80,277
Some documentation I'm going through has a boolean switch on whether or not a user is a 'system' user or a 'normal' user (defaulting to 'normal'). What is the difference between these two modes of user-ship? I don't need to learn what a user is or why you need them (even 'fake' ones), but this particular distinction isn't intuitive to me.
That is not a technical difference but an organizational decision. E.g. it makes sense to show normal users in a login dialog (so that you can click them instead of having to type the user name), but it wouldn't make sense to show system accounts (the UIDs under which daemons and other automatic processes run) there. Thus a boundary is defined, or rather two UID ranges, one for each group. In openSUSE the file /etc/login.defs contains these lines: # Min/max values for automatic uid selection in useradd # # SYS_UID_MIN to SYS_UID_MAX inclusive is the range for # UIDs for dynamically allocated administrative and system accounts. # UID_MIN to UID_MAX inclusive is the range of UIDs of dynamically # allocated user accounts. # UID_MIN 1000 UID_MAX 60000 # System accounts SYS_UID_MIN 100 SYS_UID_MAX 499 and # Min/max values for automatic gid selection in groupadd # # SYS_GID_MIN to SYS_GID_MAX inclusive is the range for # GIDs for dynamically allocated administrative and system groups. # GID_MIN to GID_MAX inclusive is the range of GIDs of dynamically # allocated groups. # GID_MIN 1000 GID_MAX 60000 # System accounts SYS_GID_MIN 100 SYS_GID_MAX 499
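When creating an account for a daemon, the -r flag asks useradd to allocate the UID from the system range (the nologin shell path varies by distribution, so treat it as an assumption):

sudo useradd -r -s /usr/sbin/nologin mydaemon
id mydaemon    # the UID falls in the SYS_UID_MIN..SYS_UID_MAX range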
{ "source": [ "https://unix.stackexchange.com/questions/80277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28980/" ] }
80,305
I have a Clonezilla installation on a USB stick and I'd like to make some modifications to the operating system. Specifically, I'd like to insert a runnable script into /usr/sbin so that running my own backup command is less painful. The main filesystem lives under /live/filesystem.squashfs on the USB FAT-32 partition. How can I mount this read/write on my Linux machine in order to be able to add/remove/change files? I'm running an Ubuntu 12.04 derivative.
This assumes that you are root and that squashfs-tools is installed on your system: Copy filesystem.squashfs to some empty dir, e.g.: cp /path/to/filesystem.squashfs /path/to/workdir cd /path/to/workdir Unpack the file, then move it somewhere else (so you still have it as a backup): unsquashfs filesystem.squashfs mv filesystem.squashfs /path/to/backup/ Go into squashfs-root , add/modify to your taste, then recreate 1 filesystem.squashfs : cd /path/to/workdir mksquashfs squashfs-root filesystem.squashfs Copy the newly created filesystem.squashfs over the existing one on your USB drive, e.g.: cp filesystem.squashfs /mnt/clonezilla/live/ then reboot and use your LIVE USB. 1: Consult the manual for additional options that you can pass, like -b 4M -comp lz4 or -comp xz -Xbcj x86 etc
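For the original goal of dropping a script into /usr/sbin, the edit step might look like this (the script name is illustrative):
    install -m 755 my-backup.sh squashfs-root/usr/sbin/my-backup
    mksquashfs squashfs-root filesystem.squashfs -comp xz -Xbcj x86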
{ "source": [ "https://unix.stackexchange.com/questions/80305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
80,344
How does sudo work internally? How is it possible that it can become root without having the root password, unlike su ? What syscalls, etc. are involved in the process? Is it not a gaping security hole in Linux (e.g. why couldn't I compile a heavily-patched sudo that just did whatever regular sudo did, but didn't ask for the unprivileged user's password)? I have read login and su internals . I have also read How is sudo intended to be used? but despite the title, they mainly deal with the differences between su and sudo .
If you take a look at the executable sudo : $ which sudo /usr/bin/sudo $ ls -la /usr/bin/sudo ---s--x--x 2 root root 208808 Jun 3 2011 /usr/bin/sudo You'll notice that it carries the permission bits ---s--x--x . These can be broken down as follows: -|--s|--x|--x - the first dash denotes whether it is a directory or a file ("d" = dir, "-" = file); --s - only the setuid bit is enabled for the user who owns the file; --x - only the group execute bit is enabled; --x - only the other execute bit is enabled. So when a program has its setuid bit enabled (also referred to as SUID) it means that when someone runs this program it will run with the credentials of the user that owns the file, aka. root in this case. Example If I run the following command as user saml: $ whoami saml $ sudo su - [sudo] password for saml: You'll notice that the execution of sudo is actually running as root: $ ps -eaf|grep sudo root 20399 2353 0 05:07 pts/13 00:00:00 sudo su - setuid mechanism If you're curious how SUID works, take a look at man setuid . Here's an excerpt from the man page that explains it better than I could: setuid() sets the effective user ID of the calling process. If the effective UID of the caller is root, the real UID and saved set-user-ID are also set. Under Linux, setuid() is implemented like the POSIX version with the _POSIX_SAVED_IDS feature. This allows a set-user-ID (other than root) program to drop all of its user privileges, do some un-privileged work, and then reengage the original effective user ID in a secure manner. If the user is root or the program is set-user-ID-root, special care must be taken. The setuid() function checks the effective user ID of the caller and if it is the superuser, all process-related user ID's are set to uid. After this has occurred, it is impossible for the program to regain root privileges. The key concept here is that programs have a real userid (UID) and an effective one (EUID). Setuid is setting the effective userid (EUID) when this bit is enabled. So from the kernel's perspective it's known that in our example, saml is still the original owner (UID), but the EUID has been set to whoever owns the executable. setgid I should also mention that when we're breaking down the permissions on the sudo command, the second group of bits was for group permissions. The group bits also have something similar to setuid called set group id (aka. setgid, SGID). This does the same thing as SUID except it runs the process with the group credentials instead of the owner credentials. References setuid wikipedia page setuid man page
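You can watch the real/effective UID split yourself with a throwaway setuid binary (works on most systems; skip this if the filesystem is mounted nosuid, and remove the copy afterwards):
    cp /bin/id ./myid
    sudo chown root ./myid
    sudo chmod u+s ./myid
    ./myid        # output now includes euid=0(root) while uid stays yours
    rm ./myid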
{ "source": [ "https://unix.stackexchange.com/questions/80344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29146/" ] }
80,351
Why is cp --reflink=auto not the default behaviour? Could it cause any harm to enable it? Is it possible to enable it at compile time, so it is used all across the system, not just in interactive shells?
It's not the default since, for robustness reasons, one may want a full copy to take place to protect against data corruption. Also, for performance reasons, you may want the writes to happen at copy time rather than having some latency-sensitive process working on a CoW file be delayed by the writes, possibly to a different part of a mechanical disk. Note that from coreutils v8.24 mv will reflink by default, since it doesn't have the above constraints. Note also that the major release after v8.32 will try to reflink in cp by default, as such a change is not appropriate for a minor release.
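Until then you can request the behaviour explicitly; both spellings below are standard cp flags:
    cp --reflink=always big.img clone.img   # CoW clone; fails if the filesystem can't reflink
    cp --reflink=auto   big.img clone.img   # CoW when possible, full copy otherwise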
{ "source": [ "https://unix.stackexchange.com/questions/80351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41669/" ] }
80,362
What does <<< mean? Here is an example: $ sed 's/a/b/g' <<< "aaa" bbb Is it something general that works with more Linux commands? It looks like it's feeding the sed program with the string aaa , but isn't << or < usually used for that?
Others have answered the basic question: what is it? Let's look at why it's useful. You can also feed a string to a command's stdin like this: echo "$string" | command However in bash, introducing a pipe means the individual commands are run in subshells. Consider this: echo "hello world" | read first second echo $second $first The output of the 2nd echo command prints just a single space. Whaaaa? What happened to my variables? Because the read command is in a pipeline, it is run in a subshell. It correctly reads 2 words from its stdin and assigns to the variables. But then the command completes, the subshell exits and the variables are lost. Sometimes you can work around this with braces: echo "hello world" | { read first second echo $second $first } That's OK if your need for the values is contained, but you still don't have those variables in the current shell of your script. To remedy this confusing situation, use a here-string read first second <<< "hello world" echo $second $first Ah, much better!
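One more property worth knowing: a here-string feeds the text followed by a newline, which you can see with a byte count:
    wc -c <<< "hi"      # prints 3: the here-string appends a trailing newline
    printf hi | wc -c   # prints 2: printf adds nothing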
{ "source": [ "https://unix.stackexchange.com/questions/80362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41368/" ] }
80,399
I have a file, with a "KEYWORD" on line number n. How can I print all lines starting from line n+1 until the end? For example, here I would like to print only lines DDD and EEE: AAA BBB CCC KEYWORD DDD EEE
You can do this with sed : sed '1,/^KEYWORD$/d' This will delete (omit) all lines from the beginning of the stream until "KEYWORD", inclusive.
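Applied to the sample file from the question:
    $ sed '1,/^KEYWORD$/d' file
    DDD
    EEE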
{ "source": [ "https://unix.stackexchange.com/questions/80399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36112/" ] }
80,424
I tried to run program X using 8 threads and it was over in n minutes . I tried to run the same program using 50 threads and it was over in n*10 minutes . Why does this happen, and how can I get the optimal number of threads to use?
This is a complicated question you're asking. Without knowing more about the nature of your threads it's difficult to say. Some things to consider when diagnosing system performance: is the process/thread CPU bound (needs lots of CPU resources), memory bound (needs lots of RAM resources), or I/O bound (network and/or hard drive resources)? All three of these resources are finite and any one can limit the performance of a system. You need to look at which (it might be 2 or 3 together) your particular situation is consuming. You can use ntop , iostat , and vmstat to diagnose what's going on.
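A quick triage while the program runs (illustrative invocations; iostat comes from the sysstat package):
    vmstat 1       # watch the run queue (r), swap activity (si/so), and CPU idle/wait
    iostat -x 1    # per-device stats; a saturated %util column suggests I/O bound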
{ "source": [ "https://unix.stackexchange.com/questions/80424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40016/" ] }
80,493
After five unsuccessful Arch Linux installations, I've got two that installed correctly. The first time, the network worked fine, and I was even installing Arch Linux over SSH. After booting from the installed system instead of the live CD, it can't connect to the network; I get the following message when I try to ping anything, even my router: connect: Network is unreachable I've tried installing Arch again, because nothing on the Internet seemed to fix my issue. After installing, I'm getting the same issue. It appears it's failing to start eth0 because it times out.
The problem I was facing was that there was no eth0 (from what I can understand). Run the command ip link ; this should output your interfaces. I did not have the eth0 interface. Instead I had enp1s8 . Because I was using static, I copied the example profile with cd /etc/netctl then cp examples/ethernet-static my-network . After that, I edited my-network with nano my-network , and changed the Interface from eth0 to enp1s8 (or whatever your adapter is in ip link ). Finally, I enabled it to use that profile on startup with netctl enable my-network . Enabling the profile will ensure that it starts upon boot, but there is no need to reboot the system; simply run netctl start my-network to initialize it immediately. I was able to get help on the official Arch Linux forums; you can view my topic there .
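The edited profile ends up looking roughly like this (the addresses are illustrative):
    # /etc/netctl/my-network
    Interface=enp1s8
    Connection=ethernet
    IP=static
    Address=('192.168.1.23/24')
    Gateway='192.168.1.1'
    DNS=('192.168.1.1')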
{ "source": [ "https://unix.stackexchange.com/questions/80493", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39625/" ] }
80,707
Currently I have a shell script which logs messages to a log file like this: log_file="/some/dir/log_file.log" echo "some text" >> $log_file do_some_command echo "more text" >> $log_file do_other_command When executing this script, there is no output to screen, and, since I'm connecting to the server via putty, I have to open another connection and do "tail -f log_file_path.log", because I can't terminate the running script and I want to see the output in real time. Obviously, what I want is for the text messages to be printed both on screen and into the file, but I'd like to do it with one line per message, not two lines, one of which has no redirection to the file. How can I achieve this?
This works: command | tee -a "$log_file" tee saves input to a file (use -a to append rather than overwrite), and copies the input to standard output as well. Because the command can detect that its output is no longer a terminal, it may change its behaviour. The most common side effect is that it disables colour output. If this happens (and you want ANSI colour coded output) you have to check the command documentation to see if it has a way to force it to revert to the interactive behaviour, such as grep --color=always . Beware that this means the log file will also include these escape codes, and you'll need to use less --RAW-CONTROL-CHARS "$log_file" to read it without distracting escape code literals. Also beware that there is no way to make the log file contents different from what is printed to screen when running the above command, so you can't have colour coded output to screen and non-coloured output in the log file.
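Applied to the original script, each line becomes:
    log_file="/some/dir/log_file.log"
    echo "some text" | tee -a "$log_file"
    do_some_command 2>&1 | tee -a "$log_file"   # 2>&1 also captures stderr, if wanted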
{ "source": [ "https://unix.stackexchange.com/questions/80707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41872/" ] }
80,711
I want to use either apt-get or yum . How do I install one of them and get it running?
If you want the equivalent of apt-get or yum on Mac OS X, you have two choices. Homebrew: http://brew.sh Macports: http://www.macports.org You can use brew install PACKAGE_NAME or port install PACKAGE_NAME to install an available package.
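For example, once Homebrew itself is installed (see brew.sh for the bootstrap command):
    brew search wget     # look up the exact package name
    brew install wget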
{ "source": [ "https://unix.stackexchange.com/questions/80711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41880/" ] }
80,821
I was trying to backup some files via SSH but instead of tar 'ing the ones I wanted I got my home folder. I did some further testing and it boils down to this: ssh root@server /bin/sh -c "cd /boot && ls -l" Which to my surprise lists files in /root not /boot . But if I run the entire /bin/sh command from a terminal it properly cd s and prints the /boot files. What's happening here?
ssh doesn't let you specify a command precisely, as you have done, as a series of arguments to be passed to execvp on the remote host. Instead it concatenates all the arguments into a string and runs them through a remote shell. This stands out as a major design flaw in ssh in my opinion... it's a well-behaved unix tool in most ways, but when it comes time to specify a command it chose to use a single monolithic string instead of an argv, like it was designed for MSDOS or something! Since ssh will pass your command as a single string to sh -c , you don't need to provide your own sh -c . When you do, the result is sh -c '/bin/sh -c cd /boot && ls -l' with the original quoting lost. So the commands separated by the && are: `/bin/sh -c cd /boot` `ls -l` The first of those runs a shell with the command text "cd" and $0 = "/boot" . The "cd" command completes successfully, the $0 is irrelevant, and the /bin/sh -c indicates success, then the ls -l happens.
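To get the intended behaviour, quote the whole remote command once so it survives the concatenation:
    ssh root@server 'cd /boot && ls -l'
    # or, if you really want an explicit shell:
    ssh root@server "/bin/sh -c 'cd /boot && ls -l'"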
{ "source": [ "https://unix.stackexchange.com/questions/80821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41932/" ] }
80,845
The question is quite straightforward, so no further description is needed, in my opinion. I'll just add that I am running Ubuntu 13.04. Any help is welcome.
In Midnight Commander go to Options menu / Configuration... command / Use internal edit checkbox and uncheck it. (Don't forget to execute the Save setup command if the Auto save setup option is off.) Then set the EDITOR environment variable to Sublime. You may prefer to add one of these to your shell's resource file: Global setting for all programs that use EDITOR (not recommended): EDITOR=sublime export EDITOR Temporary setting for the given Midnight Commander session only: alias mc='EDITOR=sublime mc' Same for the viewer, just you uncheck the Use internal view option and set the VIEWER environment variable instead.
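Note that Sublime Text's command-line helper is often installed as subl rather than sublime; adjust the value to whatever is actually on your PATH, e.g.:
    export EDITOR=subl    # if Sublime's CLI helper is named subl on your system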
{ "source": [ "https://unix.stackexchange.com/questions/80845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32083/" ] }
80,864
I bought an SSD and I am going to set up my desktop system with a completely fresh Linux installation. SSDs are known to be fast, but they have a disadvantage: The number of writes (per block?) is limited. So I am thinking about which data should be located at the SSD and which at the HDD drive. Generally I thought that data that changes frequently should be placed on the HDD and data that doesn't change frequently can be put on the SSD. Now I read this question, with a similar scenario. In the answers there is written: "SSD drives are ideally suited for swap space..." Why are SSDs ideally suited for swap space? OK, I see high potential for raising the system's performance, but doesn't swap data change frequently and hence there would be many writes on the SSD resulting in a short SSD lifetime? And what about the /var directory? Doesn't its contents change frequently, too? Wouldn't it be a good idea to put it on the HDD? Is there any other data that should not be located on an SSD?
If you worry about write cycles, you won't get anywhere. You will have data on your SSD that changes frequently; your home, your configs, your browser caches, maybe even databases (if you use any). They all should be on SSD: why else would you have one, if not to gain speed for the things you do frequently? The number of writes may be limited, but a modern SSD is very good at wear leveling, so you shouldn't worry about it too much. The disk is there to be written to; if you don't use it for that, you might just as well use it as a paperweight and never even put it into your computer. There is no storage device suited for swap space. Swap is slow , even on SSD. If you need to swap all the time, you're better off getting more RAM one way or another. It may be different for swap space that's not used for swapping, but for suspend-to-disk scenarios. Naturally the faster the storage media used for that, the faster it will suspend and wake up again. Personally, I put everything on SSD except the big, static data. A movie, for example, doesn't have to waste expensive space on SSD, as a HDD is more than fast enough to play it. It won't play any faster using SSD storage for it. Like all storage media, SSD will fail at some point, whether you use it or not. You should consider them to be just as reliable as HDDs, which is not reliable at all, so you should make backups.
{ "source": [ "https://unix.stackexchange.com/questions/80864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41961/" ] }
80,968
This is what I'd like to be able to do: after a user's account is created, they should be able to SSH-tunnel, but their account should be automatically removed after 30 days unless the countdown is reset by the root user. How can I automate this? I'll have to handle around 15 users.
useradd You can control how long a user's account is valid through the use of the --expiredate option to useradd . excerpt from useradd man page -e, --expiredate EXPIRE_DATE The date on which the user account will be disabled. The date is specified in the format YYYY-MM-DD. If not specified, useradd will use the default expiry date specified by the EXPIRE variable in /etc/default/useradd, or an empty string (no expiry) by default. So when setting up a user's account you can specify a date 30 days in the future from now, and add that to your useradd command. $ useradd -e 2013-07-30 someuser chage You can also change an existing account's date using the chage command. To change an account's expiration date you'd do the following: $ chage -E 2013-08-30 someuser calculating the date +30 days from now To do this is actually pretty trivial using the date command. For example: $ date -d "30 days" Sun Jul 28 01:03:05 EDT 2013 You can format using the +FORMAT options to the date command, which ends up giving you the following: $ date -d "30 days" +"%Y-%m-%d" 2013-07-28 Putting it all together So knowing the above pieces, here's one way to put it together. First when creating an account you'd run this command: $ useradd -e `date -d "30 days" +"%Y-%m-%d"` someuser Then when you want to adjust their expiration dates you'd periodically run this command: $ chage -E `date -d "30 days" +"%Y-%m-%d"` someuser Specifying time periods of less than 24h If you want a user to only be active for some minutes, you cannot use the options above since they require specifying a date. In that case, you could either set up a crontab to remove/lock the created user after the specified time (for example, 10 minutes), or you could do one of: adduser someuser && sleep 600 && usermod --lock someuser or $ adduser someuser $ echo usermod --lock someuser | at now + 10 minutes References useradd man page chage man page
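And to reset the countdown for the whole group of ~15 accounts in one go (user names illustrative):
    for u in user01 user02 user03; do
        chage -E "$(date -d '30 days' +%Y-%m-%d)" "$u"
    done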
{ "source": [ "https://unix.stackexchange.com/questions/80968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42037/" ] }
81,011
My system is fully encrypted with dm-crypt and LVM . I recently moved the encrypted partition from /dev/sda5 to /dev/sda2 . My question is: how can I change the name the encrypted partition is mapped to from sda5_crypt to sda2_crypt ? I can boot the system all right. But the prompt I get at boot time says (sda5_crypt) though the UUID maps to /dev/sda2 : Volume group "vg" not found Skipping volume group vg Unlocking the disk /dev/.../UUID (sda5_crypt) Enter passphrase: I tried to live-boot, decrypt sda2 , activate vg , chroot to /dev/vg/root and run update-grub2 but to no avail. Merely editing /etc/crypttab doesn't work either.
"sda5_crypt" crypttab change as per suggestion below: Replace OLD_NAME with NEW_NAME in /etc/crypttab & /etc/fstab , and then: # dmsetup rename OLD_NAME NEW_NAME # cp -a /dev/mapper/NEW_NAME /dev/mapper/OLD_NAME # update-initramfs -u -k all # rm /dev/mapper/OLD_NAME # update-grub # reboot
{ "source": [ "https://unix.stackexchange.com/questions/81011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27926/" ] }
81,044
[Background: I'd like to actually test How to take down a static network interface when not connected? ] I'm setting up a QEMU-KVM virtual machine using libvirt (via virt-manager ). I put two NICs on it (both virtio). They are bridged to a physical NIC on the host. I want to test what NetworkManager does when I "unplug" one. But there isn't a button/checkbox for that in virt-manager, nor did a quick Google search turn up anything. How do I emulate unplugging the network cable?
You can do that in the console with: virsh domif-setlink domain interface-device state And check its status with: virsh domifstat domain interface-device You can see the network interfaces configured with: virsh domifaddr domain Have a look at the man page for details. Here's an example of a typical workflow: $ sudo virsh list Id Name State ---------------------------------------------------- 24 ubuntu17.10 running $ sudo virsh domifaddr ubuntu17.10 Name MAC address Protocol Address ------------------------------------------------------------------------------- vnet0 52:54:00:d0:76:cb ipv4 192.168.122.183/24 $ sudo virsh domif-getlink ubuntu17.10 vnet0 vnet0 up $ sudo virsh domif-setlink ubuntu17.10 vnet0 down Device updated successfully $ sudo virsh domif-getlink ubuntu17.10 vnet0 vnet0 down $ sudo virsh domif-setlink ubuntu17.10 vnet0 up Device updated successfully $ sudo virsh domif-getlink ubuntu17.10 vnet0 vnet0 up
{ "source": [ "https://unix.stackexchange.com/questions/81044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/977/" ] }
81,129
I'm now using Arch Linux, and found a command, most , that works like more and less . Understanding the differences between them is confusing. The question Isn't less just more? mentions the differences between less and more . What are the differences in color handling, shortcuts, and the ability to move forward and backward?
more more is an old utility. When the text passed to it is too large to fit on one screen, it pages it. You can scroll down but not up. Some systems hardlink more to less , providing users with a strange hybrid of the two programs that looks like more and quits at the end of the file like more but has some less features such as backwards scrolling. This is a result of less 's more compatibility mode. You can enable this compatibility mode temporarily with LESS_IS_MORE=1 less ... . more passes raw escape sequences by default. Escape sequences tell your terminal which colors to display. less less was written by a man who was fed up with more 's inability to scroll backwards through a file. He turned less into an open source project and over time, various individuals added new features to it. less is massive now. That's why some small embedded systems have more but not less . For comparison, less 's source is over 27000 lines long. more implementations are generally only a little over 2000 lines long. In order to get less to pass raw escape sequences, you have to pass it the -r flag. You can also tell it to only pass ANSI escape characters by passing it the -R flag. See less FAQs for more details: http://www.greenwoodsoftware.com/less/faq.html most most is supposed to be more than less . It can display multiple files at a time. By default, it truncates long lines instead of wrapping them and provides a left/right scrolling mechanism. most's website has no information about most 's features. Its manpage indicates that it is missing at least a few less features such as log-file writing (you can use tee for this though) and external command running. By default, most uses strange non-vi-like keybindings. man most | grep '\<vi.?\>' doesn't return anything so it may be impossible to put most into a vi-like mode. most has the ability to decompress gunzip-compressed files before reading. Its status bar has more information than less 's. most passes raw escape sequences by default.
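To see the raw-escape handling mentioned above in action, force colour through less :
    ls --color=always | less -R    # -R passes the ANSI colour codes through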
{ "source": [ "https://unix.stackexchange.com/questions/81129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41873/" ] }
81,224
In the terminal, how can I define a key to go back to the previous directory, i.e. the one I was in before the last cd ? For example, I'm in /opt/soft/bin and I cd into /etc/squid3 and I want to get back to the first directory.
You can use cd - or you could use cd "$OLDPWD"
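With the example from the question:
    $ cd /opt/soft/bin
    $ cd /etc/squid3
    $ cd -            # back to /opt/soft/bin; cd - also prints where it lands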
{ "source": [ "https://unix.stackexchange.com/questions/81224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36198/" ] }
81,240
I need to manually edit /etc/shadow to change the root password inside of a virtual machine image. Is there a command-line tool that takes a password and generates an /etc/shadow compatible password hash on standard out?
You can use the following commands for this: Method 1 (md5, sha256, sha512) openssl passwd -6 -salt xyz yourpass Note: passing -1 will generate an MD5 password, -5 a SHA256 and -6 SHA512 (recommended) Method 2 (md5, sha256, sha512) mkpasswd --method=SHA-512 --stdin The option --method accepts md5 , sha-256 and sha-512 Method 3 (des, md5, sha256, sha512) As @tink suggested, we can update the password using chpasswd using: echo "username:password" | chpasswd Or you can use the encrypted password with chpasswd . First generate it using this: perl -e 'print crypt("YourPasswd", "\$6\$salt\$"), "\n"' (Perl's crypt() takes exactly two arguments; the $6$ salt prefix selects SHA-512 crypt on glibc, and the dollar signs need escaping inside double quotes.) Then later you can use the generated password to update /etc/shadow : echo "username:encryptedPassWd" | chpasswd -e We can also use the encrypted password to create a new user with this password, for example: useradd -p 'encryptedPassWd' username
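Putting two of those pieces together (password and user name are illustrative):
    hash=$(openssl passwd -6 'S3cret!')     # generate a SHA-512 hash
    echo "username:$hash" | chpasswd -e     # install it for the user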
{ "source": [ "https://unix.stackexchange.com/questions/81240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/663/" ] }
81,508
Is there a way to check how much power a USB device requires? Why do I need this? I need to connect an LTE USB stick to my Raspberry Pi, and don't know how much power it needs. We got it quite easily on Windows, but haven't found a way to do it on Linux.
Take a look at this SuperUser Q&A titled: How do you check how much power a USB port can deliver? , specifically my answer . lsusb -v You can get the maximum power using lsusb -v , for example: $ lsusb -v|egrep "^Bus|MaxPower" Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub MaxPower 0mA Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub MaxPower 0mA Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub MaxPower 0mA Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub MaxPower 0mA Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub MaxPower 0mA Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub MaxPower 0mA Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub MaxPower 0mA Bus 001 Device 003: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB MaxPower 100mA Bus 003 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser MaxPower 98mA Bus 001 Device 004: ID 04a9:1069 Canon, Inc. S820 MaxPower 2mA Bus 001 Device 005: ID 05ac:120a Apple, Inc. iPod Nano MaxPower 500mA
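To limit the (very long) lsusb -v output to a single device, address it by bus and device number from the listing:
    lsusb -v -s 1:5 2>/dev/null | grep MaxPower    # bus 1, device 5 (the iPod above)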
{ "source": [ "https://unix.stackexchange.com/questions/81508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
81,540
When I am in tmux only a portion of the text shows up. If I try to scroll up or down, the console scrolls up but not the actual text. If I do CTRL+b followed by [ , I see *tmux in the status bar, and if I press the up or down arrow I can actually go up/down through the text line by line . When I press q I see bash in the status line. When I do CTRL+b followed by Page Up or Page Down, the console goes up or down but not the text. How can I scroll the text up or down more than one line at a time?
If you're using OS X's Terminal.app, it will capture Page up/down keypresses and just scroll the window contents, as if you used the scroll bar. You can use Shift + Page up/down to send them to the application inside the terminal. Using that, you should be able to scroll by a page at a time using: Control + B [ Arrows keys or Shift + Page up/down Control + C when done with scrollback If you want to change this to behave like every other terminal app on every other platform, you can go to Terminal -> Preferences, Settings, choose your profile and go to Keyboard, and swap the bindings for "page down/page up" and "shift page down/shift page up":
{ "source": [ "https://unix.stackexchange.com/questions/81540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42132/" ] }
81,566
I want to run one of the two commands C1 and C2 at random. How do I do that on the command line (bash)? I'd appreciate a one-liner if possible.
if (( RANDOM % 2 )); then C1; else C2; fi
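How it works: RANDOM is a bash builtin that yields a fresh pseudo-random integer on each expansion, so its parity picks a branch roughly 50/50:
    if (( RANDOM % 2 )); then echo "would run C1"; else echo "would run C2"; fi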
{ "source": [ "https://unix.stackexchange.com/questions/81566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
81,628
I'm usually inside GNU Screen or tmux , and that doesn't give me great scrolling functionality. Is there an alternative to tail -f that allows me to quickly scroll up? A tool that is to tail -f what most and less are to more . This question is related but far from specific. I'm really looking for something that lets me scroll.
You can use less +F to start less in its "forward forever" mode. In this mode, less will behave like tail -f , ignoring the ends of files and providing a steady stream of text. When you want to scroll, press Ctrl c . To re-enter forward forever mode, press F .
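For example:
    less +F /var/log/syslog    # follows like tail -f; Ctrl-C to scroll freely, F to resume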
{ "source": [ "https://unix.stackexchange.com/questions/81628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38047/" ] }
81,757
Is there a simple utility or script to columnate the output from one of my scripts? I have data in some form: A aldkhasdfljhaf B klajsdfhalsdfh C salkjsdjkladdag D lseuiorlhisnflkc E sdjklfhnslkdfhn F kjhnakjshddnaskjdh but if this becomes too long, write the data in the following form (still vertically ordered): A aldkhasdfljhaf D lseuiorlhisnflkc B klajsdfhalsdfh E sdjklfhnslkdfhn C salkjsdjkladdag F kjhnakjshddnaskjdh From reading the manpage, I don't think this is something column would be appropriate for, but I'm not sure. It's easy enough to split in the form: A B C D E F by only printing \n every second line (what my current script does). Any ideas? Thanks!
column seems to be what you want: $ cat file A aldkhasdfljhaf B klajsdfhalsdfh C salkjsdjkladdag D lseuiorlhisnflkc E sdjklfhnslkdfhn F kjhnakjshddnaskjdh $ column file A aldkhasdfljhaf D lseuiorlhisnflkc B klajsdfhalsdfh E sdjklfhnslkdfhn C salkjsdjkladdag F kjhnakjshddnaskjdh
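You can also pin the output width if the default (your terminal width) isn't what you want:
    column -c 80 file    # format the columns for an 80-character display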
{ "source": [ "https://unix.stackexchange.com/questions/81757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42415/" ] }
81,763
I am on OSX, using bash, trying to make sense of pipes. I wish to let a program communicate in two directions with a bash shell. I want to set this up in such a way that this is always the same shell, so that I can cd to some directory and bash will remember (rather than using a new bash shell all the time). What I have tried so far is this. From a new terminal (A), do mkdir /tmp/IOdir cd /tmp/IOdir mkfifo pToB mkfifo bToP tail -f -1 pToB | bash >> bToP Then, to test this connection, I can do, from a new terminal (B) cd /tmp/IOdir echo echo hello > pToB and from a third terminal (C) cd /tmp/IOdir (read myline && echo $myline) < bToP This behaves how I want. The same bash shell stays active, and the output comes through on the other side. Call this state of affairs X, so that I can refer to it later. Onward from state X However, now, from this state X, we cannot do the same thing again. That is, if we do from terminal (B) echo echo hello > pToB and then from terminal C (read myline && echo $myline) < bToP Then nothing comes through in terminal C. Moreover, if we again do, from terminal B echo echo hello > pToB The bash shell closes. What I could have done in state X was first do, from terminal C (read myline && echo $myline) < bToP and then from terminal B echo echo hello > pToB In this case hello comes through at terminal C, and it seems like we are in state X again. So we can basically repeat this forever. Now this might seem sufficient for two way communication, but my program is such that if it requests a new line like this (read myline && echo $myline) and there is no new line, it will "hang" (just like bash, in fact I mean to use a call to bash in the program). It is therefore not able to send input to pToB after that and there is nothing I can do. Questions Is there a way to set this up without doing too much programming in C? Is there a way to do this more elegantly without using two named pipes? What is causing the pipe to close in one scenario, and not in the other? Edits From this page on wikipedia , we have Full-duplex (two-way) communication normally requires two anonymous pipes. On one hand, it seems at least I have the right number of pipes. On the other, I am using named pipes, not anonymous ones. So maybe this will be hard/impossible. Furthermore mkfifo gnu/linux source is likely defined in terms of mknod gnu/linux source , which is also a unix command. But I'm not sure if much can be learned from that. Here is an introduction to pipes in C, including the linux source of pipe. Maybe that can tell us why a pipe gets closed, if that is indeed what happens. Here is a related question about preventing fifos from getting closed. I tried tying the pipes to background sleeping processes as was done in an answer there, but that didn't help.
{ "source": [ "https://unix.stackexchange.com/questions/81763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42420/" ] }
81,834
I've just installed a Fedora 19 on VMware workstation 9. The default network device is "ens33" instead of "eth0" on RHEL. The reason I have to use "eth0" is that the license component of one of our products has be to be linked with "eth0". There are some posts discussing about similar issues, most of which are for older OS. I haven't found one that exactly match my situation.
The easiest way to restore the old way the kernel/modules/udev name your ethernet interfaces is to supply these kernel parameters to Fedora 19 : net.ifnames=0 biosdevname=0 To do so follow these steps: Edit /etc/default/grub At the end of the GRUB_CMDLINE_LINUX line append " net.ifnames=0 biosdevname=0 " Save the file Type " grub2-mkconfig -o /boot/grub2/grub.cfg " Type " reboot " If you didn't supply these parameters during the installation, you will probably need to adjust and/or rename interface files at /etc/sysconfig/network-scripts/ifcfg-* . Up to Fedora 18 , just biosdevname=0 was enough. As an example, on a certain machine, after exhaustive testing, I got: -No parameters: NIC identified as " enp5s2 ". -Parameter biosdevname=0: NIC identified as " enp5s2 ". -Parameter net.ifnames=0: NIC identified as " em1 ". -Parameter net.ifnames=0 AND biosdevname=0: NIC identified as " eth0 ".
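The edited line in /etc/default/grub then looks something like this (keep whatever options yours already has; the ones besides the two new parameters are illustrative):
    GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"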
{ "source": [ "https://unix.stackexchange.com/questions/81834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42316/" ] }
81,904
I'd like to have each line in a file repeated a fixed number of times, e.g. have each line repeated four times: a b c becomes: a a a a b b b b c c c c I've done some searching, and there are a lot of questions and answers along the lines of doing the reverse, e.g. merging duplicate lines into single lines, and maybe a few about doubling lines by printing them again. It would be easy to do this in C, but I wish I knew more about the native commands so I wouldn't have to resort to these kinds of one-off throw-aways all the time.
Perl: perl -ne 'for$i(0..3){print}' file and I have to add this one posted as a comment by @derobert because it is just cool: perl -ne 'print "$_" x4' awk and variants: awk '{for(i=0;i<4;i++)print}' file bash while read line; do for i in {1..4}; do echo "$line"; done; done < file
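To make the repeat count a parameter instead of a hard-coded 4, the awk variant generalizes nicely:
    awk -v n=4 '{for(i=0;i<n;i++)print}' file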
{ "source": [ "https://unix.stackexchange.com/questions/81904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40428/" ] }
81,923
When I do this in my terminal: echo -e "\xF0\x9f\x8d\xba" I get a cool beer mug in my terminal: However, when I start screen, and try the same, I'm getting a �. How do I get the nice beer mug? This is my .screenrc: nethack on startup_message off defscrollback 3000 hardstatus on hardstatus alwayslastline hardstatus string "%{.bW}%-w%{.rW}%n %t%{-}%+w %=%{..G} %H %{..Y} %m/%d %c" termcapinfo xterm 'bc@:bs@' ignorecase on maptimeout 0 vbell off defutf8 on defencoding utf8 Also running screen with the -U option doesn't help.
It's apparently a known bug : no characters beyond the BMP are displayed, as screen only has a two-byte buffer for characters. (It works in tmux.)
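You can confirm the tmux behaviour directly:
    tmux new-session
    echo -e "\xF0\x9f\x8d\xba"    # the mug renders inside tmux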
{ "source": [ "https://unix.stackexchange.com/questions/81923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39618/" ] }
81,998
I have file1.txt this is the original text line2 line3 line4 happy hacking ! and file2.txt this is the original text line2 line4 happy hacking ! GNU is not UNIX if I do: diff file1.txt file2.txt I get: 3d2 < line3 5a5 > GNU is not UNIX How is the output generally interpreted? I think that < means removed but what do 3d2 or 5a5 mean? If I do: $ diff -u file1.txt file2.txt --- file1.txt 2013-07-06 17:44:59.180000000 +0200 +++ file2.txt 2013-07-06 17:39:53.433000000 +0200 @@ -1,5 +1,5 @@ this is the original text line2 -line3 line4 happy hacking ! +GNU is not UNIX The results are clearer but what does @@ -1,5 +1,5 @@ mean?
In your first diff output (so called "normal diff") the meaning is as follows: < - denotes lines in file1.txt > - denotes lines in file2.txt 3d2 and 5a5 denote line numbers affected and which actions were performed. d stands for deletion, a stands for adding (and c stands for changing). The number on the left of the character is the line number in file1.txt, the number on the right is the line number in file2.txt. So 3d2 tells you that the 3rd line in file1.txt was deleted and has the line number 2 in file2.txt (or better to say that after deletion the line counter went back to line number 2). 5a5 tells you that we started from line number 5 in file1.txt (which was actually empty after we deleted a line in the previous action), added the line, and this added line is number 5 in file2.txt. The output of the diff -u command is formatted a bit differently (so called "unified diff" format). Here diff shows us a single piece of the text, instead of two separate texts. In the line @@ -1,5 +1,5 @@ the part -1,5 relates to file1.txt and the part +1,5 to file2.txt. They tell us that diff will show a piece of text, which is 5 lines long starting from line number 1 in file1.txt. And the same about the file2.txt - diff shows us 5 lines starting from line 1. As I have already said, the lines from both files are shown together this is the original text line2 -line3 line4 happy hacking ! +GNU is not UNIX Here - denotes the lines which were deleted from file1.txt, and + denotes the lines which were added.
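If you find both formats hard to read, a side-by-side view is also available:
    diff -y file1.txt file2.txt    # two columns, with change markers down the middle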
{ "source": [ "https://unix.stackexchange.com/questions/81998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42132/" ] }
82,060
How can I get a slice of $@ in Bash without first having to copy all positional parameters to another array like this? argv=( "$@" ) echo "${argv[@]:2}";
You can use the same format as for any other array. To extract the 2nd and 3rd elements from $@ , you would do: echo "${@:2:2}" Here the first number is the slice's starting index (positional parameters are numbered from 1) and the second number is the slice length.
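A quick check in a throwaway shell:
    set -- a b c d e
    echo "${@:2:2}"   # b c
    echo "${@:3}"     # c d e  (from the 3rd parameter to the end)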
{ "source": [ "https://unix.stackexchange.com/questions/82060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27926/" ] }
82,112
Is there something like a stereo (separate left- and right-channel) tone-generator for Linux? Where you can set volume and tone/pitch for each channel, and preferably also set the wave-form (sine, square, sawtooth, ...) and invert one channel (as opposed to the other). If not, any ideas for a good place to start to make one? I guess the simplest would be to adapt existing programs like synths... But if that works poorly, are there any libraries (like SDL?) that can be used as bases for such a program?
It sounds like you're looking for Audacity which is a cross-platform open source audio editor. One of its features is to allow you to generate tones. It's a multi-track audio editor, so you can easily create a stereo tone. Under the Generate menu, you're able to create Sine, Sawtooth, and Square waveform tones of arbitrary frequency, amplitude, and length without the need for recording or needing additional input files.
{ "source": [ "https://unix.stackexchange.com/questions/82112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28975/" ] }
82,314
I have to mount a .img file but I don't know what type of .img it is. How can I figure out what type of .img file it is? # mount -t auto -o ro,loop gmapsupp.img /mnt/iso/ mount: you must specify the filesystem type # file -k gmapsupp.img gmapsupp.img: x86 boot sector, code offset 0x0 #
Try running the command fdisk -l <img file> . Typically if the .img files are entire disks from say a KVM VM then they're technically a virtual disk. Example I've got a CentOS KVM VM which shows up like so with the file command: $ file centostest.img centostest.img: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 208782 sectors; partition 2: ID=0x8e, starthead 0, startsector 208845, 20755980 sectors, code offset 0x48 Running fdisk with it: $ sudo /sbin/fdisk -lu /kvm/centostest.img last_lba(): I don't know how to handle files with mode 81ed You must set cylinders. You can do this from the extra functions menu. Disk /kvm/centostest.img: 0 MB, 0 bytes 255 heads, 63 sectors/track, 0 cylinders, total 0 sectors Units = sectors of 1 * 512 = 512 bytes Device Boot Start End Blocks Id System /kvm/centostest.img1 * 63 208844 104391 83 Linux /kvm/centostest.img2 208845 20964824 10377990 8e Linux LVM Partition 2 has different physical/logical endings: phys=(1023, 254, 63) logical=(1304, 254, 63) If you'd like to mount one of these partitions you can do so as follows: fdisk (cylinder output) block-size of 512 bytes and the start-block is 63. The offset is 512 * 63 = 32256. fdisk (sector output) block-size of 512 bytes and the start-block is 1. The offset is 512 * 1 = 512. So the mount command would be: in cylinders $ mount -o loop,offset=32256 centostest.img /mnt/tmp To mount the other partition (512 * 208845 = 106928640): $ mount -o loop,offset=106928640 centostest.img /mnt/tmp in sectors $ mount -o loop,offset=512 centostest.img /mnt/tmp To mount the other partition (512 * 14 = 7168): $ mount -o loop,offset=7168 centostest.img /mnt/tmp NOTE This will only work if mount can determine the type of filesystem within the "partition" you're attempting to mount. You may need to include -t auto , or be specific and tell mount that's it's -t ext4 for example. References how to mount .img file
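You can also let the shell do the offset arithmetic inline, using the start sector from the fdisk listing:
    mount -o loop,offset=$((512 * 63)) centostest.img /mnt/tmp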
{ "source": [ "https://unix.stackexchange.com/questions/82314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42666/" ] }
82,347
*nix user permissions are really simple, but things can get messy when you have to take in account all the parent directory access before reaching a given file. How can I check if the user has enough privileges? If not, then which directory is denying access? For example, suppose a user joe , and the file /long/path/to/file.txt . Even if file.txt was chmoded to 777, joe still has to be able to access /long/ , and then /long/path/ and then /long/path/to/ before. What I need is a way to automatically check this. If joe does not have access, I would also like to know where he has been denied. Maybe he can access /long/ , but not /long/path/ .
To verify access visually, you can use namei -m /path/to/really/long/directory/with/file/in which will output all of the permissions in the path in a vertical list. or namei -l /path/to/really/long/directory/with/file/in to list all owners and the permissions. Other answers explain how to verify this programmatically.
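For a programmatic yes/no check, you can also impersonate the user (as root) and test the path directly:
    sudo -u joe test -r /long/path/to/file.txt && echo accessible || echo denied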
{ "source": [ "https://unix.stackexchange.com/questions/82347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42684/" ] }
82,354
ll is a common alias in many Linux distros. How can I tell what it aliases to? I've tried checking my .bashrc , but I am not able to tell what ll is equivalent to.
You can use the alias command. $ alias ll ll='ls --color=auto -Flh'
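type gives the same answer and also distinguishes aliases from functions and binaries (the exact definition will mirror whatever your distro sets up):
    $ type ll
    ll is aliased to `ls --color=auto -Flh'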
{ "source": [ "https://unix.stackexchange.com/questions/82354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }
82,357
I noticed that if I run ls -F on a directory, some of the entries have a * or a @ after them. spuder@ubuntu:~$ ls -F /sbin acpi_available* getpcaps* lvmconf* ntfscp* start-stop-daemon* agetty* getty* lvmdiskscan@ ntfslabel* status@ alsa* halt@ lvmdump* ntfsresize* stop@ alsactl* hdparm* lvmsadc@ spuder@ubuntu:~$ ls -F ~ daq-0.6.1/ examples.desktop noname-cache.lib snort-2.9.1/ Templates/ Desktop/ jpgraph-1.27.1/ noname.sch snortfiles/ Ubuntu One/ Documents/ According to the ls man pages spuder@ubuntu:~$ man ls ... -F, --classify append indicator (one of */=>@|) to entries ... I'm guessing that @ means symbolic link. What do these other indicators mean ( */=>@| )?
ls -F appends symbols to filenames. These symbols show useful information about files. @ means symbolic link (or that the file has extended attributes ). * means executable . = means socket . | means named pipe . > means door . / means directory . If you want this behavior to be the default, add this to your shell configuration: alias ls='ls -F' .
{ "source": [ "https://unix.stackexchange.com/questions/82357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }
82,361
I remember doing something like "XXX /home/user/dir/child/file" and it returned the owner and/or permission of: /home /home/user /home/user/dir /home/user/dir/child /home/user/dir/child/file But I don't remember what this command was. Does anybody have any idea?
The command could have been: namei -m /home/user/dir/child/file
{ "source": [ "https://unix.stackexchange.com/questions/82361", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16766/" ] }
82,489
How might one check which .rsa, .pem, and other files are 'loaded' or 'active' by ssh-add ? I just had this scare: $ scp -i /home/dotancohen/.ssh/dotancohen.pem [email protected]:~/files.tgz . Warning: Identity file /home/dotancohen/.ssh/dotancohen.pem not accessible: No such file or directory. files.tgz 100% 89MB 1.5MB/s 00:59 $ scp [email protected]:~/all_git.tgz . files.tgz 100% 89MB 1.7MB/s 00:54 After verifying that the server does in fact require the .pem key, I would now like to know which other keys are loaded without me knowing about it!
As stated in ssh-add 's man page, the -L option allows you to view the public keys of the identities ssh-agent currently maintains. You could also use the -l option to view their fingerprints.
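In practice:
    ssh-add -l    # fingerprints of every loaded identity
    ssh-add -L    # the full public keys
    ssh-add -D    # and, if something unexpected is loaded, drop them all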
{ "source": [ "https://unix.stackexchange.com/questions/82489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
82,497
I'm new to Linux but I'm about to start using a VPS to host my website. I've been reading up, and it's been mentioned that I need to make sure that all of my logs are rotated and compressed, but I don't understand how to do that. I'll be using Plesk but I understand that it will only cover a few logs, so how do I make sure that all my logs are rotated? And is there an easier way to view them than by using grep/cat?
{ "source": [ "https://unix.stackexchange.com/questions/82497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34618/" ] }
82,541
The object of the script I'm making is to compare two series of files. The file names are themselves stored into two separate files, one path per line. My idea is to have two while read loops, one for each list of file names, but how can I mix the two loops together? while read compareFile <&3; do if [[ ! $server =~ [^[:space:]] ]] ; then #empty line exception continue fi echo "Comparing file - $compareFile" if diff "$compareFile" _(other file from loop?_) >/dev/null ; then echo Same else echo Different fi done 3</infanass/dev/admin/filestoCompare.txt I need to be able to compare files from two different lists at the same time through two while read loops... Is this even possible?
You don't need two loops; you just need to read from two files in the one loop. while read compareFile1 <&3 && read compareFile2 <&4; do if [[ ! $compareFile1 =~ [^[:space:]] ]] ; then #empty line exception continue fi echo "Comparing files - $compareFile1 and $compareFile2" if diff "$compareFile1" "$compareFile2" >/dev/null ; then echo Same else echo Different fi done 3</infanass/dev/admin/filestoCompare.txt 4<other_file
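An alternative sketch, pairing the two lists with paste instead of two extra file descriptors (assumes one path per line and no tabs inside the paths):
    paste /infanass/dev/admin/filestoCompare.txt other_file |
    while IFS=$'\t' read -r f1 f2; do
        diff -q "$f1" "$f2" >/dev/null && echo "Same: $f1" || echo "Different: $f1"
    done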
{ "source": [ "https://unix.stackexchange.com/questions/82541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42211/" ] }
82,561
I have a binary file that I can send with netcat : $ nc -l localhost 8181 < my.dat The file contains this: $ xxd my.dat 0000000: 0006 3030 3030 4e43 ..0000NC What I really want to do is send the hex string directly. I've tried this: $ echo '0006303030304e43' | nc -l localhost 8181 However, the above command just sends the ascii string directly to nc .
I used the -r and -p switches for xxd : $ echo '0006303030304e43' | xxd -r -p | nc -l localhost 8181 Thanks to inspiration from @Gilles' answer, here's a Perl version: $ echo '0006303030304e43' | perl -e 'print pack "H*", <STDIN>' | nc -l localhost 8181
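A quick round-trip check before sending, to confirm the bytes are what you expect:
    echo '0006303030304e43' | xxd -r -p | xxd
    # 0000000: 0006 3030 3030 4e43                      ..0000NC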
{ "source": [ "https://unix.stackexchange.com/questions/82561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24554/" ] }
82,597
Where does Firefox store cookies in Linux? I searched everywhere but did not find anything.
Firefox stores cookies in the sqlite database ~/.mozilla/firefox/<profile path>/cookies.sqlite . You have full access to it. For example, to watch all cookies from stackoverflow.com you can do: cd ~/.mozilla/firefox/<profile path>/ sqlite3 cookies.sqlite select * from moz_cookies where baseDomain glob '*stackoverflow*' (replace <profile path> here with the path of your firefox profile). To see the names of the database fields, run .schema at the sqlite3 prompt.
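You can also query non-interactively in one shot (column names per the schema above):
    sqlite3 ~/.mozilla/firefox/<profile path>/cookies.sqlite \
        "select host, name, value from moz_cookies limit 5;"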
{ "source": [ "https://unix.stackexchange.com/questions/82597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37910/" ] }
82,598
I want to write logic in a shell script that will retry a command after 15 seconds, up to 5 times, if it fails with "status code=FAIL" due to some issue.
This script uses a counter n to limit the attempts at the command to five. If the command is successful, break ends the loop. n=0 until [ "$n" -ge 5 ] do command && break # substitute your command here n=$((n+1)) sleep 15 done
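Wrapped as a reusable function, with the command passed as arguments:
    retry() {
        local n=0
        until [ "$n" -ge 5 ]; do
            "$@" && return 0    # success: stop retrying
            n=$((n+1))
            sleep 15
        done
        return 1                # still failing after 5 attempts
    }
    retry some_command arg1 arg2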
{ "source": [ "https://unix.stackexchange.com/questions/82598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39230/" ] }
82,606
I want to back up the running-config of all switches via SCP to a server. Question 1: is this possible with kron? Is there a better way, and why? Question 2: how can I get the public key of the switch to auto-authenticate the user to the server? Thank you very much.
{ "source": [ "https://unix.stackexchange.com/questions/82606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42829/" ] }
82,626
Everybody on the Internet advises disabling root login via SSH, saying it is a bad practice and a security hole in the system, but nobody explains why this is so. What is so dangerous about enabling root login (especially with password login disabled)? And, from a security point of view, if password authentication is allowed, what is the difference between an X-character username with a Y-character password and the root username with an (X+Y)-character password?
Why root over SSH is bad There are a lot of bots out there which try to log in to your computer over SSH. These bots work the following way. They execute something like ssh root@$IP and then they try standard passwords like "root" or "password123". They do this as long as they can, until they find the right password. On a world-wide-accessible server you can see a lot of log entries in your log files. It can go up to 20 per minute or more. If the attackers get lucky (or have enough time) and find a password, they have root access, and that means you are in trouble. But when you disallow root login over SSH, the bot first needs to guess a user name and then the matching password. So let's say the list of plausible passwords has N entries and the list of plausible users is M entries large. The bot has a set of N*M entries to test, so this makes it a little bit harder for the bot compared to the root case where it is only a set of size N . Some people will say that this additional M isn't a real gain in security and I agree that it is only a small security enhancement. But I think of this more as those little padlocks which are in themselves not secure, but they hinder a lot of people from easy access. This of course is only valid if your machine has no other standard user names, like tor or apache. The better reason to not allow root is that root can do a lot more damage on the machine than a standard user can do. So, if by dumb luck they find your password, the whole system is lost, while with a standard user account you could only manipulate the files of that user (which is still very bad). In the comments it was mentioned that a normal user could have the right to use sudo and if this user's password were guessed the system would be totally lost too. In summary I would say that it doesn't matter which user's password an attacker gets. When they guess one password you can't trust the system anymore. An attacker could use the rights of that user to execute commands with sudo ; the attacker could also exploit a weakness in your system and gain root privileges. Once an attacker has access to your system you can't trust it anymore. The thing to remember here is that every user in your system that is allowed to log in via SSH is an additional weakness. By disabling root you remove one obvious weakness. Why passwords over SSH are bad The reason to disable passwords is really simple. Users choose bad passwords! The whole idea of trying passwords only works when the passwords are guessable. So when a user has the password "pw123" your system becomes insecure. Another problem with passwords chosen by people is that their passwords are never truly random, because that would then be hard to remember. Also users tend to reuse their passwords, using them to log into Facebook or their Gmail accounts and for your server. So when a hacker gets this user's Facebook account password he could get into your server. The user could easily lose it through phishing, or the Facebook server might get hacked. But when you use a certificate to log in, the user doesn't choose his password. The certificate is based on a long random string, from 1024 bits up to 4096 bits (roughly a 128-512 character password). Additionally this certificate is only there to log into your server and isn't used with any outside services. Monitoring root access The comment from @Philip Couling which should have been an answer: There's an administrative reason for disabling root.
On commercial servers you always want to control access by person. root is never a person. Even if you allow some users to have root access, you should force them to log in via their own user and then su - or sudo -i, so that their actual login can be recorded. This makes revoking all access for an individual much simpler, so that even if they have the root password they can't do anything with it. – Philip Couling

I would also add that it allows the team to enforce the principle of least privilege with a proper sudo configuration (though writing one sounds easier than it is). This lets the team hand out non-critical privileges where needed, without giving away the key to the castle.

Links http://bsdly.blogspot.de/2013/10/the-hail-mary-cloud-and-lessons-learned.html This article comes from the comments and I wanted to give it a more prominent position, since it goes a little deeper into the matter of botnets that try to log in via SSH: how they do it, what the log files look like, and what one can do to stop them. It was written by Peter Hansteen.
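To make the advice above concrete, here is a minimal sketch of the relevant OpenSSH server settings. These directives go in /etc/ssh/sshd_config on most distributions; the option names are standard OpenSSH, but check your own config for existing occurrences of them first, and keep a working session open while you test so you can't lock yourself out:

# /etc/ssh/sshd_config: minimal hardening sketch
# Refuse direct root logins; admins log in as themselves
# and escalate with sudo/su so their logins are recorded.
PermitRootLogin no
# Make password guessing pointless. Deploy your keys first!
PasswordAuthentication no
ChallengeResponseAuthentication no
# Public-key auth is usually on by default; stated explicitly.
PubkeyAuthentication yes

After editing, validate the syntax with sshd -t and then reload the daemon, e.g. systemctl reload sshd (the service is named ssh rather than sshd on some distributions).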
{ "source": [ "https://unix.stackexchange.com/questions/82626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13003/" ] }
82,673
I noticed when running ifconfig that there is a network interface called tun0 and it has an ipv4 address. A bit of research shows that it is a tunneling device, but I don't really know how it's used, what's using it, and why it has an IP address. I do have iptables enabled, and there seems to be some link between iptables and tun, if that helps.
It's for tunneling software. See the Wikipedia article titled TUN/TAP for full details.

Excerpt from the FreeBSD tun man page: The tun interface is a software loopback mechanism that can be loosely described as the network interface analog of the pty(4), that is, tun does for network interfaces what the pty(4) driver does for terminals.

This socat documentation page does a good job of showing how they could be used. Excerpt from the socat doc: Some operating systems allow the generation of virtual network interfaces that do not connect to a wire but to a process that simulates the network. Often these devices are called TUN or TAP.

References Manual Reference Pages - TUN (4) Tun/Tap interface tutorial Less widely known features of iproute
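If you want to see which process owns a tun device on your own system, or experiment with one yourself, the iproute2 tools make this straightforward. A quick, hedged sketch (requires root; tun1 is just an illustrative name chosen so it doesn't clash with your existing tun0):

# Show details for the existing interface; VPN daemons such
# as OpenVPN are a common creator of tun devices
ip -details link show tun0
# Find the process holding the tun control device open
sudo lsof /dev/net/tun
# Create a throwaway tun device, configure it, then remove it
sudo ip tuntap add dev tun1 mode tun
sudo ip link set tun1 up
sudo ip addr add 10.99.0.1/24 dev tun1
sudo ip tuntap del dev tun1 mode tun

If tun0 disappears when you stop your VPN client, that confirms what was creating it.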
{ "source": [ "https://unix.stackexchange.com/questions/82673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21271/" ] }
82,724
I tried ps with different kinds of switches, e.g. -A, aux, ef, and so forth, but I cannot seem to find the right combination that will tell me the Process ID (PID), Parent Process ID (PPID), Process Group ID (PGID), and Session ID (SID) of a process in the same output.
Here you go:

$ ps xao pid,ppid,pgid,sid | head
  PID  PPID  PGID   SID
    1     0     1     1
    2     0     0     0
    3     2     0     0
    6     2     0     0
    7     2     0     0
   21     2     0     0
   22     2     0     0
   23     2     0     0
   24     2     0     0

If you want to see the process' name as well, use this:

$ ps xao pid,ppid,pgid,sid,comm | head
  PID  PPID  PGID   SID COMMAND
    1     0     1     1 init
    2     0     0     0 kthreadd
    3     2     0     0 ksoftirqd/0
    6     2     0     0 migration/0
    7     2     0     0 watchdog/0
   21     2     0     0 cpuset
   22     2     0     0 khelper
   23     2     0     0 kdevtmpfs
   24     2     0     0 netns
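If you only care about one particular process rather than the whole table, you can combine the same output format with a PID selector. A small sketch ($$ is just the shell's own PID, used here as a handy test subject; pgrep -o picks the oldest matching process):

# The four IDs for the current shell only
ps -o pid,ppid,pgid,sid -p $$
# The same, plus the name, for a process found by name
ps -o pid,ppid,pgid,sid,comm -p "$(pgrep -o sshd)"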
{ "source": [ "https://unix.stackexchange.com/questions/82724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15378/" ] }
82,856
I have a folder in which I have around 4k files. Some of these files start with a ? or ! character. I need to delete them but can't find an expression that would do so: rm -f ./?* just deletes everything. I could possibly use grep on ls, pipe it through xargs, and move the files to another folder, but I was hoping there was a proper way of doing this. I need help with both the ? and ! files.
No need for any fancy stuff. Simply escape the ? so that it's not considered part of the glob:

rm -f ./\?*

This works for ! too:

rm -f ./\!*

Or in one fell swoop:

rm -f ./{\?,\!}*

Update Just noticed that you were suggesting to grep the output of ls. I wanted to bring your attention to the fact that you shouldn't parse the output of ls.
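As an aside, if you'd rather avoid glob escaping altogether, or want to preview exactly what will be removed first, find can match the leading character with a bracket expression, inside which ? and ! lose their special meaning. A hedged sketch (-maxdepth 1 keeps it to the current directory and is supported by GNU and BSD find; drop it to descend into subdirectories):

# Dry run: list what would be deleted
find . -maxdepth 1 -name '[?!]*' -print
# Then actually delete
find . -maxdepth 1 -name '[?!]*' -delete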
{ "source": [ "https://unix.stackexchange.com/questions/82856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12264/" ] }
82,910
I am a graduate student and a relative Linux novice. This question is a sort of spin-off of my earlier question . My institution has a Ubuntu Linux cluster. I am just a user; I do not have sysadmin permissions, and I certainly do not have the expertise to be a sysadmin! My question is, how do I find my local mail spool? As far as I know, I am not using a mail transfer agent. Answers to my previous question suggested that I look for a file /var/spool/mail/$USER , but unfortunately, I do not see a file corresponding to my user name. In /var/spool/mail/ , using ls I only see two files: nobody and www-data , which are both extensionless files. Do you have any other ideas of where I can look for my mail spool (which is probably local, since I do not have a mail transfer agent configured, as far as I know)?
Mail spools are typically under here: /var/spool/mail/$USER where $USER is your username. For example, on my Fedora Linux system:

$ ls -l /var/spool/mail/
total 1908
-rw-------. 1 root root 1943163 Jul 13 12:00 root
-rw-rw----. 1 rpc  mail       0 Dec 18  2010 rpc
-rw-rw----. 1 saml mail     689 Jul 12 19:38 saml

Mail spools, however, are not necessarily local. If you do not have this file then your mail is being maintained on another server. You can interact with that server using protocols such as IMAP or POP3; it is on that system that your mail spool is being maintained.

In some environments the mail spool can be shared out as part of a user's home directory, often in a directory called mail ($HOME/mail). Other times it can be shared out as its own share under something like /mail/users/$USER. It really depends on how the sysadmins within a given environment choose to set things up.
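One quick way to find out where your system expects your spool to be, without guessing paths, is to ask the shell. A small sketch (the MAIL environment variable is set at login on most systems, but it may be absent if mail was never configured, so treat an empty result as a hint rather than proof):

# Where does the system think my mailbox lives?
echo "$MAIL"
# Check whether it exists and whether you can read it
ls -l "$MAIL" 2>/dev/null || echo "no local spool at $MAIL"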
{ "source": [ "https://unix.stackexchange.com/questions/82910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
82,919
What do the Linux interface names mean? eth0 eth1 wlan0 My current assumption is that when we are connected to the Internet via a LAN cable it's eth0 or eth1, and when we are connected to the Internet via WiFi it's wlan0.
Your assumption is correct. The names, however, can be set or chosen by the user or by the operating system you are using. eth0 and eth1 are used because they are more intuitive than arbitrary names: a "LAN cable" connection, as you said, is Ethernet (hence the eth in eth0, eth1). Similarly, when you connect via WiFi, it's a wireless LAN (hence the wlan in wlan0).
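To see which names your own system has assigned, and to rename an interface if you ever need to, the iproute2 tools cover both. A hedged sketch (renaming requires root and the interface must be down first; lan0 is just an illustrative new name, and note that many modern distributions assign "predictable" names such as enp3s0 instead of eth0):

# List all interfaces with their current names and states
ip link show
# Rename eth1 to lan0 (only do this on an interface you
# are not actively using)
sudo ip link set eth1 down
sudo ip link set eth1 name lan0
sudo ip link set lan0 up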
{ "source": [ "https://unix.stackexchange.com/questions/82919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40801/" ] }