168,838
Since Android is based on Linux (and I understand is only a Java layer on top of Linux), I wonder why Linux does not generally run Android applications. Why is an Android compatibility layer, either with its own desktop or within X, not a standard feature of modern Linux distributions?
Android is based on the Linux kernel. That, and a very stripped-down BusyBox. All the rest of GNU/X11/Apache/Linux/TeX/Perl/Python/FreeCiv is not present on Android. Asking why Linux doesn't emulate Android is like asking why trucks don't emulate airplanes: after all, they're both big vehicles with wheels at the bottom. Most Android applications are specifically designed to handle the limitations of a portable device: limited computing resources, paramount energy consumption, a small screen, no external input device. There are usually similar applications for PC-style computers, except for location-related applications, which are generally not useful outside of a mobile device. You can run Android applications in the emulator provided by Google. This is a developer tool, because the main use of running Android applications on a PC-style computer is to test them. There is some work on systems that combine Linux with Android (such as Ubuntu for Android, though it's been abandoned), mainly running on intermediate-format devices (tablets), but also on smaller devices (phones), to allow users of mobile devices to run existing applications from the larger-format world. Since the two operating systems have mostly compatible kernels, it's possible to run the rest of each operating system side by side (that's easier than rewriting the Android libraries to work on top of Linux/X11 or vice versa). There are significant technical difficulties, however. Probably the biggest one is that the GUIs are built on completely different software: Linux uses the X Window System like other Unix variants, while Android has its own stack.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9839/" ] }
168,861
My Java-based installer couldn't copy itself to the target dir, eating all disk space, and I thought it was a Java bug, but then I reproduced it with plain dd. When I try to read my own executable using a large enough buffer (131072), the read operation suddenly returns a value greater than the actual file size and never returns EOF:

DD=/media/distr/dd
/bin/cp /bin/dd $DD
$DD bs=131071 if=$DD of=/dev/null count=20
0+1 records in
0+1 records out
55256 bytes (55 kB) copied, 0.00211071 s, 26.2 MB/s
$DD bs=131072 if=$DD of=/dev/null count=20
20+0 records in
20+0 records out
2621440 bytes (2.6 MB) copied, 0.0194318 s, 135 MB/s

It only happens when both the samba server and the cifs mount run on Oracle Linux 6.6 with samba 3.6.23-12.0.1.el6. What is it? A kernel, cifs or samba bug?

strace of dd:

[root@ec-stage-db-2 ~]# strace $DD bs=131072 if=$DD of=/dev/null count=20
execve("/mnt/dd", ["/mnt/dd", "bs=131072", "if=/mnt/dd", "of=/dev/null", "count=20"], [/* 32 vars */]) = 0
brk(0) = 0x13e8000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcc172ee000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=79239, ...}) = 0
mmap(NULL, 79239, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcc172da000
close(3) = 0
open("/lib64/librt.so.1", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@!\3002=\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=47104, ...}) = 0
mmap(0x3d32c00000, 2128816, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3d32c00000
mprotect(0x3d32c07000, 2093056, PROT_NONE) = 0
mmap(0x3d32e06000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x3d32e06000
close(3) = 0
open("/lib64/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\356\3011=\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1926760, ...}) = 0
mmap(0x3d31c00000, 3750152, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3d31c00000
mprotect(0x3d31d8a000, 2097152, PROT_NONE) = 0
mmap(0x3d31f8a000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18a000) = 0x3d31f8a000
mmap(0x3d31f8f000, 18696, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3d31f8f000
close(3) = 0
open("/lib64/libpthread.so.0", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340]\0002=\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=145896, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcc172d9000
mmap(0x3d32000000, 2212848, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3d32000000
mprotect(0x3d32017000, 2097152, PROT_NONE) = 0
mmap(0x3d32217000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x3d32217000
mmap(0x3d32219000, 13296, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3d32219000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcc172d8000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcc172d7000
arch_prctl(ARCH_SET_FS, 0x7fcc172d8700) = 0
mprotect(0x3d32e06000, 4096, PROT_READ) = 0
mprotect(0x3d31f8a000, 16384, PROT_READ) = 0
mprotect(0x3d32217000, 4096, PROT_READ) = 0
mprotect(0x3d3161f000, 4096, PROT_READ) = 0
munmap(0x7fcc172da000, 79239) = 0
set_tid_address(0x7fcc172d89d0) = 39608
set_robust_list(0x7fcc172d89e0, 0x18) = 0
futex(0x7fffa32a778c, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7fffa32a778c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 7fcc172d8700) = -1 EAGAIN (Resource temporarily unavailable)
rt_sigaction(SIGRTMIN, {0x3d32005c60, [], SA_RESTORER|SA_SIGINFO, 0x3d3200f710}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {0x3d32005cf0, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x3d3200f710}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0
rt_sigaction(SIGUSR1, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGINT, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGUSR1, {0x401be0, [INT USR1], SA_RESTORER, 0x3d31c326a0}, NULL, 8) = 0
rt_sigaction(SIGINT, {0x401bd0, [INT USR1], SA_RESTORER|SA_NODEFER|SA_RESETHAND, 0x3d31c326a0}, NULL, 8) = 0
brk(0) = 0x13e8000
brk(0x1409000) = 0x1409000
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=99158576, ...}) = 0
mmap(NULL, 99158576, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcc11446000
close(3) = 0
open("/mnt/dd", O_RDONLY) = 3
dup2(3, 0) = 0
close(3) = 0
lseek(0, 0, SEEK_CUR) = 0
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
dup2(3, 1) = 1
close(3) = 0
mmap(NULL, 143360, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcc11423000
read(0, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\2\0>\0\1\0\0\0\340\32@\0\0\0\0\0"..., 131072) = 131072
write(1, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\2\0>\0\1\0\0\0\340\32@\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
close(0) = 0
close(1) = 0
open("/usr/share/locale/locale.alias", O_RDONLY) = 0
fstat(0, {st_mode=S_IFREG|0644, st_size=2512, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcc172ed000
read(0, "# Locale name alias data base.\n#"..., 4096) = 2512
read(0, "", 4096) = 0
close(0) = 0
munmap(0x7fcc172ed000, 4096) = 0
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = 0
fstat(0, {st_mode=S_IFREG|0644, st_size=435, ...}) = 0
mmap(NULL, 435, PROT_READ, MAP_PRIVATE, 0, 0) = 0x7fcc172ed000
close(0) = 0
write(2, "20+0 records in\n20+0 records out"..., 33) = 33
write(2, "2621440 bytes (2.6 MB) copied", 29) = 29
write(2, ", 0.0245485 s, 107 MB/s\n", 24) = 24
close(2) = 0
exit_group(0) = ?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73160/" ] }
168,862
Why does this bash script

ssh $SERVER bash <<EOF
sed -i "s/database_name: [^ ]*/database_name: kartable_$ME" $PARAM_FILE
exit
EOF

output the following?

sed: -e expression #1, char 53: unterminated `s' command
The s command in sed uses a specific syntax:

s/AAAA/BBBB/options

where s is the substitution command, AAAA is the regex you want to replace, BBBB is the replacement, and options is any of the substitution command's options, such as global (g) or ignore case (i). In your specific case, you were missing the final slash /; if you add it, sed will work just fine:

➜ ~ sed 's/database_name: [^ ]*/database_name: kartable_$ME/'
database_name: something
database_name: kartable_$ME

info sed 'The "s" Command' includes the full description and usage of the s command.
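The missing-slash failure and the fix can be reproduced with a throwaway file; the file name and replacement value below are made up for the demo.

```shell
# Scratch file standing in for the real parameters file (name is made up):
printf 'database_name: something\n' > /tmp/params.yml

# Missing the third slash: sed rejects the expression
# (GNU sed reports: unterminated `s' command).
sed 's/database_name: [^ ]*/database_name: kartable_demo' /tmp/params.yml || true

# With the closing slash the substitution succeeds:
sed 's/database_name: [^ ]*/database_name: kartable_demo/' /tmp/params.yml
```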
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/168862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92104/" ] }
168,866
From time to time I need to do a simple task where I output basic HTML into the console. I'd like to have it minimally rendered, to make it easier to read at a glance. Is there a utility which can handle basic HTML rendering in the shell (think of Lynx-style rendering, but not an actual browser)? For example, sometimes I'll put a watch on Apache's mod_status page:

watch -n 1 curl http://some-server/server-status

The output of the page is HTML with some minimal markup, which shows in the shell as one unbroken stream like: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html><head><title>Apache Status</title></head><body><h1>Apache Server Status for localhost</h1><dl><dt>Server Version: Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.15 with Suhosin-Patch</dt><dt>Server Built: Jul 22 2014 14:35:25</dt></dl><hr /><dl><dt>Current Time: Wednesday, 19-Nov-2014 15:21:40 UTC</dt><dt>Restart Time: Wednesday, 19-Nov-2014 15:13:02 UTC</dt><dt>Parent Server Generation: 1</dt><dt>Server uptime: 8 minutes 38 seconds</dt><dt>Total accesses: 549 - Total Traffic: 2.8 MB</dt><dt>CPU Usage: u35.77 s12.76 cu0 cs0 - 9.37% CPU load</dt><dt>1.06 requests/sec - 5.6 kB/second - 5.3 kB/request</dt><dt>1 requests currently being processed, 9 idle workers</dt></dl><pre>__W._______.....................................................................................................................................................................................................................................................</pre><p>Scoreboard Key:<br />"<b><code>_</code></b>" Waiting for Connection,"<b><code>S</code></b>" Starting up,"<b><code>R</code></b>" Reading Request,<br />"<b><code>W</code></b>" Sending Reply,"<b><code>K</code></b>" Keepalive (read),"<b><code>D</code></b>" DNS Lookup,<br />"<b><code>C</code></b>" Closing connection,"<b><code>L</code></b>" Logging,"<b><code>G</code></b>" Gracefully finishing,<br />"<b><code>I</code></b>" Idle cleanup of worker,"<b><code>.</code></b>" Open slot with no
current process</p><p />

When viewed in Lynx the same HTML is rendered as:

Apache Status (p1 of 2)

Apache Server Status for localhost

Server Version: Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.15 with Suhosin-Patch
Server Built: Jul 22 2014 14:35:25
________________________________________________________________________________________________________

Current Time: Wednesday, 19-Nov-2014 15:23:50 UTC
Restart Time: Wednesday, 19-Nov-2014 15:13:02 UTC
Parent Server Generation: 1
Server uptime: 10 minutes 48 seconds
Total accesses: 606 - Total Traffic: 3.1 MB
CPU Usage: u37.48 s13.6 cu0 cs0 - 7.88% CPU load
.935 requests/sec - 5088 B/second - 5.3 kB/request
2 requests currently being processed, 9 idle workers

_C_______W_.....................................................................................................................................................................................................................................................

Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process
lynx has a "dump" mode, which you can use with watch:

$ watch lynx https://www.google.com -dump

From man lynx:

-dump dumps the formatted output of the default document or those specified on the command line to standard output. Unlike interactive mode, all documents are processed. This can be used in the following way:

lynx -dump http://www.subir.com/lynx.html

Files specified on the command line are formatted as HTML if their names end with one of the standard web suffixes such as ".htm" or ".html". Use the -force_html option to format files whose names do not follow this convention.

This Ask Ubuntu question has many more options.
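If lynx happens to be unavailable, a very rough, hedged fallback for simple status pages is to strip the tags with sed. This does none of lynx's real layout work (links, tables, line wrapping); it only makes minimal markup readable at a glance.

```shell
# Sample input stands in for the curl output; each tag becomes a space,
# then runs of spaces are collapsed to one:
printf '<h1>Apache Server Status</h1><dl><dt>Server uptime: 8 minutes</dt></dl>' \
  | sed -e 's/<[^>]*>/ /g' -e 's/  */ /g'
```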
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/168866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26343/" ] }
168,939
If I double click on a Python file on my Gnome Classic desktop, it is not opened in IDLE. How can I make IDLE the default editor for Python files? I have tried removing all the other programs that were being used to open it. I have looked through IDLE's properties, and through the file's properties as well. When I run

xdg-mime query default text/x-python

I get the output gedit.desktop. I do have IDLE installed, and I tried right-clicking the file and opening it with an application; IDLE was nowhere to be seen. But when I go to my applications, under programming, IDLE is there, clear as day.
You can easily associate .py files with IDLE. Right-click a Python script, choose "Open with -> Other application", then choose IDLE from the list with the checkbox ticked that asks whether it should be the default application. I use XFCE, but Gnome's Nautilus should be similar.

Alternatively, there is a configuration file in your home directory which contains the list of default applications:

~/.local/share/applications/mimeapps.list

You can extend it with the following line:

text/x-python=idle.desktop

where idle.desktop stands for the desktop file of the IDLE version you would like to use, for instance idle-python2.7.desktop or idle-python3.desktop, based on the Python version you have to use.

NOTE: Be aware of the permissions of your scripts. If a script has the executable permission, your system won't open it in IDLE regardless of the default application, as it will try to execute it.
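As a sketch, the edit can be scripted; the block below works on a throwaway copy so nothing real is touched, and idle-python3.desktop is an assumed desktop-file name (check /usr/share/applications for the one your distribution actually ships).

```shell
# Throwaway stand-in for ~/.local/share/applications/mimeapps.list:
f=/tmp/mimeapps.list
printf '[Default Applications]\ntext/plain=gedit.desktop\n' > "$f"

# Append the association only if one isn't present yet
# ("idle-python3.desktop" is an assumed name):
grep -q '^text/x-python=' "$f" || echo 'text/x-python=idle-python3.desktop' >> "$f"
grep '^text/x-python=' "$f"
```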
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92149/" ] }
168,975
Our web server was originally maintained by a different company. They wrote a short script to clean out cache files. Their file has several lines that read like this: /usr/bin/find /var/www/cache/blah/ |xargs /bin/rm -f >/dev/null 2>&1 Is there any reason why they couldn't just write: /bin/rm -f /var/www/cache/blah/* to delete the files? I can see using find when you need a particular criteria, but I can't seem to find one in this case.
There are a few differences in the behavior of the command lines:

- The find command line would delete files recursively in subdirectories; the rm command line wouldn't. You need to consider whether or not you want to recurse.
- The find command line would delete all files, if possible. The rm command line might skip files based on the shell's settings, like GLOBIGNORE. You need to consider whether there might be filenames that would be accidentally ignored in pathname expansion.
- The find command line would succeed for any number of files. The rm command line might fail if pathname expansion creates a command line that is longer than the system supports; some systems have limits on this. You need to consider how many files might need to be deleted.
- The find command line discards all output messages (via the redirections to /dev/null). The rm command line prints its output messages. You need to consider what should happen with the messages.

If these differences do not matter to you, /bin/rm -f /var/www/cache/blah/* will work for you. If only files are to be removed, and directories are to be retained, I would actually use

/usr/bin/find /var/www/cache/blah/ -not -type d -exec /bin/rm -f -- {} + >/dev/null 2>&1

or

/usr/bin/find /var/www/cache/blah/ -type f -exec /bin/rm -f -- {} + >/dev/null 2>&1

whichever suits your purpose; both have pros and cons. -exec command {} + works just like xargs, but is slightly more efficient. The -- prevents rm from falling over if one of the filenames starts with -. Also, xargs was used in a way that makes too many assumptions about filenames: special characters like spaces would break the simple invocation of xargs. Something like find ... -print0 | xargs -0 would be required, and then find ... -exec command {} + is just simpler.
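The recursion difference is easy to demonstrate in a scratch tree (all paths below are throwaway):

```shell
mkdir -p /tmp/cache-demo/sub
touch /tmp/cache-demo/top.txt /tmp/cache-demo/sub/deep.txt

# find reaches deep.txt inside the subdirectory while keeping the
# directories themselves; a top-level glob would miss sub/deep.txt:
find /tmp/cache-demo -type f -exec rm -f -- {} +

ls -R /tmp/cache-demo    # both files gone, directory "sub" still present
```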
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43753/" ] }
168,986
I have a string tstArr2 which has the following content:

'3 5 8'

Now in awk I want to parse a flat file

test my array which array is better array
INDIA USA SA NZ AUS ARG GER BRA
US AUS INDIA ENG NZ SRI PAK WI BAN NED IRE

at these numbered columns only. I tried the following:

awk -vA="$tstArr2" 'BEGIN{split(A,B," ");} {if(NR>1){for(i=1; i<= length(B); i++){printf "%s ",B[i]}}print " "}' testUnix3.txt

But it says:

awk: Cannot read the value of B. It is an array name.
 The input line number is 2. The file is testUnix3.txt.
 The source line number is 1.

What am I missing? If I try the following

awk -vA="$tstArr2" 'BEGIN{split(A,B," ");} {if(NR>1){for(i in B){printf "%s ",$B[i]}}print " "}' testUnix3.txt

it prints the outputs, but they are not in order. I want them to be in order. Please explain. Desired output:

SA AUS BRA
INDIA NZ WI
In POSIX awk, length is a string function: its argument is taken as a string, and using length with an array as argument is unspecified behavior. Some implementations of awk, like gawk (version >= 3.1.6) or the OS X version of awk, do accept an array argument to length and return the number of elements in the array.

Arrays in awk are associative, and looping through an associative array does not guarantee anything about the order. In this case, you can take advantage of the split function, which returns the number of fields, to get the number of elements of the array. POSIXly, you can try:

$ awk -vA="$tstArr2" '
BEGIN{n = split(A,B," ");}
{
  if(NR > 1) {
    for(i = 1;i <= n;i++) {
      printf "%s ",$B[i];
    }
  }
  print " ";
}' file
SA AUS BRA
INDIA NZ WI
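Run against the question's sample data, the split-count idiom looks like this (a lightly reformatted version of the command above):

```shell
# Sample file from the question:
cat > /tmp/testUnix3.txt <<'EOF'
test my array which array is better array
INDIA USA SA NZ AUS ARG GER BRA
US AUS INDIA ENG NZ SRI PAK WI BAN NED IRE
EOF

tstArr2='3 5 8'
awk -v A="$tstArr2" '
BEGIN { n = split(A, B, " ") }   # n = number of requested columns
NR > 1 {
    for (i = 1; i <= n; i++)
        printf "%s ", $B[i]      # $B[i] is the field whose number is B[i]
    print ""
}' /tmp/testUnix3.txt
```

This prints SA AUS BRA and INDIA NZ WI on separate lines, in the requested column order.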
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91079/" ] }
169,020
I have the following lines of bash script to test whether a file exists:

MYAPPPATH=$(find $APPDIR -iname myapp* -print)
if [ -e $MYAPPPATH ]
then
    echo "File exists"
fi

However, when a file starting with "myapp" does not exist, and therefore MYAPPPATH='', the above check still succeeds. See what happens when using set -x and the file does not exist:

++ find /path/to/app -iname 'myapp*' -print
+ MYAPPPATH=
+ '[' -e ']'
+ echo 'File exists'

What is the reason for this? What do I need to do in order to make this work as expected?
When your variable is empty, your command becomes:

[ -e ]

In this case, you call [ ... ] with one argument, -e. The string "-e" is not null, so test returns true. This behavior is defined by POSIX test:

In the following list, $1, $2, $3, and $4 represent the arguments presented to test:

0 arguments: Exit false (1).
1 argument: Exit true (0) if $1 is not null; otherwise, exit false.
....

To make it work, you must double-quote your variable:

[ -e "$MYAPPPATH" ]

This works because -e with an argument that is an empty string is false.
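A minimal demonstration of both cases:

```shell
MYAPPPATH=    # deliberately empty

# Unquoted, the empty word disappears and the test collapses to `[ -e ]`,
# a one-argument test on the non-null string "-e", which is true:
if [ -e $MYAPPPATH ]; then echo 'unquoted: claims the file exists'; fi

# Quoted, -e receives an empty-string operand, which is false:
if [ -e "$MYAPPPATH" ]; then :; else echo 'quoted: correctly reports missing'; fi
```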
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169020", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27308/" ] }
169,039
I have a problem with locale and I can't find any solution that works! Every tutorial is similar to this one: Perl warning Setting locale failed in Debian. This is the problem with the locale:

pi@server [~]:$ sudo deluser --remove-home cm22
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_GB.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Looking for files to backup/remove ...
Removing user `cm22' ...
Warning: group `cm22' has no more members.
Done.

How can I resolve it?
Debian saves network bandwidth by shipping locale definitions in a form that isn't directly usable, where information that is shared between locales (e.g. en_US and en_CA are very similar) is stored in a single file. Usable locale definitions must be generated on each machine. To save CPU time and disk space, only locales requested by the system administrator are generated. Run the following command as root to configure the set of locales to generate:

dpkg-reconfigure locales

Alternatively, edit the file /etc/locale.gen and uncomment the lines corresponding to the locales you want (lines beginning with # are comment lines). For example, if you want the en_GB.UTF-8 locale, you need to have a line containing

en_GB.UTF-8 UTF-8

Once you've edited /etc/locale.gen, run locale-gen to regenerate the locale definitions.

The value UTF-8 that you've set for LC_CTYPE is invalid. You need to use a valid locale name, e.g. LC_CTYPE=en_GB.UTF-8. You can leave LC_CTYPE unset: it'll default to the value of LANG. Though I'd rather recommend leaving LANG unset and setting LC_CTYPE=en_GB.UTF-8 and LC_TIME=en_GB.UTF-8 (LC_MESSAGES effectively defaults to English; if you were using another language then you should set it as well).
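The /etc/locale.gen edit can be scripted; this sketch operates on a scratch copy (on the real system you would edit /etc/locale.gen as root and then run locale-gen):

```shell
f=/tmp/locale.gen
printf '# en_GB.UTF-8 UTF-8\n# en_US.UTF-8 UTF-8\n' > "$f"   # stand-in content

# Uncomment just the locale we want:
sed -i 's/^# *\(en_GB\.UTF-8 UTF-8\)/\1/' "$f"
grep '^en_GB' "$f"

# locale-gen          # then regenerate the definitions (root required)
```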
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89975/" ] }
169,054
I'm tailing a log file using tail -f messages.log and this is part of the output:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce eget tellus sit amet odio porttitor rhoncus. Donec consequat diam sit amet tellus viverra pellentesque.
tail: messages.log: file truncated
Suspendisse at risus id neque pharetra finibus in facilisis ipsum.

It shows tail: messages.log: file truncated when the file gets truncated automatically, and that's supposed to happen, but I just want tail to show me the output without this truncate message. I've tried using

tail -f messages.log | grep -v truncated

but it shows me the message anyway. Is there any method to suppress this message?
That message is output on stderr like all warning and error messages. You can either drop all the error output: tail -f file 2> /dev/null Or to filter out only the error messages that contain truncate : { tail -f file 2>&1 >&3 3>&- | grep -v truncated >&2 3>&-;} 3>&1 That means however that you lose the exit status of tail . A few shells have a pipefail option (enabled with set -o pipefail ) for that pipeline to report the exit status of tail if it fails. zsh and bash can also report the status of individual components of the pipeline in their $pipestatus / $PIPESTATUS array. With zsh or bash , you can use: tail -f file 2> >(grep -v truncated >&2) But beware that the grep command is not waited for, so the error messages if any may end up being displayed after tail exits and the shell has already started running the next command in the script. In zsh , you can address that by writing it: { tail -f file; } 2> >(grep -v truncated >&2) That is discussed in the zsh documentation at info zsh 'Process Substitution' : There is an additional problem with >(PROCESS) ; when this is attached to an external command, the parent shell does not wait for PROCESS to finish and hence an immediately following command cannot rely on the results being complete. The problem and solution are the same as described in the section MULTIOS in note Redirection:: . Hence in a simplified version of the example above: paste <(cut -f1 FILE1) <(cut -f3 FILE2) > >(PROCESS) (note that no MULTIOS are involved), PROCESS will be run asynchronously as far as the parent shell is concerned. The workaround is: { paste <(cut -f1 FILE1) <(cut -f3 FILE2) } > >(PROCESS) The extra processes here are spawned from the parent shell which will wait for their completion.
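The stream separation can be demonstrated without a log file actually being truncated; the function below is a stand-in for tail that writes data to stdout and a mix of noise and real errors to stderr (all names here are illustrative):

```shell
demo() {
    echo 'data-line'
    echo 'demo: file truncated' >&2
    echo 'demo: real error' >&2
}

# Drop stderr entirely:
demo 2>/dev/null

# Filter only the "truncated" noise out of stderr, leaving stdout and
# the other error messages untouched:
{ demo 2>&1 >&3 3>&- | grep -v truncated >&2 3>&-; } 3>&1
```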
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169054", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60445/" ] }
169,064
I have a disk with classic MBR and want to transform it to use GPT without data loss. I have seen several more or less useful tutorials, but most of them are dealing with the specific problems related to GRUB, the operating systems and multiple partitions on a disk. In my case, the situation is much simpler - I have a simple disk used to store data on a single partition. I discovered that simply running gdisk and pressing w writes GPT to the disk and I can mount and use it without issues afterwards. I am worried about data corruption though, gdisk warns me that the operation I'm about to perform is potentially destructive, and I've seen diagrams on which GPT occupies some space which is normally used by the first partition. So my questions are: Is this a good way to transform MBR to GPT? Can GPT overwrite some data which was originally on the primary partition, thus corrupting my files or the filesystem?
I created an MBR disk with one partition, filled every single byte on that partition with data, created a SHA1 checksum of the whole partition, converted it to GPT as described in the question, created yet another checksum and compared it with the original. They were the same. So my conclusion is this: You can safely convert a disk to GPT without corrupting the data. Warning: This does not mean the procedure is safe. It might corrupt your partitions. Always make a backup before converting using this approach.
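The verification procedure itself can be sketched as follows; this runs against a scratch image file, so substitute the real partition device (e.g. /dev/sdXN) on actual hardware, and only after taking a backup:

```shell
img=/tmp/part-demo.img
dd if=/dev/urandom of="$img" bs=1024 count=64 2>/dev/null   # stand-in "partition"

before=$(sha1sum "$img" | cut -d' ' -f1)
# ... on real hardware: run gdisk on the disk here and write the GPT ...
after=$(sha1sum "$img" | cut -d' ' -f1)

if [ "$before" = "$after" ]; then
    echo 'checksums match: partition data intact'
fi
```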
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21790/" ] }
169,067
I have two CentOS 7 VMs running in VirtualBox. On each of the machines I want to set the hostname and a static IP address. VM1 works just fine. VM2 does not. I did the same thing on both servers so I'm not sure why VM2 is having issues. It shows as localhost.localdomain and I can't get it to read the new correct hostname. Here is what I've done:

Modified the /etc/sysconfig/network file as follows:

    NETWORKING=yes
    HOSTNAME=newhost.newdomain

Modified the /etc/resolv.conf file as follows:

    nameserver 8.8.8.8

Modified the /etc/sysconfig/network-scripts/ifcfg-enp0s3 file as follows:

    HWADDR=#
    TYPE=Ethernet
    BOOTPROTO=static
    DEFROUTE=yes
    NAME=enp0s3
    UUID=#
    ONBOOT=yes
    IPADDR=192.168.10.1
    NETMASK=255.255.255.0
    NM_CONTROLLER=no
    GATEWAY=192.168.10.100

The interface works and the IP is assigned as specified. The only thing that does not work is the hostname. I can change it temporarily by using the hostname {newname} command, but that is only a temporary fix as it reverts on reboot. All of this is the same as on VM1 (except for the IP address assigned) and VM1 works fine. I'm not concerned with the hosts file at the moment since I'm not worried about name resolution; I'm just concerned with the hostname. Any thoughts or suggestions?
Try setting the host name in /etc/hostname. From the hostname man page on my CentOS 7 machine:

    The host name is usually set once at system startup (normally by reading
    the contents of a file which contains the host name, e.g. /etc/hostname).
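For completeness, CentOS 7 ships systemd's hostnamectl, which rewrites /etc/hostname and updates the live value in one step (run as root on the VM; the hostname below is the one from the question):

```shell
hostnamectl set-hostname newhost.newdomain
cat /etc/hostname    # should now contain: newhost.newdomain
hostname             # live value; persists across reboots
```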
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87721/" ] }
169,079
Variants of this question have certainly been asked several times in different places, but I am trying to remove the last M lines from a file without luck. The second most voted answer in this question recommends doing the following to get rid of the last line in a file:

    head -n -1 foo.txt > temp.txt

However, when I try that in OS X & zsh, I get:

    head: illegal line count -- -1

Why is that? How can I remove the M last lines and the first N lines of a given file?
You can remove the first 12 lines with:

    tail -n +13

(That means print from the 13th line.)

Some implementations of head like GNU head support:

    head -n -12

but that's not standard.

    tail -r file | tail -n +13 | tail -r

would work on those systems that have tail -r (see also GNU tac) but is sub-optimal.

Where n is 1:

    sed '$d' file

You can also do:

    sed '$d' file | sed '$d'

to remove 2 lines, but that's not optimal. You can do:

    sed -ne :1 -e 'N;1,12b1' -e 'P;D'

But beware that won't work with large values of n with some sed implementations.

With awk:

    awk -v n=12 'NR>n{print line[NR%n]};{line[NR%n]=$0}'

To remove m lines from the beginning and n from the end:

    awk -v m=6 -v n=12 'NR<=m{next};NR>n+m{print line[NR%n]};{line[NR%n]=$0}'
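A quick check of the combined awk one-liner on a 10-line input: removing the first m=2 and last n=3 lines should leave lines 3 through 7 (the ring buffer line[NR%n] delays each line by n records before printing).

```shell
# Drop the first 2 and last 3 lines of the numbers 1..10.
tmp=$(mktemp)
seq 1 10 > "$tmp"
result=$(awk -v m=2 -v n=3 'NR<=m{next};NR>n+m{print line[NR%n]};{line[NR%n]=$0}' "$tmp")
rm -f "$tmp"
echo "$result"
```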
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
169,098
How can I print a range of IP addresses on the Linux command line using the seq command? For example, I need seq to print the range of IPs from 10.0.0.1 to 10.0.0.23. It seems like the periods between the octets cause the number to behave like a floating point: I am getting an "invalid floating point argument" error. I tried using the -f option; maybe I am not using it correctly, but it still gave me an error. I am trying to do something similar to

    seq 10.0.0.2 10.0.0.23

Is there another way to print IP addresses in a range on Linux other than switching over to Excel?
Use a format:

    $ seq -f "10.20.30.%g" 40 50
    10.20.30.40
    10.20.30.41
    10.20.30.42
    10.20.30.43
    10.20.30.44
    10.20.30.45
    10.20.30.46
    10.20.30.47
    10.20.30.48
    10.20.30.49
    10.20.30.50

Unfortunately this is non-obvious as GNU doesn't like to write man pages.
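Applied to the question's exact range (in bash you could alternatively use brace expansion, printf '%s\n' 10.0.0.{2..23}, with no seq at all):

```shell
# GNU seq with a format string generates the question's range directly.
ips=$(seq -f "10.0.0.%g" 2 23)
first=$(printf '%s\n' "$ips" | head -n 1)
last=$(printf '%s\n' "$ips" | tail -n 1)
count=$(printf '%s\n' "$ips" | wc -l)
```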
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169098", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92605/" ] }
169,126
Is there a shorter way of writing this? Basically, output a command to a file, then use the file as input for the next command. I also want to keep the file to view afterwards.

    cmd1 > verylong.txt; cmd2 < verylong.txt

I know I can do

    cmd1 | tee verylong.txt | cmd2

But since I expect verylong.txt to be a huge file, I thought it would be less efficient to use a pipe since that would hold the entire file in memory, whereas if I use file input then it would process it one line at a time. (Or is my assumption wrong?) It would be great if I could do something elegant like

    cmd1 > verylong.txt > cmd2
As far as I know, cmd1 | tee verylong.txt | cmd2 will not hold the whole file in memory. In fact, if cmd2 was to wait too long before consuming its input, cmd1 might block on a write call and unblock only when cmd2 starts reading again. The reason for that is that there is a buffer for the pipe, and that buffer, by default, is limited to a certain reasonable size . Of course, the story might be different if cmd2 is sort (or something alike) where the entire input must be read before the command is able to write its output. In that case, the entire file content might be held in cmd2 memory, but that is independent of whether a pipe or an intermediary file was used for the input of that command.
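A quick way to convince yourself that tee streams rather than accumulating the whole input: both the saved copy and the downstream consumer see every line (the line count here is arbitrary).

```shell
# cmd1 | tee file | cmd2, with seq and wc standing in for cmd1 and cmd2.
tmp=$(mktemp)
count=$(seq 1 100000 | tee "$tmp" | wc -l)   # lines seen by the consumer
saved=$(wc -l < "$tmp")                      # lines kept in the file
rm -f "$tmp"
```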
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54444/" ] }
169,178
I have been trying to print out the base file names using find in Unix. To simply print out the file names, the command I have been using is this:

    find . -type f -name \*.out -print

It prints the fully qualified path names, but I want the base file names only. That is why I have been trying the following command:

    find . -type f -name \*.out -exec basename {}

But it errors out and displays:

    find: incomplete statement

Please help me through.
You are missing the ; character that terminates the primary expression (see POSIX find):

    find . -type f -name \*.out -exec basename {} ';'

The reason you must escape or quote ; is that it is your shell's list separator; you must make your shell treat it literally. \; , ';' and ";" all work well.

But this solution calls basename once for each file found, making it slow. If file names don't contain newlines, you can:

    find . -type f -name '*.out' | sed -e 's#.*/##'

If you have GNU coreutils version >= 8.16, or you are on OSX, you can use basename -a:

    find . -type f -name '*.out' -exec basename -a -- {} +
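A minimal sanity check of the sed variant on a throwaway tree (the directory and file names here are made up):

```shell
# Build a small tree, then strip everything up to the last slash.
d=$(mktemp -d)
mkdir -p "$d/sub"
touch "$d/a.out" "$d/sub/b.out" "$d/skip.txt"
names=$(find "$d" -type f -name '*.out' | sed -e 's#.*/##' | sort)
rm -rf "$d"
echo "$names"
```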
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169178", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91079/" ] }
169,186
I've just lost a small part of my audio collection, by a stupid mistake I made. :-( GLADLY I had a fairly recent backup, but it was still irritating. Apart from yours truly, the other culprit doing the mischief was mv, as follows. The audio files had a certain scheme:

    ARTIST - Some Title YY.mp3

where YY is the 2-digit year specification.

    mkdir 90<invisible control character>

(Up to this moment, I did not know that I had actually typed an excess third character which was invisible...!) Instead of having all in one directory, I wanted to have all 1990s music in one directory. So I typed:

    find . -name '* 9?.mp3' -exec mv {} 90 \;

Not so hard to get the idea what happened, eh? :-> The (disastrous) result was a virgin empty directory called '90something' (with something being the "invisible" control character) and one single file called '90', overwritten n times. ALL FILES WERE GONE. :-(( (obviously)

Wish mv would've checked in time whether the signature of the destination "file" (remember, on *NIX Everything Is A File) starts with a d------ (e.g. drwxr-xr-x). And, of course, whether the destination exists at all. There is a variant of the aforementioned scenario, where you simply forgot to mkdir the directory first (but of course, you assumed that it's there...). Even our pet-hate OS starting with the capital W DOES DO THIS. You even get prompted to specify the type of destination (file? directory?) if you ask for it. Hence, I'm wondering if we *NIXers still have to write ourselves a "mv scriptlet" just to avoid these kinds of most unwanted surprises.
You can append a / to the destination if you want to move files to a directory. In case the directory does not exist you'll receive an error:

    mv somefile somedir/
    mv: cannot move ‘somefile’ to ‘somedir/’: Not a directory

In case the directory exists, it moves the file into that directory.
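A small demonstration of the trailing-slash behaviour in a scratch directory (the names are made up):

```shell
# With a trailing slash, mv refuses to create a plain file named "somedir/".
d=$(mktemp -d); cd "$d"
touch somefile
if mv somefile somedir/ 2>/dev/null; then
  missing_dir=unexpected          # would mean mv clobbered our intent
else
  missing_dir=failed              # expected: somedir does not exist yet
fi
mkdir somedir
mv somefile somedir/              # now the move succeeds into the directory
moved=$(ls somedir)
```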
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22858/" ] }
169,200
I have file1:

    "$lvl=57"
    "$lvl=20"
    "$lvl=48"
    "$lvl=17"
    "$lvl=58"

File2 I want:

    "$lvl=17"
    "$lvl=20"
    "$lvl=48"
    "$lvl=57"
    "$lvl=58"

Basically a numeric sort of file1.
I like the -V / --version-sort option found in a few implementations of sort (such as GNU sort): it behaves very well in many situations mixing strings and numbers.

    sort -V

I use this option very often. In the same vein, some implementations of ls support ls -v for a version-sorted listing (from GNU ls).
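On the question's sample data, GNU sort -V compares the embedded digit runs numerically; a quick check:

```shell
# Version sort orders by the number after "$lvl=", not byte-by-byte.
sorted=$(printf '"$lvl=57"\n"$lvl=20"\n"$lvl=48"\n"$lvl=17"\n"$lvl=58"\n' | sort -V)
echo "$sorted"
```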
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80576/" ] }
169,207
I have input:

    NISHA =\455

Output I want:

    NISHA = 455

I want to remove \ from the output. I tried to use the command

    sed "s/[\]//g" P

but it is not working and it flags an error:

    character found after backslash is not meaningful
You can either replace the backslash by a space as you show in the example result:

    sed 's/\\/ /g'

or you can remove it as you show in your code:

    sed 's/\\//g'

Special characters

There are possible problems with escaping the backslash to cancel its special meaning. Backslash is a special character used for escaping both in a shell and in regular expressions. The command you type on the shell's command line or in a script is first processed by the shell, which interprets the special meaning of characters and their escaping. The result is then passed to the command to be executed (like sed), which performs its own interpretation of the characters. When you are constructing a command, the mental procedure works the opposite way: first add the escaping for the regex, then add the escaping for the shell.

In a regex (input to commands like sed, grep, etc.) a backslash can be escaped by a backslash like this: \\. You can also use the bracket expression [\] as you did, because there the backslash loses its special meaning.

In a shell (like bash) you can escape a backslash by a backslash, so instead of \ write \\. Enclosing the string in double quotes " makes backslash behaviour more complicated, but a double backslash will still produce a single backslash. Enclosing the string in single quotes ' makes every character be treated literally except '.

If you want to use double quotes, you can use one of the following:

    sed "s/\\\\//g" - Escape \ by \ in the shell, and escape every \ in the regex again. In fact the double quotes are not necessary in this case because every special character is properly escaped.

    sed "s/[\\]//g" - Escape in the shell with a backslash \ and in the regex use a bracket expression [ ].

    sed "s/[\]//g" - Yes, your example should work in a POSIX-compliant environment! Between double quotes, \ represents itself unless it precedes a special character in the context of double quotes: $`"\ or a newline.

It looks like in your case either the shell or sed does not follow the POSIX standard. With single quotes you can also use the string as you used it, or a shorter way:

    sed 's/[\]//g'
    sed 's/\\//g'
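A quick check of both variants on the question's sample line (printf '\\' is used here to produce the literal backslash in the input):

```shell
# Build the input line NISHA =\455, then remove or replace the backslash.
input=$(printf 'NISHA =\\455')
removed=$(printf '%s\n' "$input" | sed 's/\\//g')    # NISHA =455
spaced=$(printf '%s\n' "$input" | sed 's/\\/ /g')    # NISHA = 455
```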
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80576/" ] }
169,279
I'm looking for an alternative to iotop. Here's my situation: I want to find out if a program is accessing the hard drive a lot while running. iotop requires root/sudo privileges. My account is on someone else's system so I'm not allowed to have root or sudo privileges. Is there an alternative to iotop I could use?
To reference a few more tools:

htop
http://hisham.hm/htop/
https://github.com/hishamhm/htop
Command-line tool, packaged in most distributions, able to show the I/O without root privileges, but only for your own processes.

- run htop(1); you'll find an interface similar to top(1)
- hit F2 to enter the configuration
- use ↓ to select "Columns"
- use → to select "Available Columns"
- use ↓ / ↑ to select the I/O information you want (i.e. IO_READ_RATE, IO_WRITE_RATE, IO_RATE) and F5 to add them to the "Active Columns"
- save with F10
- use < / > to select the I/O column to affect the sort order

glances
https://nicolargo.github.io/glances/
https://github.com/nicolargo/glances
Command-line tool with a web mode, not widely packaged but easy to install (e.g. pip install glances).

netdata
https://my-netdata.io/
https://github.com/firehol/netdata
Web interface, can be run without root privileges, not yet packaged (requires compilation).
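Also worth knowing: on Linux the kernel exposes per-process I/O counters in /proc/&lt;pid&gt;/io, readable without root for your own processes (fields like rchar/read_bytes; availability depends on the kernel's task accounting options, but they are enabled on most distribution kernels):

```shell
# Read the kernel's I/O accounting for the current process.
io=$(cat /proc/self/io 2>/dev/null || echo "io accounting not available")
echo "$io"
```

This is the same data source tools like iotop aggregate system-wide (which is what requires root).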
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92392/" ] }
169,326
Suppose that I have three (or more) bash scripts: script1.sh, script2.sh, and script3.sh. I would like to call all three of these scripts and run them in parallel. One way to do this is to just execute the following commands:

    nohup bash script1.sh &
    nohup bash script2.sh &
    nohup bash script3.sh &

(In general, the scripts may take several hours or days to finish, so I would like to use nohup so that they continue running even if my console closes.) But, is there any way to execute those three commands in parallel with a single call? I was thinking something like

    nohup bash script{1..3}.sh &

but this appears to execute script1.sh, script2.sh, and script3.sh in sequence, not in parallel.
    for ((i=1; i<100; i++)); do
        nohup bash script${i}.sh &
    done
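A variant of the same pattern (a sketch, not from the answer above) that additionally waits for every background job to finish, with the scripts stubbed out as subshells:

```shell
# Launch three stand-in "scripts" in the background, then block on all of them.
tmp=$(mktemp)
for i in 1 2 3; do
  ( echo "job$i done" >> "$tmp" ) &
done
wait                      # returns once every background job has exited
lines=$(wc -l < "$tmp")
rm -f "$tmp"
```

Without nohup the jobs die with the terminal; with it (as in the answer) they survive a closed console.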
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169326", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
169,330
My directory is root:

    $ pwd
    /

I have the following dir:

    drwxrwxrwx 4 root root 81920 Jun 4 09:25 imr_report_repo

NOTE: imr_report_repo is an NFS share. Here is the fstab listing for imr_report_repo:

    netapp1:/imr_report_repos_stage /imr_report_repo nfs rw,bg,actimeo=0,nointr,vers=3,timeo=600,rsize=32768,wsize=32768,tcp 1 1

    cd imr_report_repo

A file within the mount:

    $ ls -al
    -rw-r--r-- 1 502 502 1273 Mar 21 2013 imr1_test.txt

The UID 502 does not exist. If we add that UID/GID locally:

    $ groupadd -g 502 jimmy
    $ useradd -g 502 -u 502 jimmy

It now shows up:

    $ ls -al
    -rw-r--r-- 1 jimmy jimmy 1273 Mar 21 2013 imr1_test.txt

Now change to root:

    $ su -
    $ chown oracle:oinstall imr1_test.txt
    chown: changing ownership of `imr1_test.txt': Operation not permitted
Usually root does not have special permissions on NFS shares. On the contrary: root is mapped to an ordinary user (i.e. does not even have "normal" read and write access to root files). You must run chown on the NFS server.
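For reference, that mapping ("root squashing") is controlled per-export on the server side. A hypothetical /etc/exports entry (the client hostname is illustrative; the options are from exports(5)):

```
# /etc/exports on the NFS server. root_squash (the default) maps uid 0
# from clients to the anonymous user; no_root_squash disables that mapping
# (use with care: it lets client root act as root on the share).
/imr_report_repos_stage  client.example.com(rw,sync,no_root_squash)
```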
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88123/" ] }
169,334
First I made a file called telephone containing:

    Jan;032569874
    Annemie;014588529
    Hans;015253694
    Stefaan;011802367

And what I now need to do is make a script where the user inputs a name and the output should look like:

    The phone number of Jan is 032569874

I tried a lot of things but nothing worked like it's supposed to. I know it's a very easy one but I just can't get it. OK guys, those answers are clearly from a higher level. This is what I got:

    #!/bin/bash
    #Solution script2
    IFS=";"
    echo "Whose phone number do you want to know?"
    read name
    read name number < $1
    echo "The phone number of $name is $number."
Here's a roughly modified version of your script:

    $ more cmd.bash
    #!/bin/bash
    echo "Whose phone number do you want to know?"
    read name
    number=$(grep "$name" telephone | cut -d';' -f2)
    echo ''
    echo "The phone number of $name is $number."

It works as follows:

    $ ./cmd.bash
    Whose phone number do you want to know?
    Hans
    The phone number of Hans is 015253694.

How it works

It simply takes in the name via the read name command and stores what was typed in the variable $name. We then grep through the file telephone and use cut to split the resulting line into 2 fields, using the semicolon as the separator character. The phone number, field 2, is stored in the variable $number.

Testing

You can use the following script to test that your script works, using the data from the file telephone:

    $ while read -r i; do
        echo "-----"
        echo "Test \"$i\""
        ./cmd.bash <<<$i
        echo "-----"
      done < <(cut -d';' -f1 telephone)

The above commands read in the contents of the file telephone, split it on the semicolon, ;, and then take each value from field 1 (the names) and loop through them one at a time. Each name is then passed to your script, cmd.bash, via STDIN, aka <<<$i. This simulates a user typing in each of the names.

    $ while read -r i; do echo "-----"; echo "Test \"$i\""; \
        ./cmd.bash <<<$i; echo "-----"; done < <(cut -d';' -f1 telephone)
    -----
    Test "Jan"
    Whose phone number do you want to know?
    The phone number of Jan is 032569874.
    -----
    -----
    Test "Annemie"
    Whose phone number do you want to know?
    The phone number of Annemie is 014588529.
    -----
    -----
    Test "Hans"
    Whose phone number do you want to know?
    The phone number of Hans is 015253694.
    -----
    -----
    Test "Stefaan"
    Whose phone number do you want to know?
    The phone number of Stefaan is 011802367.
    -----
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92430/" ] }
169,363
I want to grep a file, and I want to get all the lines that have a certain environment variable (to be exact, $PWD). Of course, using just

    cat file | grep '/'$PWD'/'

is not working, since $PWD contains slashes. I am trying to figure out how to do it correctly, but I come up only with weird and over-complicated solutions. What's a simple way to do this?
Just use double quotes instead of single quotes, and you don't have to use cat (see UUOC):

    grep -F -- "$PWD" file

And remember that without -F, $PWD would be treated as a regular expression as opposed to a string to be looked for in the file.
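A self-contained check of the -F behaviour (the scratch directory and file are made up): the slashes in $PWD, like any regex metacharacters in the path, need no escaping because the pattern is matched literally.

```shell
# Write the current directory path into a file, then grep for it literally.
d=$(mktemp -d); cd "$d"
printf 'prefix %s suffix\nno match here\n' "$PWD" > data.txt
hits=$(grep -F -- "$PWD" data.txt)
```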
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10393/" ] }
169,366
I want certain files to be able to be altered by myself on my basic account. To me, they are high priority files, with many backups. But we have some young'uns in the house and I don't quite trust them. I feel like they will find a way to delete the files. Is there a way I could hide them, or make them invisible without a command needed to be input from the command line?
Directory permissions:

- The write bit allows the affected user to create, rename, or delete files within the directory, and modify the directory's attributes.
- The read bit allows the affected user to list the files within the directory.
- The execute bit allows the affected user to enter the directory, and access files and directories inside.
- The sticky bit states that files and directories within that directory may only be deleted or renamed by their owner (or root).

You can put the files under the ownership of the root user; accessing them will then require the root password. As described under directory permissions above, you can take away the write bit and execute bit, thus not allowing them to enter the directory, and give them only read permission so that they can view files without altering or deleting them. You can also learn to use the sticky bit (link here) to disable deleting and renaming of every file inside that directory. If they have the root password, then hiding files is the only protection left: root is the god of the system, and if they have the root password, they are the real god of your system!
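The permission setup described above, sketched with chmod on a scratch directory (GNU stat is assumed for the octal-mode check):

```shell
# Others may list and enter but not create/delete; sticky bit restricts
# deletion/renaming inside to each file's owner (or root).
d=$(mktemp -d)
chmod 755 "$d"   # rwxr-xr-x: no write bit for group/others
chmod +t "$d"    # add the sticky bit -> mode 1755
mode=$(stat -c '%a' "$d")
```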
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92149/" ] }
169,370
I'm trying to install OpenBSD 5.6 amd64 on my 2012 Mac mini (quad-core Intel i7), but USB stops working partway through the boot sequence, as evidenced by my lighted keyboard going dark. Without a working keyboard, I am unable to proceed through the installer. The keyboard works just fine during installation and use under OpenBSD 5.5 amd64 on this Mac mini. Attempting to boot the 5.6 installer on another Mac mini (dual-core i5) also results in the same issue, so it's not something wonky with this particular machine. I have tried install56.iso, install56.fs, cd56.iso, and bsd.rd from the release version of 5.6. I have also tried install56.iso from two recent snapshots. None of these work. Here's the text output on the screen as it boots, videoed and hand-transcribed (there might be typos): CD-ROM: 90Loading /5.6/AMD64/CDBOOTprobing: pc0 mem[568K 63K 511M 510M 1197M 80K 2M 14070M a20=on]disk: hd0+ hd1+ sd0* cd0>> OpenBSD/amd64 CDBOOT 3.23boot>cannot open cd0a:/etc/random.seed: No such file or directorybooting cd0a:/5.6/amd64/bsd.rd: 3189664+1368975+2401280+0+517104 [72+350160+227754]=0x7b0838entry point at 0x1000160 [7205c766, 34000004, 24448b12, 6670a304]kbc: cmd word write errorCopyright (c) 1982, 1986, 1989, 1991, 1993 The Regents of the University of California. All rights reserved.Copyright (c) 1995-2014 OpenBSD. All rights reserved. http://www.OpenBSD.orgOpenBSD 5.6-current (RAMDISK_CD) #551: Fri Nov 21 10:20:00 MST 2014 [email protected]:/usr/src/sys/arch/amd64/compile/RAMDISK_CDRTC BIOS diagnostic error 7f<ROM_cksum,config_unit,memory_size,fixed_disk,invalid_time>real mem = 17065648128 (16275MB)avail mem = 16610545664 (15841MB)mainbus0 at rootbios0 at mainbus0: SMBIOS rev. 2.4 @ 0xe0000 (83 entries)bios0: vendor Apple Inc. version "MM61.88Z.0106.B04.1309191433" date 09/19/2013bios0: Apple Inc. 
Macmini6,2acpi0 at bios0: rev 2acpi0: sleep states S0 S3 S4 S5acpi0: tables DSDT FACP HPET APIC SBST ECDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT DMAR MCFGacpimadt0 at acpi0 addr 0xfee00000: PC-AT compatcpu0 at mainbus0: apid 0 (boot processor)cpu0: Intel(R) Core(TM) i7-3615QM CPU @ 2.30GHz, 2295.13 MHzcpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,MCOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMScpu0: 256KB 64b/line 8-way L2 cache (There's a discontinuity here, there might be some lines missing) ppb2 at pci2 dev 0 function 0 vendor "Intel", unknown product 0x1547 rev 0x03: msipci3 at ppb2 bus 6vendor "Intel", unknown product 0x1547 (class system subclass miscellaneous, rev 0x03) at pci3 dev 0 function 0 not configuredppb3 at pci2 dev 3 function 0 vendor "Intel", unknown product 0x1547 rev 0x03: msipci4 at ppb3 bus 7ppb4 at pci2 dev 4 function 0 vendor "Intel", unknown product 0x1547 rev 0x03: msipci5 at ppb4 bus 56ppb5 at pci2 dev 5 function 0 vendor "Intel", unknown product 0x1547 rev 0x03: msipci6 at ppb5 bus 105ppb6 at pci2 dev 6 function 0 vendor "Intel", unknown product 0x1547 rev 0x03: msipci7 at ppb6 bus 106vga1 at pci0 dev 2 function 0 "Intel HD Graphics 4000" rev 0x09wsdisplay0 at vga1 mux 1: console(80x25, vt100 emulation)"Intel 7 Series xHCI" rev 0x04 at pci0 dev 20 function 0 not configured"Intel 7 Series MEI" rev 0x04 at pci0 dev 22 function 0 not configuredehci0 at pci0 dev 26 function 0 "Intel 7 Series USB" rev 0x04: apic 2 int 23usb0 at ehci0: USB revision 2.0uhub0 at usb0 "Intel EHIC root hub" rev 2.00/1.00 addr 1"Intel 7 Series HD Audio" rev 0x04 at pci0 dev 27 function 0 not configuredppb7 at pci0 dev 28 function 0 "Intel 7 Series PCIE" rev 0xc4: msipci8 at ppb7 bus 1bge0 at pci8 dev 0 function 0 "Broadcom BCM57766" rev 0x01, 
unknown BCM57766 (0x57766001): msi, address [removed]bgrphy0 at bge0 phy1: BCM57765 10/100/1000baseT PHY, rev. 1sdhc0 at pci8 dev 0 function 1 "Broadcom SD Host Controller" rev 0x01: apic 2 int 17sdhc0 at 0x10: can't map registersppb8 at pci0 dev 28 function 1 "Intel 7 Series PCIE" rev 0xc4: msipci9 at ppb8 bus 2vendor "Broadcom", unknown product 0x4331 (class network subsclass miscellaneous, rev 0x02) at pci9 dev 0 function 0 not configuredppb9 at pci0 dev 28 function 2 "Intel 7 Series PCIE" rev 0xc4: msipci10 at ppb9 bus 3"AT&T/Lucent FW643 1394" rev 0x08 at pci10 dev 0 function 0 not configuredehci1 at pci0 dev 29 function 0 "Intel 7 Series USB" rev 0x04: apic 2 int 22usb1 at ehic1: USB revision 2.0uhub1 at usb1 "Intel EHCI root hub" rev 2.00/1.00 addr 1"Intel HM77 LPC" rev 0x04 at pci0 dev 31 function 0 not configuredahci0 at pci0 dev 31 function 2 "Intel 7 Series AHCI" rev 0x04: msi, AHCI 1.3scsibus0 at ahci0: 32 targetssd0 at scsibus0 targ 0 lun 0: <ATA, Samsung SSD 850, EXM0> SCSI3 0/direct fixednaa.50025388a069068dsd0: 244198MB, 512 bytes/sector, 500118192 sectors, thinsd1 at scsibus0 targ 1 lun 0: <ATA, Samsung SSD 850, EXM0> SCSI3 0/direct fixednaa.50025388a0690757sd1: 244198MB, 512 bytes/sector, 500118192 sectors, thin"Intel 7 Series SMBus" rev 0x04 at pci0 dev 31 function 3 not configuredisa0 at mainbus0com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifouhub2 at uhub0 port 1 "vendor 0x8087 product 0x0024" rev 2.00/0.00 addr 2uhub3 at uhub1 port 1 "vendor 0x8087 product 0x0024" rev 2.00/0.00 addr 2uhub4 at uhub3 port 8 "vendor 0x0424 product 0x2512" rev 2.00/b.b3 addr 3uhub5 at uhub4 port 1 "Apple Inc. 
BRCM20702 Hub" rev 2.00/1.00 addr 4uhidev0 at uhub5 port 1 configuration 1 interface 0 "vendor 0x05ac product 0x820a" rev 2.00/1.00 addr 5uhidev0: iclass 3/1, 1 report idukbd0 at uhidev0 reportid 1wskbd0 at ukbd0: console keyboard, using wsdisplay0uhidev1 at uhub5 port 2 configuration 1 interface 0 "vendor 0x05ac product 0x820b" rev 2.00/1.00 addr 6uhidev1: iclass 3/1, 2 report idsuhid at uhidev1 reportid 2 not configured"Apple Inc. Bluetooth USB Host Controller" rev 2.00/0.79 addr 7 at uhub5 port 3 not configureduhidev2 at uhub4 port 2 configuration 1 interface 0 "Apple, Inc. IR Receiver" rev 2.00/1.00 addr 8uhidev2: iclass 3/0, 38 report idsuhid at uhidev2 reportid 36 not configureduhid at uhidev2 reportid 37 not configureduhid at uhidev2 reportid 38 not configuredsoftraid0 at rootscsibus1 at softraid0: 256 targetssoftraid0: trying to bring up sd2 degradedsd2 at scsibus1 targ 1 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixedsd2: 244190MB, 512 bytes/sector, 500102858 sectorssoftraid0: roaming device -> sd1aroot on rd0a swap on rd0b dump on rd0berase ^?, werase ^W, kill ^U, intr ^C, status ^TWelcome to the OpenBSD/amd64 5.6 installation program.(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? _ There is some mention of USB and the keyboard as it's booting, but at this point my backlit keyboard is unlit and pressing keys doesn't do anything. No other keyboard I have works. I have tried plugging in the keyboard to all the USB ports on the system, it does not help. Any ideas what the problem could be? Is there any way to control this system without needing a USB keyboard?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402/" ] }
169,378
I'm using CentOS 6.3 on VirtualBox as a guest OS on a Windows 7 host. My problem is that when I use the ifconfig command in a terminal, I'm shown an internal IP address (10.x.x.x). However, when I googled "my IP address" I got my actual external address. The same thing happens when I type ipconfig at a DOS prompt. Is there a way to get the external IP address in those places?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42165/" ] }
169,402
I have a bigfile like this:

    denovo1 xxx yyyy oggugu ddddd
    denovo11 ggg hhhh bbbb gggg
    denovo22 hhhh yyyy kkkk iiii
    denovo2 yyyyy rrrr fffff jjjj
    denovo33 hhh yyy eeeee fffff

then my pattern file is:

    denovo1
    denovo3
    denovo22

I'm trying to use fgrep in order to extract only the lines exactly matching the patterns in my file (so I want denovo1 but not denovo11). I tried to use -x for the exact match, but then I got an empty file. I tried:

    fgrep -x --file="pattern" bigfile.txt > clusters.blast.uniq

Is there a way to make grep search only in the first column?
You probably want the -w flag. From man grep:

    -w, --word-regexp
        Select only those lines containing matches that form whole words. The
        test is that the matching substring must either be at the beginning of
        the line, or preceded by a non-word constituent character. Similarly,
        it must be either at the end of the line or followed by a non-word
        constituent character. Word-constituent characters are letters,
        digits, and the underscore.

i.e.

    grep -wFf patfile file
    denovo1 xxx yyyy oggugu ddddd
    denovo22 hhhh yyyy kkkk iiii

To enforce matching only in the first column, you would need to modify the entries in the pattern file to add a line anchor; you could also make use of the \b word anchor instead of the command-line -w switch, e.g. in patfile:

    ^denovo1\b
    ^denovo3\b
    ^denovo22\b

then

    grep -f patfile file
    denovo1 xxx yyyy oggugu ddddd
    denovo22 hhhh yyyy kkkk iiii

Note that you must drop the -F switch if the file contains regular expressions instead of simple fixed strings.
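Reproducing the example end to end with temporary files: -w keeps the pattern denovo1 from also matching denovo11, because the character after the match must not be a word constituent.

```shell
# Fixed-string (-F), whole-word (-w) matching of a pattern file (-f).
big=$(mktemp); pat=$(mktemp)
printf 'denovo1 a\ndenovo11 b\ndenovo22 c\n' > "$big"
printf 'denovo1\ndenovo3\ndenovo22\n' > "$pat"
matches=$(grep -wFf "$pat" "$big")
rm -f "$big" "$pat"
echo "$matches"
```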
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91761/" ] }
169,419
I have to write a one-liner which prints system groups and their identifiers, for all groups whose identifiers start with '1'. For example, the result of the cat command on /etc/group is this: I must print:

users 100
libuuid 101
netdev 102
crontab 103
........
penny 1002
leonard 1003
sheldon 1004
You can do this with awk , treating /etc/group as a colon-separated table: field 1 is the group name and field 3 is the numeric group identifier, so match the third field against a leading 1 and print both:

awk -F: '$3 ~ /^1/ {print $1, $3}' /etc/group

-F: sets the field separator to a colon, and the ^ anchor ensures that only identifiers starting with 1 are selected (so 100 and 1002 match, but 200 does not).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92488/" ] }
169,469
I am using

source ~/.rvm/scripts/rvm
repos="repo_1_ruby_193 repo_2_ruby_211 repo_3_ruby_191"
> rvm_check.txt
for repo in $repos
do
    cd ~/zipcar/$repo 2>rvm_check.txt
    cd ..
    echo $repo
    if [ -z `cat rvm_check.txt | grep not` ] # line 9
    then
        echo "YES"
    else
        echo "NO"
        exit 1
    fi
done

and it's mostly working but I get:

$ ./multi_repo_rubies.sh
repo_1_ruby_193
YES
repo_2_ruby_211
YES
repo_3_ruby_191
./multi_repo_rubies.sh: line 9: [: too many arguments
NO
$

Whether I try -s or -z I am getting the YES/NO that I want, but how to avoid the [: error?
Replace:

if [ -z `cat rvm_check.txt | grep not` ]

With:

if ! grep -q not rvm_check.txt

The reason to use test in an if statement is because it sets an exit code that the shell uses to decide to go to the then or else clause. grep also sets an exit code. Consequently there is no need for test, [ , here. grep sets the exit code to success (0) if it found the string. You want success to be if the string is not found. Thus, we negate the exit code result by using ! .

Explanation

The test command, [ , expects a single string to follow -z . If the grep command produces more than one word, then the test will fail with the error that you saw. As an example, consider this sample file:

$ cat rvm_check.txt
one not two

The output of grep looks like:

$ cat rvm_check.txt | grep not
one not two

When test is executed, all three words appear inside the [...] , causing the command to fail:

$ [ -z `cat rvm_check.txt | grep not` ]
bash: [: too many arguments

This is just the same as if you had entered:

$ [ -z one not two ]
bash: [: too many arguments

One solution for that is to use double-quotes:

$ [ -z "`cat rvm_check.txt | grep not`" ]

Double-quotes prevent the shell from performing word splitting . As a result, the output from grep here is treated as a single string, not split into separate words. However, since grep sets a sensible exit code, there is, as shown in the recommended line above, no need for test.

Additional comments

The currently preferred form for command substitution is $(...) . While backticks still work, they are fragile. In particular, backticks cannot be nested.

On commands that take filenames on the command line, the use of cat is unnecessary. Instead of:

cat somefile | grep something

Just use:

grep something somefile
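A tiny self-contained demonstration of the recommended form (using a throwaway file):

```shell
printf 'one not two\n' > /tmp/rvm_check_demo.txt
# grep -q is silent; only its exit status matters.
# The string "not" IS present, so ! grep -q ... fails and we take the else branch.
if ! grep -q not /tmp/rvm_check_demo.txt; then
    echo "YES"
else
    echo "NO"
fi
# prints: NO
```

No word splitting can ever occur here, because grep's output is never substituted into the command line at all.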
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
169,479
Linux learner here. (Running on a Debian -derived distro) My mouse sensitivity was too high, so I was able to change it, but can't seem to get it to apply on startup. I made /etc/init.d/mouse When I run sudo /etc/init.d/mouse start , the script works fine and the mouse settings are updated. But I can't get it to run on startup. I tried running sudo update-rc.d mouse defaults , but it still doesn't update when I log out and back in again. Not sure what else I'm missing in order to make it run on startup. Related question: Is /etc/init.d even the right place to be putting it? Or is there some other startup folder that's better for configuration-type changes? (As I read it, init.d is a folder for applications to be run on startup)
If the script sets the sensitivity with a per-session tool such as xinput or xset , then an init script is the wrong place for it. Scripts in /etc/init.d run once at boot, before any graphical session exists, and they do not run again when you log out and back in, so the settings are reset every time your X session starts. That also matches your symptom: running the script by hand after login works fine.

Instead, run the commands at login to your graphical session. Depending on your setup, that means putting them in ~/.xsessionrc , in your desktop environment's "Startup Applications"/autostart facility, or in a .desktop entry under ~/.config/autostart/ .

To answer the related question: /etc/init.d is for system services (daemons) managed by the init system, not for per-user session configuration, so a login-time startup location is the better fit here.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169479", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92490/" ] }
169,492
How can I use a variable - $BASE - in my cd ? I tried the following but I get an error:

$ cd ~/z/repo_1_ruby_193/
23:23:57 durrantm Castle2012 /home/durrantm/z/repo_1_ruby_193
$ BASE="~/z"
23:24:03 durrantm Castle2012 /home/durrantm/z/repo_1_ruby_193
$ cd $BASE/repo_1_ruby_193
-bash: cd: ~/z/repo_1_ruby_193: No such file or directory
23:24:25 durrantm Castle2012 /home/durrantm/z/repo_1_ruby_193
In cd ~/z/ you are using Tilde expansion to expand ~ into your home directory. In BASE="~/z" , you are not because you quoted the ~ character, so it is not expanded. That is why you get a message complaining about a nonexistent ~ directory. The solution is to not quote it, i.e. BASE=~/z in order to let the expansion occur.
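A quick illustration of the difference (the z directory need not exist; we are only looking at what the variable actually contains):

```shell
BASE="~/z"          # quoted: the tilde is stored literally, no expansion
echo "$BASE"        # prints: ~/z
BASE=~/z            # unquoted: the tilde expands to $HOME at assignment time
echo "$BASE"        # prints something like /home/durrantm/z
```

An alternative is to write BASE="$HOME/z" : $HOME is expanded inside double quotes, so the quoting does no harm there.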
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
169,508
In section 3.1.2.3, titled Double Quotes, the Bash manual says:

Enclosing characters in double quotes (‘"’) preserves the literal value of all characters within the quotes, with the exception of ‘$’, ‘`’, ‘\’, and, when history expansion is enabled, ‘!’.

At the moment I am concerned with the single quote ( ' ). Its special meaning, described in the preceding section, section 3.1.2.2, is:

Enclosing characters in single quotes ( ' ) preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.

Combining the two expositions, echo "'$a'" where variable a is not defined (hence $a = null string), should print $a on the screen, as '' , having its special meaning inside, would shield $ from the special interpretation. Instead, it prints '' . Why so?
The ' single quote character in your echo example gets its literal value (and loses its special meaning) because it is enclosed in double quotes ( " ). The enclosing characters are the double quotes. What you can do is print the single quotes separately:

echo "'"'$a'"'"

or escape the $ :

echo "'\$a'"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39843/" ] }
169,518
To secure my mysql server, I checked the list of users:

mysql> SELECT User,Host,Password FROM mysql.user;
+------------------+-----------+-------------------------------------------+
| User             | Host      | Password                                  |
+------------------+-----------+-------------------------------------------+
| root             | localhost | ******************************************|
| root             | 127.0.0.1 | ******************************************|
| root             | ::1       | ******************************************|
| debian-sys-maint | localhost | ******************************************|
+------------------+-----------+-------------------------------------------+
4 rows in set (0.00 sec)

root and debian-sys-maint are using localhost/127.0.0.1 as hosts. But I don't understand: what does the ::1 notation mean?
::1 is the IPv6 version of 127.0.0.1 : the IPv6 loopback address.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49573/" ] }
169,533
I've been offered a MacBook Pro mid-2012. Although it wouldn't have been my first choice, it's still a great piece of hardware; the only problem for me is that it only has a single Thunderbolt port, allowing me to plug in only one external monitor by default. I use Debian 64-bit on it and I've been looking for a solution to add a second external monitor (third total). My only option seems to be using a USB to DVI/VGA adapter. I'm aware of the limitations; it will be for basic coursework and office stuff. I've been Googling for a while and can't seem to find any reliable information on using this kind of device on Linux. I'm adventurous, so I don't mind getting dirty in config files, although I don't have much experience with these things on Linux. Has anyone had any experience in getting these to work? Which device would you suggest? Any help/pointers/personal experiences. NOTE: I'm not asking for information on the particular device linked; my question is mainly: does anyone have any experience in getting any USB to VGA device working on Linux and if so, which device? Perhaps a comment on the particular configurations used, as setting them up on Linux appears to be non-trivial.
The UltraVideo device

If you look at the specs for that particular device, it doesn't support Linux.

Features
Support Windows XP, Vista, Windows 7, Windows 8, Windows 8.1, Mac OS up to 10.9.4 (Does NOT support XP 64-bit and Windows Server)

System Requirements
Does NOT support XP 64-bit and Windows Server/Linux

Other compatible devices?

Option #1
In general, USB to (HDMI, DVI, VGA) devices either work or don't. But there are devices that are known to work under Linux, such as this one: UltraVideo® USB 2.0 to DVI-I or VGA Video Adapter.

Option #2
As well as this one: DisplayLink .

Does it work with Linux? An open source driver is available for DL-1x5 devices, which is now built into the Linux kernel. Linux support for DL-3x00 or DL-41xx is not currently available.

Digging further with respect to the DisplayLink technology, the Wikipedia page had this to say:

The Linux kernel 3.4 also contains a DisplayLink driver, but current generation USB3 chips are not supported as of Sep 2014. It looks like no current DisplayLink chip will ever work under Linux [17] due to intended encryption.

Option #3
Here's another option: Plugable UGA-2K-A USB to VGA/DVI/HDMI Adapter for Multiple Monitors up to 2048×1152.

Windows 8/7/XP drivers installed automatically via Windows Update (Internet connection required). Mac is not supported due to significant limitations in the operating system. Linux configuration is for advanced users only.

The Plugable website even has a page devoted to Linux, titled: DisplayLink USB 2.0 Graphics Adapters on Linux – 2014 Edition . The article had this to say on the issue:

Excerpt

The short story

Multi-monitor on Linux, especially with multiple graphics cards and USB graphics adapters, remains problematic. You can find many distros and configurations where it just won't work. We'd recommend staying away unless you're an advanced Linux user who is willing to play with different distros, install optional components and do hand configuration. Unfortunately, it's just not plug and play yet today, as it is on Windows.

The long story

That said, it is possible to get things working in limited scenarios for USB 2.0 generation DisplayLink-based adapters. We used all Plugable products in the tests for this post. Our test systems included Intel, Nvidia, and AMD primary graphics adapters. For Nvidia and AMD, we tested both the open-source and proprietary drivers.

Intel is the most compatible, providing decent results under all configurations. Nvidia graphics cards, when running the open source nouveau driver, only work in Multi-Seat mode. Attempting multi-monitor setup with a DisplayLink adapter and an Nvidia graphics card results in garbage graphics being displayed on your DisplayLink-attached monitor. The Nvidia proprietary drivers do not work under any scenario. The AMD open-source drivers work under both multi-seat and multi-monitor setups, but the performance, at least in our tests, is significantly worse than with the Intel drivers. The AMD proprietary drivers are unavailable in any easy-to-install package under Fedora 20, but we installed them in Ubuntu, and were unable to get any results; they simply do not work with DisplayLink graphics.

TL;DR

As I've shown, it isn't a simple answer; it's very hit or miss which devices will work with which particular distros of Linux. If it were me, I'd likely go with option #3, but your mileage will vary. Also, prepare yourself for spending a fair amount of time messing with options to get things working, or potentially having to switch to a different distro.

Excerpt

We don't recommend or support USB graphics on Linux yet, because of the problems above — but if you do have questions, please feel free to comment below. We want to get as much information out as possible about what works and doesn't, so things can improve here. There's no reason Linux can't have the same or better multi-monitor support as any other platform in time!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89986/" ] }
169,534
Say I have a large 800x5000 image; how would I split that into 5 separate images with dimensions 800x1000 using the command line?
Solved it using ImageMagick's convert with -crop geometry +repage :

convert -crop 100%x20% +repage image.png image.png

Since the crop produces multiple images, ImageMagick numbers the output files automatically, writing image-0.png through image-4.png.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92557/" ] }
169,560
I'm making a script where I connect to a database, look for a value, and then if that value returned something I will do something else.

ip=$("variable is defined here")

Then I connect to the MySQL database using:

/Applications/MAMP/Library/bin/mysql --host=localhost -uroot -proot --password=*

However, this takes me to another prompt, and I need to run another command from that; I can't get it to work. I tried this:

/Applications/MAMP/Library/bin/mysql "MYSQL arguments;";

and just:

MYSQL arguments;

None of these work, and I have to pass a variable to it. What can I do now?
You can do something like this:

#!/bin/bash

selectvar="SELECT * FROM test;"

mysql --user=root --password=mypass database << eof
$selectvar
eof

The here-document feeds the statement to mysql's standard input, so no interactive prompt is opened, and the shell expands $selectvar inside it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
169,625
Suppose I have a file (call it sample.txt) that looks like this:

Row1,10
Row2,20
Row3,30
Row4,40

I want to be able to work on a stream from this file that is essentially the pairwise combination of all four rows (so we should end up with 16 in total). For instance, I'm looking for a streaming (i.e. efficient) command where the output is:

Row1,10 Row1,10
Row1,10 Row2,20
Row1,10 Row3,30
Row1,10 Row4,40
Row2,20 Row1,10
Row2,20 Row2,20
...
Row4,40 Row4,40

My use case is that I want to stream this output into another command (like awk) to compute some metric about this pairwise combination. I have a way to do this in awk but my concern is that my use of the END{} block means that I'm basically storing the entire file in memory before I output. Example code:

awk '{arr[$1]=$1} END{for (a in arr){ for (a2 in arr) { print arr[a] " " arr[a2]}}}' samples/rows.txt
Row3,30 Row3,30
Row3,30 Row4,40
Row3,30 Row1,10
Row3,30 Row2,20
Row4,40 Row3,30
Row4,40 Row4,40
Row4,40 Row1,10
Row4,40 Row2,20
Row1,10 Row3,30
Row1,10 Row4,40
Row1,10 Row1,10
Row1,10 Row2,20
Row2,20 Row3,30
Row2,20 Row4,40
Row2,20 Row1,10
Row2,20 Row2,20

Is there an efficient streaming way to do this without having to essentially store the file in memory and then output in the END block?
Here's how to do it in awk so that it doesn't have to store the whole file in an array. This is basically the same algorithm as terdon's. If you like, you can even give it multiple filenames on the command line and it will process each file independently, concatenating the results together.

#!/usr/bin/awk -f

# Cartesian product of records
{
    file = FILENAME
    while ((getline line < file) > 0)
        print $0, line
    close(file)
}

On my system, this runs in about 2/3 the time of terdon's perl solution.
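The same approach squeezed into a one-liner, for a quick sanity check on a tiny two-line input (paths are throwaway examples):

```shell
printf 'a,1\nb,2\n' > /tmp/rows_demo.txt
# For every record, re-read the whole file and pair the record with each line,
# then close the file so the next record starts reading from the top again.
awk '{ while ((getline line < FILENAME) > 0) print $0, line; close(FILENAME) }' /tmp/rows_demo.txt
# prints:
# a,1 a,1
# a,1 b,2
# b,2 a,1
# b,2 b,2
```

Only the current record plus one buffered line are ever in memory, so this streams regardless of file size (at the cost of re-reading the file once per record).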
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92603/" ] }
169,645
The manual says that "The --text option has been removed.". It was setting the --type to text, i.e. ack searched all text files regardless of their semantic category (i.e. in *.php files, *.txt files, *.cpp files, etc.). How can this behavior be obtained in the current version of ack?
Use ack without a --type predicate: ack <search pattern> It ignores binary files and searches all textual files regardless of file type.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92616/" ] }
169,697
I'd like to know what is the exact mechanism (implementation) used to defer mounting until after network interface is up when one uses _netdev option in /etc/fstab ? Does systemd alter this behavior? Also, what does delay_connect option to sshfs provide what _netdev does not? From mount man page : _netdev The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system). From sshfs man page : -o delay_connect delay connection to server
From man systemd.mount for version 231 of systemd: Mount units referring to local and network file systems are distinguished by their file system type specification. In some cases this is not sufficient (for example network block device based mounts, such as iSCSI), in which case _netdev may be added to the mount option string of the unit, which forces systemd to consider the mount unit a network mount.
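For illustration, here is what a hypothetical /etc/fstab entry combining both options might look like (host, export path and mount point are made up):

```
# sshfs mount: _netdev makes systemd treat this as a network mount, ordering it
# after the network is up instead of with local filesystems; delay_connect
# defers the SSH connection until the filesystem is first used, so the mount
# itself succeeds immediately even if the server is not reachable yet
user@example.com:/export  /mnt/remote  fuse.sshfs  _netdev,delay_connect,noauto  0  0
```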
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5355/" ] }
169,716
Is using a while loop to process text generally considered bad practice in POSIX shells?

As Stéphane Chazelas pointed out , some of the reasons for not using a shell loop are conceptual , reliability , legibility , performance and security . This answer explains the reliability and legibility aspects:

while IFS= read -r line <&3; do
  printf '%s\n' "$line"
done 3< "$InputFile"

For performance , the while loop and read are tremendously slow when reading from a file or a pipe, because the read shell built-in reads one character at a time.

How about conceptual and security aspects?
Yes, we see a number of things like:

while read line; do
  echo $line | cut -c3
done

Or worse:

for line in `cat file`; do
  foo=`echo $line | awk '{print $2}'`
  echo whatever $foo
done

(don't laugh, I've seen many of those). Generally from shell scripting beginners. Those are naive literal translations of what you would do in imperative languages like C or python, but that's not how you do things in shells, and those examples are very inefficient, completely unreliable (potentially leading to security issues), and if you ever manage to fix most of the bugs, your code becomes illegible.

Conceptually

In C or most other languages, building blocks are just one level above computer instructions. You tell your processor what to do and then what to do next. You take your processor by the hand and micro-manage it: you open that file, you read that many bytes, you do this, you do that with it.

Shells are a higher level language. One may say it's not even a language. They're before all command line interpreters. The job is done by those commands you run and the shell is only meant to orchestrate them.

One of the great things that Unix introduced was the pipe and those default stdin/stdout/stderr streams that all commands handle by default. In 50 years, we've not found better than that API to harness the power of commands and have them cooperate to a task. That's probably the main reason why people are still using shells today.

You've got a cutting tool and a transliterate tool, and you can simply do:

cut -c4-5 < in | tr a b > out

The shell is just doing the plumbing (open the files, setup the pipes, invoke the commands) and when it's all ready, it just flows without the shell doing anything. The tools do their job concurrently, efficiently at their own pace with enough buffering so as not one blocking the other, it's just beautiful and yet so simple.

Invoking a tool though has a cost (and we'll develop that on the performance point).
Those tools may be written with thousands of instructions in C. A process has to be created, the tool has to be loaded, initialised, then cleaned-up, process destroyed and waited for.

Invoking cut is like opening the kitchen drawer, take the knife, use it, wash it, dry it, put it back in the drawer. When you do:

while read line; do
  echo $line | cut -c3
done < file

It's like for each line of the file, getting the read tool from the kitchen drawer (a very clumsy one because it's not been designed for that ), read a line, wash your read tool, put it back in the drawer. Then schedule a meeting for the echo and cut tool, get them from the drawer, invoke them, wash them, dry them, put them back in the drawer and so on.

Some of those tools ( read and echo ) are built in most shells, but that hardly makes a difference here since echo and cut still need to be run in separate processes.

It's like cutting an onion but washing your knife and put it back in the kitchen drawer between each slice.

Here the obvious way is to get your cut tool from the drawer, slice your whole onion and put it back in the drawer after the whole job is done.

IOW, in shells, especially to process text, you invoke as few utilities as possible and have them cooperate to the task, not run thousands of tools in sequence waiting for each one to start, run, clean up before running the next one. Further reading in Bruce's fine answer . The low-level text processing internal tools in shells (except maybe for zsh ) are limited, cumbersome, and generally not fit for general text processing.

Performance

As said earlier, running one command has a cost. A huge cost if that command is not builtin, but even if they are builtin, the cost is big.

And shells have not been designed to run like that, they have no pretension to being performant programming languages. They are not, they're just command line interpreters. So, little optimisation has been done on this front.
Also, the shells run commands in separate processes. Those building blocks don't share a common memory or state. When you do a fgets() or fputs() in C, that's a function in stdio. stdio keeps internal buffers for input and output for all the stdio functions, to avoid to do costly system calls too often.

The corresponding even builtin shell utilities ( read , echo , printf ) can't do that. read is meant to read one line. If it reads past the newline character, that means the next command you run will miss it. So read has to read the input one byte at a time (some implementations have an optimisation if the input is a regular file in that they read chunks and seek back, but that only works for regular files and bash for instance only reads 128 byte chunks which is still a lot less than text utilities will do).

Same on the output side, echo can't just buffer its output, it has to output it straight away because the next command you run will not share that buffer.

Obviously, running commands sequentially means you have to wait for them, it's a little scheduler dance that gives control from the shell and to the tools and back. That also means (as opposed to using long running instances of tools in a pipeline) that you cannot harness several processors at the same time when available.

Between that while read loop and the (supposedly) equivalent cut -c3 < file , in my quick test, there's a CPU time ratio of around 40000 in my tests (one second versus half a day). But even if you use only shell builtins:

while read line; do
  echo ${line:2:1}
done

(here with bash ), that's still around 1:600 (one second vs 10 minutes).

Reliability/legibility

It's very hard to get that code right. The examples I gave are seen too often in the wild, but they have many bugs.

read is a handy tool that can do many different things. It can read input from the user, split it into words to store in different variables.
read line does not read a line of input, or maybe it reads a line in a very special way. It actually reads words from the input, those words separated by $IFS and where backslash can be used to escape the separators or the newline character.

With the default value of $IFS , on an input like:

   foo\/bar \
baz
biz

read line will store "foo/bar baz" into $line , not "   foo\/bar \" as you'd expect.

To read a line, you actually need:

IFS= read -r line

That's not very intuitive, but that's the way it is, remember shells were not meant to be used like that. Same for echo . echo expands sequences. You can't use it for arbitrary contents like the content of a random file. You need printf here instead.

And of course, there's the typical forgetting of quoting your variable which everybody falls into. So it's more:

while IFS= read -r line; do
  printf '%s\n' "$line" | cut -c3
done < file

Now, a few more caveats:

- except for zsh , that doesn't work if the input contains NUL characters while at least GNU text utilities would not have the problem.
- if there's data after the last newline, it will be skipped
- inside the loop, stdin is redirected so you need to pay attention that the commands in it don't read from stdin.
- for the commands within the loops, we're not paying attention to whether they succeed or not. Usually, error (disk full, read errors...) conditions will be poorly handled, usually more poorly than with the correct equivalent.

Many commands, including several implementations of printf , also don't reflect their failure to write to stdout in their exit status. If we want to address some of those issues above, that becomes:

while IFS= read -r line <&3; do
  {
    printf '%s\n' "$line" | cut -c3 || exit
  } 3<&-
done 3< file
if [ -n "$line" ]; then
    printf '%s' "$line" | cut -c3 || exit
fi

That's becoming less and less legible.
There are a number of other issues with passing data to commands via the arguments or retrieving their output in variables:

- the limitation on the size of arguments (some text utility implementations have a limit there as well, though the effect of those being reached are generally less problematic)
- the NUL character (also a problem with text utilities)
- arguments taken as options when they start with - (or + sometimes)
- various quirks of various commands typically used in those loops like expr , test ...
- the (limited) text manipulation operators of various shells that handle multi-byte characters in inconsistent ways
- ...

Security considerations

When you start working with shell variables and arguments to commands , you're entering a mine-field.

If you forget to quote your variables , forget the end of option marker , work in locales with multi-byte characters (the norm these days), you're certain to introduce bugs which sooner or later will become vulnerabilities.

When you may want to use loops.

TBD
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/169716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38906/" ] }
169,723
I came across two Unix scripts in my textbook. In one of them they implemented a=$b (in the form of max=$a) and in the other a=b (in the form of number=num). Are these the same thing or is there a difference?
$ a=b
$ printf $a
b
$ b=hello
$ a=$b
$ printf $a
hello

Basically a=b makes a variable called a whose value is the literal character b . a=$b makes a variable called a with the same value as an already existing variable called b .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92667/" ] }
169,739
I wrote the following script to test the speed of Python's sort functionality:

from sys import stdin, stdout

lines = list(stdin)
lines.sort()
stdout.writelines(lines)

I then compared this to the coreutils sort command on a file containing 10 million lines:

$ time python sort.py <numbers.txt >s1.txt
real    0m16.707s
user    0m16.288s
sys     0m0.420s

$ time sort <numbers.txt >s2.txt
real    0m45.141s
user    2m28.304s
sys     0m0.380s

The built-in command used all four CPUs (Python only used one) but took about 3 times as long to run! What gives?

I am using Ubuntu 12.04.5 (32-bit), Python 2.7.3, and sort 8.13
Izkata's comment revealed the answer: locale-specific comparisons. The sort command uses the locale indicated by the environment, whereas Python defaults to a byte-order comparison. Comparing UTF-8 strings is harder than comparing byte strings.

$ time (LC_ALL=C sort <numbers.txt >s2.txt)
real    0m5.485s
user    0m14.028s
sys     0m0.404s

How about that.
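The effect is easy to see on a tiny input: with byte-order (C locale) comparison, every uppercase letter sorts before every lowercase one, while a typical UTF-8 locale interleaves them (a sketch; the second result depends on which locales your system has installed):

```shell
# Byte-order comparison: 'A' (0x41) < 'a' (0x61), so Apple comes first
printf 'banana\nApple\ncherry\n' | LC_ALL=C sort
# prints:
# Apple
# banana
# cherry

# Locale-aware comparison: ordering follows the locale's collation rules,
# which under en_US.UTF-8 largely ignore case when ordering words
printf 'banana\nApple\ncherry\n' | LC_ALL=en_US.UTF-8 sort
```

This is also why scripts that parse sorted output often pin the locale with LC_ALL=C: the byte-order result is both faster and reproducible across machines.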
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45158/" ] }
169,787
I recently installed CentOS 7 on a machine that had been running Windows 7. I did a dual-boot installation and installed CentOS in a partition. But when I boot up my machine, it only gives me two CentOS options. It does not give me the option to choose to boot Windows 7. How can I add Windows 7 back to the boot options?

NOTE: I'm reading this post titled: CenTOS 7 dual boot with windows , but my /grub folder only seems to have a splash.xpm.gz file in it with no other files. Also, I'm new to Linux and need something more step by step.

EDIT #1

I'm getting the following results on the command line:

[root@localhost home]# sudo update-grub
sudo: update-grub: command not found
[root@localhost home]# sudo grub-mkconfig
sudo: grub-mkconfig: command not found

Also, I'm currently researching the possibility that these commands might not apply to CentOS. For example, this U&L Q&A titled: " Equivalent of update-grub for RHEL/Fedora/CentOS systems? ", as well as this Q&A titled: " Installed Centos 7 after Windows and can't boot into CentOS ", seem to imply that I should reinstall grub2. But how do I do that? I'm just now learning Linux.

EDIT #2

The following command does work.
Here is the output:

[root@localhost home]# sudo grub2-mkconfig 2>/dev/null
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
set pager=1
if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="${saved_entry}"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi
function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}
terminal_output console
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux, with Linux 3.10.0-123.el7.x86_64' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-123.el7.x86_64-advanced-77a053a9-a71b-43ce-a8d7-1a3418f5b0d9' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos5'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 --hint='hd0,msdos5' 589631f1-d5aa-4374-a069-7aae5ca289bc
        else
          search --no-floppy --fs-uuid --set=root 589631f1-d5aa-4374-a069-7aae5ca289bc
        fi
        linux16 /vmlinuz-3.10.0-123.el7.x86_64 root=UUID=77a053a9-a71b-43ce-a8d7-1a3418f5b0d9 ro rd.luks.uuid=luks-a45243be-2514-4a81-b7a1-7e4eff712d2d vconsole.font=latarcyrheb-sun16 crashkernel=auto vconsole.keymap=us rd.luks.uuid=luks-5349515e-a082-4ff2-b035-54da7b8d4990 rhgb quiet
        initrd16 /initramfs-3.10.0-123.el7.x86_64.img
}
menuentry 'CentOS Linux, with Linux 0-rescue-369d0c1b630b48cc8ef010ceb99bc668' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-369d0c1b630b48cc8ef010ceb99bc668-advanced-77a053a9-a71b-43ce-a8d7-1a3418f5b0d9' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos5'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 --hint='hd0,msdos5' 589631f1-d5aa-4374-a069-7aae5ca289bc
        else
          search --no-floppy --fs-uuid --set=root 589631f1-d5aa-4374-a069-7aae5ca289bc
        fi
        linux16 /vmlinuz-0-rescue-369d0c1b630b48cc8ef010ceb99bc668 root=UUID=77a053a9-a71b-43ce-a8d7-1a3418f5b0d9 ro rd.luks.uuid=luks-a45243be-2514-4a81-b7a1-7e4eff712d2d vconsole.font=latarcyrheb-sun16 crashkernel=auto vconsole.keymap=us rd.luks.uuid=luks-5349515e-a082-4ff2-b035-54da7b8d4990 rhgb quiet
        initrd16 /initramfs-0-rescue-369d0c1b630b48cc8ef010ceb99bc668.img
}
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_ppc_terminfo ###
### END /etc/grub.d/20_ppc_terminfo ###

### BEGIN /etc/grub.d/30_os-prober ###
menuentry 'Windows 7 (loader) (on /dev/sda2)' --class windows --class os $menuentry_id_option 'osprober-chain-386ED4266ED3DB28' {
        insmod part_msdos
        insmod ntfs
        set root='hd0,msdos2'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos2 --hint-efi=hd0,msdos2 --hint-baremetal=ahci0,msdos2 --hint='hd0,msdos2' 386ED4266ED3DB28
        else
          search --no-floppy --fs-uuid --set=root 386ED4266ED3DB28
        fi
        chainloader +1
}
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
This is usually fixed by running the scripts that detect the installed operating systems and generate the boot loader's ( grub2 in this case) configuration file. On CentOS 7, that should be grub2-mkconfig .

Check that Windows is detected. Run grub2-mkconfig but discard its output:

$ sudo grub2-mkconfig > /dev/null
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Found Windows 7 (loader) on /dev/sda2

(The "Found ..." progress messages go to stderr, which is why they still appear even though stdout is discarded.) The output will look similar (but not identical) to what is shown above. Make sure that Windows is listed.

If Windows was listed in the previous step, go ahead and save the new configuration file. Make a backup first, just in case.

sudo cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.old
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If all went well, you should now be able to reboot into Windows.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92670/" ] }
169,798
I know I can use this option to find files modified between particular times. But I'm curious about what it actually means. I used man find | grep newermt trying to find something, but I got no direct content. It seems the -newer file and -mtime options may be related to it, but I'm not sure. So, what does -newermt actually mean?
find(1) :

-newerXY reference
       Succeeds if timestamp X of the file being considered is newer than timestamp Y of the file reference. X and Y are placeholders for other letters, and these letters select which timestamp is used on each side of the comparison:

       a      the access time of the file reference
       B      the birth time of the file reference
       c      the inode status change time of reference
       m      the modification time of the file reference
       t      reference is interpreted directly as a time

So -newermt compares each file's (m)odification time against reference interpreted directly as a (t)ime string, e.g. find . -newermt '2014-01-01' matches files modified after that date.
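A small sketch of -newermt in action (the directory, file names, and dates here are made up for the example; touch -d is the GNU form):

```shell
#!/bin/sh
# Create two files with back-dated modification times, then select only
# the one modified after a cut-off date.
dir=$(mktemp -d)
touch -d '2014-01-01' "$dir/old.txt"
touch -d '2014-06-01' "$dir/new.txt"

# -newermt: the file's (m)odification time must be newer than the
# reference string, interpreted directly as a (t)ime.
find "$dir" -type f -newermt '2014-03-01'   # lists only .../new.txt

rm -r "$dir"
```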
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
169,870
I need to remove everything that is not a 4 character number such as 9838 , 6738 , 1337 or 1889 . I thought that this command would work: sed 's/....[^0-9]//g' ( . means any character in regex, and [^0-9] matches non-digits). Here is an example input:

9228 Hello 8473
World War 1 1914-1918
Hello 8391
World War 2 1939-1945

Would be:

9228 8473 1914 1918 8391 1939 1945
I can answer with a grep command.

Input file:

9228 Hello 8473
World War 1 1914-1918
Hello 8391
World War 2 1939-1945

Command:

grep -Eo '\<[0-9]{4}\>' file | tr '\n' ' '

Returns any number with length=4.

-E switches to extended regex
-o prints only the matching part

Output:

9228 8473 1914 1918 8391 1939 1945

Update answer:

Input file:

9228 Hello 8473
World War 1 1914-1918
Hello 8391
World War 2 1939-1945
foo1234bar
a1111
12345
0x2222ff
1.3333
2.54321

Command:

grep -oP '(?<![0-9])[0-9]{4}(?![0-9])' file | tr '\n' ' '

grep with negative lookbehind/lookahead:

(?<![0-9])[0-9]{4} (negative lookbehind): matches numbers of length 4 that are not preceded by a digit.
[0-9]{4}(?![0-9]) (negative lookahead): matches numbers of length 4 not followed by a digit.

Output:

9228 8473 1914 1918 8391 1939 1945 1234 1111 2222 3333
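The lookaround version can be checked quickly on inline input (this assumes a GNU grep built with PCRE support, which -P requires):

```shell
#!/bin/sh
# foo1234bar: word boundaries (\< \>) would miss 1234 because it sits
# between letters, but the digit lookarounds still match it.
# 12345 is rejected: every 4-digit window has a digit next to it.
# 0x2222ff: 2222 matches, since 'x' and 'f' are not digits.
printf 'foo1234bar\n12345\n0x2222ff\n' |
    grep -oP '(?<![0-9])[0-9]{4}(?![0-9])'
# prints:
# 1234
# 2222
```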
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
169,886
I'm using less to parse HTTP access logs. I want to view everything neatly on single lines, so I'm using -S . The problem I have is that the first third of my terminal window is taken up with metadata that I don't care about. When I use my arrow keys to scroll right, I find that it scrolls past the start of the information that I do care about! I could just delete the start of each line, but I don't know if I may need that data in the future, and I'd rather not have to maintain separate files or run a script each time I want to view some logs. Example This line: access.log00002:10.0.0.0 - USER_X [07/Nov/2013:16:50:50 +0000] "GET /some/long/URL" Would scroll to: ng/URL" Question Is there a way I can scroll in smaller increments, either by character or by word?
The only horizontal scrolling commands scroll by half a screenful, but you can pass a numeric argument to specify the number of characters, e.g. typing 4 Right scrolls to the right by 4 characters. Less doesn't really have a notion of “current line” and doesn't split a line into words, so there's no way to scroll by a word at a time.

You can define a command that scrolls by a fixed number of characters. For example, if you want Shift + Left and Shift + Right to scroll by 4 characters at a time:

Determine the control sequences that your terminal sends for these key combinations. Terminals send a sequence of bytes that begin with the escape character (which can be written \e , \033 , ^[ in various contexts) for function keys and keychords. Press Ctrl + V Shift + Left at a shell prompt: this inserts the escape character literally (you'll see ^[ on the screen) instead of it being processed by your shell, and inserts the rest of the escape sequence. A common setup has Shift + Left and Shift + Right send \eO2D and \eO2C respectively.

Create a file called ~/.lesskey and add the following lines (adjust if your terminal sends different escape sequences):

#command
\eO2D noaction 4\e(
\eO2C noaction 4\e)
\eOD noaction 40\e(
\eOC noaction 40\e)

In addition to defining bindings for Shift + arrow , you may want to define bindings for arrow alone, because motion commands reuse the numeric values from the last call. Adjust 40 to your customary terminal width. There doesn't appear to be a way to say “now use the terminal width again, whatever it is at this moment”. A downside of these bindings is that you lose the ability to pass a numeric argument to Left and Right (you can still pass a numeric argument to Esc ( and Esc ) ).

Then run lesskey , which converts the human-readable ~/.lesskey into a binary file ~/.less that less reads when it starts.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/169886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63553/" ] }
169,898
I recently came across this in a shell script. if ! kill -0 $(cat /path/to/file.pid); then ... do something ...fi What does kill -0 ... do?
This one is a little hard to glean but if you look in the following 2 man pages you'll see the following notes:

kill(1)

$ man 1 kill
...
If sig is 0, then no signal is sent, but error checking is still performed.
...

kill(2)

$ man 2 kill
...
If sig is 0, then no signal is sent, but error checking is still performed; this can be used to check for the existence of a process ID or process group ID.
...

So signal 0 will not actually send anything to your process's PID, but will check whether you have permissions to do so.

Where might this be useful? One obvious place would be if you were trying to determine if you had permissions to send signals to a running process via kill . You could check prior to sending the actual kill signal that you want, by wrapping a check to make sure that kill -0 <PID> was first allowed.

Example

Say a process was being run by root as follows:

$ sudo sleep 2500 &
[1] 15693

Now in another window if we run this command we can confirm that that PID is running.

$ pgrep sleep
15693

Now let's try this command to see if we have access to send that PID signals via kill .

$ if ! kill -0 $(pgrep sleep); then echo "You're weak!"; fi
bash: kill: (15693) - Operation not permitted
You're weak!

So it works, but the output is leaking a message from the kill command that we don't have permissions. Not a big deal, simply catch STDERR and send it to /dev/null .

$ if ! kill -0 $(pgrep sleep) 2>/dev/null; then echo "You're weak!"; fi
You're weak!

Complete example

So then we could do something like this, killer.bash :

#!/bin/bash

PID=$(pgrep sleep)

if ! kill -0 $PID 2>/dev/null; then
    echo "you don't have permissions to kill PID:$PID"
    exit 1
fi

kill -9 $PID

Now when I run the above as a non-root user:

$ ~/killer.bash
you don't have permissions to kill PID:15693
$ echo $?
1

However when it's run as root:

$ sudo ~/killer.bash
$ echo $?
0
$ pgrep sleep
$
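Tying this back to the question's snippet: the pidfile liveness check could be sketched as follows ( /path/to/file.pid from the question is replaced here by a throwaway temp file, and a background sleep stands in for the daemon):

```shell
#!/bin/sh
# Minimal sketch of a pidfile liveness check using kill -0.
pidfile=$(mktemp)

sleep 60 &                 # stand-in for a daemon
echo $! > "$pidfile"

# kill -0 sends nothing; it only reports whether the PID exists
# and we are allowed to signal it.
if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "process is alive"
else
    echo "process is gone (or not ours to signal)"
fi

kill "$(cat "$pidfile")"   # clean up the stand-in daemon
rm -f "$pidfile"
```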
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/169898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
169,900
How can I log into the mysql 5.6 command line client and reset the root password in CentOS 7? I read the following at this link , but it does not work:

1) sudo service mysqld stop
2) sudo service mysqld startsos
3) mysql -u root
4) Now you will be at mysql prompt. Here type:-
4.1) UPDATE mysql.user SET Password=PASSWORD('NewPassHere') WHERE User='root';
4.2) FLUSH PRIVILEGES;
4.3) quit;
5) sudo service mysqld restart

Step 1) above results in:

[root@localhost ~]# sudo service mysqld stop
Redirecting to /bin/systemctl stop mysqld.service
Failed to issue method call: Unit mysqld.service not loaded.

Step 3) above results in:

-bash: syntax error near unexpected token `('

When I change step 3 to UPDATE mysql.user SET Password='NewPassHere' WHERE User='root'; , I get the following error:

bash: UPDATE: command not found...

I do seem to be able to get into mysql when I type su - to become root and then type mysql -u root at the next prompt. But then the above 5 step commands do not work, even when I remove the word sudo and/or replace the word service with systemctl . How can I get working access to the mysql 5.6 command line in CentOS 7, starting with setting the root password?
Sometimes you can clobber your configuration. As such, it's easier to start over, as if the package had never been installed. In your case, we are looking at MySQL.

Use yum to remove MySQL, like so:

yum remove mysql mysql-server

With MySQL removed, we can safely back up the configuration:

mv /var/lib/mysql /var/lib/mysql_old_backup

If you'd rather remove it, issue:

rm -vR /var/lib/mysql

Now we can safely reinstall MySQL, using the default configuration that is included in the package from the official MySQL repository (we need wget to fetch the rpm that will update your repos):

yum install wget

Now download and install the repository:

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm && rpm -ivh mysql-community-release-el7-5.noarch.rpm

Verify the repositories are installed:

ls -1 /etc/yum.repos.d/mysql-community*

Issue the actual install command (this will replace the mysql-server in the CentOS repository with the official package from upstream MySQL):

yum install mysql-server

Use the script provided to set the root password, now that we have a fresh install again:

mysql_secure_installation

If you ever need to set the password after using the script, use:

mysql -u root

Now you can use the standard commands from systemctl , part of systemd, to start and stop the daemon like so:

systemctl start mysqld

References

How to Remove MySQL Completely from Linux System - CentOS
How to install MySQL Server 5.6 on CentOS 7 / RHEL 7
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/169900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92670/" ] }
169,909
I have the following line in /etc/fstab:

UUID=E0FD-F7F5 /mnt/zeno vfat noauto,utf8,user,rw,uid=1000,gid=1000,fmask=0113,dmask=0002 0 0

The partition is freshly created by gnome-disks under the respective user, and spans the whole card. Now: Running mount /mnt/zeno as user (1000) succeeds, but right after that I find out that it's actually not mounted: a following umount /mnt/zeno fails with umount: /mnt/zeno: not mounted . When watching journalctl -f , I can see the following messages appear when mounting:

[...] kernel: SELinux: initialized (dev mmcblk0p1, type vfat), uses genfs_contexts
[...] systemd[1]: Unit mnt-zeno.mount is bound to inactive service. Stopping, too.
[...] systemd[1]: Unmounting /mnt/zeno...
[...] systemd[1]: Unmounted /mnt/zeno.

So it seems that systemd indeed keeps unmounting the drive, but I can't find out why. I don't remember creating any custom ".mount" files. I tried to find something in /etc/systemd and in my home folder but did not find anything. So what is this "mnt-zeno.mount" file and how can I review it? And most importantly, how can I mount the drive?
mnt-zeno.mount was created by systemd-fstab-generator . According to Jonathan de Boyne Pollard's explanation on the debian-user mailing list :

[systemd-fstab-generator is] a program that reads /etc/fstab at boot time and generates units that translate fstab records to the systemd way of doing things [.....] The systemd way of doing things is mount and device units, per the systemd.mount(5) and systemd.device(5) manual pages. In the raw systemd way of doing things, there's a device unit named "dev-sde1.device" which is a base requirement for a mount unit named "media-lumix\x2dphotos.mount".

After altering fstab one should either run systemctl daemon-reload (this makes systemd reparse /etc/fstab and pick up the changes) or reboot.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/169909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9365/" ] }
169,910
Can I use the case statement to handle arguments? So, using -restart I'd like to execute "-umount" and "-mount" :

#!/bin/bash
case "$1" in
-mount
mount /ip/share1 /local/share1
;;
-umount
umount /ip/share1
;;
-restart
# echo TODO
;;
*)
[...]
esac
It looks to me like that should work, other than the syntactical quibble of missing ) s (and the shebang should be #!/bin/bash , not #/bin/bash ). I tested this and it behaves correctly.

#!/bin/bash

case "$1" in
    "-mount")
        mount /path/to/device /path/to/mountpoint
        ;;
    "-unmount")
        umount /path/to/mountpoint
        ;;
    "-remount")
        "$0" -unmount
        "$0" -mount
        ;;
    *)
        echo "You have failed to specify what to do correctly."
        exit 1
        ;;
esac
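The -remount branch above re-invokes the script via "$0" ; the same idea can be sketched with a dispatch function instead, which avoids spawning new processes. This is a runnable sketch with the real mount/umount commands stubbed out by echo (the share paths are just the question's examples):

```shell
#!/bin/sh
# Dispatch table as a function, so -restart can reuse the other branches.
dispatch() {
    case "$1" in
        -mount)   echo "mount /ip/share1 /local/share1" ;;
        -umount)  echo "umount /ip/share1" ;;
        -restart) dispatch -umount
                  dispatch -mount ;;
        *)        echo "usage: $0 -mount|-umount|-restart" >&2
                  return 1 ;;
    esac
}

dispatch -restart
```

Running it prints the umount line followed by the mount line.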
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40628/" ] }
169,995
Suppose I have a file:

File1:

PAPER TEAM MANISH NISHA GARIMA JYOUTI ........etc

File2 I want:

PAPER TEAM
MANISH
NISHA
GARIMA JYOUTI

Rows to column conversion of File1.
Using tr , replace each run of repeated space characters with a single newline ( \n ) character:

tr -s ' ' '\n' < infile > outfile

But I think you want something like this?

Original:

0 1 2 3
a b c d
# $ @ %

Transposed:

0 a #
1 b $
2 c @
3 d %

With awk we could do:

awk '{
    for (i=1; i<=NF; i++)
        RtoC[i] = (i in RtoC ? RtoC[i] OFS : "") $i;
}
END{
    for (i=1; i<=NF; i++)
        print RtoC[i]
}' infile

This appends each field to the entry for its field number position, and in END prints the result, so that the first row becomes the first column, the second row the second column, etc. Of course the input file is limited to your memory size.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/169995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80576/" ] }
170,013
Apparently, running: perl -n -e 'some perl code' * Or find . ... -exec perl -n -e '...' {} + (same with -p instead of -n ) Or perl -e 'some code using <>' * often found in one-liners posted on this site, has security implications. What's the deal? How to avoid it?
What's the problem

First, like for many utilities, you'll have an issue with file names starting with - . While in:

sh -c 'inline sh script here' other args

the other args are passed to the inline sh script, with the perl equivalent,

perl -e 'inline perl script here' other args

the other args are scanned for more options to perl first, not passed to the inline script. So, for instance, if there's a file called -eBEGIN{do something evil} in the current directory,

perl -ne 'inline perl script here;' *

(with or without -n ) will do something evil.

Like for other utilities, the workaround for that is to use the end-of-options marker ( -- ):

perl -ne 'inline perl script here;' -- *

But even then, it's still dangerous and that's down to the <> operator used by -n / -p .

The issue is explained in the perldoc perlop documentation. That special operator is used to read one line (one record, records being lines by default) of input, where that input is coming from each of the arguments in turn passed in @ARGV . In:

perl -pe '' a b

-p implies a while (<>) loop around the code (here empty). <> will first open a , read records one line at a time until the file is exhausted and then open b ...

The problem is that, to open the file, it uses the first, unsafe form of open :

open ARGV, "the file as provided"

With that form, if the argument is

"> afile" , it opens afile in writing mode,
"cmd|" , it runs cmd and reads its output,
"|cmd" , you have a stream open for writing to the input of cmd .

So for instance:

perl -pe '' 'uname|'

doesn't output the content of the file called uname| (a perfectly valid file name btw), but the output of the uname command.
If you're running:

perl -ne 'something' -- *

and someone has created a file called rm -rf "$HOME"| (again a perfectly valid file name) in the current directory (for instance because that directory was once writeable by others, or you've extracted a dodgy archive, or you've run some dodgy command, or another vulnerability in some other software was exploited), then you're in big trouble. Areas where it's important to be aware of that problem are tools processing files automatically in public areas like /tmp (or tools that may be called by such tools).

Files called > foo , foo| , |foo are a problem. But to a lesser extent, so are < foo and foo with leading or trailing ASCII spacing characters (including space, tab, newline, cr...), as that means those files won't be processed or the wrong one will be.

Also beware that some characters in some multi-byte character sets (like ǖ in BIG5-HKSCS) end in byte 0x7c, the encoding of | .

$ printf ǖ | iconv -t BIG5-HKSCS | od -tx1 -tc
0000000  88  7c
        210   |
0000002

So in locales using that charset,

perl -pe '' ./nǖ

would try to run the ./n\x88 command, as perl would not try to interpret that file name in the user's locale!

How to fix/work around

AFAIK, there is nothing you can do to change that unsafe default behaviour of perl once and for all system-wide.

First, the problem occurs only with characters at the start and end of the file name. So, while perl -ne '' * or perl -ne '' *.txt are a problem, perl -ne 'some code' ./*.txt is not, because all the arguments now start with ./ and end in .txt (so not - , < , > , | , space...). More generally, it's a good idea to prefix globs with ./ . That also avoids problems with files called - or starting with - with many other utilities (and here, that means you don't need the end-of-options ( -- ) marker any more).

Using -T to turn on taint mode helps to some extent. It will abort the command if such a malicious file is encountered (only for the > and | cases, not < or whitespace though).
That's useful when using such commands interactively, as it alerts you that there's something dodgy going on. That may not be desirable when doing some automatic processing though, as that means someone can make that processing fail just by creating a file.

If you do want to process every file, regardless of their name, you can use the ARGV::readonly perl module on CPAN (unfortunately usually not installed by default). That's a very short module that does:

sub import {
    # Tom Christiansen in Message-ID: <24692.1217339882@chthon>
    # recommends essentially the following:
    for (@ARGV) {
        s/^(\s+)/.\/$1/;  # leading whitespace preserved
        s/^/< /;          # force open for input
        $_ .= qq/\0/;     # trailing whitespace preserved & pipes forbidden
    };
};

Basically, it sanitises @ARGV by turning " foo|" for instance into "< ./ foo|\0" .

You can do the same in a BEGIN statement in your perl -n/-p command:

perl -pe 'BEGIN{$_.="\0" for @ARGV} your code here' ./*

Here we simplify it on the assumption that ./ is being used. A side effect of that (and ARGV::readonly ) though is that $ARGV in your code here shows that trailing NUL character.

Update 2015-06-03

perl v5.21.5 and above have a new <<>> operator that behaves like <> except that it will not do that special processing. Arguments will only be considered as file names. So with those versions, you can now write:

perl -e 'while(<<>>){ ...;}' -- *

(don't forget the -- or use ./* though) without fear of it overwriting files or running unexpected commands. -n / -p still use the dangerous <> form though. And beware symlinks are still being followed, so that does not necessarily mean it's safe to use in untrusted directories.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22565/" ] }
170,016
The qcow2 image file format for KVM can use AES encryption . The encryption is applied at the cluster level : Each sector within each cluster is independently encrypted using AES Cipher Block Chaining mode, using the sector's offset (relative to the start of the device) in little-endian format as the first 64 bits of the 128 bit initialisation vector. The cluster size can be set from 512 bytes to 2M (64K appears to be the default). One of the main issues with using qcow2 encryption is the performance hit for the CPU: every disk write or non-cached read needs to be encrypted or decrypted. What I'd like to know is: does QEMU/KVM use the Intel AES instructions to mitigate the performance hit if the host CPU has them? If so, does usage or performance depend significantly on cluster size? Intel® AES instructions are a new set of instructions available beginning with the all new 2010 Intel® Core™ processor family based on the 32nm Intel® microarchitecture codename Westmere. These instructions enable fast and secure data encryption and decryption, using the Advanced Encryption Standard (AES) which is defined by FIPS Publication number 197. Since AES is currently the dominant block cipher, and it is used in various protocols, the new instructions are valuable for a wide range of applications.
At least with the Fedora 20 package, qemu-img (1.6.2, 10.fc20) does not use AES-NI for AES crypto.

Confirming

One can verify it like this:

Does the CPU have AES-NI?

$ grep aes /proc/cpuinfo -i

For example my Intel Core 7 has this extension.

Install the necessary debug packages:

# debuginfo-install qemu-img

Run qemu-img in a debugger:

$ gdb --args qemu-img convert -o encryption -O qcow2 disk1.img enc1.qcow2

Set a break-point in a well known qemu encryption function that is not optimized for AES-NI:

(gdb) b AES_encrypt
Breakpoint 1 at 0x794b0: file util/aes.c, line 881.

Run the program:

(gdb) r
Starting program: /usr/bin/qemu-img convert -o encryption -O qcow2 disk1.img enc1.qcow2

Results

In my testing it does stop there:

Breakpoint 1, AES_encrypt (in=0x7ffff7fabd60 "...", key=0x555555c1b510) at util/aes.c:881
881                    const AES_KEY *key) {
(gdb) n
889         assert(in && out && key);
(gdb) n
881                    const AES_KEY *key) {
(gdb) n
889         assert(in && out && key);
(gdb) n
896         s0 = GETU32(in     ) ^ rk[0];
(gdb) n
897         s1 = GETU32(in +  4) ^ rk[1];

Meaning that, indeed, Intel AES instructions are not used.

My first thought was that qemu-img perhaps just uses libcrypto such that AES-NI is automatically used, when available. qemu-img even links against libcrypto (cf. ldd $(which qemu-img) ) - but it does not seem to use it for AES crypto. Hmm.

I derived the breakpoint location via grepping the QEMU source code. On Fedora you can get it like this:

$ fedpkg clone -a qemu
$ cd qemu
$ fedpkg source
$ tar xfv qemu-2.2.0-rc1.tar.bz2
$ cd qemu-2.2.0-rc1

NOTE: gdb can be exited via the q uit command.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
170,020
I've written a shell script for testing an API that copies files and echoes its progress after each one. There is a two second sleep between each copy, so I would like to add the ability to press any key to pause the script to allow deeper testing. Then press any key to resume. How can I add this in as few lines as possible?
You don't need to add anything to your script. The shell already provides such a functionality.

Start your script in a terminal. While it is running and blocking the terminal, press ctrl - z . The terminal is released again and you see a message that the process is stopped. (It is now in the process state T , stopped.)

Now do whatever you want. You can also start other processes/scripts and stop them with ctrl - z .

Type jobs in the terminal to list all stopped jobs.

To let your script continue, type fg (foreground). It resumes the job back into the foreground process group and the job continues running.

See an example:

root@host:~$ sleep 10    # sleep for 10 seconds
^Z
[1]+  Stopped                 sleep 10
root@host:~$ jobs        # list all stopped jobs
[1]+  Stopped                 sleep 10
root@host:~$ fg          # continue the job
sleep 10
root@host:~$             # job has finished
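The same mechanism can also be driven by signals: Ctrl - z delivers SIGTSTP, and SIGSTOP/SIGCONT do the equivalent non-interactively. A sketch, with a sleep standing in for the test script:

```shell
#!/bin/sh
# Pause and resume a background job with signals instead of Ctrl+z / fg.
sleep 30 &
pid=$!

kill -STOP "$pid"        # same effect as Ctrl+z: state becomes T (stopped)
ps -o stat= -p "$pid"    # prints a state string beginning with "T"

kill -CONT "$pid"        # same effect as fg/bg: the job resumes
kill "$pid"              # clean up the stand-in job
```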
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170020", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46764/" ] }
170,032
I am trying to do something like alias ftp='echo do not use ftp. Use sftp instead.' just so that ftp will not be accidentally used. But I noticed that ftp abcd.com will cause the command to echo do not use ftp. Use sftp instead. abcd.com because the abcd.com is taken to be an argument for echo . Is there a way to make Bash not add abcd.com to the substitution, or make echo not take it as extra arguments? (Is there a solution for each approach?) I supposed I could make it alias ftp='sftp' but I just want to make the command stop all together to remind myself not to use ftp .
alias ftp='echo do not use ftp. Use sftp instead. # '
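If the goal is a reminder that also works inside scripts (where bash does not expand aliases by default), a shell function is an alternative sketch; it shadows the real command, ignores its arguments, and fails loudly:

```shell
#!/bin/sh
# A function named ftp shadows the real binary for this shell,
# swallows any arguments, and returns a failure status.
ftp() {
    echo 'do not use ftp. Use sftp instead.' >&2
    return 1
}

ftp abcd.com || true    # prints the reminder; abcd.com is simply ignored
```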
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19342/" ] }
170,043
I have an Apache logfile, access.log . How do I count the number of occurrences of each line in that file? For example, the result of

cut -f 7 -d ' ' | cut -d '?' -f 1 | tr '[:upper:]' '[:lower:]'

is:

a.php
b.php
a.php
c.php
d.php
b.php
a.php

The result that I want is:

3 a.php
2 b.php
1 d.php  # order doesn't matter
1 c.php
| sort | uniq -c

As stated in the comments. Piping the output into sort organises the output into alphabetical/numerical order. This is a requirement because uniq only matches on repeated consecutive lines, i.e.

a
b
a

If you use uniq on this text file, it will return the following:

a
b
a

This is because the two a s are separated by the b : they are not consecutive lines. However if you first sort the data into alphabetical order, like

a
a
b

then uniq will remove the repeating lines. The -c option of uniq counts the number of duplicates and provides output in the form:

2 a
1 b

References:

sort(1)
uniq(1)
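Putting it together on the sample data from the question (the trailing awk is only there to squeeze the column padding that uniq -c adds, and sort -rn puts the most frequent lines first):

```shell
#!/bin/sh
# Count duplicate lines: sort groups identical lines, uniq -c counts
# each group, sort -rn orders groups by descending count.
printf 'a.php\nb.php\na.php\nc.php\nd.php\nb.php\na.php\n' |
    sort | uniq -c | sort -rn | awk '{print $1, $2}'
# first two lines printed:
# 3 a.php
# 2 b.php
# (the two count-1 lines follow, in either order)
```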
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/170043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
170,063
After about an hour of Googling this, I can't believe nobody has actually asked this question before... So I've got a script running on TTY1. How do I make that script launch some arbitrary program on TTY2? I found tty , which tells you which TTY you're currently on. I found writevt , which writes a single line of text onto a different TTY. I found chvt , which changes which TTY is currently displayed. I don't want to display TTY2. I just want the main script to continue executing normally, but if I manually switch to TTY2 I can interact with the second program.
setsid sh -c 'exec command <> /dev/tty2 >&0 2>&1' As long as nothing else is using the other TTY ( /dev/tty2 in this example), this should work. This includes a getty process that may be waiting for someone to login; having more than one process reading its input from a TTY will lead to unexpected results. setsid takes care of starting the command in a new session. Note that command will have to take care of setting the stty settings correctly, e.g. turn on "cooked mode" and onlcr so that outputting a newline will add a carriage return, etc.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26776/" ] }
170,068
In Fedora we have 'systemctl' and 'service' scripts. It seems that service internally calls systemctl . So what is the correct/right way on Fedora to start or stop services -- via systemctl or service facility? May be there are nuances to keep in mind?
It's typically the case that the service scripts are redirected to systemctl (Systemd) scripts, so it's basically your preference which you want to use.

Example

From my Fedora 20 system.

$ service sshd status
Redirecting to /bin/systemctl status sshd.service
sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled)
   Active: active (running) since Fri 2014-11-21 09:12:10 EST; 5 days ago
 Main PID: 1095 (sshd)
   CGroup: /system.slice/sshd.service
           └─1095 /usr/sbin/sshd -D

Nov 21 09:12:10 dufresne systemd[1]: Starting OpenSSH server daemon...
Nov 21 09:12:10 dufresne systemd[1]: Started OpenSSH server daemon.
Nov 21 09:12:11 dufresne sshd[1095]: Server listening on 0.0.0.0 port 22.
Nov 21 09:12:11 dufresne sshd[1095]: Server listening on :: port 22.

I generally use both methods, since old habits die hard. But if you're trying to adapt to the Systemd world, I'd continue to force myself to do things using systemctl if possible. Also, Systemd brings everything that you used to do with chkconfig and service under one command, systemctl , so I generally find that easier to cope with in the long run. This cheatsheet on the Fedora project's website is helpful in making the switch. Incidentally, the answer to your original question is answered in a footnote on that page:

Note that all /sbin/service and /sbin/chkconfig lines listed above continue to work on systemd, and will be translated to native equivalents as necessary. The only exception is chkconfig --list.

References

SysVinit to Systemd Cheatsheet
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86215/" ] }
170,152
I just received notification that our site has a power outage tomorrow morning. I am a Windows admin but I have to cover for our Linux admin who's not around until tomorrow evening. I need to shutdown our RHEL server at 06:45 tomorrow morning (without me doing it). I have searched on here but see mixed answers using shutdown , some say -h , some say -p , some say something completely different. It's ~21:15 now and I need to shutdown at 06:45 in the morning. What is the simplest way I can schedule this?
You should use the at command:

$ sudo at 6:45
[sudo] password for root:
warning: commands will be executed using /bin/sh
at> poweroff
at> <EOT>

Don't type the <EOT> , but press Ctrl + D at the second at> prompt. The significant advantage of using at over using shutdown with a TIME argument is that it involves real, persistent scheduling, and works even if the machine is rebooted in the intermediate time period. The shutdown TIME will not restart automatically in such an event, which might cause a double ungraceful power-off if the reboot in the intermediate time period was not anticipated.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170152", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92959/" ] }
170,179
find ~/ -name *test.txt
find ~/ -name '*test.txt'

I need to construct an example where the first form fails but the second still works.
The quotes protect the contents from shell wildcard expansion. Run that command (or, even simpler, just echo *test.txt ) in a directory with a footest.txt file and then in one without any files that end in test.txt and you will see the difference.

$ ls
a b c d e
$ echo *test.txt
*test.txt
$ touch footest.txt
$ echo *test.txt
footest.txt

The same thing will happen with find.

$ set -x
$ find . -name *test.txt
+ find . -name footest.txt
./footest.txt
$ find . -name '*test.txt'
+ find . -name '*test.txt'
./footest.txt
$ touch bartest.txt
+ touch bartest.txt
$ find . -name *test.txt
+ find . -name bartest.txt footest.txt
find: paths must precede expression
Usage: find [-H] [-L] [-P] [path...] [expression]
$ find . -name '*test.txt'
+ find . -name '*test.txt'
./bartest.txt
./footest.txt
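The failure mode described above is easy to reproduce in a scratch directory:

```shell
dir=$(mktemp -d)
cd "$dir"
touch footest.txt bartest.txt
echo *test.txt      # unquoted: the shell expands the glob before the command runs
# prints: bartest.txt footest.txt
echo '*test.txt'    # quoted: the command sees the literal pattern
# prints: *test.txt
```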
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92925/" ] }
170,204
How can I find the max value in column 1 and echo the respective path location, from a file which contains n records?

$ cat version.log
112030 /opt/oracle/app/oracle/product/11.2.0
121010 /opt/oracle/app/oracle/product/12.1.0

Expected output:

/opt/oracle/app/oracle/product/12.1.0
This should work:

awk -v max=0 '{if($1>max){want=$2; max=$1}}END{print want}' version.log

The -v max=0 sets the variable max to 0 , then, for each line, the first field is compared to the current value of max . If it is greater, max is set to the value of the 1st field and want is set to the second field of the current line. When the program has processed the entire file, the current value of want is printed.

Edit

I did not test the awk solution earlier and it was really my bad to have provided it. Anyway, the edited version of the answer should work (thanks to terdon for fixing it) and I tested the below as well.

sort -nrk1,1 filename | head -1 | cut -d ' ' -f3

I am sort ing on the first field, where:

-n specifies numerical sort.
-r specifies reverse the sort result.
-k1,1 specifies the first field for the sorting to occur.

Now, after the sorting I am piping the output and just getting the first result, which will give me the numerically highest value of column 1 in the result. Finally, I pipe it to cut with the delimiter specified as space and print the -f3 which is the intended output.

Testing

cat filename
112030 /opt/oracle/app/oracle/product/11.2.0
121010 /opt/oracle/app/oracle/product/12.1.0
2312 /hello/some/other/path
3423232 /this/is/really/big/number
342 /ok/not/the/maximum/number
9999899 /Max/number
9767 /average/number

Now, after I run the above command for the input as above, I get the output as:

/Max/number
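The awk one-liner can be fed the question's sample lines directly to confirm it picks the path next to the largest first column:

```shell
printf '%s\n' \
    '112030 /opt/oracle/app/oracle/product/11.2.0' \
    '121010 /opt/oracle/app/oracle/product/12.1.0' \
    | awk -v max=0 '{if($1>max){want=$2; max=$1}}END{print want}'
# prints: /opt/oracle/app/oracle/product/12.1.0
```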
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92238/" ] }
170,223
How to extract the path location from the string given below:

/opt/oracle/app/oracle/product/12.1.0/bin/tnslsnr

Expected output:

/opt/oracle/app/oracle/product/12.1.0/bin (or) /opt/oracle/app/oracle/product/12.1.0/bin/
Use the shell's suffix removal feature

str=/opt/oracle/app/oracle/product/12.1.0/bin/tnslsnr
path=${str%/*}
echo "$path"

In general, ${parameter%word} removes word from the end of parameter . In our case, we want to remove the final slash and all characters which follow: /* . The above produces:

/opt/oracle/app/oracle/product/12.1.0/bin

Use dirname

dirname can be used to strip the last component from a path:

$ dirname -- "$str"
/opt/oracle/app/oracle/product/12.1.0/bin
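Both approaches above can be checked side by side:

```shell
str=/opt/oracle/app/oracle/product/12.1.0/bin/tnslsnr
echo "${str%/*}"     # suffix removal: strip the last / and what follows
dirname -- "$str"    # same result via dirname
# both print: /opt/oracle/app/oracle/product/12.1.0/bin
```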
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92238/" ] }
170,242
I know it for one command but how to work with a sequence?
I put this as an answer, because I cannot format it properly in a comment.

foo() {
    echo foo
    echo bar
}

> foo
foo
bar

Imho, you have more freedom with a function than with an alias. At least you can format it properly.

Advanced Bash-Scripting Guide

In a script, aliases have very limited usefulness. It would be nice if aliases could assume some of the functionality of the C preprocessor, such as macro expansion, but unfortunately Bash does not expand arguments within the alias body. [2] Moreover, a script fails to expand an alias itself within "compound constructs," such as if/then statements, loops, and functions. An added limitation is that an alias will not expand recursively. Almost invariably, whatever we would like an alias to do could be accomplished much more effectively with a function.
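One concrete advantage of a function over an alias is that it can use positional parameters. A hypothetical example (the function name and messages are made up):

```shell
greet() {
    echo "hello, $1"
    echo "goodbye, $1"
}

greet world
# prints:
# hello, world
# goodbye, world
```

An alias cannot do this, because Bash does not expand arguments within the alias body; they are simply appended at the end.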
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170242", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93005/" ] }
170,275
How can I list the number of lines in the files in /group/book/four/word , sorted by the number of lines they contain? ls -l command lists them down but does not sort them
You should use a command like this:

find /group/book/four/word/ -type f -exec wc -l {} + | sort -rn

find : search for files on the path you want. If you don't want it recursive, and your find implementation supports it, you should add -maxdepth 1 just before the -exec option.
-exec : tells find to execute wc -l on every file.
sort -rn : sort the results numerically in reverse order, from greater to lower.

(That assumes file names don't contain newline characters.)
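A sketch of the behaviour on throwaway files. Note one quirk: because -exec ... + batches several files into one wc invocation, wc also prints a "total" summary line, which sorts to the top:

```shell
dir=$(mktemp -d)
printf 'a\nb\nc\n' > "$dir/three"
printf 'a\n'       > "$dir/one"
find "$dir" -type f -exec wc -l {} + | sort -rn
# something like (paths abbreviated):
#   4 total        <- wc's summary line for the batch
#   3 .../three
#   1 .../one
rm -rf "$dir"
```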
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93026/" ] }
170,279
I'm trying to set something up on a server I run: whenever I cd into a public_html folder, 95% of the time there are a few commands I will always run to check certain things. Is there any way I can hook into cd so that if the directory is a public_html , it will automatically run the commands for me? If I can't hook into the cd command, are there any other things I could do to achieve the outcome I'm after? I'm running CentOS 5.8.
You could add this function to your .bashrc or other startup file (depending on your shell).

cd() {
    if [ "$1" = "public_html" ]; then
        echo "current dir is my dir"
    fi
    builtin cd "$1"
}
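A slightly generalized sketch of the same idea: it passes all arguments through with "$@" (so cd keeps working with no argument or with options) and matches on the directory actually entered, so cd /some/site/public_html also triggers. The echo is a placeholder for your real check commands:

```shell
cd() {
    builtin cd "$@" || return
    case $PWD in
        */public_html)
            echo "entered public_html - running checks"   # hypothetical checks go here
            ;;
    esac
}
```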
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63703/" ] }
170,301
I have never needed to use any other options for tar other than -cvzpf. Is there some way to set that as the default behavior? While I recognize that it is often impossible to say why someone wrote a program one way as opposed to another, I do not understand why tar doesn't do things like most other command line utilities, i.e.: command -options /source/path /target/path Why is it instead: command -options /target/path /source/path
-cvzpf is not the default behavior for at least the following reasons.

-c specifies creating an archive; it is at least equally likely that one will want to extract an archive or view the contents of an archive.
-v specifies verbose operations; some people don't want to see everything.
-p is irrelevant for creating archives.
-f in case the user wants to pipe the output to a different device/program instead of a file (or to the default tape device in traditional Unices).

Regarding why it is not how you suggest it should be, it is historic reasons dealing with its use with tape drives and the original author's coding. Regarding making that the default behavior, you could create an alias; however, you would need a separate one for extracting files. A separate way to change the default options with the GNU implementation of tar is by setting the TAR_OPTIONS environment variable. Though I have found that it does not like it when you try to specify -f as one of the options.

export TAR_OPTIONS=-tvzp

Note that while you can set the options, this will cause an error if you pass tar a conflicting option. For instance, if you have TAR_OPTIONS set as above and you try to extract an archive, you will get the following error.

tar: You may not specify more than one `-Acdtrux' or `--test-label' option
Try `tar --help' or `tar --usage' for more information.
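A conflict-free illustration of TAR_OPTIONS with just -v (GNU tar is assumed here; run in a scratch directory):

```shell
dir=$(mktemp -d)
cd "$dir"
echo hi > demo.txt
TAR_OPTIONS=-v tar -cf demo.tar demo.txt   # verbose without -v on the command line
# prints: demo.txt
cd /
rm -rf "$dir"
```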
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83488/" ] }
170,346
In a CentOS 7 server, I want to get the list of selectable units for which journalctl can produce logs. How can I change the following code to accomplish this?

journalctl --output=json-pretty | grep -f UNIT | sort -u

In the CentOS 7 terminal, the above code produces grep: UNIT: No such file or directory .

EDIT: The following Java program is terminating without printing any output from the desired grep. How can I change things so that the Java program works in addition to the terminal version?

String s;
Process p;
String[] cmd = {"journalctl --output=json-pretty ", "grep UNIT ", "sort -u"};
try {
    p = Runtime.getRuntime().exec(cmd);
    BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
    while ((s = br.readLine()) != null)
        System.out.println("line: " + s);
    p.waitFor();
    System.out.println("exit: " + p.exitValue() + ", " + p.getErrorStream());
    BufferedReader br2 = new BufferedReader(new InputStreamReader(p.getErrorStream()));
    while ((s = br2.readLine()) != null)
        System.out.println("error line: " + s);
    p.waitFor();
    p.destroy();
} catch (Exception e) {}
journalctl can display logs for all units - whether these units write to the log is a different matter. To list all available units and therefore all available for journalctl to use:

systemctl list-unit-files --all

As to your Java code, in order to make pipes work with Runtime.exec() you could either put the command in a script and invoke the script, or use a string array, something like:

String[] cmd = {"sh", "-c", "command1 | command2 | command3"};
p = Runtime.getRuntime().exec(cmd);

or:

Runtime.getRuntime().exec(new String[]{"sh", "-c", "command1 | command2 | command3"});
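The single-string form that gets handed to sh -c can be sanity-checked directly in a shell before embedding it in Java. The printf/sort/head commands here are just stand-ins for command1 | command2 | command3:

```shell
sh -c 'printf "b\na\nc\n" | sort | head -n 1'
# prints: a
```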
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92670/" ] }
170,390
When I execute the command sudo iwlist wlan0 scan | grep ESSID I get the result:

ESSID:"DHS_3RD_FLOOR"
ESSID:"MAXTA"
ESSID:"MAXTA_5THWL"
ESSID:"OPENSTACK"
ESSID:"IOT"
ESSID:"ved_opa"
ESSID:"dlink"
ESSID:"WifiFeazt"

But I want the output as (without ESSID:"):

DHS_3RD_FLOOR
MAXTA
MAXTA_5THWL
OPENSTACK
IOT
ved_opa
dlink
WifiFeazt

I googled but I have no idea how to do it. Any advice?
With GNU sed :

sed -r 's/(ESSID:|")//g'

or

sed 's/\(ESSID:\|"\)//g'

or

perl -pe 's/(?:ESSID:|")//g'

or in pure bash:

str=$(sudo iwlist wlan0 scan | grep ESSID)
str=${str//ESSID:/}
echo ${str//\"/}

Output:

DHS_3RD_FLOOR
MAXTA
MAXTA_5THWL
OPENSTACK
IOT
ved_opa
dlink
WifiFeazt
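The sed variant can be verified with canned input instead of a live iwlist scan (GNU sed's -r is assumed, as in the answer):

```shell
printf '%s\n' 'ESSID:"IOT"' 'ESSID:"dlink"' | sed -r 's/(ESSID:|")//g'
# prints:
# IOT
# dlink
```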
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48634/" ] }
170,398
I know that flv and mp4 files contain aac audio, while avi videos usually contain mp3 audio streams. What command (avconv, ffmpeg) would extract the audio without transcoding it?
ffmpeg -i video.mp4 -vn -acodec copy audio.aac Here’s a short explanation on what every parameter does: -i option specifies the input file. -vn option is used to skip the video part. -acodec copy will copy the audio stream keeping the original codec.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
170,399
I am installing an OpenStack controller node on one machine, and another machine is running nova-compute only. When I run the controller node, cinder gets errors. I have listed below which services got errors; please help me.

cat /var/log/cinder/cinder-backup.log

1) ERROR cinder.service [-] Recovered model server connection!
2) 2014-11-28 12:43:35.415 4628 ERROR cinder.openstack.common.rpc.common AMQP server on 10.192.1.126:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds.
3) ERROR cinder.brick.local_dev.lvm Unable to locate Volume Group cinder-volumes
4) ERROR cinder.backup.manager Error encountered during initialization of driver: LVMISCSIDriver
5) ERROR cinder.backup.manager Bad or unexpected response from the storage volume backend API: Volume Group cinder-volumes does not exist

scheduler:

1) ERROR cinder.service [-] Recovered model server connection!
2) ERROR cinder.volume.flows.create_volume Failed to schedule_create_volume: No valid host was found.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78285/" ] }
170,409
If I issue the trap builtin twice [for the same signal], what happens? Is the second command added to the first, or does it replace the first? trap Foo SIGINT...trap Bar SIGINT... When SIGINT happens, does Bash just run Bar , or does it run Foo as well? Or something else...?
The command is replaced. The manpage states:

trap [-lp] [[arg] sigspec ...]
    The command arg is to be read and executed when the shell receives signal(s) sigspec. If arg is absent (and there is a single sigspec) or -, each specified signal is reset to its original disposition (the value it had upon entrance to the shell). If arg is the null string the signal specified by each sigspec is ignored by the shell and by the commands it invokes. If arg is not present and -p has been supplied, then the trap commands associated with each sigspec are displayed. If no arguments are supplied or if only -p is given, trap prints the list of commands associated with each signal. The -l option causes the shell to print a list of signal names and their corresponding numbers. Each sigspec is either a signal name defined in <signal.h>, or a signal number. Signal names are case insensitive and the SIG prefix is optional.

It states the command arg is to be read and executed ... period. It would otherwise not be possible to reset the signal handling if the arg was always added to the list.
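This is easy to confirm: only the last trap installed for a signal fires.

```shell
trap 'echo Foo' USR1
trap 'echo Bar' USR1    # replaces the Foo handler entirely
kill -USR1 $$
# prints: Bar
```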
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26776/" ] }
170,444
I have a file in ~/file.txt . I have created a hard link with:

ln ~/file.txt ~/test/hardfile.txt

and a symlink file:

ln -s ~/file.txt ~/test/symfile.txt

Now, how can I find out which file is a hard link? And which file does the hard link follow? We can spot a symlink file by the -> , but what about a hard link?
-rw-r--r-- 2 kamix users 5 Nov 17:10 hardfile.txt
           ^

That's the number of hard links the file has. A "hard link" is actually between two directory entries; they're really the same file. You can tell by looking at the output from stat :

stat hardlink.file | grep -i inode
Device: 805h/2053d   Inode: 1835019   Links: 2

Notice again the number of links is 2, indicating there's another listing for this file somewhere. The reason you know this is the same file as another is they have the same inode number; no other file will have that. Unfortunately, this is the only way to find them (by inode number). There are some ideas about how best to find a file by inode (e.g., with find ) in this Q&A .
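A sketch of the shared-inode check on throwaway files (GNU coreutils stat and GNU find's -samefile are assumed):

```shell
dir=$(mktemp -d)
touch "$dir/file.txt"
ln "$dir/file.txt" "$dir/hardfile.txt"
stat -c 'inode=%i links=%h' "$dir/file.txt" "$dir/hardfile.txt"
# both lines show the same inode number and links=2
find "$dir" -samefile "$dir/file.txt"   # lists both names of the file
rm -rf "$dir"
```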
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52995/" ] }
170,487
I am trying to write a script for changing a user password in the dovecot user database and I can't understand how to replace a set of characters between delimiters for exact lines with sed . Please check these lines for example (this is the part from the dovecot userdb):

[email protected]:{SHA512-CRYPT}$6$0vthg.LubtSCxRRK$MdTKNQ2Vk8ZW3XQXNXStt9rfr6fNaXqPvZ0o9WJ8mW8y9ozE1pi8dYM8oQzwWa8ESGzEmJO6yT/tgi3ZEqAiE0:::
[email protected]:{SHA512-CRYPT}$6$0vthg.LubtSCxRRK$MdTKNQ2Vk8ZW3XQXNXStt9rfr6fNaXqPvZ0o9WJ8mW8y9ozE1pi8dYM8oQzwWa8ESGzEmJO6yT/tgi3ZEqAiE0:::

How to replace the string between ":" delimiters starting with "{SHA512-CRYPT}" only for user "[email protected]" and not for user "[email protected]" with sed?
Perhaps:

sed 's/\(\(^\|:\)123@example\.com:\)\([^:]\+\)/\1foo/'

given there is no escaped delimiter in the value. Breaking the expression down:

A: s/ Substitute command.
B: (^|:) Starts with start of line or delimiter : , Group 2, part of match group 1.
C: The user to match, part of match group 1.
D: ([^:]+) The part to remove, anything until : . Group 3. Everything until the next delimiter. Should perhaps be \(:\|$\) , but as it should end in : it should suffice.
E: \) Ending the removal grouping.
F: \1 Put back match group 1. User + delimiter(s).
G: foo Whatever is to be inserted as the crypt.
H: / Ending it all. Optionally a /g for global, but assume it is a once-only.
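The expression can be tried on hypothetical entries (alice@example.com, bob@example.com, oldhash and NEWCRYPT are all placeholders; GNU sed is assumed for the \| alternation):

```shell
printf '%s\n' \
    'alice@example.com:{SHA512-CRYPT}oldhash:::' \
    'bob@example.com:{SHA512-CRYPT}otherhash:::' \
    | sed 's/\(\(^\|:\)alice@example\.com:\)\([^:]\+\)/\1NEWCRYPT/'
# prints:
# alice@example.com:NEWCRYPT:::
# bob@example.com:{SHA512-CRYPT}otherhash:::
```

Only the line for the matched user is rewritten; the other user's crypt field is left untouched.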
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61558/" ] }
170,488
I have some Python scripts that I keep in a repository; they are plain-text files, but if their executable bit is set, then the online repository page serves them as binary downloads, not as plain-text pages. Thus, I'd prefer to keep these scripts non-executable. However, I'd also like to use them as well. In principle, I could do:

sudo ln -s /path/to/wherever/I/have/put/myscript.py /usr/bin/

... and then, if the script was executable (and has a shebang), I could just call on the command line:

myscript.py [ARGS]

... which is what I'd want. But if I make the script itself executable, then I have the repository download problem as stated above. And, as long as the script is non-executable, I'd have to call it with an extra python - and a which (because otherwise python would just look in the current directory for the file):

python `which myscript.py` [ARGS]

... which is still quite a bit of typing, which I don't like. Also, as long as the file is non-executable, not even tab completion will work for my[TAB] even if it is in /usr/bin ; only which will work. Now, crudely - if I could have separate, executable permissions on the symlink, I could hope to keep the original non-executable, and still be able to run directly via just myscript.py on the command line. I'm not sure if there is a possibility for Mac OSX - but as How do file permissions apply to symlinks? - Super User notes, Linux definitely doesn't offer options for that: only the original file permissions are taken into account; the permission of the symlink itself isn't. So I was wondering: Is it possible to use a different type of link (maybe "hard link"?) for that kind of purpose? Is there some kind of driver or software which would basically allow you to make something akin to a symlink, but which would be an identical copy of the source - except with its own set of permissions?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170488", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8069/" ] }
170,549
Previously I used openSUSE 11.4 and I had an old manual mount. Despite, I copied all config files (I think) I noticed that unknown to /etc/fstab devices are automounted (know I defined as noauto ). But since this is big difference in openSUSE 13.2 distro versions I am not so surprised. So how to do this in openSUSE 13.2? I would like to mount the device manually by mount , and unmount also manually by umount . No other way, no smart timeout on inactivity or anything like that. I would like to disable that feature at system level, nothing per desktop (for the record I use KDE 3.5, not a joke), so I could be 100% sure this problem will not appear again when working in pure console or another desktop. Related issue provided by don-crissti : Automount not disabling in Ubuntu 12.04 or 13.04 Update # more /etc/udev/rules.d/85-no-automount.rulesSUBSYSTEM=="usb", ENV{UDISKS_AUTO}="0" kernel-desktop-devel-3.16.6-2.1.x86_64 udev-210-25.5.4.x86_64 udisks2-2.1.3-2.1.5.x86_64
The automounting you see on a modern Linux distribution like OpenSUSE or Fedora is implemented by the udisks2 service. Thus, you can disable that feature on system level by stopping that service, e.g.:

# systemctl stop udisks2.service

To verify that it is stopped:

# systemctl status udisks2

Of course, this change isn't permanent. The udisks2 service isn't even enabled by default and thus isn't autostarted during boot. Instead, it is activated via Dbus (e.g. when the first user starts a desktop session). Thus, if you really hate udisks2:

$ systemctl mask udisks2

This will block all starts, including manual ones.

Motivation

Why would one want to disable automounting via the fine udisks2 disk manager? There are several good reasons, e.g.:

work around a udisks2 automount bug 1
do forensics work on some USB drives
rescue data from a corrupted FS on a USB device (where the automount would lead to more destruction)

1. e.g. on Fedora 25, when connecting 2 USB devices that are a Btrfs RAID-1 mirror, the mirror is automounted under /run/media/juser/mirror alright - BUT it also mounted a second time under /run/media/juser/mirror1 when unlocking the screen ... while the first mount is still live ...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5884/" ] }
170,555
I have configured the lines in /etc/inittab as follows:

# The default runlevel.
id:2:initdefault:

But after logging in, the output of runlevel is as follows:

N 5

So why am I in runlevel 5 instead of 2? Note: As additional info, here is the uname -a output for my system:

Linux d3bi4n 3.16.0-4-amd64 #1 SMP Debian 3.16.7-2 (2014-11-06) x86_64 GNU/Linux

and the output of dpkg -S /sbin/init is

systemd-sysv: /sbin/init
$ dpkg -S /sbin/initsystemd-sysv: /sbin/init Your init system is Systemd, not SysVinit. /etc/inittab is a configuration file of SysVinit, it is not used by Systemd. I presume you have this file because this is a jessie system which was upgraded from an earlier jessie or from wheezy with SysVinit. Systemd doesn't exactly have a concept of runlevels, though it approximates them for compatibility with SysVinit. Systemd has “target units” instead. You can choose the boot-time target unit by setting the symbolic link /etc/systemd/system/default.target . See the Systemd FAQ for more information. If you don't want to use Systemd, install the sysvinit-core package, which provides a traditional SysVinit (formerly in the sysvinit package, which in jessie is now a front for systemd). As of jessie, Debian defaults to Systemd but still supports SysVinit.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170555", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93225/" ] }
170,562
I'm trying to structure an android project for continuous integration/continuous delivery via gradle and git. The main code will be pulled from git on the build server without various files that contain keys. Gradle needs these files to successfully build the project. I will pull those files on the build server separately, but i'm looking for a common place to store these files both on my local env and on the build server.So that I can reference this location in ENVs, then point to that location in the gradle build file. The build server runs in root mode, obviously my local is running as a user. Where is a non-root accessible, public place in the linux file system besides /home/$USER? The dist i'm using are ubuntu and debian.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170562", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85822/" ] }
170,571
I want to execute a program (which I know was written in C++), but I get this error:

zsh: exec format error: ./myProgram

Output of file myProgram :

myProgram: Mach-O i386 executable

My system is a 64-bit Linux. I also tried on a 32-bit Ubuntu VM, but I get:

bash: ./myProgram: cannot execute binary file: Exec format error

Why wasn't I able to execute that program? How can I execute it?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65577/" ] }
170,572
In a script I have inherited, the output of one of our programs is put into a file with:

program &>> result.txt

I have been reading my book "Learning the bash shell" at home over the weekend, but cannot find what this means (I know what > and >> mean). Am I missing something obvious?
Your book is likely too old; this is something new in Bash version 4. program &>> result.txt is equivalent to program >> result.txt 2>&1 : redirect and append both stdout and stderr to the file result.txt. More about I/O redirection here .
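A quick way to see that both forms produce the same appended output; the scratch file exists only for this demo, and bash is invoked explicitly because &>> is a bash-ism that plain sh does not parse:

```shell
# Append stdout and stderr twice: once with bash's &>> shorthand, once
# with the traditional >> file 2>&1. The file ends up with four lines.
tmp=$(mktemp)
bash -c "{ echo 'to stdout'; echo 'to stderr' >&2; } &>> $tmp"
bash -c "{ echo 'to stdout'; echo 'to stderr' >&2; } >> $tmp 2>&1"
cat "$tmp"
rm -f "$tmp"
```

Both invocations leave the same two lines in the file, confirming the equivalence.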
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93236/" ] }
170,579
I'm fairly new to linux but I want to install it on my chromebook with crouton. Is there a way to install something like lubuntu or xubuntu without any of the software normally bundled? For example, I don't need a music manager, libre office, an internet browser (I will install chrome). I would rather just choose these kinds of programs myself and install what I need rather than have them included. Is this possible?
Yes. crouton installs by "targets", and it is the desktop targets (xfce, unity, kde, ...) that pull in the bundled applications. For a bare-bones chroot, pick a minimal target and add only what you want afterwards, e.g. sudo sh ~/Downloads/crouton -t core for a command-line-only base, or the x11 target for just an X server without a full desktop environment (see the output of sh ~/Downloads/crouton -t help for the exact target list in your crouton version). Once inside the chroot, install packages individually with apt-get install <package>; nothing like LibreOffice or a music manager gets installed unless you ask for it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93242/" ] }
170,581
Suppose I have two scripts, script1.sh and script2.sh . I am wondering if there is a way to make a filesystem interface such that, for example, I can go vim file and then have my system run script1.sh and have the output from the script inside my editor. Then, when I write the file, the system would send the modified text as piped input into script2.sh . Is this possible? I've looked into using 'inotify', which could run my desired script when the file was changed. But I haven't figured out how to do the first part yet, where opening the file itself just gets the standard output from some script.
What you are describing is a job for FUSE (Filesystem in Userspace). A FUSE filesystem is an ordinary user-space program that the kernel consults for every operation on the mounted files, so you can implement read() as "run script1.sh and return its output" and implement write()/close() as "pipe the written data into script2.sh". Bindings exist for many languages (fusepy for Python, for example), and a minimal read/write filesystem is only a few dozen lines. inotify, by contrast, can only react after the editor has already read or written a real file, which is why it solves the second half of your problem (saving) but not the first (generating the contents on open).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93243/" ] }
170,587
I am attempting to do some forensics on my iPad. I am following along with this article from SANS; on page 13 is the start of the Jailbreaking section. I would like to mount the image and then find the files listed on the following pages. Model A1219. I did redsn0w 0.9.14b2; the iPad is not 3G and I don't have a passcode on it. I installed OpenSSH and did:

ssh [email protected] dd if=/dev/rdisk0 bs=1M | dd of=ios-root1.img

I can't get it to mount, though, and couldn't find anything when trying to use Scalpel. I tried:

mount -t hfsplus -o ro,loop /media/psf/Home/ios-root1.img /mnt/hfs/
mount -t hfs /media/psf/Home/ios-root1.img /mnt/hfs/
mount -t hfsplus /media/psf/Home/ios-root1.img /mnt/hfs/

dmesg | tail gives me:

hfsplus: unable to find HFS+ superblock
dd if=/dev/rdisk0 copies the whole disk, so ios-root1.img starts with a partition table, not with an HFS+ volume; that is why mount cannot find a superblock at offset 0. Locate where the filesystem actually begins, e.g. with mmls ios-root1.img (from The Sleuth Kit) or fdisk -l ios-root1.img, multiply the starting sector by the sector size (commonly 512, but check what the tool reports), and pass that as an offset: mount -t hfsplus -o ro,loop,offset=$((START_SECTOR * 512)) ios-root1.img /mnt/hfs . Alternatively, kpartx -av ios-root1.img exposes each partition in the image as its own loop device that you can mount directly. Note also that on recent iOS devices the data partition is hardware-encrypted, so you may only be able to read the system partition this way.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170587", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93248/" ] }
170,588
I need to delete all lines in a file if the values in all the columns are 0 (so if the sum of the row is 0). My file is like this (13 columns and 60000 rows, tab delimited):

KO gene S10 S11 S12 S1 S2 S3 S4 S5 S6 S7 S8 S9
K02946 aap:NT05HA_2163 0 0 0 0 1 0 8 0 0 5 0 0
K06215 aar:Acear_1499 0 0 0 0 0 0 8 0 0 0 0 0
K00059 acd:AOLE_11635 0 0 5 0 0 0 0 0 8 0 0 0
K00991 afn:Acfer_0744 0 0 0 0 0 0 0 0 0 0 0 0
K01784 aha:AHA_2893 0 0 0 0 0 0 7 0 0 0 0 0
K01497 amd:AMED_3340 0 0 0 0 0 0 0 0 0 0 0 0

How can I do it?
With awk, keep the header and print only the data rows whose numeric fields (columns 3 onward) do not sum to zero:

awk 'NR==1 {print; next} {s=0; for (i=3; i<=NF; i++) s+=$i; if (s != 0) print}' file > filtered

NR==1 {print; next} passes the header line through untouched; for every other line the loop adds up fields 3..NF, and the line is printed only when the sum is non-zero. If your counts can never be negative, a non-zero sum is exactly "at least one non-zero column"; if negative values are possible, test each field individually instead of the sum.
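A self-contained sketch of one way to do this in awk: keep the header and drop the rows whose fields from column 3 onward sum to zero. The sample rows below are invented and shortened to three count columns:

```shell
# Keep the header; drop data rows whose fields 3..NF sum to zero.
cat > counts.txt <<'EOF'
KO gene S1 S2 S3
K00001 aaa:X_1 0 1 0
K00002 bbb:Y_2 0 0 0
K00003 ccc:Z_3 2 0 0
EOF

awk 'NR==1 {print; next} {s=0; for (i=3; i<=NF; i++) s+=$i; if (s != 0) print}' counts.txt
rm -f counts.txt
```

The all-zero K00002 row is the only one removed; the header survives because NR==1 bypasses the sum test.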
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91761/" ] }
170,600
I need to sort a CSV file, but the header row (1st row) keeps getting sorted. This is what I'm using:

cat data1.csv | sort -t"|" -k 1 -o data1.csv

Here's a sample line:

Name|Email|Country|Company|Phone
Brent Trujillo|[email protected]|Burkina Faso|Donec LLC|(612) 943-0167
This should work and output to data2.csv :

head -n 1 data1.csv > data2.csv &&
tail -n +2 data1.csv | sort -t "|" -k 1 >> data2.csv
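A quick run on made-up rows shows the header staying on top while the data lines get sorted (the names and addresses below are invented for the demo):

```shell
# Copy line 1 untouched, then sort everything from line 2 onward.
cat > data1.csv <<'EOF'
Name|Email|Country|Company|Phone
Zoe Young|zoe@example.invalid|France|Acme Inc|(555) 000-0001
Brent Trujillo|brent@example.invalid|Burkina Faso|Donec LLC|(612) 943-0167
EOF

head -n 1 data1.csv > data2.csv
tail -n +2 data1.csv | sort -t '|' -k 1 >> data2.csv
cat data2.csv
rm -f data1.csv data2.csv
```

head -n 1 writes the header, and tail -n +2 feeds only the remaining lines into sort, so the header can never be shuffled into the data.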
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93252/" ] }
170,659
The help docs mention marking for upgrade. The context menu and menu bar do not have this option listed at all, and the only non-greyed-out options are marking for removal or complete removal. There are some 40 packages in "Installed (upgradeable)" but there doesn't seem to be any way of actually upgrading them. This is on Linux Mint 17, 32-bit. What am I doing wrong?
In Linux Mint 17 Qiana, Synaptic "officially" lacks the upgrade feature. This is for the sake of stability: users are supposed to use mintupdate for already-installed packages. Synaptic can still be used to install additional, new packages.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34450/" ] }
170,661
I want to print all lines from a file until a matching word. Please advise how to do that with awk. For example, I want to print all lines until the word PPP. Note that the first line could be different from AAA (any word):

cat file.txt
AAA ( the first line/word could be any word !!!!! )
BBB
JJJ
OOO
345
211
BBB
OOO
OOO
PPP
MMM
(((
&&&

so I need to get this:

AAA
BBB
JJJ
OOO
345
211
BBB
OOO
OOO
PPP

Other example (want to print until KJGFGHJ):

cat file.txt
1
HG
KJGFGHJ
KKKK

so I need to get

HG
KJGFGHJ
Try:

$ awk '1;/PPP/{exit}' file
AAA
BBB
JJJ
OOO
345
211
BBB
OOO
OOO
PPP
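To reproduce the example end to end, and to show a sed equivalent that quits at the first match (sed '/PPP/q' is POSIX, so it works even where awk is unavailable):

```shell
# Build the sample file, then print up to and including the first PPP,
# once with awk and once with sed. Both print the same ten lines.
printf '%s\n' AAA BBB JJJ OOO 345 211 BBB OOO OOO PPP MMM '(((' '&&&' > file.txt

awk '1;/PPP/{exit}' file.txt
sed '/PPP/q' file.txt
rm -f file.txt
```

In the awk version, the bare pattern 1 prints every line, and /PPP/{exit} stops after the match has already been printed; sed's q command does the same in one step.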
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
170,667
After I performed a dist-upgrade on a Debian testing (Jessie) instance, I can no longer boot. I'm marooned at the command prompt:

Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs

The following error shows up:

root@debian:~# journalctl -xb
debian systemd[222]: Failed at step EXEC spawning /bin/plymouth: No such file or directory

Surprisingly, Google is not helping, and the few threads I see are for Arch (even if I add +debian in my search) and don't make sense to me. Any pointer on how to recover from this?

# uname -a
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-2 (2014-11-06) x86_64 GNU/Linux
I also had this precise error today as the result of a Debian wheezy to jessie upgrade. The system failed to reboot despite no errors from "apt-get dist-upgrade". The final error output via "journalctl -xb" was associated with "plymouth" (an application that I'd never heard of). But it turns out failing to reboot had nothing to do with plymouth, but rather with a minor anomaly in an ancillary entry in /etc/fstab: change "auto" to "noauto" for a cdrom device (nothing to do with NFS) and then systemd will allow the boot. This is an fstab line which functioned under wheezy and silently prevents a boot under jessie. There was no error via journalctl associated with fstab. It was lucky web searches that led me to this obscure solution.
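For reference, the fixed cdrom line might look like the following; the device name and mount point are illustrative, and only the noauto option is the point:

```
# /etc/fstab: with "auto", systemd tries to mount the drive at boot and
# can drop to emergency mode if that fails; "noauto" defers the mount
# until it is requested explicitly.
/dev/sr0   /media/cdrom0   udf,iso9660   user,noauto   0   0
```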
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44359/" ] }
170,691
I have a file with genotype data. The second column has both alleles for a particular genetic variant concatenated, as below.

rs969931 CA 1.000 2.000 2.000 2.000 2.000 2.000 1.000 1.000
rs2745406 CT 0.000 2.000 2.000 1.000 1.000 2.000 1.000 1.000
rs6939431 AG 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000
rs1233427 AG 1.000 2.000 2.000 2.000 2.000 1.000 1.000 1.000
rs1233426 AG 1.000 2.000 2.000 2.000 2.000 1.000 1.000 1.000
rs1233425 GC 1.000 1.999 1.999 2.000 2.000 2.000 1.000 1.000
rs362546 GA 1.000 2.000 2.000 2.000 2.000 1.000 1.000 1.000
rs909968 AG 0.000 2.000 2.000 1.000 1.000 1.000 1.000 1.000
rs909967 GA 1.000 2.000 2.000 2.000 2.000 2.000 1.000 1.000
rs886381 AG 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000

I need to create a new file with the alleles as two separate columns, i.e. splitting the second column into two columns. Desired output below. Is there a way to specify multiple field separators in awk to achieve this?

rs969931 C A 1.000 2.000 2.000 2.000 2.000 2.000 1.000 1.000
rs2745406 C T 0.000 2.000 2.000 1.000 1.000 2.000 1.000 1.000
rs6939431 A G 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000
rs1233427 A G 1.000 2.000 2.000 2.000 2.000 1.000 1.000 1.000
rs1233426 A G 1.000 2.000 2.000 2.000 2.000 1.000 1.000 1.000
rs1233425 G C 1.000 1.999 1.999 2.000 2.000 2.000 1.000 1.000
rs362546 G A 1.000 2.000 2.000 2.000 2.000 1.000 1.000 1.000
rs909968 A G 0.000 2.000 2.000 1.000 1.000 1.000 1.000 1.000
rs909967 G A 1.000 2.000 2.000 2.000 2.000 2.000 1.000 1.000
rs886381 A G 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
You can do it using the sub function in awk:

awk 'sub(/./,"& ",$2)1;' file

If you want tab-separated output, you can use:

awk -v OFS="\t" 'sub(/./,"&\t",$2)1;' file

Or in a variety of other tools:

Perl:

perl -alne '$F[1]=~s/./$& /; print "@F"' file

Or, for tab-separated output:

perl -alne '$F[1]=~s/./$&\t/; print join "\t",@F' file

GNU sed:

sed -r 's/\S+\s+\S/& /' file

Other sed:

sed 's/^[[:alnum:]]*[[:blank:]]*./& /' file

Shell:

while read -r snp nt rest; do
    printf "%s\t%s\t%s\t%s\n" "$snp" "${nt:0:1}" "${nt:1:1}" "$rest"
done < file
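A minimal check of the awk variant on two sample rows, with the data shortened for clarity:

```shell
cat > geno.txt <<'EOF'
rs969931 CA 1.000 2.000
rs2745406 CT 0.000 2.000
EOF

# sub() inserts a space after the first character of field 2; the
# trailing `1` pattern then prints the rebuilt record.
awk 'sub(/./,"& ",$2)1' geno.txt
rm -f geno.txt
```

Because assigning to $2 makes awk rebuild $0 with the output field separator, the record comes out as ordinary space-separated fields.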
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89610/" ] }
170,693
I want to check the results of a restore from backup. This is because TimeMachine on MacOS gives me some weird errors and warnings, and I want to make sure everything is in its place again after restoration. While I don't trust TimeMachine to put every file back, I trust it to put every file it restores back with the correct content. I thought about diff -r, but going through roughly 300 GiB may take forever. I'm fine with comparing at least the presence of files, but comparing size and date in the same run is even better. I'm aware of solutions like diff <(ls -R $PATH1) <(ls -R $PATH2) but the output is diffish to read. I'd rather like a single line per file found on only one side. Also, I'd have to rely on ls proceeding through the tree in the same order on both sides, which may differ because the filesystems may differ. I'd love most to get a tool for the lazy which takes two paths and outputs differences up to any desired level of inspection, perhaps something out of macports. But I don't fear ample bashisms.
Here is your solution:

rsync -nrv --delete dirA/ dirB/

Instead of making the two folders identical, we use rsync to only show what it would do. That is the effect of -n . Careful, do not forget to add this option! The -r means a recursive scan, the -v gives the wanted verbose listing. You can add another -v to get all equal files listed, too. The --delete tells rsync to simulate deletion of target files which do not exist in the source. Without the -n flag, the dirB folder would become identical to dirA . By default, rsync checks only the name and timestamp of the files, which is exactly the fast option you were asking for. If you want behavior similar to diff (and equally slow), you can add a -c flag to enforce checksum comparison:

rsync -nrvc --delete dirA/ dirB/

Note the usage of trailing slashes in dirA/ and dirB/ ; they are significant in rsync . For further information, study the rsync man page; it makes a lot of sense to get used to this powerful command.
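If rsync happens to be unavailable, a rough name-only comparison can be improvised from find and comm; the directory names below are invented, and this sketch checks presence only, not size or date:

```shell
# Anything printed is present on one side only:
# column 1 = only in dirA, column 2 (tab-indented) = only in dirB.
mkdir -p dirA/sub dirB/sub
touch dirA/common dirB/common dirA/only_in_A dirB/sub/only_in_B

(cd dirA && find . | sort) > listA
(cd dirB && find . | sort) > listB
comm -3 listA listB
rm -rf dirA dirB listA listB
```

comm needs sorted input, hence the sort after find; -3 suppresses the column of lines common to both trees, leaving exactly the one-sided entries you asked for.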
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/170693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93302/" ] }
170,713
My file contains the following text:

http://mydomain.com/test.phtml
http://mydomain.com/classes/main.class.phtml
http://mydomain.com/scripts/filemanager/nl.phtml

I need this (I think I need to use sed?):

http://mydomain.com/mydirectory/test.php
http://mydomain.com/classes/mydirectory/main.class.php
http://mydomain.com/scripts/filemanager/mydirectory/nl.php
sed 's/\([^/]*\)\.phtml$/mydirectory\/\1.php/' <filename>

Will do that, if that's what you need (optionally with the -i flag to do an in-place replace). To break it up, first we have:

s/<regexp>/<replacement>/

Which will replace what <regexp> matches with <replacement> . Next let's look at the regexp we had:

\([^/]*\)\.phtml$

First, at the end we have \.phtml$ which will look for the string .phtml at the end of the line. $ anchors the regexp to the end of the line, and there's a backslash in front of the dot to escape it, because ordinarily a dot matches anything. After that, what we have left is:

\([^/]*\)

Looking in the middle we have [^/] which will match one character ( [] will match one of the characters inside the square brackets), which can be anything that isn't a slash: ^ at the start of the brackets does a negated match, so [^/] means "anything other than a slash". After the closing square bracket there is an asterisk, * , which means that it will match zero or more of the preceding element. Then wrapped around the above we have \( and \) which will capture what is matched inside the parentheses and let you use it in the <replacement> section of s/<regexp>/<replacement>/ . And then in the <replacement> section we have:

mydirectory\/\1.php

Which will replace the matched regexp with first mydirectory/ , the slash needing to be escaped since / is used as the separator for the sed replace, and then \1 is used to put in the text that was captured in the first capture group, and then we add the .php extension at the end. All of this taken together means that we capture everything from the last / to .phtml , add mydirectory/ after the last slash, write back the text we captured, and then add the .php extension.
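Running the one-liner over the sample URLs (written to a scratch file) confirms the transformation:

```shell
cat > urls.txt <<'EOF'
http://mydomain.com/test.phtml
http://mydomain.com/classes/main.class.phtml
http://mydomain.com/scripts/filemanager/nl.phtml
EOF

# Insert mydirectory/ before the final path component, swap the extension.
sed 's/\([^/]*\)\.phtml$/mydirectory\/\1.php/' urls.txt
rm -f urls.txt
```

Note that [^/]* happily spans dots, which is why main.class stays intact as one captured component.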
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170713", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93315/" ] }
170,724
How can I select all the records that have '2' as the second field? My data is:

$ cat numbers.txt
1 2 3 4 5 6 7 8
2 4 6 8 10 12 14 16
3 6 9 12 15 18 21 24

My awk is:

awk '$2 - /^2$/ {print}' numbers.txt

but I get all lines, not just the first one:

1 2 3 4 5 6 7 8
2 4 6 8 10 12 14 16
3 6 9 12 15 18 21 24
You need to use the matching operator ~ , not the subtraction operator - :

$ awk '$2 ~ /^2$/' file

or use the equality operator == like @glenn jackman's answer . But let's take a look at your previous attempt, to explain why you got all the lines:

awk '$2 - /^2$/ {print}' numbers.txt

Here, for each input line, if the expression $2 - /^2$/ is true, the line is printed; otherwise nothing happens. Because you got all the lines, the expression $2 - /^2$/ must always have evaluated to true. How did awk evaluate that expression? When you use the subtraction operator, the result type is numeric. The $2 variable is a number, but /^2$/ is a regular expression, so what is its numeric value? Well, from the POSIX awk documentation:

When an ERE token appears as an expression in any context other than as the right-hand of the '~' or "!~" operator or as one of the built-in function arguments described below, the value of the resulting expression shall be the equivalent of: $0 ~ /ere/

So, your awk program becomes:

awk '$2 - ($0 ~ /^2$/) {print}' numbers.txt

You can see that each input line is checked against the regular expression /^2$/ . Because none of your input lines match, the result of the expression $0 ~ /^2$/ is 0. With your input, every second field has a value greater than 0 (and subtracting 0 leaves it unchanged), which is a true value in awk. So the expression $2 - /^2$/ is always true, causing awk to print all lines.
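Both working forms can be checked side by side on the sample data:

```shell
cat > numbers.txt <<'EOF'
1 2 3 4 5 6 7 8
2 4 6 8 10 12 14 16
3 6 9 12 15 18 21 24
EOF

awk '$2 ~ /^2$/' numbers.txt   # regex match against the whole field
awk '$2 == 2'    numbers.txt   # numeric comparison; same result here
rm -f numbers.txt
```

Each command prints only the first record; the second record's $2 is 4 and the third's is 6, so neither the anchored regex nor the numeric test matches them.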
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
170,758
I am trying to set up gpg-agent When I log in to my machine (via SSH) and invoke gpg , it asks for password every time (it does not start gpg-agent automatically). When I start gpg-agent manually using: eval "$(gpg-agent --daemon)" then next invocation of gpg actually launches the agent, and after typing my password once, I can then use gpg without password. Also, I see a socket has been created in $HOME/.gnupg/S.gpg-agent= When I log out and log in again (ssh), I see the socket still exists in $HOME/.gnupg/S.gpg-agent= and ps shows that the agent is running, but every invocation of gpg asks for password, as if there was no agent. I have added this to my .bashrc : GPG_TTY=$(tty)export GPG_TTY but that does not seem to help I find the behaviour of gpg-agent very confusing. I am familiar with ssh-agent and that behaves straightforward and understandable. How can I use gpg-agent same way as I would use ssh-agent ? For comparison, this is how ssh-agent behaves: After I boot my machine and login for the first time, I launch ssh-add manually and type my pass phrase once. Then, every time I log in to my machine (X, console, ssh, ...) I can use the agent (I don't need to type my password again). This is done by adding following line to .bashrc : export SSH_AUTH_SOCK="$(find /tmp/ssh-*/agent.* -uid $(id -u) -type s -print -quit 2>/dev/null)"
gpg does not look for the socket (this is different with the new version 2.1) but for the environment variable GPG_AGENT_INFO. This is not set on log in. That is the problem. Obviously you have the option use-standard-socket in gpg-agent.conf so that the socket name is always the same. You should set the variable in a login script and run a simple script afterwards which checks whether gpg-agent is running:

export GPG_AGENT_INFO=/path/to/your/HOME/.gnupg/S.gpg-agent:42:1
gpg-connect-agent /bye &>/dev/null || gpg-agent --daemon &>/dev/null

That is the part for using gpg. For SSH you also need SSH_AUTH_SOCK. The easiest way to get both variables set is to add the line

write-env-file "${HOME}/.gpg-agent-info"

to the config file gpg-agent.conf and to run this script after the above:

. "${HOME}/.gpg-agent-info"
export SSH_AUTH_SOCK

This is explained in the gpg-agent man page.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
170,759
I am working on a script to delete users from a text file. However, before deleting, I want to check if any process by a user is running and, if running, kill all the processes and then delete. Here's how I'm doing this:

if #user doesn't exists
then
    print user not exist
else
    pids = `ps U "$name" -o pid | head -n -1`
    if [ -n "$pids" ]
    then
        for pid in $pids
        do
            sudo kill -KILL $pid
        done
    fi
    sudo userdel $name -r -f
    echo
    echo "The user - $name - has been deleted!"
    echo
fi
done < "$fname" # double-quote fname
echo

So, basically it's a loop to delete users, with an if statement in it to make sure no process is being run by a user; if one is running then force close the processes and delete the user. However, it's not working and it says pids: command not found . Any solution how to fix it?
The error pids: command not found comes from the spaces around the equals sign. In shell, an assignment must be written with no whitespace on either side of = , otherwise the shell treats pids as a command name and the rest as its arguments:

pids=$(ps U "$name" -o pid | head -n -1)

(Using $(...) instead of backticks is also easier to read and to nest.) That said, you don't need the loop at all: pkill -KILL -u "$name" kills every process owned by the user in one step, and many userdel implementations will deal with the user's remaining processes themselves when given -f ; check man userdel on your system.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93331/" ] }
170,761
I'm having a hard time getting my brain wrapped around how LD_LIBRARY_PATH is handled differently for the following three cases:

run as a regular user
run via "sudo command"
run via "sudo bash" followed in the root shell by "command"

My particular problem is that a binary I'm trying to run (called dc_full) requires sudo access, but throws the following error when run as "sudo command":

ljw@test$ sudo ./dc_full
./dc_full: error while loading shared libraries: libthrift-0.9.1.so: cannot open shared object file: No such file or directory
ljw@test$ sudo bash
root@ljw-vm1:~/test# ./dc_full
....<works fine here!>

I have the following line in both /etc/bash.bashrc and in ~/.bashrc for user ljw:

root@ljw-vm1:~# grep LD_LIBRARY ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
root@ljw-vm1:~# grep LD_LIBRARY /etc/bash.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

I would expect that this covers both the sudo and sudo-bash cases: one covers the user shell and one covers the "root" shell. But clearly this is not happening. I found references to ldd, which gives me a big hint that it's not working, but not quite WHY...

root@ljw-vm1:~/dc_full# ldd ./dc_full | grep thrift
    libthrift-0.9.1.so => /usr/local/lib/libthrift-0.9.1.so (0x00007eff19e7c000)
ljw@ljw-vm1:~/dc_full$ ldd ./dc_full | grep thrift
    libthrift-0.9.1.so => /usr/local/lib/libthrift-0.9.1.so (0x00007f8340cc5000)
ljw@ljw-vm1:~/dc_full$ sudo ldd ./dc_full | grep thrift
[sudo] password for ljw:
    libthrift-0.9.1.so => not found

How is LD_LIBRARY_PATH set in each of these three cases?
Allowing LD_LIBRARY_PATH for suid binaries like sudo is a security problem, thus LD_LIBRARY_PATH gets stripped out of the environment. sudo by default doesn't pass LD_LIBRARY_PATH through to its children either, for the same security concerns: carefully crafted libraries would allow you to bypass the sudo argument restrictions and execute whatever you want. If you need a variable set like this, either use sudo -E , or pass the env variables on the command line like so: sudo -- LD_LIBRARY_PATH=/usr/local/lib dc_full . sudo will have to be configured to allow you to pass environment variables, which usually does not need manual configuration.
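The stripping effect can be observed without sudo by using env -i, which starts a child with an emptied environment; conceptually this is what the default sudo policy does to LD_LIBRARY_PATH:

```shell
# The variable survives into a normal child but not into a scrubbed one.
LD_LIBRARY_PATH=/usr/local/lib; export LD_LIBRARY_PATH
sh -c 'echo "normal child:   ${LD_LIBRARY_PATH:-<unset>}"'
env -i sh -c 'echo "scrubbed child: ${LD_LIBRARY_PATH:-<unset>}"'
```

This also explains the "sudo bash" case in the question: the stripped variable is simply re-exported by root's bashrc once the new shell starts, so programs launched from that shell see it again.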
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93838/" ] }
170,775
How can I check if a UTF-8 text file has a BOM from command line? file command shows me: UTF-8 Unicode text But, I don't know if it means there is no BOM in the file. I'm using Ubuntu 12.04.
file will tell you if there is a BOM . You can simply test it with:

$ printf '\ufeff...\n' | file -
/dev/stdin: UTF-8 Unicode (with BOM) text

Some shells such as ash or dash have a printf builtin that does not support \u , in which case you need to use printf from the GNU coreutils, e.g. /usr/bin/printf . Note: according to the file changelog, this feature existed already in 2007. So, this should work on any current machine.
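If your file(1) turns out to be too old to report "(with BOM)", the three BOM bytes (EF BB BF) can be inspected directly; a sketch using od on throwaway files:

```shell
# Create one file with a UTF-8 BOM and one without, then dump the
# first three bytes of each in hex.
printf '\357\273\277hello\n' > with_bom.txt     # \357\273\277 = EF BB BF
printf 'hello\n'            > without_bom.txt

head -c 3 with_bom.txt    | od -An -tx1         # -> ef bb bf
head -c 3 without_bom.txt | od -An -tx1         # -> 68 65 6c ("hel")
rm -f with_bom.txt without_bom.txt
```

The octal escapes are used because POSIX printf understands them everywhere, unlike \u.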
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44001/" ] }
170,780
I expected systemctl --user enable SERVICE to start the service on login, which is not happening. Then what is it supposed to mean?
It makes the unit start on first login of the user, but for that the corresponding unit file should have WantedBy=default.target (or something along those lines) in its [Install] section. When the user instance of systemd starts , it brings up the default.target target.
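For example, a user unit meant to start at login would carry an [Install] section like the one below; the unit name and command are hypothetical:

```ini
# ~/.config/systemd/user/examplejob.service (hypothetical example)
[Unit]
Description=Example per-user background job

[Service]
ExecStart=/usr/bin/example-job

[Install]
# default.target is what the user's systemd instance brings up at login,
# so enabling the unit links it into default.target.wants/.
WantedBy=default.target
```

After systemctl --user enable examplejob.service, the symlink appears under ~/.config/systemd/user/default.target.wants/ and the unit is started the next time the user's systemd instance comes up.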
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170780", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29867/" ] }
170,791
I have three systems, and I want to have them doing backups between them periodically. The two of them, have Debian Wheezy installed and the other one has Ubuntu 12.04 installed. Only the Ubuntu has a GUI environment, while the other two are CLI only. For the backups I want to use rsync via ssh , with the Debian systems being the destinations of the backups. I have the commands sorted out and the ssh keys properly generated and copied among the systems, but since the Debian systems do not have a graphical environment installed, the ssh-agent is not run automatically. Therefore, whenever I try to ssh to the Debian systems, I get a prompt for the passphrase. Is there a way to skip the prompt? From what I understand I cannot use the ssh-agent , when I only have the CLI. I am looking for a solution that works even after a restart without me doing anything after reboot. Thanks in advance.
I would create a separate specific passwordless SSH key for this purpose. On the server side, you can set limits to what that key can be used for and where it can connect from, so that even if someone gets hold of the key, they would still not be able to use it to do something malicious. The way to limit the key is to edit the authorized_keys file on the server side, and add some configuration to it. Here's an example: from="10.1.2.3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="/path/to/rsync" ssh-dss AA....[rest of key]
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33054/" ] }
170,804
I tried the code as given on delete text between curly brackets , however I am facing a different error, regarding an "event" in sed. The file contains:

This is {{the multiline
text} file }that wants{ to {be
changed}} anyway.

sed ':again;$!N;$!b again; s/{[^}]*}//g' file

What is going wrong here?

Error: N: Event not found.
You have to escape ! to prevent csh/tcsh from performing history expansion. They still do history expansion even though you wrote ! inside single quotes. Try:

sed ':again;$\!N;$\!b again; s/{[^}]*}//g' file

Or you can write a script and call it with -f script.sed (read the sed FAQ ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/170804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93151/" ] }
170,823
I have created a custom application entry in the applications menu for Eclipse, in a file /usr/share/applications/eclipse.desktop, as follows:

[Desktop Entry]
Version=1.0
Name=Eclipse
Exec=/usr/local/eclipse/eclipse
Terminal=false
Type=Application
StartupNotify=true
Categories=X-Red-Hat-Extra;Application;Development;
X-Desktop-File-Install-Version=0.15
Icon=/usr/local/eclipse/icon.xpm

This now appears fine in the Programming section of the Applications menu. How can I add it to the Favorites section?
The favourite in Gnome Classic view follows the favourites in the Gnome 3 shell. Click on Activities in the top-left corner or use your keyboard's Windows button if it has one, to bring up the activities overview. Right-click on one of those activities and Add to Favourites . It should now be visible in the Gnome Classic Favourite menu.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/170823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93387/" ] }