180,492
Listening to TCP port 0 allocates a free port number on the system for me. But what happens when I try to connect to TCP port 0? The obvious answer is: "It doesn't work": $ nc localhost 0 nc: port number too small: 0 Where in the system is this handled? In the TCP stack of the OS kernel? Are there Unixes where connecting to TCP port 0 would work?
Just to make sure we're on the same page (your question is ambiguous this way), asking to bind TCP on port 0 indicates a request to dynamically generate an unused port number. In other words, the port number you're actually listening on after that request is not zero. There's a comment about this in [linux kernel source]/net/ipv4/inet_connection_sock.c on inet_csk_get_port() : /* Obtain a reference to a local port for the given sock, * if snum is zero it means select any available local port. */ Which is a standard Unix convention. There could be systems that will actually allow the use of port 0, but that would be considered a bad practice. This behaviour is not officially specified by POSIX, IANA, or the TCP protocol, however. 1 You may find this interesting . That's why you cannot sensibly make a TCP connection to port zero. Presumably nc is aware of this and informs you you're making a nonsensical request. If you try this in native code: int fd = socket(AF_INET, SOCK_STREAM, 0);struct sockaddr_in addr;addr.sin_family = AF_INET;addr.sin_port = 0;inet_aton("127.0.0.1", &addr.sin_addr);if (connect(fd, (const struct sockaddr*)&addr, sizeof(addr)) == -1) { fprintf(stderr,"%s", strerror(errno));} You get the same error you would trying to connect to any other unavailable port: ECONNREFUSED , "Connection refused". So in reply to: Where in the system is this handled? In the TCP stack of the OS kernel? Probably not; it doesn't require special handling. I.e., if you can find a system that allows binding and listening on port 0, you could presumably connect to it. 1. But IANA does refer to port 0 as "Reserved" ( see here ). Meaning, this port should not be used online. That makes it okay with regard to the dynamic assignment convention (since it won't actually be used). Stipulating that specifically as a purpose would probably be beyond the scope of IANA; in essence, operating systems are free to do what they want with it, including nothing.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/180492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100270/" ] }
180,497
The project I am currently working on requires me to fetch all the information about the drives installed in the system: total capacity, form factor, SSD or HDD, rotational speed, interface type, etc. I searched a lot but could not find any command that does this. As a comparison, I found that on Windows there are applications that satisfy this requirement. How do they do it? Just out of interest, is there something similar in Linux too? These two OSes are used mostly on servers, so I think hardware information should play an even more important role there than on a PC.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/180497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82930/" ] }
180,571
I'm running some cron jobs on my machine and every time I fire up a terminal session I'm getting a 'You have mail' message (the job produces output on success which gets mailed to me). Any way to turn this notification off?
The exact mechanism depends on what shell is running in the "terminal session". For the BASH shell, the man page for "bash" says: MAILCHECK Specifies how often (in seconds) bash checks for mail. The default is 60 seconds. When it is time to check for mail, the shell does so before displaying the primary prompt. If this variable is unset, or set to a value that is not a number greater than or equal to zero, the shell disables mail checking. so setting MAILCHECK=-1 in your .bashrc file would do it. Other shells have man pages with similar advice. (My bash 5.0.17 refuses to let me set the variable to a non-integer unless I first unset it, so the man page is incomplete about using "not a number".)
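As a minimal sketch of the fix for bash (assuming your interactive shells source ~/.bashrc), either of these lines silences the notice:

    unset MAILCHECK      # disable mail checking entirely
    MAILCHECK=-1         # a value below zero also disables it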
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43445/" ] }
180,601
    SERVER:~ # df -mP /home
    Filesystem                 1048576-blocks  Used Available Capacity Mounted on
    /dev/mapper/rootvg-home_lv            496   491         0     100% /home
    SERVER:~ #
    SERVER:/home # lsof | grep -i deleted | grep -i "home" | grep home
    badprocess   4315 root 135u REG 253,2 133525523 61982 /home/username/tr5J6fRJ (deleted)
    badprocess2 44654 root 133u REG 253,2 144352676 61983 /home/username/rr2sxv4L (deleted)
    ...
    SERVER:/home #

Files were deleted while they were still in use. So they still consume space. But we don't want to restart the "badprocess*". OS is SLES9, but we are asking this "in general". Question: How can we remove these already deleted files without restarting the process that holds them, so the space would free up?
You can use the entries in /proc to truncate such files. # ls -l /proc/4315/fd That will show all the files opened by process 4315. You've already used lsof and that shows that the deleted file is file descriptor 135, so you can free the space used by that deleted file as follows: # > /proc/4315/fd/135 The same goes for the other deleted file opened by process 44654, there it's file descriptor 133, so: # > /proc/44654/fd/133 You should now see that the space is freed up. You can also use this to copy the contents of a file that's been deleted but still held open by a process, just cp /proc/XXX/fd/YY /some/other/place/filename .
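As an aside, a quicker way to locate such files without chaining greps is lsof's +L1 option, which lists open files whose link count is below one (i.e. deleted but still held open); its PID and FD columns feed straight into the /proc trick above. A rough sketch (PID and FD are placeholders to substitute from the lsof output):

    lsof +L1
    > /proc/PID/fd/FD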
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
180,613
I have a parent folder named "parent_folder" with a lot of subfolders; in these subfolders is a file named "foo.mp4". I can find these files easily by doing this:

    mymacbook:parent_folder username$ find ./ -name "foo.mp4" -exec echo {} \;

Now that returns the path of each file, relative to parent_folder/:

    ./path/to/foo.mp4

How can I return just the path, without the filename?
With GNU find:

    find . -name foo.mp4 -printf '%h\n'

With other finds, provided directory names don't contain newline characters:

    find . -name foo.mp4 | sed 's|/[^/]*$||'

Or:

    find . -name foo.mp4 -exec dirname {} \;

though that means running one dirname command per file. If you need to run a command on that path, you can do (standard syntax):

    find . -name "featured.mp4" -exec sh -c '
      for file do
        dir=${file%/*}
        ffmpeg -i "$file" -c:v libvpx -b:v 1M -c:a libvorbis "$dir" featured.webm
      done' sh {} +

Though in this case, you may be able to use -execdir (a BSD extension also available in GNU find), which chdir()s to the file's directory:

    find . -name "featured.mp4" -execdir \
      ffmpeg -i {} -c:v libvpx -b:v 1M -c:a libvorbis . featured.webm \;

Beware though that while the GNU implementation of find will expand {} to ./filename here, BSD ones expand to filename. It's OK here as the filename is passed as argument to an option and is always featured.mp4 anyway, but for other usages you may have to take into account that the file name may start with - or + (and be understood as an option by the command) or contain = (and be understood as a variable assignment by awk for instance), or other characters causing this kind of problem with perl -p/n (not all of them fixed by GNU find's ./ prefix though in that case), etc.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100350/" ] }
180,631
I am handed the path of a directory or a file. Which utility/shell script will reliably give me the UUID of the file system on which this directory/file is located? By UUID of the file system I mean the UUID=... entry as shown by e.g. blkid. I'm using Red Hat Linux. (Someone suggested that I should ask this here at unix.stackexchange.com, so I moved it from the original stackexchange.com.)
One option is stat + findmnt combo: findmnt -n -o UUID $(stat -c '%m' "$path") Here -n disables header, and -o UUID prints only UUID value. Option -c '%m' of stat is present to output only mountpoint of given path.
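If this is something you do often, a tiny wrapper function is convenient. A minimal sketch, assuming GNU stat and util-linux findmnt as in the command above (the function name and sample path are just illustrative):

    fsuuid() {
        findmnt -n -o UUID "$(stat -c '%m' "$1")"
    }
    fsuuid /etc/hostname    # prints the UUID of the filesystem holding that file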
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100368/" ] }
180,639
If I want to know the meaning of wget -b , I see the manual by man wget , then search the -b option. -b --background Go to background immediately after startup. If no output file is specified via the -o, output is redirected to wget-log. I want to get the result by a command like man wget -b . (Of course this doesn't work.) Is there a similar way to make it possible?
If you use less as the pager for man you can try

    LESS="+/^\s+-b" man wget

where

    +        executes the next operation after less has opened
    /        is the less command to start a search
    ^\s+-b   is the regexp matching -b at the start of a line

So if you like you can arrange an appropriate shell function

    function rman {
        # USAGE: rman program.name option.to.search (with "-" symbol)
        LESS="+/^\s+$2" man "$1"
    }

and add it into ~/.bashrc, for example.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44001/" ] }
180,660
I have a file test.txt and have no file named test . When I tried ls test test.txt > new 2>new I was expecting new to be overwritten since >> is not used. But in the output file I got both contents appended. Why is it so?
TL;DR: bash opens and truncates all involved files before anything is written to them. stdout and stderr are both sent to new because bash has already truncated the file (twice) when ls starts printing.

This is how bash prepares/handles I/O redirection. When you ask for a command to be redirected ( > ) to a file, bash basically opens that file, creating it if necessary. If the file already exists, it is truncated. This is done through the open system call and a few flags, in your case:

    open("new", O_WRONLY|O_CREAT|O_TRUNC, 0666)

O_CREAT creates the file if it does not exist, while O_TRUNC truncates it when it does. This open system call is part of bash's initialisation for redirection, meaning that when you use several redirection operations, such as in...

    $ ls test test.txt > new 2>new

... bash begins by opening all concerned files. Therefore, before running ls, it opens new twice, with the same flags:

    open("new", O_WRONLY|O_CREAT|O_TRUNC, 0666)
    open("new", O_WRONLY|O_CREAT|O_TRUNC, 0666)

This means that basically, when running your command, bash does the following (in that order):

1. Open new as standard output, create/truncate the file when necessary.
2. Open new as standard error, create/truncate the file when necessary.
3. Run ls: this writes contents to new.

As you can see, bash truncates all involved files before starting ls. This means that when running something with ... >new 2>new, new is basically truncated "twice", and only then are outputs redirected to it. The behaviour you are expecting would require bash to capture ls's stdout and stderr independently, and to open the file one write at a time, just before writing:

1. Start ls.
2. When something comes on stdout, open new, truncate it and write to it.
3. When something comes on stderr, open new again, truncate it, and write to it.

However, messages may come out interwoven: the redirected program might very well write something to stdout, then something else to stderr, and then back on stdout... It would be horrible to manage all of that (and it might lead to undesirable, perhaps undefined, behaviours).
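If the intent was simply to collect both streams in new, the usual idiom avoids opening the file twice altogether by duplicating stderr onto the already-open stdout:

    ls test test.txt > new 2>&1
    # or, bash-specific shorthand:
    ls test test.txt &> new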
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3539/" ] }
180,663
How can I select the first occurrence between two patterns, including them? Preferably using sed or awk. I have:

    text
    something P1 something
    content1
    content2
    something P2 something
    text
    something P1 something
    content3
    content4
    something P2 something
    text

I want the first occurrence of the lines between P1 and P2 (including the P1 line and the P2 line):

    something P1 something
    content1
    content2
    something P2 something
sed '/P1/,/P2/!d;/P2/q' ...would do the job portably by d eleting all lines which do ! not fall within the range, then q uitting the first time it encounters the end of the range. It does not fail for P2 preceding P1, and it does not require GNU specific syntax to write simply.
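For comparison, an awk version of the same idea (print while inside the first range, then stop reading) might look like this:

    awk '/P1/,/P2/ { print; if (/P2/) exit }' file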
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100386/" ] }
180,669
I am using Cloud9 for Rails development and it uses an Ubuntu environment. In the documentation about using the PostgreSQL database, it says: Connect to the service: $ sudo sudo -u postgres psql What is the meaning of typing sudo twice? https://docs.c9.io/setting_up_postgresql.html
sudo -u postgres allows you to impersonate the postgres user when running the command. Your user probably doesn't have that privilege, but root's does. So the first sudo gives you root's privileges and the second sudo allows you (as root) to sudo -u to postgres allowing the command to be run as the postgres user.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100390/" ] }
180,710
I would like to set up system-wide vi settings. I know I can set up preferences for the vim editor in /etc/vimrc and ~/.vimrc, but I don't think my vi on CentOS 7 is reading anything from the vimrc files or from locations such as /etc/virc or ~/.virc.
Poking at POSIX : Initialization in ex and vi See Initialization in ex and vi for a description of ex and vi initialization for the vi utility. And the manpage for ex says: IEEE Std 1003.1-2001 does not mention system-wide ex and vi start-up files. While they exist in several implementations of ex and vi, they are not present in any implementations considered historical practice by IEEE Std 1003.1-2001. Implementations that have such files should use them only if they are owned by the real user ID or an appropriate user (for example, root on UNIX systems) and if they are not writable by any user other than their owner. System-wide start-up files should be read before the EXINIT variable, $HOME/.exrc, or local .exrc files are evaluated. So I suppose /etc/exrc is your best bet for old-school vi . However, vi on CentOS 7 is likely just vim-minimal , in which case the startup files will still be using vim in their name: /etc/vimrc or /etc/vim/vimrc .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100166/" ] }
180,776
I am trying to create a script that requires me to print the Debian codename so that I may echo it into the sources.list file. I am trying to make this script work across any version of Debian, so I had hoped to set a bash variable of the release codename.This would be simple to do with lsb_release -c .However, our deployment images do not contain lsb_release by default - and with this script being required to fix the sources.list , installing lsb-release with apt-get would not be an option. I have found numerous ways to get the release number and other info about the system, but cannot find a reliable place to get the codename. (I am testing this with Debian Squeeze.)
You can use /etc/os-release:

    ( . /etc/os-release
      printf '%s\n' "$VERSION" )

which outputs, for example:

    7 (wheezy)
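If you want only the codename rather than the whole VERSION string, two rough options: the first assumes a newer os-release file that defines VERSION_CODENAME, the second just strips the parentheses from VERSION.

    ( . /etc/os-release; printf '%s\n' "$VERSION_CODENAME" )
    ( . /etc/os-release; printf '%s\n' "$VERSION" | sed 's/.*(\(.*\)).*/\1/' )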
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100456/" ] }
180,780
As I understand it, part of the Unix identity is that it has a microkernel delegating work to highly modular file processes. So why is Linux still considered "Unix-Like" if it strays from this approach with a monolithic kernel?
I believe the answer lies in how you define "Unix-like". As per the wikipedia entry for "Unix-like", there doesn't seem to be a standard definition. 1 A Unix-like (sometimes referred to as UN*X or *nix) operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. There is no standard for defining the term, and some difference of opinion is possible as to the degree to which a given operating system is "Unix-like". The term can include free and open-source operating systems inspired by Bell Labs' Unix or designed to emulate its features, commercial and proprietary work-alikes, and even versions based on the licensed UNIX source code (which may be sufficiently "Unix-like" to pass certification and bear the "UNIX" trademark). Probably the most obvious reason is that UNIX and MINIX are antecedent of Linux, having inspired its creation. 2 Torvalds began the development of the Linux kernel on MINIX and applications written for MINIX were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. Linus Torvalds had wanted to call his invention Freax, a portmanteau of "free", "freak", and "x" (as an allusion to Unix). Whether a system is monolithic or microkernel does not seem to be considered when calling an operating system "Unix-like". At least, not nearly as often as whether the system is POSIX-compliant or mostly POSIX-compliant.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180780", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93215/" ] }
180,783
Given this: echo AAA | sed -r 's/A/echo B/ge' I get this: Becho Becho B I would have thought I would get "BBB". This is with GNU sed version 4.2.1. What is going on, and how can I use the execute flag, and have multiple replacements can occur on one line (from the shell, not from perl et al)?
The flags work together in the opposite way to what you're expecting. The documentation of /e is , for the record: This command allows one to pipe input from a shell command into pattern space. If a substitution was made, the command that is found in pattern space is executed and pattern space is replaced with its output. A trailing newline is suppressed; results are undefined if the command to be executed contains a nul character. This is a GNU sed extension. That is a bit tortuously written. What it means is that, after the completion of a s/// command for this line, if there was a change, the (new) line is executed as a command and its output used as the replacement for this line. So for your given command: echo AAA | sed -r 's/A/echo B/ge' it first replaces each A with echo B , and then executes the result as a command. It has (roughly speaking) the same effect as: echo AAA | sed -r 's/A/echo B/g' | sh GNU sed does not directly support the mode you want, although you can fake it with a more complex script if desired. Alternatively, Perl's /e modifier to its s command does have the behaviour you're looking for, but with Perl expressions instead.
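As a rough illustration, Perl's /e evaluates the replacement once per match, which gives the result the question expected:

    echo AAA | perl -pe 's/A/"B"/ge'            # prints BBB
    # to actually run a command per match (one shell spawned for each A;
    # assumes a shell whose echo -n suppresses the newline):
    echo AAA | perl -pe 's/A/`echo -n B`/ge'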
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74575/" ] }
180,790
When I open several applications simultaneously, I need to switch between one application and another. But when I place the mouse pointer on the window of one application, the window of the other application is immediately covered. I need to keep a certain window always on top, whether I place the mouse pointer on it or not. In Windows I can do this with TurboTop, but I cannot find a way to do the same in Linux, especially in Linux Mint. http://www.savardsoftware.com/turbotop/
On Linux, with a window manager that follows the Extended Window Manager Hints (EWMH), you can do this by setting the "above" property. The Linux Mint Cinnamon and MATE desktop environments both incorporate elements in the stack that handle the EWMH functionality. Using the window title, you can run the following command:

    wmctrl -r :SELECT: -b add,above

and then click in the window that you want to have at the top. You can also replace :SELECT: with a substring of a window title (this had better be unique, as the first match found is used). Alternatively, at least in Cinnamon, you can right-click on the title bar of the window you want to have permanently on top and select Always on Top.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99949/" ] }
180,814
I would like to synchronize all the content of my phone to my home server using the phone's samba share. My approach is to write a script that mounts the phone's samba share, and then copies all the files on the phone to the specified directory. Then the script is ran every 10 minutes with crontab. The first problem I am facing is that I would like the two folders (phone and server) to have a "contribute" relationship. This means that: new and updated files are copied from the phone to the server. Renames on the phone are repeated on the server. No deletions (if a file is deleted on the phone, it remains on the server). How can I achieve this? Maybe with rsync? The second problem is: is there a better approach than trying to mount the samba share every 10 minutes to find out if the phone is connected or not to the wifi network?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56135/" ] }
180,818
I had similar issues before but I don't remember how I solved them. When I try to copy something to a USB stick with FAT, it stops near the end, sometimes at 100%. And of course, when I take the memory stick somewhere else, it doesn't contain the complete file (the file is a movie!). I tried to mount the device with mount -o flush, but I get the same issue. I also formatted the USB stick with a new FAT partition... Any idea what I could do? P.S. I believe it's not related to the OS, which is Debian, and I believe that copying from the SSD drive is not what makes it get stuck.
The reason it happens that way is that the program says "write this data" and the linux kernel copies it into a memory buffer that is queued to go to disk, and then says "ok, done". So the program thinks it has copied everything. Then the program closes the file, but suddenly the kernel makes it wait while that buffer is pushed out to disk. So, unfortunately the program can't tell you how long it will take to flush the buffer because it doesn't know. If you want to try some power-user tricks, you can reduce the size of the buffer that Linux uses by setting the kernel parameter vm.dirty_bytes to something like 15000000 (15 MB). This means the application can't get more than 15MB ahead of its actual progress. (You can change kernel parameters on the fly with sudo sysctl vm.dirty_bytes=15000000 but making them stay across a reboot requires changing a config file like /etc/sysctl.conf which might be specific to your distro.) A side effect is that your computer might have lower data-writing throughput with this setting, but on the whole, I find it helpful to see that a program is running a long time while it writes lots of data vs. the confusion of having a program appear to be done with its job but the system lagging badly as the kernel does the actual work. Setting dirty_bytes to a reasonably small value can also help prevent your system from becoming unresponsive when you're low on free memory and run a program that suddenly writes lots of data. But, don't set it too small! I use 15MB as a rough estimate that the kernel can flush the buffer to a normal hard drive in 1/4 of a second or less. It keeps my system from feeling "laggy".
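To see this buffering in action, and to make the copy only report success once the data is really on the stick, you can watch the writeback counters and force a flush. A rough example (the file name and mount point are placeholders):

    watch -n1 grep -e Dirty: -e Writeback: /proc/meminfo
    time sh -c 'cp film.mkv /media/usbstick/ && sync'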
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/180818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
180,867
All the howtos that I find on the web state:

    Find all SUID files:
    find / -perm -4000 -print
    Find all SGID files:
    find / -perm -2000 -print

But that is not true. See:

    $ ls -lah test
    -r-sr-xr-x 1 user user 0B Jan 24 22:47 test
    $
    $
    $ stat -x test | grep Mode
      Mode: (4555/-r-sr-xr-x)   Uid: ( 1000/ user)   Gid: ( 1000/ user)
    $
    $
    $ find test -perm 4000
    $ find test -perm 2000
    $

Question: So what is the truth? How can I really list all the SUID/SGID files?
If you want to test for any of the bits, use / . I.e. for your use case: find "$DIRECTORY" -perm /4000 and: find "$DIRECTORY" -perm /2000 or combined: find "$DIRECTORY" -perm /6000 You may use both folders and files as argument for GNU find . Another, IMO better readable, approach is using the mnemonic shortcuts. I.e.: find "$DIRECTORY" -perm /u=s,g=s Caveat emptor Keep in mind that the variants of find vary. They may also behave differently. Always read the friendly manual (RTFM).
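Putting it together, a typical invocation (GNU find assumed) that restricts the search to regular files on one filesystem and shows their permissions would be:

    find / -xdev -type f -perm /6000 -exec ls -l {} + 2>/dev/null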
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81044/" ] }
180,889
A simple question: why, with Solaris 10, does pressing the End key in vim produce an F? This is what happened when pressing End:

    This is a text file,i will press "end key" F F F F

:(
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
180,900
I am using screen /dev/tty-MyDevice to look at traffic on my serial port. Pressing Ctrl + D does not cause screen to terminate. What do I have to do in order to terminate it?
Use the screen quit command (normally ctrl-A \ ).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18578/" ] }
180,901
I've been using sed for quite some time, but here is a quirk I came across which I am not able to resolve. Let me explain my problem with the actual case.

Scene#1

    printf "ls" | xclip -selection clipboard
    echo "ls" | xclip -selection clipboard

In the first command, I pipe printf output to xclip so that it gets copied to the clipboard. Now, printf, unlike echo, does not insert a newline at the end by default. So, if I paste this content into the terminal, the ls command that is copied does not automatically run. In the second, there is a newline at the end, so pasting the clipboard content also results in the running of the command in the clipboard. This is undesirable for me. So, I wanted to remove the newline using sed, but it failed, as explained in the scene below.

Scene#2

    echo "ls" | sed -r 's/\n//g' | xclip -selection clipboard

The content in the clipboard still contains a newline. When I paste it into the terminal, the command automatically runs. I also tried removing the carriage return character \r. But nada. It seems I am missing something very crucial/basic here.
sed delimits on \n ewlines - they are always removed on input and reinserted on output. There is never a \n ewline character in a sed pattern space which did not occur as a result of an edit you have made. Note: with the exception of GNU sed 's -z mode... Just use tr : echo ls | tr -d \\n | xclip -selection clipboard Or, better yet, forget sed altogether: printf ls | xclip -selection clipboard
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89385/" ] }
180,943
I'm on a Mac but I think this is generally Unix-applicable. I'm in the process of learning shell scripting and there's something I seem to be missing. When I'm in the ordinary terminal, I can use scripting syntax like for loops and such in conjunction with commands to do stuff. But.... bash opens an interpreter for shell scripting. Which is where I get confused, because isn't the terminal already an interpreter for shell scripting, as demonstrated by the fact that the scripting works when given to stdin? Bonus question: how is bash different from bash -i , which according to man "starts an interactive session".....isn't that what happens when you just enter bash on its own? Which, to my eye is no different than being in the normal terminal in the first place...
When you launch a terminal it will always run some program inside it. That program will generally by default be your shell. On OS X, the default shell is Bash. In combination that means that when you launch Terminal you get a terminal emulator window with bash running inside it (by default). You can change the default shell to something else if you like, although OS X only ships with bash and tcsh . You can choose to launch a custom command in a new terminal with the open command : open -b com.apple.terminal somecommand In that case, your shell isn't running in it, and when your custom command terminates that's the end of things. If you run bash inside your terminal that is already running bash , you get exactly that: one shell running another. You can exit the inner shell with Ctrl-D or exit and you'll drop back to the shell you started in. That can sometimes be useful if you want to test out configuration changes or customise your environment temporarily — when you exit the inner shell, the changes you made go away with it. You can nest them arbitrarily deeply. If you're not doing that, there's no real point in launching another one, but a command like bash some-script.sh will run just that script and then exit, which is often useful. The differences between interactive and non-interactive shells are a bit subtle and mostly deal with which configuration files are loaded, which error behaviours there are, and whether aliases and similar are enabled. The rough principle is that an interactive shell gives you the settings you'd want for sitting in front of it, while a non-interactive shell gives you what you'd want for a standalone script. All of the differences are documented explicitly in the Bash Reference Manual , and also in a dedicated question on this site . For the most part, you don't need to care. There's not often a reason to launch another shell, and when you do you'll have a specific purpose in mind and know what to do with it.
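A quick way to check which kind of shell you are currently in: interactive shells carry the letter i in the special $- parameter.

    case $- in
        *i*) echo "interactive" ;;
        *)   echo "non-interactive" ;;
    esac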
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17563/" ] }
180,982
Let's say I want to modify the original behavior of the ls tool this way:

    $ ls
    Hello World
    file1 file2 ...

How can I do this? When running ls I would like to run another command, let's say echo "Hello World!". The quick solution I see is using aliases:

    alias orig_ls="ls"
    alias ls='echo "Hello World!"'

However, this is not a real solution, since when I run orig_ls it will output "Hello World!".
Sometimes an alias isn't powerful enough to easily do what you want, so here's a way without using them. In some file that is sourced when your shell starts (e.g. .bashrc), add the following function:

    ls () {
        echo "Hello world!"
        command ls "$@"
    }

Unlike an alias, a function can recurse. That's why command ls is used instead of ls; it tells your shell to use the actual ls instead of the function you've just defined.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45370/" ] }
180,985
I'm trying to copy the files and subfolders from folder A without folder A itself. For instance, folder A contains:

    file1.txt
    file2.txt
    subfolder1

Executing the following command gives me the wrong result:

    sudo cp -r /home/username/A/ /usr/lib/B/

The result is /usr/lib/B/A/...copied files... instead of /usr/lib/B/...copied files... How can I get the expected result without the origin folder?
advanced cp

    cp -r /home/username/A/. /usr/lib/B/

This is especially great because it works no matter whether the target directory already exists.

shell globbing

If there are not too many objects in the directory then you can use shell globbing:

    mkdir -p /usr/lib/B/
    shopt -s dotglob
    cp -r /home/username/A/* /usr/lib/B/

rsync

    rsync -a /home/username/A/ /usr/lib/B/

The / at the end of the source path is important; works no matter whether the target directory already exists.

find

    mkdir -p /usr/lib/B/
    find /home/username/A/ -mindepth 1 -maxdepth 1 -exec cp -r -t /usr/lib/B/ {} +

or if you don't need empty subdirectories:

    find /home/username/A/ -mindepth 1 -type f -exec cp --parents -t /usr/lib/B/ {} +

(without mkdir)
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/180985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100580/" ] }
180,991
I have directories set up a bit like this:

    ~/code
    ~/code/src
    ~/code/build -> /path/to/somewhere/else

That last one's a symlink. If I do this

    cd ~/code/build
    ls ..

then I get the listing for /path/to/somewhere, but from other remarks and my own experience, I'd expected to see the listing for ~/code -- I'd swear that this used to work the other way round. I'm using zsh and bash on Ubuntu. Is there a setting for this or is it deeply ingrained into POSIX or something?
Not the issue of ls . It's how symlinks work. The .. gets you into the parent of the current directory, the directory doesn't know you got to it through a symlink. The shell has to intervene to prevent this behaviour. For the shell builtin cd , there is special handling that doesn't just call chdir but memorizes the full directory path and tries to figure out what you want. ls , however, is not a builtin. The shell has to change .. to a different path before passing it to ls if you want to get what you expect. zsh option CHASE_DOTS helps you with that. Generally speaking, symlinks to directories are a dirty business. For critical and semi-permanent applications, rather use mount --bind .
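You can see both views of the same directory from the shell, since cd and pwd are logical by default but accept -P for the physical path:

    cd ~/code/build
    pwd -L        # ~/code/build            (the path as you typed it)
    pwd -P        # /path/to/somewhere/else (where you physically are)
    cd -P ..      # goes to /path/to/somewhere, matching what ls .. showed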
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180991", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100592/" ] }
180,992
I have a large file (2-3 GB, binary, undocumented format) that I use on two different computers (normally I use it on a desktop system but when I travel I put it on my laptop). I use rsync to transfer this file back and forth. I make small updates to this file from time to time, changing less than 100 kB. This happens on both systems. The problem with rsync as I understand it is that if it think a file has changed between source and destination it transfers the complete file. In my situation it feels like a big waste of time when just a small part of a file has changes. I envisage a protocol where the transfer agents on source and destination first checksums the whole file and then compare the result. When they realise that the checksum for the whole file is different, they split the file into two parts, A and B and checksum them separately. Aha, B is identical on both machines, let's ignore that half. Now it splits A into A1 and A2. Ok, only A2 has changed. Split A2 into A2I and A2II and compare etc. Do this recursively until it has found e.g., three parts that are 1 MB each that differs between source and destination and then transfer just these parts and insert them in the right position at the destination file. Today with fast SSDs and multicore CPUs such parallelisation should be very efficient. So my question is, are there any tools that works like this (or in another manner I couldn't imagine but with similar result) available today? A request for clarification has been posted. I mostly use Mac so the filesystem is HFS+. Typically I start rsync like this rsync -av --delete --progress --stats - in this cases I sometimes use SSH and sometimes rsyncd. When I use rsyncd I start it like this rsync --daemon --verbose --no-detach . Second clarification: I ask for either a tool that just transfers the delta for a file that exists in two locations with small changes and/or if rsync really offers this. My experience with rsync is that it transfers the files in full (but now there is an answer that explains this: rsync needs an rsync server to be able to transfer just the deltas, otherwise (e.g., using ssh-shell) it transfers the whole file however much has changed).
Rsync will not use deltas but will transmit the full file in its entirety if it - as a single process - is responsible for the source and destination files. It can transmit deltas when there is a separate client and server process running on the source and destination machines. The reason that rsync will not send deltas when it is the only process is that in order to determine whether it needs to send a delta it needs to read the source and destination files. By the time it's done that it might as well have just copied the file directly. If you are using a command of this form you have only one rsync process: rsync /path/to/local/file /network/path/to/remote/file If you are using a command of this form you have two rsync processes (one on the local host and one on the remote) and deltas can be used: rsync /path/to/local/file remote_host:/path/to/remote/file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32351/" ] }
180,995
I'm trying to spell check all the *.md files in my current directory but the following command fails:

    >> find . -maxdepth 1 -name "*.md" | xargs -I {} aspell check {}
    xargs: aspell: exited with status 255; aborting

I'm assuming this is because aspell requires stdin to interact with the user and somehow xargs doesn't provide it. I found a hack on Twitter,

    find . -maxdepth 1 -name "*.md" | xargs -n 1 xterm -e aspell check

but this opens up a new xterm each time. How can I get my original command to work as if I were to individually run aspell on the results of my find command?
You don't need xargs at all, just use exec option: find . -maxdepth 1 -name "*.md" -exec aspell check {} \; And just in case you, or any future reader, will really need to use xargs - you can do that by spawning new shell and taking standard input from terminal ( /dev/tty ): find . -maxdepth 1 -name "*.sh" | xargs -n1 sh -c 'aspell check "$@" < /dev/tty' aspell
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7093/" ] }
181,001
Suppose, for example, you have a shell script similar to:

    longrunningthing &
    p=$!
    echo Killing longrunningthing on PID $p in 24 hours
    sleep 86400
    echo Time up!
    kill $p

Should do the trick, shouldn't it? Except that the process may have terminated early and its PID may have been recycled, meaning some innocent job gets a bomb in its signal queue instead. In practice this possibly does matter, but it's worrying me nonetheless. Hacking longrunningthing to drop dead by itself, or keeping/removing its PID on the FS, would do, but I'm thinking of the generic situation here.
Best would be to use the timeout command if you have it which is meant for that: timeout 86400 cmd The current (8.23) GNU implementation at least works by using alarm() or equivalent while waiting for the child process. It does not seem to be guarding against the SIGALRM being delivered in between waitpid() returning and timeout exiting (effectively cancelling that alarm ). During that small window, timeout may even write messages on stderr (for instance if the child dumped a core) which would further enlarge that race window (indefinitely if stderr is a full pipe for instance). I personally can live with that limitation (which probably will be fixed in a future version). timeout will also take extra care to report the correct exit status, handle other corner cases (like SIGALRM blocked/ignored on startup, handle other signals...) better than you'd probably manage to do by hand. As an approximation, you could write it in perl like: perl -MPOSIX -e ' $p = fork(); die "fork: $!\n" unless defined($p); if ($p) { $SIG{ALRM} = sub { kill "TERM", $p; exit 124; }; alarm(86400); wait; exit (WIFSIGNALED($?) ? WTERMSIG($?)+128 : WEXITSTATUS($?)) } else {exec @ARGV}' cmd There's a timelimit command at http://devel.ringlet.net/sysutils/timelimit/ (predates GNU timeout by a few months). timelimit -t 86400 cmd That one uses an alarm() -like mechanism but installs a handler on SIGCHLD (ignoring stopped children) to detect the child dying. It also cancels the alarm before running waitpid() (that doesn't cancel the delivery of SIGALRM if it was pending, but the way it's written, I can't see it being a problem) and kills before calling waitpid() (so can't kill a reused pid). netpipes also has a timelimit command. That one predates all the other ones by decades, takes yet another approach, but doesn't work properly for stopped commands and returns a 1 exit status upon timeout. As a more direct answer to your question, you could do something like: if [ "$(ps -o ppid= -p "$p")" -eq "$$" ]; then kill "$p"fi That is, check that the process is still a child of ours. Again, there's a small race window (in between ps retrieving the status of that process and kill killing it) during which the process could die and its pid be reused by another process. With some shells ( zsh , bash , mksh ), you can pass job specs instead of pids. cmd &sleep 86400kill %wait "$!" # to retrieve the exit status That only works if you spawn only one background job (otherwise getting the right jobspec is not always possible reliably). If that's an issue, just start a new shell instance: bash -c '"$@" & sleep 86400; kill %; wait "$!"' sh cmd That works because the shell removes the job from the job table upon the child dying. Here, there should not be any race window since by the time the shell calls kill() , either the SIGCHLD signal has not been handled and the pid can't be reused (since it has not been waited for), or it has been handled and the job has been removed from the process table (and kill would report an error). bash 's kill at least blocks SIGCHLD before it accesses its job table to expand the % and unblocks it after the kill() . Another option to avoid having that sleep process hanging around even after cmd has died, with bash or ksh93 is to use a pipe with read -t instead of sleep : { { cmd 4>&1 >&3 3>&- & printf '%d\n.' "$!" } | { read p read -t 86400 || kill "$p" }} 3>&1 That one still has race conditions, and you lose the command's exit status. It also assumes cmd doesn't close its fd 4. 
You could try implementing a race-free solution in perl like: perl -MPOSIX -e ' $p = fork(); die "fork: $!\n" unless defined($p); if ($p) { $SIG{CHLD} = sub { $ss = POSIX::SigSet->new(SIGALRM); $oss = POSIX::SigSet->new; sigprocmask(SIG_BLOCK, $ss, $oss); waitpid($p,WNOHANG); exit (WIFSIGNALED($?) ? WTERMSIG($?)+128 : WEXITSTATUS($?)) unless $? == -1; sigprocmask(SIG_UNBLOCK, $oss); }; $SIG{ALRM} = sub { kill "TERM", $p; exit 124; }; alarm(86400); pause while 1; } else {exec @ARGV}' cmd args... (though it would need to be improved to handle other types of corner cases). Another race-free method could be using process groups: set -m((sleep 86400; kill 0) & exec cmd) However note that using process groups can have side-effects if there's I/O to a terminal device involved. It has the additional benefit though to kill all the other extra processes spawned by cmd .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181001", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100598/" ] }
181,009
GNOME Desktop has been installed on CentOS7 using sudo yum -y groups install "GNOME Desktop" and when startx is executed the desktop starts. However, when the system reboots the following issue occurs: When c has been executed the following occurs: 1 results in: and typing 2 unchecks the box and the issue persists. Attempts to solve the issue According this Q&A , 1 should be executed to solve the issue, but that did not help. Questions Why does the issue occur once the system has been rebooted? If it works to accept the license by issuing certain commands how to avoid that these commands need to be executed every time the system boots? The last and main question is how to avoid that this accept license prompt appears at boot?
Concise

1. Press 1
2. Press 2 in order to change [ ] to [x] in front of "2) I accept the license agreement"
3. Press q

The accept-license menu does not prompt anymore at boot.

Comprehensive

The issue was caused because the prompt was not clear to me. At first I thought that pressing c would mean "continue", i.e., move to section 2, but that was not the case. A Q&A was found that indicated that 1 should be pressed in order to continue to section 2 (see question). Braiam asked whether 2 was pressed. Pressing 2 over and over again added an X and removed it once it was pressed again (see comments to the question). When the X was added, i.e. [X] 2) I accept the license agreement, pressing c did not work; q had to be chosen instead. My assumption was that an instruction would be displayed, e.g. "press a to accept". Once yes was entered, the license was accepted. Now the accept-license menu no longer prompts when the system reboots. I still do not understand why the license agreement menu did not prompt during the installation of GNOME, and why the desktop could run without accepting any license agreement while the menu was prompted once the system had been rebooted, but the license is accepted now, which answers this question.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65367/" ] }
181,067
dmesg is a command to read the contents of /var/log/dmesg. The nice thing compared to less /var/log/dmesg is that I can use the -T flag for human-readable time output. Now I would like to look at /var/log/dmesg.0 to see how my computer crashed. That file contains the logs from the previous session. But I want to use the -T flag from the dmesg command, or something equivalent. Any idea how? I would not mind a graphical tool, but a CLI solution would be best.
Although a bit late for the OP... I use Fedora, but if your system uses journalctl then you can easily get the kernel messages (dmesg log) from prior shutdown/crash (in a dmesg -T format) through the following. Options: -k (dmesg) -b < boot_number > (How many reboots ago 0, -1, -2, etc.) -o short-precise (dmesg -T) -p priority Filter by priority output (4 to filter out notice and info). NOTE: there is also an -o short and -o short-iso which gives you the date only, and the date-time in iso format respectively. Commands: All boot cycles : journalctl -o short-precise -k -b all Current boot : journalctl -o short-precise -k Last boot : journalctl -o short-precise -k -b -1 Two boots prior : journalctl -o short-precise -k -b -2 And so on Example Output: Feb 18 21:41:26.917400 localhost.localdomain kernel: usb 2-4: USB disconnect, device number 12Feb 18 21:41:26.917678 localhost.localdomain kernel: usb 2-4.1: USB disconnect, device number 13Feb 18 21:41:27.246264 localhost.localdomain kernel: usb 2-4: new high-speed USB device number 22 using xhci_hcdFeb 18 21:41:27.419395 localhost.localdomain kernel: usb 2-4: New USB device found, idVendor=05e3, idProduct=0610Feb 18 21:41:27.419581 localhost.localdomain kernel: usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=0Feb 18 21:41:27.419739 localhost.localdomain kernel: usb 2-4: Product: USB2.0 HubFeb 18 21:41:27.419903 localhost.localdomain kernel: usb 2-4: Manufacturer: GenesysLogic The amount of boots you can look back on can be viewed with the following. journalctl --list-boot The output of journalctl --list-boot looks like the following. -6 cc4333602fbd4bbabb0df2df9dd1f0d4 Sun 2016-11-13 08:32:58 JST—Thu 2016-11-17 07:53:59 JST-5 85dc0d63e6a14b1b9a72424439f2bab4 Fri 2016-11-18 22:46:28 JST—Sat 2016-12-24 02:38:18 JST-4 8abb8267e06b4c26a2466562f3422394 Sat 2016-12-24 08:10:28 JST—Sun 2017-02-12 12:31:20 JST-3 a040f5e79a754b2a9055ac2598d430e8 Sun 2017-02-12 12:31:36 JST—Sat 2017-02-18 21:31:04 JST-2 6c29e3b6f6a14f549f06749f9710e1f2 Sat 2017-02-18 21:31:15 JST—Sat 2017-02-18 22:36:08 JST-1 42fd465eacd345f7b595069c7a5a14d0 Sat 2017-02-18 22:51:22 JST—Sat 2017-02-18 23:08:30 JST 0 26ea10b064ce4559808509dc7f162f07 Sat 2017-02-18 23:09:25 JST—Sun 2017-02-19 00:57:35 JST
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/181067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29368/" ] }
181,069
I have many files that are ordered by file name in a directory. I wish to copy the final N (say, N=4) files to my home directory. How should I do it? cp ./<the final 4 files> ~/
This can be easily done with bash/ksh93/zsh arrays:

    a=(*)
    cp -- "${a[@]: -4}" ~/

This works for all non-hidden file names even if they contain spaces, tabs, newlines, or other difficult characters (assuming there are at least 4 non-hidden files in the current directory with bash).

How it works

    a=(*)

This creates an array a with all the file names. The file names returned by bash are alphabetically sorted. (I assume that this is what you mean by "ordered by file name.")

    ${a[@]: -4}

This returns the last four elements of array a (provided the array contains at least 4 elements with bash).

    cp -- "${a[@]: -4}" ~/

This copies the last four file names to your home directory.

To copy and rename at the same time

This will copy the last four files only to the home directory and, at the same time, prepend the string a_ to the name of the copied file:

    a=(*)
    for fname in "${a[@]: -4}"; do cp -- "$fname" ~/a_"$fname"; done

Copy from a different directory and also rename

If we use a=(./some_dir/*) instead of a=(*), then we have the issue of the directory being attached to the filename. One solution is:

    a=(./some_dir/*)
    for f in "${a[@]: -4}"; do cp "$f" ~/a_"${f##*/}"; done

Another solution is to use a subshell and cd to the directory in the subshell:

    (cd ./some_dir && a=(*) && for f in "${a[@]: -4}"; do cp -- "$f" ~/a_"$f"; done)

When the subshell completes, the shell returns us to the original directory.

Making sure that the ordering is consistent

The question asks for files "ordered by file name". That order, Olivier Dulac points out in the comments, will vary from one locale to another. If it is important to have fixed results independent of machine settings, then it is best to specify the locale explicitly when the array a is defined. For example:

    LC_ALL=C a=(*)

You can find out which locale you are currently in by running the locale command.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181069", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71888/" ] }
181,089
I have an old 64MB CF card which I use in my laptop(kernel 2.6.38) with CardBus adapter. If I write a 64MB image to this CF card, then the write speed is more than 200MB/s: T42 ~ # fdisk -luDisk /dev/sda: 40.0 GB, 40007761920 bytes255 heads, 63 sectors/track, 4864 cylinders, total 78140160 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00043afc Device Boot Start End Blocks Id System/dev/sda1 * 2048 73947135 36972544 83 Linux/dev/sda2 73949182 78139391 2095105 5 Extended/dev/sda5 73949184 78139391 2095104 82 Linux swap / SolarisDisk /dev/sdb: 64 MB, 64225280 bytes8 heads, 32 sectors/track, 490 cylinders, total 125440 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000 Device Boot Start End Blocks Id System/dev/sdb1 * 32 125300 62634+ 4 FAT16 <32MPartition 1 has different physical/logical endings: phys=(488, 7, 32) logical=(489, 3, 21)T42 ~ # mount | grep -i sdbT42 ~ # time dd if=64MB of=/dev/sdb bs=10M6+1 records in6+1 records out64225280 bytes (64 MB) copied, 0.320419 s, 200 MB/sreal 0m0.624suser 0m0.000ssys 0m0.304sT42 ~ # 64MB within 0.32s in obviously unrealistic in case of 10 year old CF card and if I remove the card from the laptop right after the dd if=64MB of=/dev/sdb bs=10M has finished, I see lot of <timestamp> end_request: I/O error, dev sdb, sector <sector number> errors in dmesg output. What might cause such behavior?
Block device writes are buffered by the kernel. This is clearly visible when a filesystem is mounted (when you unmount, the buffers have to be flushed, leading to a sometimes very long delay before the umount returns). This delay is seemingly getting worse as the available RAM is getting larger and larger. You may be able to write half a GB instantly before the kernel even initializes the data transfer. The kernel may transparently write to the device for minutes after you see the transfer complete. This feature is very good for many reasons. It allows for much faster response for reading and writing to devices, the data can be also transparently read from the buffer after the write, before the actual physical write is even completed. For hard drives that are mounted on the long term, the kernel schedules writes for when it has time while making the device respond faster from the user's perspective. Especially for magnetic hard drives, writing in big chunks sequentially is also faster than writing small pieces in several places all over the drive: the chunks can be sorted and grouped before pushing to the physical device (although hard drives also do some buffering and data sorting internally in hardware). In short, you don't notice slowness of the device that much, and you also don't notice initial delays (in case of network mounted drives or hard drives that have to spin up from hibernation). For direct access to a block device, buffering is somewhat unfortunate, because you don't call umount and you don't really notice when the transfer is complete. You should call sync in any case.
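A practical way to measure the card's real speed, and to make dd wait until the data has actually reached the device, is to force a flush as part of the command; with GNU dd, for example:

    time dd if=64MB of=/dev/sdb bs=10M conv=fsync
    # or, equivalently:
    time sh -c 'dd if=64MB of=/dev/sdb bs=10M && sync'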
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
181,099
This is a followup to I am repeatedly zipping the same folder of files, but the shasum keeps changing essentially.. I am trying to add a file containing the git sha of the current commit when I zip a part of my repository and take its sha sum.. The code for that is git rev-parse HEAD > .gitsha . However adding this to my zip means that the shasum of my zip keeps changing every second or so. The zip command uses -X to ignore file timestamps. I experimented with just the .gitsha file below $ git rev-parse HEAD > .gitsha ; shasum .gitsha8fa263bc885822ccba03006ea10015ef32da485c .gitsha This is consistent over time. However after ziping it, the shasum is no longer consistent $ git rev-parse HEAD > .gitsha ; zip -X --quiet -r test.zip .gitsha ; shasum test.zip26cc38c624f91a1c555d503fdfdecb1ce670274f test.zip$ git rev-parse HEAD > .gitsha ; zip -X --quiet -r test.zip .gitsha ; shasum test.zipb03f7cb654e3aa0d25d18ead5fe1f225bc2aac9f test.zip These are 2 trials a few seconds apart. I presume that the -X flag does not include creation time perhaps? Any way to get this to work? Update: Deleting the zip does not help. $ rm test.zip; git rev-parse HEAD > .gitsha ; zip -X --quiet -r test.zip .gitsha ; shasum test.zip76c722ccf2df75fb624f9640ad948f4508dd6152 test.zip$ rm test.zip; git rev-parse HEAD > .gitsha ; zip -X --quiet -r test.zip .gitsha ; shasum test.zip6bd26d2bc821d9f12806fc81a8ba8c8babcc664a test.zip
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181099", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23934/" ] }
181,141
I want to rename files to change their extension, effectively looking to accomplish mv *.txt *.tsv But when doing this I get : *.tsv is not a directory I find it somewhat strange that the first 10 google hits show mv should work like this.
When you issue the command: mv *.txt *.tsv the shell, lets assume bash, expands the wildcards if there are any matching files (including directories). The list of files are passed to the program, here mv . If no matches are found the unexpanded version is passed. Again: the shell expands the patterns, not the program. Loads of examples is perhaps best way, so here we go: Example 1: $ lsfile1.txt file2.txt$ mv *.txt *.tsv Now what happens on the mv line is that the shell expands *.txt to the matching files. As there are no *.tsv files that is not changed. The mv command is called with two special arguments : argc : Number of arguments, including the program. argv : An array of arguments, including the program as first entry. In the above example that would be: argc = 4 argv[0] = mv argv[1] = file1.txt argv[2] = file2.txt argv[3] = *.tsv The mv program check to see if last argument, *.tsv , is a directory. As it is not, the program can not continue as it is not designed to concatenate files. (Typically move all the files into one.) Nor create directories on a whim. As a result it aborts and reports the error: mv: target ‘*.tsv’ is not a directory Example 2: Now if you instead say: $ mv *1.txt *.tsv The mv command is executed with: argc = 3 argv[0] = mv argv[1] = file1.txt argv[2] = *.tsv Now, again, mv check to see if *.tsv exists. As it does not the file file1.txt is moved to *.tsv . That is: the file is renamed to *.tsv with the asterisk and all. $ mv *1.txt *.tsv‘file1.txt’ -> ‘*.tsv’$ lsfile2.txt *.tsv Example 3: If you instead said: $ mkdir *.tsv$ mv *.txt *.tsv The mv command is executed with: argc = 3 argv[0] = mv argv[1] = file1.txt argv[1] = file2.txt argv[2] = *.tsv As *.tsv now is a directory, the files ends up being moved there. Now: using commands like some_command *.tsv when the intention is to actually keep the wildcard one should always quote it. By quoting you prevent the wildcards from being expanded if there should be any matches. E.g. say mkdir "*.tsv" . Example 4: The expansion can further be viewed if you do for example: $ lsfile1.txt file2.txt$ mkdir *.txtmkdir: cannot create directory ‘file1.txt’: File existsmkdir: cannot create directory ‘file2.txt’: File exists Example 5: Now: the mv command can and do work on multiple files. But if there is more then two the last has to be a target directory. (Optionally you can use the -t TARGET_DIR option, at least for GNU mv.) So this is OK: $ ls -Fb1.tsv b2.tsv f1.txt f2.txt f3.txt foo/$ mv *.txt *.tsv foo Here mv would be called with: argc = 7 argv[0] = mv argv[1] = b1.tsv argv[2] = b2.tsv argv[3] = f1.txt argv[4] = f2.txt argv[5] = f3.txt argv[6] = foo and all the files end up in the directory foo . As for your links. You have provided one (in a comment), where mv is not mentioned at all, but rename . If you have more links you could share. As well as for man pages where you claim this is expressed.
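Since mv cannot rewrite the pattern itself, the usual way to achieve the original goal (renaming every *.txt to *.tsv) is a small loop in which the shell still expands the glob but each file is renamed individually. A sketch, assuming a POSIX-style shell:

    for f in *.txt; do
        mv -- "$f" "${f%.txt}.tsv"
    done

${f%.txt} strips the .txt suffix, so file1.txt becomes file1.tsv, and -- again protects names that start with a dash. Many systems also ship a rename (or prename) utility that does this in one call, but its syntax differs between the util-linux and Perl versions, so check your local man page.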
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39219/" ] }
181,152
On my Fedora 21 64bit Gnome 3.14.3 install, I have noticed that the NeworkManager always connects to the wired connection, even if the cable is not connected: However, the driver (or something) does know whether or not it is connected - I can see this with watch "dmesg | tail -10" : [ 7349.552202] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Down[ 7373.496359] IPv6: ADDRCONF(NETDEV_UP): enp7s0: link is not ready[ 7376.271449] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Up<100 Mbps Full Duplex>[ 7376.271482] IPv6: ADDRCONF(NETDEV_CHANGE): enp7s0: link becomes ready[ 7553.088393] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Down[ 7597.096174] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Up<100 Mbps Full Duplex>[ 7620.983378] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Down[ 7622.556874] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Up<100 Mbps Full Duplex> This causes issues when the cable has unplugged, but it still thinks it is connected to the internet, and trys to sync or connect and fails. lspci -v 07:00.0 Ethernet controller: Qualcomm Atheros AR8152 v2.0 Fast Ethernet (rev c1) Subsystem: Lenovo Device 3979 Flags: bus master, fast devsel, latency 0, IRQ 29 Memory at e0500000 (64-bit, non-prefetchable) [size=256K] I/O ports at 2000 [size=128] Capabilities: <access denied> Kernel driver in use: atl1c Kernel modules: atl1c ifconfig : enp7s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::de0e:a1ff:fed1:d12b prefixlen 64 scopeid 0x20<link> ether dc:0e:a1:d1:d1:2b txqueuelen 1000 (Ethernet) RX packets 111875 bytes 67103677 (63.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 87152 bytes 7793021 (7.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 13 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 0 (Local Loopback) RX packets 3669 bytes 880913 (860.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3669 bytes 880913 (860.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 sudo yum list installed NetworkManager* Installed PackagesNetworkManager.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-adsl.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-bluetooth.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-config-connectivity-fedora.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-config-server.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-devel.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-glib.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-glib-devel.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-iodine.x86_64 0.0.4-4.fc21 @fedora NetworkManager-iodine-gnome.x86_64 0.0.4-4.fc21 @fedora NetworkManager-l2tp.x86_64 0.9.8.7-3.fc21 @fedora NetworkManager-openconnect.x86_64 0.9.8.6-2.fc21 @updates NetworkManager-openswan.x86_64 0.9.8.4-4.fc21 @fedora NetworkManager-openswan-gnome.x86_64 0.9.8.4-4.fc21 @fedora NetworkManager-openvpn.x86_64 1:0.9.9.0-3.git20140128.fc21 @koji-override-0/$releaseverNetworkManager-openvpn-gnome.x86_64 1:0.9.9.0-3.git20140128.fc21 @koji-override-0/$releaseverNetworkManager-pptp.x86_64 1:0.9.8.2-6.fc21 @koji-override-0/$releaseverNetworkManager-pptp-gnome.x86_64 1:0.9.8.2-6.fc21 @koji-override-0/$releaseverNetworkManager-ssh.x86_64 0.9.3-0.3.20140601git9d834f2.fc21 @fedora NetworkManager-ssh-gnome.x86_64 0.9.3-0.3.20140601git9d834f2.fc21 
@fedora NetworkManager-tui.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-vpnc.x86_64 1:0.9.9.0-6.git20140428.fc21 @koji-override-0/$releaseverNetworkManager-vpnc-gnome.x86_64 1:0.9.9.0-6.git20140428.fc21 @koji-override-0/$releaseverNetworkManager-wifi.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates NetworkManager-wwan.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates modinfo atl1c filename: /lib/modules/3.17.8-300.fc21.x86_64/kernel/drivers/net/ethernet/atheros/atl1c/atl1c.ko.xzversion: 1.0.1.1-NAPIlicense: GPLdescription: Qualcom Atheros 100/1000M Ethernet Network Driverauthor: Qualcomm Atheros Inc., <[email protected]>author: Jie Yangsrcversion: 4333D8ADEE755DD5ABDF0B8alias: pci:v00001969d00001083sv*sd*bc*sc*i*alias: pci:v00001969d00001073sv*sd*bc*sc*i*alias: pci:v00001969d00002062sv*sd*bc*sc*i*alias: pci:v00001969d00002060sv*sd*bc*sc*i*alias: pci:v00001969d00001062sv*sd*bc*sc*i*alias: pci:v00001969d00001063sv*sd*bc*sc*i*depends: intree: Yvermagic: 3.17.8-300.fc21.x86_64 SMP mod_unload signer: Fedora kernel signing keysig_key: F4:8F:FC:A3:C9:62:D6:47:0F:1A:63:E0:32:D1:F5:F1:93:2A:03:6Asig_hashalgo: sha256 If this is a bug, is it NetworkManager or something else? I had no issues with this under Fedora 19 with the same driver (which seems to be the same version in the latest kernel in its backup).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181152", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52956/" ] }
181,180
I have a number of files I want to update by replacing one multi-line string with another multi-line string. Something along the lines of: * Some text, * something else* another thing And I want to replace it with: * This is completely* different text The result would be that after the replacement the file containing the first block of text, will now contain the second string (the rest of the file is unchanged). Part of the problem is that I have to find the list of files to be updated in the file system. I guess I can use grep for that (though again that's not as easy to do with multiline strings) then pipe it in sed maybe? Is there an easy way to do this? Sed is an option but it's awkward because I have to add \n etc. Is there a way to say "take the input from this file, match it in those files, then replace it with the content of this other file" ?I can use python if need be, but I want something quick and simple, so if there is a utility available, I would rather use that than write my own script (which I know how to do).
Substitute "Some...\n...Thing" by the contents of file "new" in one or more input files perl -i -p0e 's/Some.*?thing\n/`cat new`/se' input.txt ... -i to change input.txt directly -p0 slurp input file file and print it in the end s/regexp/.../s in regexp . is .|\n s/.../exp/e replace by eval(exp) new -- a file containing the replacement text (This is completely...different text) if useful you can expand the original text s/Some text\n...\n...thing\n/...
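If you also need the "find the list of files to be updated" part, one sketch is to let grep select the files that contain the multi-line block and hand them to the same perl one-liner. This assumes a reasonably recent GNU grep, since -P (PCRE) and -z (treat the whole file as one record) are needed to match across newlines; the pattern and the new file are the ones from the question:

    grep -rlzZP '(?s)\* Some text,\n.*?another thing\n' . \
        | xargs -r0 perl -i -p0e 's/Some.*?thing\n/`cat new`/se'

With an older grep you can skip the detection step and run the perl command over every candidate file via find ... -exec; files without a match are simply rewritten unchanged (which does touch their timestamps).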
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84023/" ] }
181,254
I am trying to use grep and cut to extract URLs from an HTML file. The links look like: <a href="http://examplewebsite.com/"> Other websites have .net , .gov , but I assume I could make the cut off point right before > . So I know I can use grep and cut somehow to cut off everything before http and after .com, but I have been stuck on it for a while.
Not sure if you are limited on tools: But regex might not be the best way to go as mentioned, but here is an example that I put together: cat urls.html | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*" | sort -u grep -E : is the same as egrep grep -o : only outputs what has been grepped (http|https) : is an either / or a-z : is all lower case A-Z : is all upper case . : is dot / : is the slash ? : is ? = : is equal sign _ : is underscore % : is percentage sign : : is colon - : is dash * : is repeat the [...] group sort -u : will sort & remove any duplicates Output: bob@bob-NE722:~s$ wget -qO- https://stackoverflow.com/ | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -uhttps://stackauth.comhttps://meta.stackoverflow.comhttps://cdn.sstatic.net/Img/svg-iconshttps://stackoverflow.comhttps://www.stackoverflowbusiness.com/talenthttps://www.stackoverflowbusiness.com/advertisinghttps://stackoverflow.com/users/login?ssrc=headhttps://stackoverflow.com/users/signup?ssrc=headhttps://stackoverflow.comhttps://stackoverflow.com/helphttps://chat.stackoverflow.comhttps://meta.stackoverflow.com... You can also add in \d to catch other numeral types.
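If what you actually want is the href="..." values rather than every URL-shaped string, a hedged alternative (assuming a GNU grep built with PCRE support, i.e. the -P option) is:

    grep -oP 'href="\K[^"]+' urls.html | sort -u

\K discards everything matched before it, so only the URL inside the quotes is printed. Without -P, something like sed -n 's/.*href="\([^"]*\)".*/\1/p' urls.html is a rough equivalent, though it only catches one link per line.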
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99897/" ] }
181,260
I cannot create files in my newly mounted partition . The folder to which i am mount the partition, /media/hrmount , is owned by root so even if I add a fstab line like UUID=irrelevant /media/hrmount ext4 defaults,user 0 2 /media/hrmount is still owned by root when i remount "irrelevant" . And if i delete the directory in hope that "mount" will create it automatically and make it owned the user that issued the mount command I'll just get an error message saying the directory does not exist. I could just use chown to make the directory owned by uid1000 but I understand it should not be needed plu I am positive that if i create another user, let's call him uid1001 , then if we unmount the partition fs and then remount it as uid1001 it's mount point, /media/hrmount will still be owned by uid1000 . This means I'll have to fiddle with permissions and while I can do that I have heard that just by adding the device to fstab like above should just work. How can I achieve that? The ideal behaviour would be to just issue the mount device command either by sudoing or normally and the partition is mounted and also the folder is automatically created. PS: I'm on Linux Mint 13
If a Linux filesystem (as opposed to e.g. FAT32 or NTFS) is mounted, then the permissions of its root directory are taken from the filesystem itself, not from the mount-point directory underneath it. root must therefore either change the owner ( chown ) or the permissions ( chmod , setfacl ) of that root directory, or create subdirectories that are writable by the users. The latter is what happens with the normal root volume: with the exception of /tmp , no standard directory is writable by users. The users can write to their directory below /home (and maybe to non-standard directories and subdirectories).
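Concretely that means doing the ownership change once, after the filesystem is mounted; it is stored in the filesystem itself, so it survives remounts. A sketch, assuming the fstab entry from the question and that uid1000's login name is "user":

    sudo mount /media/hrmount
    sudo chown user:user /media/hrmount              # root directory of the mounted fs
    # or keep that root owned by root and add a writable subdirectory instead:
    sudo install -d -o user -g user /media/hrmount/data

If several users need write access, the usual pattern is a shared group on such a subdirectory plus chmod 2775, or per-user POSIX ACLs via setfacl.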
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100018/" ] }
181,280
I'm using git. I did a normal merge, but it keeps asking this: # Please enter a commit message to explain why this merge is necessary,# especially if it merges an updated upstream into a topic branch. And even if I write something, I can't exit from here. I can't find docs explaining this. How should I do?
This depends on the editor you're using. If it is vim you can use ESC and :wq or ESC and Shift + zz . Both commands save the file and exit. You can also check ~/.gitconfig for the configured editor; in my case ( cat ~/.gitconfig ): [user] name = somename email = [email protected][core] editor = vim excludesfile = /home/mypath/.gitignore_global[color] ui = auto # other settings here
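If you keep landing in an unfamiliar editor, you can also tell git which one to use for these messages, or skip the editor entirely for merges. A small sketch (nano is just an example, and <branch> is a placeholder):

    git config --global core.editor nano   # editor for commit/merge messages
    git merge --no-edit <branch>           # accept the default merge message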
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59856/" ] }
181,311
I would like to see how many users are on my system. How could I view a list of all the users on the system?
You can get a list of all users with getent passwd | cut -d':' -f1 This selects the first column (user name) of the system user database. In contrast to solutions parsing /etc/passwd , this will work regardless of the type of database used (traditional /etc/passwd , LDAP, etc). Note that this list includes system users as well (e.g. nobody, mail, etc.). The exact user number can be determined as follows: getent passwd | wc -l A list of currently logged in users can be obtained with the users or who command: users # orwho
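If you only want ordinary human accounts rather than system users, filtering on the UID is the usual trick. The cut-off is distribution-dependent (often 1000, sometimes 500; check UID_MIN in /etc/login.defs), so treat the 1000 below as an assumption:

    getent passwd | awk -F: '$3 >= 1000 && $3 != 65534 {print $1}'

65534 is excluded because that is typically the "nobody" account.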
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93519/" ] }
181,341
The technical explanation of what is unprivileged container is quite good. However, it is not for ordinary PC user. Is there a simple answer when and why people should use unpriviliged containers, and what are their benefits and downsides?
Running unprivileged containers is the safest way to run containers in a production environment. Containers get bad publicity when it comes to security and one of the reasons is because some users have found that if a user gets root in a container then there is a possibility of gaining root on the host as well. Basically what an unprivileged container does is mask the userid from the host . With unprivileged containers, non root users can create containers and will have and appear in the container as root but will appear as userid 10000 for example on the host (whatever you map the userids as). I recently wrote a blog post on this based on Stephane Graber's blog series on LXC (One of the brilliant minds/lead developers of LXC and someone to definitely follow). I say again, extremely brilliant. From my blog: From the container: lxc-attach -n ubuntu-unprivedroot@ubuntu-unprived:/# ps -efUID PID PPID C STIME TTY TIME CMDroot 1 0 0 04:48 ? 00:00:00 /sbin/initroot 157 1 0 04:48 ? 00:00:00 upstart-udev-bridge --daemonroot 189 1 0 04:48 ? 00:00:00 /lib/systemd/systemd-udevd --daemonroot 244 1 0 04:48 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pidsyslog 290 1 0 04:48 ? 00:00:00 rsyslogdroot 343 1 0 04:48 tty4 00:00:00 /sbin/getty -8 38400 tty4root 345 1 0 04:48 tty2 00:00:00 /sbin/getty -8 38400 tty2root 346 1 0 04:48 tty3 00:00:00 /sbin/getty -8 38400 tty3root 359 1 0 04:48 ? 00:00:00 cronroot 386 1 0 04:48 console 00:00:00 /sbin/getty -8 38400 consoleroot 389 1 0 04:48 tty1 00:00:00 /sbin/getty -8 38400 tty1root 408 1 0 04:48 ? 00:00:00 upstart-socket-bridge --daemonroot 409 1 0 04:48 ? 00:00:00 upstart-file-bridge --daemonroot 431 0 0 05:06 ? 00:00:00 /bin/bashroot 434 431 0 05:06 ? 00:00:00 ps -ef From the host: lxc-info -Ssip --name ubuntu-unprivedState: RUNNINGPID: 3104IP: 10.1.0.107CPU use: 2.27 secondsBlkIO use: 680.00 KiBMemory use: 7.24 MiBLink: vethJ1Y7TGTX bytes: 7.30 KiBRX bytes: 46.21 KiBTotal bytes: 53.51 KiBps -ef | grep 3104100000 3104 3067 0 Nov11 ? 00:00:00 /sbin/init100000 3330 3104 0 Nov11 ? 00:00:00 upstart-udev-bridge --daemon100000 3362 3104 0 Nov11 ? 00:00:00 /lib/systemd/systemd-udevd --daemon100000 3417 3104 0 Nov11 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0100102 3463 3104 0 Nov11 ? 00:00:00 rsyslogd100000 3516 3104 0 Nov11 pts/8 00:00:00 /sbin/getty -8 38400 tty4100000 3518 3104 0 Nov11 pts/6 00:00:00 /sbin/getty -8 38400 tty2100000 3519 3104 0 Nov11 pts/7 00:00:00 /sbin/getty -8 38400 tty3100000 3532 3104 0 Nov11 ? 00:00:00 cron100000 3559 3104 0 Nov11 pts/9 00:00:00 /sbin/getty -8 38400 console100000 3562 3104 0 Nov11 pts/5 00:00:00 /sbin/getty -8 38400 tty1100000 3581 3104 0 Nov11 ? 00:00:00 upstart-socket-bridge --daemon100000 3582 3104 0 Nov11 ? 00:00:00 upstart-file-bridge --daemonlxc 3780 1518 0 00:10 pts/4 00:00:00 grep --color=auto 3104 As you can see processes are running inside the container as root but are not appearing as root but as 100000 from the host. So to sum up: Benefits - added security and added isolation for security. Downside - A little confusing to wrap your head around at first and not for the novice user.
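For completeness, the uid/gid mapping comes from the subordinate id files plus the container configuration. A rough illustration of what that looks like (the exact key differs between LXC releases, lxc.idmap in newer ones and lxc.id_map in older ones, so treat this as a sketch):

    # /etc/subuid and /etc/subgid
    youruser:100000:65536

    # ~/.config/lxc/default.conf
    lxc.idmap = u 0 100000 65536
    lxc.idmap = g 0 100000 65536

With that in place, uid 0 inside the container maps to uid 100000 on the host, which is exactly what the ps output above shows.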
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47022/" ] }
181,347
I need to calculate the time remaining for the next run of a specific job in my Cron Schedule, I've a Cron with jobs having frequencies of per hour, thrice a day etc, no jobs running on specific days/dates hence just HH:MM:SS is concerned, also I do not have right to check /var/spool/cron/ in my RHEL.If some job starts at 9:30 , 30 9 * * * /some/job.sh-bash-3.2$ date +"%H:%M" 13:52 I'd need output as, 19 Hours and 38 Minutes How would I be knowing the total time until next run occurs from current system time? Calculation of seconds is concerned only around the job time.
cron doesn't know when a job is going to fire. All it does is every minute, go over all the crontab entries and fire those that match "$(date '+%M %H %d %m %w')" . What you could do is generate all those timestamps for every minute from now to 49 hours from now (account for DST change), do the matching by hand (the tricky part) and report the first matching one. Or you could use the croniter python module: python -c 'from croniter import croniterfrom datetime import datetimeiter = croniter("3 9 * * *", datetime.now())print(iter.get_next(datetime))' For the delay: $ faketime 13:52:00 python -c 'from croniter import croniterfrom datetime import datetimed = datetime.now()iter = croniter("30 9 * * *", d)print iter.get_next(datetime) - d'19:37:59.413956 Beware of potential bugs around DST changes though: $ faketime '2015-03-28 01:01:00' python -c 'from croniter import croniterfrom datetime import datetimeiter = croniter("1 1 * * *", datetime.now())print iter.get_next(datetime)'2015-03-29 02:01:00$ FAKETIME_FMT=%s faketime -f 1445734799 dateSun 25 Oct 01:59:59 BST 2015$ FAKETIME_FMT=%s faketime -f 1445734799 python -c 'from croniter import croniterfrom datetime import datetimeiter = croniter("1 1 * * *", datetime.now())print iter.get_next(datetime)'2015-10-25 01:01:00$ FAKETIME_FMT=%s faketime -f 1445734799 python -c 'from croniter import croniterfrom datetime import datetimed = datetime.now()iter = croniter("1 1 * * *", d)print iter.get_next(datetime) - d'-1 day, 23:01:01 cron itself takes care of that by avoiding to run the job twice if the time has gone backward, or run skipped jobs after the shift if the time has gone forward.
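To get the "19 Hours and 38 Minutes" style output the question asks for, one sketch is to let the same croniter call report the remaining seconds and do the formatting in the shell (this assumes the croniter module is installed; the 30 9 * * * schedule is the example from the question):

    secs=$(python -c 'from croniter import croniter
    from datetime import datetime
    d = datetime.now()
    print(int((croniter("30 9 * * *", d).get_next(datetime) - d).total_seconds()))')
    printf '%d Hours and %d Minutes\n' "$((secs / 3600))" "$((secs % 3600 / 60))"

The same DST caveats as above apply to the computed difference.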
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74910/" ] }
181,409
So this is for homework, but I will not be asking the specific homework question. I need to use head and tail to grab different sets of line from one file. So like lines 6-11 and lines 19-24 and save them both to another file. I know I can do this using append such as head -11 file|tail -6 > file1; head -24 file| tail -6 >> file1. But I don't think we are supposed to. Is there a specific way I could combine the head and tail commands and then save to the file?
You can do it with head alone and basic arithmetic, if you group commands with { ... ; } using a construct like { head -n ...; head -n ...; ...; } < input_file > output_file where all commands share the same input (thanks @mikeserv ). Getting lines 6-11 and lines 19-24 is equivalent to: head -n 5 >/dev/null # dump the first 5 lines to `/dev/null` thenhead -n 6 # print the next 6 lines (i.e. from 6 to 11) thenhead -n 7 >/dev/null # dump the next 7 lines to `/dev/null` ( from 12 to 18)head -n 6 # then print the next 6 lines (19 up to 24) So, basically, you would run: { head -n 5 >/dev/null; head -n 6; head -n 7 >/dev/null; head -n 6; } < input_file > output_file
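For comparison, outside the head/tail constraint of the assignment, the same extraction is a one-liner with sed, which is handy for sanity-checking the head-only version:

    sed -n '6,11p; 19,24p; 24q' file > file1

-n suppresses the default output, the two address ranges print lines 6-11 and 19-24, and 24q stops reading the file after the last wanted line.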
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100855/" ] }
181,410
I'd like to extract the extension out of a filename with multiple dots.For instance: gbamidi-v1.0.tar.gz I should get "tar.gz", not "gz" or "0.tar.gz". I prefer a solution not relying on bash exclusive features (using posix commands like sed or cut is ok). EDIT: A reasonable solution could be:"get everything after the second-last dot, but exclude numbers or substrings with a lenght <=1"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31724/" ] }
181,414
I have some custom functions and aliases defined in my .bashrc file. Sometimes I need to execute whem as root. I then do something like: $ sudo custom_cmd 80 sudo: custom_cmd: command not found How can I execute such commands with root privileges? EDIT: Of course to achieve that I could simply source my .bashrc as root but I want to know if there is some quick (in terms of typing needed) way to do this. I also would like to avoid any customization if that is possible (such as sourcing my .bashrc automatically).
Run it through your shell: sudo bash -c 'source /home/reachus/.bashrc; custom_cmd 80' Alternatively, write a script which sources .bashrc for you, say /usr/local/bin/my : #! /bin/bashsource /home/reachus/.bashrc"$@" Then do: sudo my custom_cmd 80
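If custom_cmd is a shell function rather than a script or alias, another trick is to serialise its definition with declare -f and hand it to the root shell, so nothing needs to be sourced at all. A sketch (bash-specific, and it only works if the function does not depend on other functions or variables from your .bashrc):

    sudo bash -c "$(declare -f custom_cmd); custom_cmd 80"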
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
181,421
I have a requirement to boot RHEL 6.6/7.0 into read-only mode with a writable layer only in RAM. I believe this is similar to how live CDs work, in that the file system is read-only, but certain parts of it are writable after being loaded into RAM. Here, any changes written to the file system are lost on reboot (since only RAM is updated in the writable layer). While looking around the net, I haven't found a guide on configuring my own "live CD" without helper tools so that I can mimic this process in an existing installed system. Does anyone know where I might be able to get some resources on either building my own live CD or making a read-only Linux with a writable layer only in RAM?
OK, so I do have a working read-only system on an SD card that allows the read/write switch to be set to read-only mode. I'm going to answer my own question, since I have a feeling I'll be looking here again for the steps, and hopefully this will help someone else out. While setting various directories in /etc/fstab as read-only on a Red Hat Enterprise Linux 6.6 system, I found the file /etc/sysconfig/readonly-root . This piqued my interest in what this file was used for, as well as any ancillary information regarding it. In short, this file contains a line that states, " READONLY=no ". Changing this line automatically loads most of the root file system as read-only while preserving necessary write operations on various directories (directories and files are loaded as tmpfs). The only changes I had to make were to set /home , /root , and a few other directories as writable through the /etc/rwtab.d directory and modify /etc/fstab to load the root file system as read-only (changed " defaults " to " ro " for root). Once I set " READONLY=yes " in the /etc/sysconfig/readonly-root file, and set my necessary writable directories through /etc/rwtab.d , as well as the fstab change, I was able to get the system to load read-only, but have writable directories loaded into RAM. For more information, these are the resources that I used: http://www.redhat.com/archives/rhl-devel-list/2006-April/msg01045.html (specifies how to create files in the /etc/rwtab.d/ directory to load files and directories as writable) http://fedoraproject.org/wiki/StatelessLinux (more information on readonly-root file and stateless Linux) http://warewolf.github.io/blog/2013/10/12/setting-up-a-read-only-rootfs-fedora-box/ And, of course, browsing through /etc/rc.d/rc.sysinit shows how files and folders are mounted read-only. The readonly-root file is parsed within the rc.sysinit , for those who are looking for how readonly-root is used in the init process. Also, I did a quick verification on Red Hat Enterprise Linux 7.0, and this file is still there and works. My test environment was CentOS 6.6 and 7.0 in a virtual machine as well as RHEL 6.6 and 7.0 on a VME single-board computer. NOTE: Once the root is read-only, no changes can be made to the root system. For example, you cannot use yum to install packages and have them persist upon reboot. Therefore, to break the read-only root, I added a grub line that removes rhgb and quiet (this is only for debugging boot issues, you can leave them if you want), and added " init=/bin/bash ". This allowed me to enter into a terminal. Once at the terminal, I typed, " mount - / -oremount,rw " to have the system writable. Once writable, I modified (using vim ) /etc/sysconfig/readonly-root to say " READONLY=no " and rebooted the system. This allows me to perform maintenance on the system by turning off read-only. If you are using an SD card like I am, then the read/write switch on the SD card needs to be set to writable.
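For reference, the files you drop into /etc/rwtab.d use the same three keywords as the stock /etc/rwtab. A small illustrative example (the paths are placeholders, and the descriptions are rough):

    # /etc/rwtab.d/local
    files /home
    files /root
    files /etc/resolv.conf
    empty /var/tmp/scratch

Roughly: empty gives an empty writable tmpfs mount, files copies the existing content into tmpfs and bind-mounts it back (visible but volatile), and dirs recreates only the directory structure without the contents. Keep in mind that anything made writable this way lives in RAM, so large trees such as /home can be expensive.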
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100859/" ] }
181,423
I have a bash script that produces a cat output when it takes an argument. I also have another bash script that executes the first bash script with an an argument that I want to produce cat outputs with. How do I store those cat outputs produced by the first bash script in variables?
var=$( cat foo.txt ) would store the output of the cat in variable var . var=$( ./myscript ) would store the output of myscript in the same variable.
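Two small variations that are often handy in bash/ksh/zsh: the cat can be dropped entirely, and arguments to the first script pass through as usual (first_script.sh and its argument are placeholders):

    var=$(< foo.txt)                  # same result as $(cat foo.txt), one process less
    var=$(./first_script.sh some_arg) # capture the script's stdout
    printf '%s\n' "$var"

Quote "$var" when you use it, otherwise word splitting mangles the stored spaces and newlines.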
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99897/" ] }
181,433
I have an application which is communicating with workers via signals (particullary SIGUSR1/SIGUSR2/SIGSTOP). Can I trust that whatever happens every signal will be delivered and processed by handler? What happens if signals are sent quicklier than is't possible for application to handle them (eg. due to high host load at the moment)?
Aside from the "too many signals" problem, signals can be explicitly ignored. From man 2 signal : If the signal signum is delivered to the process, then one of thefollowing happens: * If the disposition is set to SIG_IGN, then the signal is ignored. Signals can also be blocked. From man 7 signal ; A signal may be blocked, which means that it will not be delivereduntil it is later unblocked. Between the time when it is generatedand when it is delivered a signal is said to be pending. Both blocked and ignored sets of signals are inherited by child processes, so it may happen that the parent process of your application ignored or blocked one of these signals. What happens when multiple signals are delivered before the process has finished handling previous ones? That depends on the OS. The signal(2) manpage linked above discusses it: System V would reset the signal disposition to the default. Worse, rapid delivery of multiple signals would result in recursive (?) calls. BSD would automatically block the signal until the handler is done. On Linux, this depends on the compilation flags set for GNU libc , but I'd expect the BSD behaviour.
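On Linux you can check from the shell whether a particular process currently blocks or ignores these signals by reading the masks in /proc (the fields are hex bitmasks, bit n-1 for signal n; <pid> is a placeholder):

    grep -E 'Sig(Blk|Ign|Cgt)' /proc/<pid>/status

On x86 Linux SIGUSR1 is 10 and SIGUSR2 is 12, i.e. bits 0x200 and 0x800. Also note that standard signals are not queued: if a second SIGUSR1 arrives while one is already pending, the handler only runs once. Workers that must not miss events therefore usually switch to real-time signals (SIGRTMIN+n, which do queue) or to a pipe/socket.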
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94554/" ] }
181,436
I have Rapsberry Pi B+ with Arch Linux installation. uname reports version: [computer@computer001 ~]$ uname -aLinux computer001 3.18.3-3-ARCH #1 PREEMPT Mon Jan 26 20:10:28 MST 2015 armv6l GNU/Linux I've installed ftp server via pacman -S vsftpd and installation has passed without any errors. Then I tried to configure it, which resulted in following vsftpd.conf : anonymous_enable=NOlocal_enable=YESwrite_enable=YES#local_umask=022anon_upload_enable=NOanon_mkdir_write_enable=NOdirmessage_enable=YESxferlog_enable=YESconnect_from_port_20=YESchown_uploads=YESchown_username=computer#xferlog_file=/var/log/vsftpd.log#xferlog_std_format=YES#idle_session_timeout=600#data_connection_timeout=120#nopriv_user=ftpsecure#async_abor_enable=YES#ascii_upload_enable=YES#ascii_download_enable=YESftpd_banner=Welcome to personal ftp service.#deny_email_enable=YES#banned_email_file=/etc/vsftpd.banned_emails#chroot_local_user=YES#chroot_list_enable=YES#chroot_list_file=/etc/vsftpd.chroot_listls_recurse_enable=YESlisten=YES#listen_ipv6=YES Now, when I try to restart vsftpd , I get: [computer@computer001 etc]$ sudo systemctl restart vsftpd.service && systemctl status -l vsftpd.service* vsftpd.service - vsftpd daemon Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Thu 1970-01-01 06:32:24 UTC; 112ms ago Process: 350 ExecStart=/usr/bin/vsftpd (code=exited, status=2) Main PID: 350 (code=exited, status=2) Here is also output of sudo journalctl | grep -i vsftp : Jan 01 06:32:24 computer001001 sudo[347]: computer001 : TTY=pts/0 ; PWD=/etc ; USER=root ; COMMAND=/usr/bin/systemctl restart vsftpd.serviceJan 01 06:32:24 computer001001 systemd[1]: Starting vsftpd daemon...Jan 01 06:32:24 computer001001 systemd[1]: Started vsftpd daemon.Jan 01 06:32:24 computer001001 systemd[1]: vsftpd.service: main process exited, code=exited, status=2/INVALIDARGUMENTJan 01 06:32:24 computer001001 systemd[1]: Unit vsftpd.service entered failed state.Jan 01 06:32:24 computer001001 systemd[1]: vsftpd.service failed. Here is unit script /usr/lib/systemd/system/vsftpd.service : [Unit]Description=vsftpd daemonAfter=network.target[Service]ExecStart=/usr/bin/vsftpdExecReload=/bin/kill -HUP $MAINPIDKillMode=process[Install]WantedBy=multi-user.target If I run sudo /usr/bin/vsftpd , I get following error: 500 OOPS: config file not owned by correct user, or not a file I have corrected file permissions for /etc/vsftpd.conf via sudo chown root:root /etc/vsftpd.conf and now manually server gets started.I am also aware date/time is not correct, I haven't setup it yet.What am I missing?
I've reset the ownership of /etc/vsftpd.conf to root:root via sudo chown root:root /etc/vsftpd.conf , and now the vsftpd server starts both via sudo systemctl restart vsftpd.service and when run manually via sudo /usr/bin/vsftpd .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40993/" ] }
181,459
Often I have complicated installation procedures to follow,like having to build dependencies for buildtools, to build dependencies for the application I wish to install from source. Once I have done this once, I don't really want to do have to try and remember all the steps I have done. The command-history in zsh (and other shells) already records what I have done.Is it possible to export the last 100 (for example) commands executed in to a .sh script? I could then edit this script, remove commands that are from before I started installing,and remove commands that were me making mistake,and be left with a install script I can give to someone else in a similar environment.(Or use myself on another machine.). I suspect, that my command history is already stored in one of the dot-files in my home directory.
The fc built-in command allows you to extract commands from the history using a number of criteria (see man zshbuiltins for details). fc stands for “fix command”, and when invoked with no parameters it opens an editor with the last command entered. You can use all your editor’s features to change the command, and when you save and exit zsh runs the fixed command. The editor used by default is vi , but that can be overridden using the EDITOR shell variable or, if you want to use a specific editor with the fc command, FCEDIT . fc has many options to manipulate history beyond the last command, some of which provide exactly the capabilities you're asking for. The -l option “lists” the contents of the history. By default it lists the last 16 commands, but you can specify lower and upper boundaries, as indices in the history or even as the starting text of a command. Negative indices work back from the last command, so to extract the last 15 lines: fc -l -15 By default fc -l includes history indices as the first column of its output. Once you have the exact range you want, -n drops the numbers so: fc -ln -12 -5 will extract only those lines (from 12 back to 5 back) in a format suitable for a script. Using commands as boundaries can be very useful: fc -l rm ls lists all the history starting with the last rm and ending with the last ls (so there will be a single rm command in the output but there may be multiple ls commands). There are many other options, such as adding timestamps, replacing portions of commands, loading and saving portions of history, switching entire history stacks...
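Putting that together for the original goal, dumping for example the last 100 commands into an editable install script looks roughly like this in zsh (the sed strips the leading whitespace that fc prints before each command):

    fc -ln -100 | sed 's/^[[:space:]]*//' > install.sh
    "${EDITOR:-vi}" install.sh    # prune mistakes and unrelated commands

After cleaning it up, a sh install.sh on the other machine replays the surviving commands.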
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18637/" ] }
181,492
I'd like to create a new user and give him sudo access. To be specific, I want him to use sudo vim and edit httpd.conf. I wrote this in sudoers: user ALL=(ALL) /usr/bin/vim /etc/httpd/confs/httpd.conf I, however, heard that this could be risky. Why is this problematic? How serious is the problem?
Although you restrict the commandline arguments there is nothing that prevents the user from using vim to open, edit and overwrite any random file once it is running as root. The user can run sudo vim /etc/httpd/conf/httpd.conf and then clear all that text from the edit buffer then for convenience source an existing file (although that is not even required): for instance the sudo configuration :r /etc/sudoers NOTE: Unless restricted by SELinux the user can read any file this way! grant himself more sudo privileges user ALL=(ALL) NOPASSWD: ALL overwrite the old configuration :w /etc/sudoers I can imagine dozens of similar ways your user can now access, modify or destroy your system. You won't even have an audit trail which files were changed in this fashion as you will only see him editing your Apache config in the sudo log messages. This is a security risk in granting sudo privileges to any editor. This is more or less the same reason why granting sudo root level rights to commands such as tar and unzip is often insecure, nothing prevents you from including replacements for system binaries or system configuration files in the archive. A second risk, as many other commentators have pointed out, is that vim allows for shell escapes , where you can start a sub-shell from within vim that allows you execute any arbitrary command . From within your sudo vim session those will run as root, for instance the shell escape: :!/bin/bash will give you an interactive root shell :!/bin/rm -rf / will make for good stories in the pub. What to do instead? You can still use sudo to allow users to edit files they don't own in a secure way. In your sudoers configuration you can set a special reserved command sudoedit followed by the full (wildcard) pathname to the file(s) a user may edit: user ALL=(ALL) sudoedit /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/*.conf The user may then use the -e switch in their sudo command line or use the sudoedit command: sudo -e /etc/httpd/conf/httpd.confsudoedit /etc/httpd/conf/httpd.conf As explained in the man page : The -e (edit) option indicates that, instead of running a command, the user wishes to edit one or more files. In lieu of a command, the string "sudoedit" is used when consulting the security policy. If the user is authorized by the policy, the following steps are taken: Temporary copies are made of the files to be edited with the owner set to the invoking user. The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers (5) option is used. If they have been modified, the temporary files are copied back to their original location and the temporary versions are removed. If the specified file does not exist, it will be created. Note that unlike most commands run by sudo, the editor is run with the invoking user's environment unmodified. If, for some reason, sudo is unable to update a file with its edited version, the user will receive a warning and the edited copy will remain in a temporary file. The sudoers manual also has a whole section how it can offer limited protection against shell escapes with the RESRICT and NOEXEC options. restrict Avoid giving users access to commands that allow the user to run arbitrary commands. Many editors have a restricted mode where shell escapes are disabled, though sudoedit is a better solution to running editors via sudo. 
Due to the large number of programs that offer shell escapes, restricting users to the set of programs that do not is often unworkable. and noexec Many systems that support shared libraries have the ability to override default library functions by pointing an environment variable (usually LD_PRELOAD) to an alternate shared library. On such systems, sudo's noexec functionality can be used to prevent a program run by sudo from executing any other programs. Note, ... ... To enable noexec for a command, use the NOEXEC tag as documented in the User Specification section above. Here is that example again: aaron shanty = NOEXEC: /usr/bin/more, /usr/bin/vi This allows user aaron to run /usr/bin/more and /usr/bin/vi with noexec enabled. This will prevent those two commands from executing other commands (such as a shell).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/181492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100746/" ] }
181,496
My laptop (a Toshiba Sattelite) runs far too bright, even in the ambient light from outside in the day, and I need to be able to dim it below its minimum setting. ~#cat /sys/class/backlight/acpi_video0/brightness ~#0 Setting it below 0 will not work, and apps like flux even with some hackery to force it to night mode via script by rolling the timezone fails to do too much and leave colours of course yellowed. Is there some sort of method to set it below its minimum somehow? (uses some integrated nvidia card by the way) Is there a program I'm missing that will artificially dim it by overlaying transparent black?
With xrandr you can affect the gamma and brightness of a display by altering RGB values. From man xrandr : --brightness Multiply the gamma values on the crtc currently attached to the output to specified floating value. Useful for overly bright or overly dim outputs. However, this is a software only modification, if your hardware has support to actually change the brightness, you will probably prefer to use xbacklight . I can use it like: xrandr --output DVI-1 --brightness .7 There is also the xgamma package, which does much of the same, but... man xgamma : Note that the xgamma utility is obsolete and deficient, xrandr should be used with drivers that support the XRandr extension. I can use it like: xgamma -gamma .7
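The output name (DVI-1 above) is whatever xrandr reports for your panel; on laptops it is often LVDS-1 or eDP-1 (older drivers may omit the dash), so list the outputs first, and remember that 1.0 restores normal brightness:

    xrandr | awk '/ connected/ {print $1}'     # names of connected outputs
    xrandr --output LVDS-1 --brightness 0.5    # dim below the hardware minimum (software only)
    xrandr --output LVDS-1 --brightness 1.0    # back to normal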
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100902/" ] }
181,503
I have recently installed CentOS 7 (Minimal Install without GUI) and now I want to install a GUI environment in it. How can I install Desktop Environments on previously installed CentOS7 without reinstalling it?
1. Installing GNOME-Desktop: Install GNOME Desktop Environment on here. # yum -y groups install "GNOME Desktop" Input a command like below after finishing installation: # startx GNOME Desktop Environment will start. For first booting, initial setup runs and you have to configure it for first time. Select System language first. Select your keyboard type. Add online accounts if you'd like to. Finally click "Start using CentOS Linux". GNOME Desktop Environments starts like follows. How to use GNOME Shell? The default GNOME Desktop of CentOS 7 starts with classic mode but if you'd like to use GNOME Shell, set like follows: Option A: If you start GNOME with startx , set like follows. # echo "exec gnome-session" >> ~/.xinitrc# startx Option B: set the system graphical login systemctl set-default graphical.target ( more info ) and reboot the system. After system starts Click the button which is located next to the "Sign In" button. Select "GNOME" on the list. (The default is GNOME Classic) Click "Sign In" and log in with GNOME Shell. GNOME shell starts like follows: 2. Installing KDE-Desktop: Install KDE Desktop Environment on here. # yum -y groups install "KDE Plasma Workspaces" Input a command like below after finishing installation: # echo "exec startkde" >> ~/.xinitrc# startx KDE Desktop Environment starts like follows: 3. Installing Cinnamon Desktop Environment: Install Cinnamon Desktop Environment on here. First Add the EPEL Repository (EPEL Repository which is provided from Fedora project.) Extra Packages for Enterprise Linux (EPEL) How to add EPEL Repository? # yum -y install epel-release# sed -i -e "s/\]$/\]\npriority=5/g" /etc/yum.repos.d/epel.repo # set [priority=5]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/epel.repo # for another way, change to [enabled=0] and use it only when needed# yum --enablerepo=epel install [Package] # if [enabled=0], input a command to use the repository And now install the Cinnamon Desktop Environment from EPEL Repository: # yum --enablerepo=epel -y install cinnamon* Input a command like below after finishing installation: # echo "exec /usr/bin/cinnamon-session" >> ~/.xinitrc# startx Cinnamon Desktop Environment will start. For first booting, initial setup runs and you have to configure it for first time. Select System language first. Select your keyboard type. Add online accounts if you'd like to. Finally click "Start using CentOS Linux". Cinnamon Desktop Environment starts like follows. 4. Installing MATE Desktop Environment: Install MATE Desktop Environment on here (You will need to add the EPEL Repository as explained above in advance). # yum --enablerepo=epel -y groups install "MATE Desktop" Input a command like below after finishing installation: # echo "exec /usr/bin/mate-session" >> ~/.xinitrc # startx MATE Desktop Environment starts. 5. Installing Xfce Desktop Environment: Install Xfce Desktop Environment on here (You will need to add the EPEL Repository as like above in "Cinnamon" installation before). # yum -y groupinstall X11# yum --enablerepo=epel -y groups install "Xfce" Input a command like below after finishing installation: # echo "exec /usr/bin/xfce4-session" >> ~/.xinitrc # startx Xfce Desktop Environment starts.
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/181503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72456/" ] }
181,505
We are troubleshooting a MySQL issue where some queries are taking a very long time complete and I see many of these entry in /var/log/messages: Jan 28 05:52:15 64455-alpha01 kernel: [2529273.616327] INFO: task mysqld:4123 blocked for more than 120 seconds.Jan 28 05:52:15 64455-alpha01 kernel: [2529273.616525] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.Jan 28 05:52:15 64455-alpha01 kernel: [2529273.616813] mysqld D 000000000000000d 0 4123 3142 0x00000080 What does it mean? How does it affect that MySQL thread (4123 is the thread id?) The value in /proc/sys/kernel/hung_task_timeout_secs when I checked now is: $ cat /proc/sys/kernel/hung_task_timeout_secs120 I specifically would like to know how does it affect the process? I read in a forum that it means it happens when that process is holding up too much memory.
echo 0 > /proc/sys/kernel/hung_task_timeout_secs only silences the warning. Besides that it has no effect whatsoever. Any value above zero will cause this message to be issued whenever a task is blocked for that amount of time. The warning is given to indicate a problem with the system. In my experience it means that the process is blocked in kernel space for at least 120 seconds usually because the process is starved of disk I/O. This can be because of heavy swapping due to too much memory being used, e.g. if you have a heavy webserver load and you've configured too many apache child processes for your system. In your case it may just be that there are too many mysql processes competing for memory and data IO. It can also happen if the underlying storage system is not performing well, e.g. if you have a SAN which is overloaded, or if there are soft errors on a disk which cause a lot of retries. Whenever a task has to wait long for its IO commands to complete, these warning may be issued.
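To see whether it really is I/O starvation while it is happening, look for processes stuck in uninterruptible sleep (state D) and at device utilisation; a rough sketch (iostat comes from the sysstat package):

    ps -eo state,pid,cmd | grep '^D'
    iostat -x 1 5     # watch %util and await on the busy device
    vmstat 1 5        # a high "wa" column means the CPU is waiting on I/O

And if you only want to change the timeout rather than echo into /proc, sysctl -w kernel.hung_task_timeout_secs=300 is the equivalent command; adding the same line to /etc/sysctl.conf makes it persistent across reboots.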
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68296/" ] }
181,556
A normal tar command tar cvf foo.tar ./foo >foo.out 2>foo.err has three output IO streams archive data to foo.tar list of filenames to STDOUT (redirected into foo.out) error messages to STDERR (redirected into foo.err) I can then inspect foo.err for error messages without having to read through the list of filenames. if I want to do something with the archive data (pipe it through netcat or a special compression program) I can use tar's -f - option thus tar cvf - ./foo 2>foo.err | squish > foo.tar.S But now my list of filenames is mixed in with my error messages because tar's -v output obviously can't go to STDOUT (that's where the archive data flows) so tar cleverly writes that to STDERR instead. Using Korn shell, is there a way to construct a command that pipes the archive stream to another command but still capture the -v output separately from any error messages.
If your system supports /dev/fd/n : tar cvf /dev/fd/3 ./foo 3>&1 > foo.out 2>foo.err | squish > foo.tar.S Which with AT&T implementations of ksh (or bash or zsh ) you could write using process substitution : tar cvf >(squish > foo.tar.S) ./foo > foo.out 2>foo.err That's doing exactly the same thing except that this time, the shell decides of which file descriptor to use instead of 3 (typically above 9). Another difference is that this time, you get the exit status of tar instead of squish . On systems that do not support /dev/fd/n , some shells may resort to named pipes for that feature. If your system doesn't support /dev/fd/n or your shell can't make use of named pipes for its process substitution, that's where you'd have to deal with named pipes by hand .
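On systems with neither /dev/fd/n nor process substitution, the hand-rolled named pipe version mentioned above looks roughly like this (/tmp/tarpipe is an arbitrary path):

    mkfifo /tmp/tarpipe
    squish < /tmp/tarpipe > foo.tar.S &
    tar cvf /tmp/tarpipe ./foo > foo.out 2> foo.err
    wait                 # let squish finish draining the pipe
    rm /tmp/tarpipe

Because -f points at a regular path rather than -, tar's -v listing still goes to stdout (foo.out) and the errors to foo.err, just as in the /dev/fd/3 variant.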
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181556", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2974/" ] }
181,574
I'm trying to change the files generated by an app that unfortunately often contain spaces. I have managed to get "echo" to give me output that works if copy pasted into terminal, but when I try to do the command, it does not work. I looked at this answer, which has helped me before, but even the "${x}" syntax does not seem to work in this case. #!/bin/shcd ~/DataIFS=$'\n';for i in $(ls); do echo "$i"; filename="$i" date=$(date -n +%Y-%m-%d) new_filename="$date$filename" echo mv \'"${filename}"\' \'"${new_filename}"\' mv \'"${filename}"\' \'"${new_filename}"\'done;
The main problem is that you loop over the output of the ls command. Use the glob * instead: #!/bin/shcd ~/Datafor i in *; do echo "$i" filename="$i" date=$(date -n +%Y-%m-%d) new_filename="${date}${filename}" echo mv "${filename}" "${new_filename}" mv -- "${filename}" "${new_filename}"done Additionally I added -- to mv so that files whose names begin with - are handled correctly. And by the way, my date command doesn't have a -n option, but I leave it in as you may have a different version.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181574", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92702/" ] }
181,590
Given a text like this ./RFF_09 -f${FILE} -c${COND} inside a file, this egrep command will correctly match: egrep './RFF(.*) (.*)-c\\$\{COND\}' file but this sed command will not sed -n "s:'./RFF(.*) (.*)-c\\$\{COND\}':./RFF$1 $2-cRFF$1:gp" It will fail with sed: -e expression #1, char 38: invalid content of \{\} .I've also tried with sed -n "s:'./RFF(.*) (.*)-c\\$\{COND\}':DUMMY:gp" filesed -n s:'./RFF(.*) (.*)-c\\$\{COND\}':DUMMY:gp file with the same result. sed -n "s:'./RFF(.*) (.*)-c\\$\\{COND\\}':DUMMY:gp" file Will not give me an error message, but will not match. What am I doing wrong? Or better: How can I replace the text as suggested by the original command? I'm using rather old versions of sed (4.1.2) and egrep (2.5.1), so a workaround is appreciated even if you can't reproduce the error with newer versions.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30648/" ] }
181,617
Is there any way to have your disk/partition/file encrypted (in Linux or as a hardware encryption; no Windows here) in such way that it locks itself for say 10 minutes after say 3 failed unlock attempts?... The idea is to have a somewhat shorter password to remember without sacrificing security.
No. This is an entirely nonsensical endeavour. If you choose a password that is so easy that I might be able to guess it with 5 or 6 attempts, you might as well not use disk encryption at all. On the other hand, a password that cannot be guessed in under half a dozen attempts and would trigger this "lock out security measure" is of no avail either. An attacker who is only marginally clever will run an offline attack, that is he will read a few sectors and try to brute-force them with his own (massively parallel, multip-GPU) tool. He doesn't care whether you "lock him out" on the boot screen because he isn't using it at all. Note that every reasonably modern disk encryption software uses an expensive key derivation algorithm which takes around half a second or so on your computer to actually compute the encryption key from your password. This is meant to slow down brute force attacks which would otherwise test billions of passwords per second. But of course throwing a multi-GPU rig at the problem means you can still test a few thousand passwords per second. Given a dictionary-based test permutation, it is very optimistic to assume that an "easy" (read as: bad) password will hold an attacker back longer than a few seconds.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100997/" ] }
181,621
OpenSUSE (among other distributions) uses snapper to take snapshots of btrfs partitions. Some people think the default snapshot intervals take up too much space too quickly, but whether or not you believe that, there are times when you want to clear space on your filesystem and often find that the btrfs snapshots are taking a significant amount of space. Or, in other cases you may want to clear the filesystem of all excess data before moving it to/from a VM or changing the storage medium or something along those lines. But, I can't seem to find a command to quickly wipe all of the snapshots snapper has taken, either via snapper or another tool. How would I do this?
The command in recent versions of snapper is (I don't remember when it was introduced, but the version in e.g., openSUSE 13.2 supports this): snapper delete number1-number2 So to delete all snapshots (assuming you have no more than 100000 of them) you'd do: snapper delete 1-100000 Obviously this only deletes snapshots on the default root config, so for a different config it would be: snapper -c configname delete number1-number2
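To check what you are about to remove, you can list the snapshots of a config first (the config name home is only an example):
snapper list
snapper -c home list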
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13308/" ] }
181,676
I know how to use nmap to find the list of hosts that are currently online. What I would like to do is get a list of just their IP addresses, now it displays extra information such as Nmap scan report for 192.168.x.x' and 'Host is up (0.12s latency). What I would like is to be able to run an nmap command, get a text document of the IP addresses that are currently online. Is this at all possible?
This is a common one: nmap -n -sn 192.0.2.0/24 -oG - | awk '/Up$/{print $2}' Quick rundown of options and commands: -n turns off reverse name resolution, since you just want IP addresses. On a local LAN this is probably the slowest step, too, so you get a good speed boost. -sn means "Don't do a port scan." It's the same as the older, deprecated -sP with the mnemonic "ping scan." -oG - sends "grepable" output to stdout, which gets piped to awk . /Up$/ selects only lines which end with "Up", representing hosts that are online. {print $2} prints the second whitespace-separated field, which is the IP address.
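Since the question asks for a text document, the same pipeline can simply be redirected to a file (the subnet and file name are examples):
nmap -n -sn 192.0.2.0/24 -oG - | awk '/Up$/{print $2}' > live-hosts.txt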
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/181676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
181,678
In Vim, I had set the Ctrl+Arrow keys to skip words. This works just fine when running Vim inside the gnome-terminal. However, when using byobu (tmux), it shows weird behavior : it deletes everything after the cursor. For reference, these are my vim settings: :inoremap <C-Left> <C-\><C-O>b:inoremap <C-Right> <C-\><C-O>w
The problem is twofold. First, tmux by default converts the control-arrow keys from one type of escape sequence to another. So special keys such as control left are sent to vim without the modifier, e.g., left . If you use cat -v to see the different escape sequences, you might see something like this ^[OD versus this (outside tmux): ^[[1;5D The line set-window-option -g xterm-keys on fixes that aspect. The other part is that tmux by default uses the terminal description for screen . That terminal description does not describe the control-arrow keys. These entries from the terminal database would be the most appropriate for VTE (gnome-terminal): screen.vte screen.vte-256color There are others, such as screen.xterm-new screen.xterm-256color which would be automatically selected when running in screen if the corresponding TERM outside were vte , vte-256color , etc. tmux does not do this automatic-selection; you have to modify its configuration file. By the way, there is no "screen.xterm" entry because it would interfere with some usages of screen . There is no conflict with TERM=xterm-new . If you have a default (minimal) terminal database such as ncurses-base in Debian, you might not have those. More common would be xterm-256color , which is close enough to use with vim and tmux. For example, if I add this to my .tmux.conf file, it behaves as you expect in vim: set -g default-terminal "xterm-256color" Further reading: XTERM Extensions (terminal database) vim: how to specify arrow keys How can I see what my keyboard sends?
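Putting both pieces together, a minimal ~/.tmux.conf for this situation might look like the following (assuming the outer terminal is xterm-compatible with 256 colours):
# pass modified arrow keys (Ctrl-Left/Right etc.) through to applications
set-window-option -g xterm-keys on
# use a terminal description that actually defines those keys
set -g default-terminal "xterm-256color"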
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89385/" ] }
181,684
while trying to create my Debian Wheezy image for a FriendlyARM mini210s ARM single board computer, I am bumping into this issue. I have opened a terminal via serial sudo screen /dev/cu.usbserial 115200 and that my current output: [ 3.398751] apple 0003:05AC:0220.0003: input: USB HID v1.11 Device [Apple, Inc Apple Keyboard] on usb-s5p-ehci-1.4.2/input1INIT: version 2.88 bootingINIT: Entering runlevel: 2INIT: Id "X1" respawning too fast: disabled for 5 minutesDebian GNU/Linux 7 FriendlyARM ttySAC0FriendlyARM login: root <======= entered root and hit returnUnable to determine your tty name. <===== THE ISSUEDebian GNU/Linux 7 FriendlyARM ttySAC0FriendlyARM login: INIT: Id "6" respawning too fast: disabled for 5 minutesINIT: Id "5" respawning too fast: disabled for 5 minutesINIT: Id "4" respawning too fast: disabled for 5 minutesINIT: Id "3" respawning too fast: disabled for 5 minutesINIT: Id "2" respawning too fast: disabled for 5 minutesINIT: Id "X1" respawning too fast: disabled for 5 minutesINIT: Id "5" respawning too fast: disabled for 5 minutesINIT: Id "2" respawning too fast: disabled for 5 minutesINIT: Id "6" respawning too fast: disabled for 5 minutesINIT: Id "4" respawning too fast: disabled for 5 minutesINIT: Id "3" respawning too fast: disabled for 5 minutes On the touchscreen, I have a message saying: "login: PAM Failure, aborting: Critical error - immediate abort" Can someone help me decrypt what this is tell me? How I created my rootfs sudo debootstrap --arch=armel --foreign wheezy rootfs/ http://ftp.us.debian.org/debianecho "proc /proc proc none 0 0" >> rootfs/etc/fstabecho "mini210s-anybots"echo "mini210s-anybots" > rootfs/etc/hostname mkdir -p rootfs/usr/share/man/man1/mknod rootfs/dev/console c 5 1mknod rootfs/dev/tty1 c 4 1mknod dev/ttySAC0 c 204 64 <==== the serial port then I created an image with yaffs2utils and booted on my FriendlyARM (I describe the process here, it's a bit long). I copied the inittab from a functioning Debian Squeeze image, which contains these lines T1:2345:respawn:/sbin/getty 115200 ttySAC0 <=== terminal on serial port RS232 X1:2345:respawn:/bin/login -f root tty1 </dev/tty1 >/dev/tty1 2>&12:23:respawn:/sbin/getty 38400 tty23:23:respawn:/sbin/getty 38400 tty34:23:respawn:/sbin/getty 38400 tty45:23:respawn:/sbin/getty 38400 tty56:23:respawn:/sbin/getty 38400 tty6 I am most certainly missing a couple of steps. UPDATE Added missing devices Apparently I need to configure some devices. I inspired myself from this post mknod -m 0600 ./rootfs/dev/console c 5 1mknod -m 0660 ./rootfs/dev/full c 1 7mknod -m 0640 ./rootfs/dev/kmem c 1 2mknod -m 0660 ./rootfs/dev/loop0 b 7 0mknod -m 0640 ./rootfs/dev/mem c 1 1mknod -m 0666 ./rootfs/dev/null c 1 3mknod -m 0640 ./rootfs/dev/port c 1 4mknod -m 0666 ./rootfs/dev/random c 1 8mknod -m 0660 ./rootfs/dev/tty c 5 0mknod -m 0666 ./rootfs/dev/urandom c 1 9mknod -m 0666 ./rootfs/dev/zero c 1 5mknod -m 0660 ./rootfs/dev/tty0 c 5 0mknod -m 0660 ./rootfs/dev/tty1 c 5 1mknod -m 0660 ./rootfs/dev/tty2 c 5 2mknod -m 0660 ./rootfs/dev/tty3 c 5 3mknod -m 0660 ./rootfs/dev/tty4 c 5 4mknod -m 0660 ./rootfs/dev/tty5 c 5 5 ... but still experiencing the issue. 
now addding these mknod -m 0660 ./rootfs/dev/ram0 b 1 0mknod -m 0660 ./rootfs/dev/ram1 b 1 1mknod -m 0660 ./rootfs/dev/ram2 b 1 2mknod -m 0660 ./rootfs/dev/ram3 b 1 3mknod -m 0660 ./rootfs/dev/ram4 b 1 4mknod -m 0660 ./rootfs/dev/ram5 b 1 5mknod -m 0660 ./rootfs/dev/ram6 b 1 6mknod -m 0660 ./rootfs/dev/ram7 b 1 7mknod -m 0660 ./rootfs/dev/ram8 b 1 8mknod -m 0660 ./rootfs/dev/ram9 b 1 9mknod -m 0660 ./rootfs/dev/ram10 b 1 10mknod -m 0660 ./rootfs/dev/ram11 b 1 11mknod -m 0660 ./rootfs/dev/ram12 b 1 12mknod -m 0660 ./rootfs/dev/ram13 b 1 13mknod -m 0660 ./rootfs/dev/ram14 b 1 14mknod -m 0660 ./rootfs/dev/ram15 b 1 15 My new rootfs/dev root@ubuntu:/home/joel/debian-mini210s/rootfs/dev# ls -altotal 8drwxr-xr-x 2 root root 4096 Jan 19 17:27 .drwxr-xr-x 19 root root 4096 Jun 22 2012 ..crw-r--r-- 1 root root 5, 1 Jan 19 14:07 consolecrw-rw---- 1 root root 1, 7 Jan 19 17:05 fullcrw-r----- 1 root root 1, 2 Jan 19 17:05 kmembrw-rw---- 1 root root 7, 0 Jan 19 17:05 loop0crw-r----- 1 root root 1, 1 Jan 19 17:05 memcrw-rw-rw- 1 root root 1, 3 Jan 19 17:05 nullcrw-r----- 1 root root 1, 4 Jan 19 17:05 portbrw-rw---- 1 root root 1, 0 Jan 19 17:27 ram0brw-rw---- 1 root root 1, 1 Jan 19 17:27 ram1brw-rw---- 1 root root 1, 10 Jan 19 17:27 ram10brw-rw---- 1 root root 1, 11 Jan 19 17:27 ram11brw-rw---- 1 root root 1, 12 Jan 19 17:27 ram12brw-rw---- 1 root root 1, 13 Jan 19 17:27 ram13brw-rw---- 1 root root 1, 14 Jan 19 17:27 ram14brw-rw---- 1 root root 1, 15 Jan 19 17:27 ram15brw-rw---- 1 root root 1, 2 Jan 19 17:27 ram2brw-rw---- 1 root root 1, 3 Jan 19 17:27 ram3brw-rw---- 1 root root 1, 4 Jan 19 17:27 ram4brw-rw---- 1 root root 1, 5 Jan 19 17:27 ram5brw-rw---- 1 root root 1, 6 Jan 19 17:27 ram6brw-rw---- 1 root root 1, 7 Jan 19 17:27 ram7brw-rw---- 1 root root 1, 8 Jan 19 17:27 ram8brw-rw---- 1 root root 1, 9 Jan 19 17:27 ram9crw-rw-rw- 1 root root 1, 8 Jan 19 17:05 randomcrw-rw---- 1 root root 5, 0 Jan 19 17:05 ttycrw-rw---- 1 root root 5, 0 Jan 19 17:07 tty0crw-r--r-- 1 root root 4, 1 Jan 19 14:47 tty1crw-rw---- 1 root root 5, 2 Jan 19 17:07 tty2crw-rw---- 1 root root 5, 3 Jan 19 17:07 tty3crw-rw---- 1 root root 5, 4 Jan 19 17:07 tty4crw-rw---- 1 root root 5, 5 Jan 19 17:07 tty5crw-r--r-- 1 root root 204, 64 Jan 19 14:11 ttySAC0crw-rw-rw- 1 root root 1, 9 Jan 19 17:05 urandomcrw-rw-rw- 1 root root 1, 5 Jan 19 17:05 zero but still getting Unable to determine your tty name UPDATE - added ttySAC0 to /etc/securetty In vim /etc/pam.d/login Line: auth [success=ok new_authtok_reqd=ok ignore=ignore user_unknown=bad default=die] pam_securetty.so this should allo login without password, if device listed. ... but that didn't work out as I expected. Found on the Internet about this issue On AskUbuntu eBook on the issue
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53782/" ] }
181,735
How do I install BPG (Better Portable Graphics) on Linux Mint 17? I downloaded the tar.gz file from Fabrice Bellard's website . The ReadMe file says, Edit the Makefile to change the compile options (the default compile options should be OK for Linux). Type 'make' to compile and 'make install' to install the compiled binaries. I didn't edit the Makefile . I opened the terminal in the directory and ran make . It returned the following error: gcc -g -Wl,--gc-sections -o bpgdec bpgdec.o libbpg.a -lpng -lrt -lm -lpthreadbpgdec.o: In function `png_save':/home/ghort/Downloads/libbpg-0.9.5/bpgdec.c:118: undefined reference to `png_set_longjmp_fn'collect2: error: ld returned 1 exit statusmake: *** [bpgdec] Error 1 I think I read elsewhere that I need to install libpng16 experimental, but I'm not sure.
libbpg depends on version 1.6 of the PNG library, which you cannot install with apt-get on Linux Mint 17. This library is incompatible with libpng12 and needs to be installed from source (I used version 1.6.16 ) The additional complication is that if you install PNG 1.6 the make of libbpg still uses libpng12-dev even if you configure PNG 1.6 with configure --prefix=/usr . And you cannot just deinstall libpng12-dev as libsdl-image1.2-dev and libsdl1.2-dev depend on it, and those are needed for compiling libbpg as well. You could probably also download and compile the libsdl-image and libsdl1 sources and not install their -dev packages. I did not follow that route, I just temporarily removed the files (not the package) from libpng12-dev and reinstalled them after I was done (you should be able to copy and paste this on Linux Mint 17):¹ # install dev packages needed sudo apt-get install -y libsdl-image1.2-dev libsdl1.2-dev libjpeg8-dev#### temporarily remove the files from the libpng12 package# this gives error messages on the directories (that cannot be removed this# way, which can be ignored.sudo rm $(dpkg -L libpng12-dev)# download, configure, make and make install PNG 1.6wget -O libpng-1.6.16.tar.xz 'http://downloads.sourceforge.net/project/libpng/libpng16/1.6.16/libpng-1.6.16.tar.xz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibpng%2Ffiles%2Flibpng16%2F1.6.16%2F&ts=1424607004&use_mirror=garr'tar xf libpng-1.6.16.tar.xzpushd libpng-1.6.16./configure --prefix=/usrmake -jsudo make installpopd# download, make and make install BPGwget http://bellard.org/bpg/libbpg-0.9.5.tar.gztar xf libbpg-0.9.5.tar.gzpushd libbpg-0.9.5make -jsudo make installpopd# reinstall libpng12-devsudo apt-get install --reinstall libpng12-dev Of course it would be more appropriate to install PNG 1.6 in parallel to libpng12-dev and adapt the sources of libbpg , but since you don't recompile the BPG library on a regular basis, IMO this kludge is acceptable. ¹ You could also try and build libpng16-dev , but installing that might just remove the -dev packages depending on libpng12-dev , I did not try that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
181,743
Say I have a great number of jobs (dozens or hundreds) that need doing, but they're CPU intensive and only a few can be run at once. Is there an easy way to run X jobs at once and start a new one when one has finished? The only thing I can come up with is something like below (pseudo-code): jobs=(...);MAX_JOBS=4;cur_jobs=0;pids=(); # hash/associative arraywhile (jobs); do while (cur_jobs < MAX_JOBS); do pop and spawn job and store PID and anything else needed; cur_jobs++; done sleep 5; for each PID: if no longer active; then remove PID; cur_jobs--;done I feel like I'm over-complicating the solution, as I often do. The target system is FreeBSD, if there might be some port that does all the hard work, but a generic solution or common idiom would be preferable.
If you have GNU Parallel you can do this: parallel do_it {} --option foo < argumentlist GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU: GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time: Installation If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this: (wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README Learn more See more examples: http://www.gnu.org/software/parallel/man.html Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1 Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
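To reproduce the MAX_JOBS=4 limit from the question, the number of simultaneous jobs can be capped explicitly (do_it and argumentlist are placeholders carried over from above):
parallel -j4 do_it {} --option foo < argumentlist
# a roughly similar effect with xargs (the argument is appended at the end of the command):
xargs -n1 -P4 do_it < argumentlist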
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
181,771
How can I replace the N th (for example, the third) occurrenceof a string (or pattern) in a file using the sed command? I do mean the N th occurrence in the file,not the N th occurrence in a lineor the first occurrence in the N th matching line. There may be multiple occurrences in a line,and an occurrence (a match) might be a partial word. Example: Change only the third occurrence of is to us in the file. My input file contains: hai this is linux.hai this is unix.hai this is mac.hai this is unchanged. The expected output is: hai this is linux.hai thus is unix.hai this is mac.hai this is unchanged. Note that “this” has been replaced by “th us” in the second line.
It's a lot easier done with perl . To change the 3 rd occurrence: perl -pe 's{is}{++$n == 3 ? "us" : $&}ge' To change every 3 rd occurrence: perl -pe 's{is}{++$n % 3 ? $& : "us"}ge'
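For example, to edit the sample file from the question in place (a sketch; -i.bak keeps a backup copy):
perl -i.bak -pe 's{is}{++$n == 3 ? "us" : $&}ge' file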
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101092/" ] }
181,781
I have a script that logs into sftp server eval `ssh-agent -s`ssh-add /home/<username>/.ssh/id_rsasftp <username>@<target> This works, but I would like to get only files that are newer than certain date. This doesn't seem to be possible with sftp, so I would like to use lftp If I try to run the same script with lftp eval `ssh-agent -s`ssh-add /home/<username>/.ssh/id_rsalftp <username>@<target> I get a prompt for password. Is it possible to use lftp with ssh-agent or is there some other way to avoid supplying the password?
It's actually just a wart of LFTP that it even asks for the password. If you provide a dummy password, such as the literal string DUMMY (e.g. lftp sftp://<username>:DUMMY@<target> ), lftp won't prompt for a password, and will then subsequently check with the ssh agent. Mind you, if you don't have a key set up, that password will be used. Alternatively, you can override lftp's sftp:connect-program setting to force ssh to use to a specific key file, without having to set up the agent (the dummy password will still be needed). (One way) this can be done is like so: lftp sftp://<username>:DUMMY@<target> -e 'set sftp:connect-program "ssh -a -x -i <yourprivatekeyfile>"' . The sftp:connect-program is the option lftp uses to create the sftp session. It defaults to ssh -a -x , but can be pretty much any command (see lftp man page for exact restrictions). Here I'm just tacking on the -i option to force a specific private key. (NOTE: all the <xxx> bits in the above examples should be replaced with actual values. To correct a few things in the accepted answer... there isn't an internal FTP server in SSH; sftp is its own protocol, designed as a extension of ssh. It only has "ftp" at the end because it's a file transfer protocol, they share very little in common in terms of details. Also, while LFTP can connect to FTP directly, it can also connect to a ton of other protocols. When connecting with sftp, it directly invokes ssh to handle establishing the connection, and thus all the normal ssh authentication methods apply. The command LFTP uses to invoke ssh can be reconfigured via it's sftp:connect-program option (hence the second alternative listed above).
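Combining this with the original goal of only fetching files newer than a given date, lftp's mirror command can be used non-interactively; a rough sketch (host, paths and date are placeholders):
lftp -u '<username>,DUMMY' sftp://<target> -e 'mirror --newer-than=2015-01-01 /remote/dir /local/dir; quit'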
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101102/" ] }
181,782
In this question , the accepted answers states that For example, /sbin/init uses glibc [the vulnerable lib], and restarting that without a reboot is non-trivial. Now, non-trivial is engineer'ish for impossible , but I'm still curious: Is it possible to restart init without restarting the whole system?
telinit u will restart init without affecting the rest of the system.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98663/" ] }
181,833
So, I'm working on a remote server via ssh that I log in and out of dozens of times a day, and I'd like to have bash cd to a default directory I've chosen as soon as I login, but I don't actually want to change the user home. Is there any simple way to do that? To be clear, what I want is to see, say, ~/foo/bar/ when I login, instead of ~/ , and have the option to change the default at will without having to worry about dangerous usermod craziness. It's not important, but it would definitely be convenient.
In your ~/.bashrc or ~/.bash_profile file, put this at the end of the file: cd /path/to/your/destination Save the file and log out and log back in, you should be in /path/to/your/destination . You could also create an alias on your local account, edit your local ~/.bashrc and add: alias fastlogin='ssh servername -t "cd /path/to/your/destination; exec bash --login"' Source your file so changes take effect: source ~/.bashrc Now test it by typing fastlogin in your terminal. You require bash at the end so the connection doesn't terminate after cd executes and --login is required so it sources your ~/.bashrc & ~/.bash_profile files.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56278/" ] }
181,841
There are instances where a function needs to execute and return to the caller and this works for me. if tst; then echo "success"else echo "failure"fifunction tst () { return 0;} However, I can't seem to be able to do it using shortcut syntax. I've tried many combinations of the following statement including testing if [[ tst = true ]] or if it = "0" but I haven't been able to figure it out. [[ tst ]] && echo "success" || echo "failure" What is the proper way to test the function in the if condition using bash shortcut syntax?
Assuming you want to use A && B || C , then simply call the function directly: tst && echo "success" || echo "failure" If you want to use [[ , you'd have to use the exit value: tstif [[ $? -eq 0 ]]then ...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79046/" ] }
181,864
By default F10 on Xfce activates the window File menu and so cannot be used as a shortcut in any program. I already looked in the Window Manager and Keyboard settings pages and no F10 key binding is listed. How can I re-map/un-map this?
Xfce provides a way to disable the key binding from the Settings Editor . This will work only for GTK+ 2 applications, given that "gtk-menu-bar-accel" has been deprecated in GTK+ 3 (since version 3.10) . As a result, user might have no choice but to disable the key binding per application, which may also depend on which toolkit in use. Go to Applications Menu > Settings > Settings Editor . Xfce 4.10 or newer provides another way to access by Settings Manager > Other - Settings Editor . In the Settings Editor: On the left, under "Channel", scroll down and select "xsettings" On the right, under "Property | Type | Locked | Value", look for Gtk > MenuBarAccel Double-click on the row of "MenuBarAccel" to edit this property In the "Edit Property" dialog, delete the value F10 (leave it blank) and click Save . The final step will disable the key binding for activating the menu bar. Custom keys : User can also change the key binding to something else. For example, changing the value to <Control>F12 will re-map to Ctrl + F12 key combination to activate the menu. Try with any key bindings using <Alt> <Shift> and other keys. More clues are found under "Channel: xfce4-keyboard-shortcuts" and under "Property". Restore to default : In the Settings Editor, click Reset button that is located at the right-most icon, either at near-bottom of window (Xfce 4.10 or newer) or, on top of "Property" column (Xfce 4.8). Precaution (Xfce 4.8) : In older Xfce, clicking Reset button will cause the entire row of "MenuBarAccel" to be removed at all. To avoid this, double-click on the row again and change the value to F10 to restore. Name: /Gtk/MenuBarAccel Type: String Value: F10 In case user have accidentally deleted the property, create again the property as follows. Click New and re-register the property in the "New Property" dialog with the settings quoted as above. Xfce can still disable the key binding for GTK+ 2 applications, such as Orage and Xournal. Given that many applications are now GTK+ 3, the setting will be less and less relevent in newer Xfce.
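If you prefer the command line over the Settings Editor, the same property can be changed with xfconf-query (a sketch; as above, this only affects GTK+ 2 applications):
# disable the F10 menu-bar accelerator
xfconf-query -c xsettings -p /Gtk/MenuBarAccel -s ''
# or remap it to Ctrl+F12 instead
xfconf-query -c xsettings -p /Gtk/MenuBarAccel -s '<Control>F12'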
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56450/" ] }
181,893
If I type: ls -l file.txt I see that the rights for that file are equivalent to "456": 4 = owner (r--) 5 = group (r-x) 6 = others (rw-) Which are the rights for root in this case? Does it have 777? Could the rights be changed so that the root will have less permissions than the owner?
I would check this page out . It talks about file permissions in depth. But to answer your question directly, no: The super user "root" has the ability to access any file on the system. In your example for instance if the file is owned by say bob and the group owner was also bob , then you would see something like this: -r--r-xrw-. 1 bob bob 8 Jan 29 18:39 test.file The 3rd bit group the (rw) would also apply to root, as root being part of the others group. If you tried to edit that file as root you would see that you have no problem doing so. But to test your theory even more, if the file was owned by root: -r--r-xrw-. 1 root root 8 Jan 29 18:40 test.file And you again went to edit the file, you would see that you still have no problem editing it. Finally if you did the extreme: chmod 000 test.filels -lh test.file----------. 1 root root 8 Jan 29 18:41 test.file And you went again to edit the file you would see (at least in vi/vim) "test.file" [readonly] . But you can still edit the file and force save it with :wq! . Testing @Stéphane Chazelas claim with a shell script file: #!/bin/shecho "I'm alive! Thanks root!" [root ~]# ls -lh test.sh----------. 1 atgadmin atgadmin 31 Jan 30 10:59 test.sh[root ~]# ./test.sh-bash: ./test.sh: Permission denied[root ~]# sh test.shI'm alive! Thanks root! @Shadur already said it so I'm just going to quote instead of restating it: Note: The execution bit is checked for existence, not whether it's applicable to root.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181893", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90220/" ] }
181,901
Hi I'm trying to find a repository on the web for oddjob-mkhomedir. I'm then going to try and add it to linux so I can install the library with these instructions https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Managing_Yum_Repositories.html Can someone help me to find this repository? I've been looking for almost 2 hours.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/181901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101151/" ] }
181,937
While running a script, I want to create a temporary file in /tmp directory. After execution of that script, that will be cleaned by that script. How to do that in shell script?
tmpfile=$(mktemp /tmp/abc-script.XXXXXX): ...rm "$tmpfile" You can make sure that a file is deleted when the scripts exits (including kills and crashes) by opening a file descriptor to the file and deleting it. The file keeps available (for the script; not really for other processes but /proc/$PID/fd/$FD is a work-around) as long as the file descriptor is open. When it gets closed (which the kernel does automatically when the process exits) the filesystem deletes the file. # create temporary filetmpfile=$(mktemp /tmp/abc-script.XXXXXX)# create file descriptor 3 for writing to a temporary file so that# echo ... >&3 writes to that fileexec 3>"$tmpfile"# create file descriptor 4 for reading from the same file so that# the file seek positions for reading and writing can be differentexec 4<"$tmpfile"# delete temp file; the directory entry is deleted at once; the reference counter# of the inode is decremented only after the file descriptor has been closed.# The file content blocks are deallocated (this is the real deletion) when the# reference counter drops to zero.rm "$tmpfile"# your script continues: ...# example of writing to file descriptorecho foo >&3# your script continues: ...# reading from that file descriptorhead -n 1 <&4# close the file descriptor (done automatically when script exits)# see section 2.7.6 of the POSIX definition of the Shell Command Languageexec 3>&-
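If you do not need the file-descriptor trick, a simpler and very common pattern is to register the cleanup with trap, so the file is removed when the script exits normally or is interrupted (a minimal sketch):
tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
trap 'rm -f "$tmpfile"' EXIT
# ... use "$tmpfile" here ...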
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/181937", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95074/" ] }
181,972
How can I show the CPU usage side by side rather than a list? I have this : but I want to show it like this:
Go to settings (F2), under Meters, you select what is in the left column and what in the right column. Instead of CPUs (1/1) in the left column, select CPUs (1/2) for the left column and CPUs (2/2) for the right column. F10 to save the changes and it's done.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/181972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101233/" ] }
182,015
As in example I'm trying to get line numbers which contains the pattern. My pattern contains slashes so I wanted to add custom delimiter. This simple one works: sed -n '/file/=' temp.txt Using delimiter for string replace works too: sed 's|file|gile|' temp.txt but when I want to add delimiter to first example it doesn't: sed -n '|file /etc|=' temp.txt I know I can escape slashes but I would prefer to add custom delimiter. Any idea how to fix my command?
Stéphane gave you the sed solution: sed -n '\|file /etc|=' file If you're open to using other tools, you can also do grep -n 'file /etc' file That will also print the line itself, to get the line number alone try: grep -n 'file /etc' file | cut -d: -f 1 Or, you can use perl : perl -lne 'm|file /etc| && print $.' file Or awk : awk '$0 ~ "file /etc" {print NR}'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101256/" ] }
182,032
I have a directory called folder that looks like this: folder -> root_folder -> some files I want to zip this directory into zipped_dir , I tried: zip -r zipped_dir.zip folder/* But this generates a ZIP that looks like this: zipped_dir -> folder -> root_folder -> some files in other words, it's including the directory whose contents I want to zip. How can I exclude this parent directory from the ZIP, without moving anything? IE I would like this end result: zipped_dir -> root_folder -> some files
Try this command (you will get the idea): cd folder; zip -r ../zipped_dir.zip * Maybe there is another way, but this is the fastest and simplest for me :)
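If you would rather not change the shell's current directory, a subshell keeps the cd local, and && makes sure zip only runs if the cd succeeded:
(cd folder && zip -r ../zipped_dir.zip *)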
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/182032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89986/" ] }
182,033
I want to "clean out" all of the files a directory including all files in subdirectories but I want to leave the subdirectories in place. My understanding of rm -r is that it will also delete the subdirectories themselves. I do not want to delete hidden (dot) files. How can this be done?
Use find for that: find . ! -name '.*' ! -type d -exec rm -- {} +
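To preview what would be removed, run the same expression with -print first; with GNU find you can also use -delete instead of -exec rm (a sketch):
find . ! -name '.*' ! -type d -print
find . ! -name '.*' ! -type d -delete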
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101264/" ] }
182,077
Hi I have many files that have been deleted but for some reason the disk space associated with the deleted files is unable to be utilized until I explicitly kill the process for the file taking the disk space $ lsof /tmp/COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEcron 1623 root 5u REG 0,21 0 395919638 /tmp/tmpfPagTZ4 (deleted) The disk space taken up by the deleted file above causes problems such as when trying to use the tab key to autocomplete a file path I get the error bash: cannot create temp file for here-document: No space left on device But after I run kill -9 1623 the space for that PID is freed and I no longer get the error. My questions are: why is this space not immediately freed when the file is first deleted? what is the best way to get back the file space associated with the deleted files? and please let me know any incorrect terminology I have used or any other relevant and pertinent info regarding this situation.
On unices, filenames are just pointers (inodes) that point to the memory where the file resides (which can be a hard drive or even a RAM-backed filesystem). Each file records the number of links to it: the links can be either the filename (plural, if there are multiple hard links to the same file), and also every time a file is opened, the process actually holds the "link" to the same space. The space is physically freed only if there are no links left (therefore, it's impossible to get to it). That's the only sensible choice: while the file is being used, it's not important if someone else can no longer access it: you are using it and until you close it, you still have control over it - you won't even notice the filename is gone or moved or whatever. That's even used for tempfiles: some implementations create a file and immediately unlink it, so it's not visible in the filesystem, but the process that created it is using it normally. Flash plugin is especially fond of this method: all the downloaded video files are held open, but the filesystem doesn't show them. So, the answer is, while the processes have the files still opened, you shouldn't expect to get the space back. It's not freed, it's being actively used. This is also one of the reasons that applications should really close the files when they finish using them. In normal usage, you shouldn't think of that space as free, and this also shouldn't be very common at all - with the exception of temporary files that are unlinked on purpose, there shouldn't really be any files that you would consider being unused, but still open. Try to review if there is a process that does this a lot and consider how you use it, or just find more space.
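To see which deleted files are still held open, and to reclaim the space without killing the process when the file is simply being appended to, something like this is commonly used on Linux (a sketch; truncating through /proc only makes sense if the process does not need the old contents):
# list open files whose link count is 0, i.e. deleted but still open
lsof +L1
# truncate such a file via the holder's file descriptor (pid 1623, fd 5 from the example above)
: > /proc/1623/fd/5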
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46709/" ] }
182,123
Suppose I have a file that contains, among many other things, \command{arg1,arg2,arg3} (arguments been paths, expressed with / , . , characters and numbers) But that a user can as well call it with \command{arg1,arg2 ,arg3} That is, on several lines and with superfluous spaces. I'd like to find a regular pattern to include in a shell script so that n variables will contain the n arguments.How to proceed ? I managed to write echo "\command{arg1, arg2 , arg3 }" | sed -n -e 's/\\command//p' | sed 's/,/\n/' | sed 's/{\|}//' but that only outputs arg1 , and I'm not even sure on how to store it in a variable. Related: matching the text between parenthesis in a multiline text finding strings between two characters outputing only the part that matches the regular expression how to delete everything between a group of brackets But I was not able to combine all those ingredients to get what I want.
I'd like to find a regular pattern to include in a shell script so that n variables will contain the n arguments The following creates a shell array arglist that contains each of the arguments: $ readarray -t arglist < <(echo "\command{arg1, arg2 , arg3 }" | sed -n '/\\command/{ :a;/}/!{N;b a}; s/\\command{//; s/[ \n}]//g; s/,/\n/g; p}') By using the declare statement, we can see that it worked: $ declare -p arglistdeclare -a arglist='([0]="arg1" [1]="arg2" [2]="arg3")' Here is another example with the arguments on one line: $ readarray -t arglist < <(echo "\command{arg1, arg2, arg3, arg4}" | sed -n '/\\command/{ :a;/}/!{N;b a}; s/\\command{//; s/[ \n}]//g; s/,/\n/g; p}') Again, it works: $ declare -p arglistdeclare -a arglist='([0]="arg1" [1]="arg2" [2]="arg3" [3]="arg4")' Note that the space in < <( is essential. We are redirecting input from a process substitution. Without the space, bash will try something else entirely. How it works The sed command is a bit subtle. Let's look at it a piece at a time: -n Don't print lines unless explicitly asked. /\\command/{...} If we find a line that contains \command , then perform the commands found in the braces which are as follows: :a;/}/!{N;b a} This reads lines into the pattern buffer until we find a line that contains } . This way, we get the whole command in at once. s/\\command{// Remove the \command{ string. s/[ \n}]//g Remove all spaces, closing braces, and newlines. s/,/\n/g Replace commas with newlines. When this is done, each argument is on a separate line which is what readarray wants. p Print.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101324/" ] }
182,131
I need to install the virtual-box on my Linux Mint 17 machine. Also tell me if any prerequisites to install in the machine.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93265/" ] }
182,153
Reading a whole file into pattern space is useful for substituting newlines, &c.and there are many instances advising the following: sed ':a;N;$!ba; [commands...]' However, it fails if the input contains only one line. As an example, with two line input, every line is subjected to the substitution command: $ echo $'abc\ncat' | sed ':a;N;$!ba; s/a/xxx/g'xxxbccxxxt But, with single line input, no substitution is performed: $ echo 'abc' | sed ':a;N;$!ba; s/a/xxx/g'abc How does one write a sed command to read all the input in at once and not have this problem?
There are all kinds of reasons why reading a whole file into pattern space can go wrong. The logic problem in the question surrounding the last line is a common one. It is related to sed 's line cycle - when there are no more lines and sed encounters EOF it is through - it quits processing. And so if you are on the last line and you instruct sed to get another it's going to stop right there and do no more. That said, if you really need to read a whole file into pattern space, then it is probably worth considering another tool anyway. The fact is, sed is eponymously the stream editor - it is designed to work a line - or a logical data block - at a time. There are many similar tools that are better equipped to handle full file blocks. ed and ex , for example, can do much of what sed can do and with similar syntax - and much else besides - but rather than operating only on an input stream while transforming it to output as sed does, they also maintain temporary backup files in the file-system. Their work is buffered to disk as needed, and they do not quit abruptly at end of file (and tend to implode a lot less often under buffer strain) . Moreover they offer many useful functions which sed does not - of the sort that simply do not make sense in a stream context - like line marks, undo, named buffers, join, and more. sed 's primary strength is its ability to process data as soon as it reads it - quickly, efficiently, and in stream. When you slurp a file you throw that away and you tend to run into edge case difficulties like the last line problem you mention, and buffer overruns, and abysmal performance - as the data it parses grows in length a regexp engine's processing time when enumerating matches increases exponentially . Regarding that last point, by the way: while I understand the example s/a/A/g case is very likely just a naive example and is probably not the actual script you want to gather in an input for, you might might find it worth your while to familiarize yourself with y/// . If you often find yourself g lobally substituting a single character for another, then y could be very useful for you. It is a transformation as opposed to a substitution and is far quicker as it does not imply a regexp. This latter point can also make it useful when attempting to preserve and repeat empty // addresses because it does not affect them but can be affected by them. In any case, y/a/A/ is a more simple means of accomplishing the same - and swaps are possible as well like: y/aA/Aa/ which would interchange all upper/lowercase as on a line for each other. You should also note that the behavior you describe is really not what is supposed to happen anyway. From GNU's info sed in the COMMONLY REPORTED BUGS section: N command on the last line Most versions of sed exit without printing anything when the N command is issued on the last line of a file. GNU sed prints pattern space before exiting unless of course the -n command switch has been specified. This choice is by design. For example, the behavior of sed N foo bar would depend on whether foo has an even or an odd number of lines. Or, when writing a script to read the next few lines following a pattern match, traditional implementations of sed would force you to write something like /foo/{ $!N; $!N; $!N; $!N; $!N; $!N; $!N; $!N; $!N } instead of just /foo/{ N;N;N;N;N;N;N;N;N; } . In any case, the simplest workaround is to use $d;N in scripts that rely on the traditional behavior, or to set the POSIXLY_CORRECT variable to a non-empty value. 
The POSIXLY_CORRECT environment variable is mentioned because POSIX specifies that if sed encounters EOF when attempting an N it should quit without output, but the GNU version intentionally breaks with the standard in this case. Note also that even as the behavior is justified above the assumption is that the error case is one of stream-editing - not slurping a whole file into memory. The standard defines N 's behavior thus: N Append the next line of input, less its terminating \n ewline, to the pattern space, using an embedded \n ewline to separate the appended material from the original material. Note that the current line number changes. If no next line of input is available, the N command verb shall branch to the end of the script and quit without starting a new cycle or copying the pattern space to standard output. On that note, there are some other GNU-isms demonstrated in the question - particularly the use of the : label, b ranch, and { function-context brackets } . As a rule of thumb any sed command which accepts an arbitrary parameter is understood to delimit at a \n ewline in the script. So the commands... :arbitrary_label_name; ...b to_arbitrary_label_name; ...//{ do arbitrary list of commands } ... ...are all very likely to perform erratically depending on the sed implementation that reads them. Portably they should be written: ...;:arbitrary_label_name...;b to_arbitrary_label_name//{ do arbitrary list of commands} The same holds true for r , w , t , a , i , and c (and possibly a few more that I'm forgetting at the moment) . In almost every case they might also be written: sed -e :arbitrary_label_name -e b\ to_arbitary_label_name -e \ "//{ do arbitrary list of commands" -e \} ...where the new -e xecution statement stands in for the \n ewline delimiter. So where the GNU info text suggests a traditional sed implementation would force you to do : /foo/{ $!N; $!N; $!N; $!N; $!N; $!N; $!N; $!N; $!N } ...it should rather be... /foo/{ $!N; $!N; $!N; $!N; $!N; $!N; $!N; $!N; $!N} ...of course, that isn't true either. Writing the script in that way is a little silly. There are much more simple means of doing the same, like: printf %s\\n foo . . . . . . |sed -ne 'H;/foo/h;x;//s/\n/&/3p;tnd //!g;x;$!d;:nd' -e 'l;$a\' \ -e 'this is the last line' ...which prints: foo...foo\n.\n.\n.$.$this is the last line ...because the t est command - like most sed commands - depends on the line cycle to refresh its return register and here the line cycle is permitted to do most of the work. That's another tradeoff you make when you slurp a file - the line cycle doesn't refresh ever again, and so many tests will behave abnormally. The above command doesn't risk over-reaching input because it just does some simple tests to verify what it reads as it reads it. With H old all lines are appended to the hold space, but if a line matches /foo/ it overwrites h old space. The buffers are next e x changed, and a conditional s/// ubstitution is attempted if the contents of the buffer match the // last pattern addressed. In other words //s/\n/&/3p attempts to replace the third newline in hold space with itself and print the results if hold space currently matches /foo/ . If that t ests successful the script branches to the n ot d elete label - which does a l ook and wraps up the script. 
In the case that both /foo/ and a third newline cannot be matched together in hold space though, then //!g will overwrite the buffer if /foo/ is not matched, or, if it is matched, it will overwrite the buffer if a \n ewline is not matched (thereby replacing /foo/ with itself) . This little subtle test keeps the buffer from filling up unnecessarily for long stretches of no /foo/ and ensures the process stays snappy because the input does not pile on. Following on in a no /foo/ or //s/\n/&/3p fail case the buffers are again swapped and every line but the last is there deleted. That last - the last line $!d - is a simple demonstration of how a top-down sed script can be made to handle multiple cases easily. When your general method is to prune away unwanted cases starting with the most general and working toward the most specific then edge cases can be more easily handled because they are simply allowed to fall through to the end of the script with your other wanted data and when it all wraps you're left with only the data you want. Having to fetch such edge cases out of a closed loop can be far more difficult to do, though. And so here's the last thing I have to say: if you must really pull in an entire file, then you can stand to do a little less work by relying on the line cycle to do it for you. Typically you would use N ext and n ext for lookahead - because they advance ahead of the line cycle. Rather than redundantly implementing a closed loop within a loop - as the sed line cycle is just an simple read loop anyway - if your purpose is only to gather input indiscriminately, then it is probably easier to do: sed 'H;1h;$!d;x;...' ...which will gather the entire file or go bust trying. a side note about N and last line behavior... while i do not have the tools available to me to test, consider that N when reading and in-place editing behaves differently if the file edited is the script file for next readthrough.
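Applied to the substitution from the original question, the H;1h;$!d;x idiom handles one-line and multi-line input alike (a small illustration):
printf 'abc\n'      | sed 'H;1h;$!d;x;s/a/xxx/g'   # prints: xxxbc
printf 'abc\ncat\n' | sed 'H;1h;$!d;x;s/a/xxx/g'   # prints: xxxbc and cxxxt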
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101195/" ] }
182,180
I created a new user (testuser) using the useradd command on an Ubuntu server Virtual machine. I would like to create a home directory for the user and also give them root provileges. However, when I login as the new user, it complains that there is no home directory. What am I doing wrong?
Finally I found how to do this myself: useradd -m -d /home/testuser/ -s /bin/bash -G sudo testuser -m creates the home directory if it does not exist. -d overrides the default home directory location. -s sets the login shell for the user. -G expects a comma-separated list of groups that the user should belong to. See man useradd for details.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61108/" ] }
182,185
I ran into an issue trying to dynamically set some variables based on output of a program, in a fish function. I narrowed my issues down to a MWE: function example eval (echo 'set -x FOO 1;')end calling: >example>echo $FOO results in no output -- ie the FOO environment variable has not been set.How should I have done this?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18637/" ] }
182,190
I'm trying to install php5-auth-pam on Ubuntu 14.04, but package doesn't exist after Ubuntu 12.04. I've tried by downloading package here: http://packages.ubuntu.com/precise/php5-auth-pam I've done things like: dpkg -I php*.debapt-get install -f I can't match dependencies, and I don't know how to install them too.
Finally I found how to do this myself: useradd -m -d /home/testuser/ -s /bin/bash -G sudo testuser -m creates the home directory if it does not exist. -d overrides the default home directory location. -s sets the login shell for the user. -G expects a comma-separated list of groups that the user should belong to. See man useradd for details.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101380/" ] }
182,199
Inside a shell script ( test.sh ) I have inotifywait monitoring some directory - "somedir" - recursively:

#!/bin/sh
inotifywait -r -m -e close_write "somedir" | while read f; do echo "$f hi"; done

When I execute this in a terminal I get the following message:

Setting up watches. Beware: since -r was given, this may take a while!
Watches established.

What I need is to touch all files under "somedir" AFTER the watches were established. For that I use: find "somedir" -type f -exec touch {} The reason is that when starting inotifywait after a crash, all files that arrived in the meantime would never be picked up. So the problem and question is: how or when should I execute the find + touch ? So far I have tried to let it sleep a few seconds after I call test.sh , but that doesn't work in the long run as the number of subdirs in "somedir" grows. I have tried to check if the process is running and sleep until it appears, but it seems the process appears before all the watches are established. I tried to change test.sh :

#!/bin/sh
inotifywait -r -m -e close_write "somedir" && find "somedir" -type f -exec touch {} | while read f; do echo "$f hi"; done

But no files are touched at all. So I would really need some help... Additional info: test.sh is running in the background with nohup test.sh & Any ideas? Thanks FYI: Based on advice from @xae, I use it like this:

nohup test.sh > /my.log 2>&1 &
while :; do (cat /my.log | grep "Watches established" > /dev/null) && break; done
find "somedir" -type f -exec touch {} \+
When inotifywait outputs the string " Watches established. " it is safe to make changes to the watched inodes, so you should wait for that string to appear on standard error before touching the files. As an example, this code should do that:

inotifywait -r -m -e close_write "somedir" \
2> >(while :;do read f; [ "$f" == "Watches established." ] && break;done;\
find "somedir" -type f -exec touch {} ";")\
| while read f; do echo "$f hi";done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101382/" ] }
182,212
Hello I want to understand the role of the chmod g+s command in Unix. I also would like to know what it does in this particular context: cd /home/canard;touch un;chgrp canard .;chmod g+s .;touch deux ; I understand all the commands roles except for chmod g+s and I want to know the differences between the files un and deux resulting from this series of commands.
chmod g+s .; This command sets the "set group ID" (setgid) mode bit on the current directory, written as . . This means that all new files and subdirectories created within the current directory inherit the group ID of the directory, rather than the primary group ID of the user who created the file. This will also be passed on to new subdirectories created in the current directory. g+s affects the files' group ID but does not affect the owner ID. Note that this applies only to newly-created files. Files that are moved ( mv ) into the directory are unaffected by the setgid setting. Files that are copied with cp -p are also unaffected. Example touch un;chgrp canard .;chmod g+s .;touch deux ; In this case, deux will belong to group canard but un will belong to the group of the user creating it, whatever that is. Minor Note on the Use of Semicolons in Shell Commands Unlike C or Perl, a shell command only needs to be followed by a semicolon if there is another shell command following it on the same command line. Thus, consider the following command line: chgrp canard .; chmod g+s .; The final semicolon is superfluous and can be removed: chgrp canard .; chmod g+s . Further, if we were to place the two commands on separate lines, then the remaining semicolon is unneeded: chgrp canard .chmod g+s . Documentation For more information, see man chmod . Also, wikipedia has tables summarizing the chmod command options.
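A quick way to see the effect from a shell (this sketch assumes a group named canard already exists and that you are a member of it):

mkdir shared
chgrp canard shared
chmod g+s shared
touch shared/un
ls -l shared/un     # the file's group is canard, inherited from the directory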
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/182212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101397/" ] }
182,220
I have a server and I want to setup a VPN on it to route all traffic. Of course I don't want to block myself out when establishing the OpenVPN connection (already did that!) so I want port 22 to be unaffected and be reachable as usual. Is this possible? And if so, how can I set this up?
You need to add routing to your server so ssh packets get routed via the server's public IP, not the VPN. Failing to do that means the ssh return packet gets routed via OpenVPN. This is why you get locked out of your server after you've initiated an OpenVPN client session. Let's assume your server's: Public IP is a.b.c.d, Public IP subnet is a.b.c.0/24, Default gateway is x.x.x.1, eth0 is the device to the gateway. iproute2 is your friend here. Do the following:

ip rule add table 128 from a.b.c.d
ip route add table 128 to a.b.c.0/24 dev eth0
ip route add table 128 default via x.x.x.1

Do route -n to confirm the new routing table shows up. The above commands won't persist if you reboot the server; you'll need to add them to your network interface config file. Then run your openvpn client config: openvpn --config youropenvpn-configfile.ovpn & Added bonus Also, should you wish to restrict traffic to your public IP to ssh and only ssh, then you'll need to add iptables filtering as follows:

iptables -A INPUT -d a.b.c.d -p tcp --dport <*ssh port number*> -j ACCEPT
iptables -A INPUT -d a.b.c.d -j DROP

ps: I recall first learning about this in the Linode forum - google it and you should be able to find a post on this.
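As a hedged sketch of "add them to your network interface config file" on a Debian-style system (the file is /etc/network/interfaces there; the addresses and table number are the placeholders from above, so adjust to your setup):

iface eth0 inet static
    ...
    post-up ip rule add table 128 from a.b.c.d
    post-up ip route add table 128 to a.b.c.0/24 dev eth0
    post-up ip route add table 128 default via x.x.x.1

On other distributions the equivalent hooks live elsewhere (e.g. ifcfg scripts or a small boot-time unit).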
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101403/" ] }
182,232
I don't understand how the data flows in the pipeline and hope someone could clarify what is going on there. I thought a pipeline of commands processes files (text, arrays of strings) in a line-by-line manner. (If each command itself works line by line.) Each line of text passes through the pipeline; commands don't wait for the previous one to finish processing the whole input. But it seems it is not so. Here is a test example. There are some lines of text. I uppercase them and repeat each line twice. I do so with cat text | tr '[:lower:]' '[:upper:]' | sed 'p' . To follow the process we can run it "interactively" -- skip the input filename in cat . Each part of the pipeline runs line by line:

$ cat | tr '[:lower:]' '[:upper:]'
alkjsd
ALKJSD
sdkj
SDKJ

$ cat | sed 'p'
line1
line1
line1
line 2
line 2
line 2

But the complete pipeline does wait for me to finish the input with EOF and only then prints the result:

$ cat | tr '[:lower:]' '[:upper:]' | sed 'p'
I am writing...
keep writing...
now ctrl-D
I AM WRITING...
I AM WRITING...
KEEP WRITING...
KEEP WRITING...
NOW CTRL-D
NOW CTRL-D

Is it supposed to be so? Why isn't it line-by-line?
There's a general buffering rule followed by the C standard I/O library ( stdio ) that most unix programs use. If output is going to a terminal, it is flushed at the end of each line; otherwise it is flushed only when the buffer (8K on my Linux/amd64 system; could be different on yours) is full. If all your utilities were following the general rule, you would see output delayed in all of your examples ( cat|sed , cat|tr , and cat|tr|sed ). But there's an exception: GNU cat never buffers its output. It either doesn't use stdio or it changes the default stdio buffering policy. I can be fairly sure you're using GNU cat and not some other unix cat because the others wouldn't behave this way. Traditional unix cat has a -u option to request unbuffered output. GNU cat ignores the -u option because its output is always unbuffered. So whenever you have a pipe with a cat on the left, in the GNU system, the passage of data through the pipe will not be delayed. The cat isn't even going line by line - your terminal is doing that. While you're typing input for cat, your terminal is in "canonical" mode - line-based, with editing keys like backspace and ctrl-U offering you the chance to edit the line you have typed before sending it with Enter . In the cat|tr|sed example, tr is still receiving data from cat as soon as you press Enter , but tr is following the stdio default policy: its output is going to a pipe, so it doesn't flush after each line. It writes to the second pipe when the buffer is full, or when an EOF is received, whichever comes first. sed is also following the stdio default policy, but its output is going to a terminal so it will write each line as soon as it has finished with it. This has an effect on how much you must type before something shows up on the other end of the pipeline - if sed was block-buffering its output, you'd have to type twice as much (to fill tr 's output buffer and sed 's output buffer). GNU sed has -u option so if you reversed the order and used cat|sed -u|tr you would see the output appear instantly again. (The sed -u option might be available elsewhere but I don't think it's an ancient unix tradition like cat -u ) As far as I can tell there's no equivalent option for tr . There is a utility called stdbuf which lets you alter the buffering mode of any command that uses the stdio defaults. It's a bit fragile since it uses LD_PRELOAD to accomplish something the C library wasn't designed to support, but in this case it seems to work: cat | stdbuf -o 0 tr '[:lower:]' '[:upper:]' | sed 'p'
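If fully unbuffered output is overkill, line buffering is usually enough for this kind of interactive test; stdbuf can request that too (just a variation on the command above, not something from the original thread):

cat | stdbuf -oL tr '[:lower:]' '[:upper:]' | sed 'p'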
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38371/" ] }
182,234
I'm running CENTOS 6.6 on a VPS and am trying to install the ZMQ PHP extension and tried installing using the command shown in the instructions: sudo pecl install zmq-beta However, it fails, showing this as the output: root@host [/zmq]# sudo pecl install zmq-betadownloading zmq-1.1.2.tgz ...Starting to download zmq-1.1.2.tgz (39,573 bytes)..........done: 39,573 bytescould not extract the package.xml file from "/root/tmp/pear/cache/zmq-1.1.2.tgz"Download of "pecl/zmq" succeeded, but it is not a valid package archiveError: cannot download "pecl/zmq"Download failedinstall failed I also tried: sudo pecl install -Z zmq-beta And: sudo pecl install --nocompress zmq-beta But I get the same error. Why is this error occuring?
There's a general buffering rule followed by the C standard I/O library ( stdio ) that most unix programs use. If output is going to a terminal, it is flushed at the end of each line; otherwise it is flushed only when the buffer (8K on my Linux/amd64 system; could be different on yours) is full. If all your utilities were following the general rule, you would see output delayed in all of your examples ( cat|sed , cat|tr , and cat|tr|sed ). But there's an exception: GNU cat never buffers its output. It either doesn't use stdio or it changes the default stdio buffering policy. I can be fairly sure you're using GNU cat and not some other unix cat because the others wouldn't behave this way. Traditional unix cat has a -u option to request unbuffered output. GNU cat ignores the -u option because its output is always unbuffered. So whenever you have a pipe with a cat on the left, in the GNU system, the passage of data through the pipe will not be delayed. The cat isn't even going line by line - your terminal is doing that. While you're typing input for cat, your terminal is in "canonical" mode - line-based, with editing keys like backspace and ctrl-U offering you the chance to edit the line you have typed before sending it with Enter . In the cat|tr|sed example, tr is still receiving data from cat as soon as you press Enter , but tr is following the stdio default policy: its output is going to a pipe, so it doesn't flush after each line. It writes to the second pipe when the buffer is full, or when an EOF is received, whichever comes first. sed is also following the stdio default policy, but its output is going to a terminal so it will write each line as soon as it has finished with it. This has an effect on how much you must type before something shows up on the other end of the pipeline - if sed was block-buffering its output, you'd have to type twice as much (to fill tr 's output buffer and sed 's output buffer). GNU sed has -u option so if you reversed the order and used cat|sed -u|tr you would see the output appear instantly again. (The sed -u option might be available elsewhere but I don't think it's an ancient unix tradition like cat -u ) As far as I can tell there's no equivalent option for tr . There is a utility called stdbuf which lets you alter the buffering mode of any command that uses the stdio defaults. It's a bit fragile since it uses LD_PRELOAD to accomplish something the C library wasn't designed to support, but in this case it seems to work: cat | stdbuf -o 0 tr '[:lower:]' '[:upper:]' | sed 'p'
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43886/" ] }
182,279
I am trying to start the X server (I have all the packages installed). The problem I am having is that the server is offsite, has no screen and never has had a screen connected to it and as a result X Server has never run which means there is no xorg.config which from other posts I see can be used to start X server with out any screens. I have tried running X -configuration (can't remember the command but I used the right one) and it output the same message that I get when I do startx which is Fatal Error: no screens I need to find a way to start X server without connecting a screen to it.
You're looking for running X headless. It's briefly described on the ArchWiki , which shows a way to do this. There's another alternative for running a headless X11-compatible server: Xvfb (X virtual framebuffer). It's a display server that performs all graphical operations in memory without showing any screen output. startx is just a front-end for xinit , which sets up the X.Org server and clients (window manager, desktop environment, ...). Among other things, it reads the client-side configuration from ~/.xinitrc . To run a common user session inside the virtual framebuffer: start Xvfb and set up the environment (export the proper environment variables), then execute the ~/.xinitrc script that defines the user's X client setup. You may alternatively execute custom commands that set up the WM, DM, etc.
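A minimal sketch of such a headless session (the display number, resolution and client program are only examples):

Xvfb :1 -screen 0 1280x1024x24 &   # virtual display; nothing is drawn on any physical screen
export DISPLAY=:1
xterm &                            # or start your window manager / run ~/.xinitrc here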
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101449/" ] }
182,282
From this guide to Bash completion we learn that one must perform . /etc/bash_completion.d/foobar ( note the space after . ) in order for Bash completion to work.

$ /etc/bash_completion.d/ssh
bash: /etc/bash_completion.d/ssh: Permission denied
$ . /etc/bash_completion.d/ssh
$ ls -l /etc/bash_completion.d | grep ssh
-rw-r--r-- 1 root root 297 Jan 28 18:04 ssh

Is . a shortcut for the source command? If not, then what is it? It is impossible to google for, man source returns nothing, and apropos source and info source give so much irrelevant information that I cannot tell if what I'm looking for is in there. How might I even begin to RTFM to find the answer to this question myself?
Yes, . is identical to the source builtin. As always the first reference is the man bash manual page, where you can confirm your initial guess by searching for /source : ...shell function or script executed with . or source... is the first hit, but a bit further on you find the section Shell Builtin Commands , which has this entry: . filename [arguments] source filename [arguments] Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename. If filename does not contain a slash, file names in PATH are used to find the directory containing filename. The file searched for in PATH need not be executable. When bash is not in posix mode, the current directory is searched if no file is found in PATH. If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched. If any arguments are supplied, they become the positional parameters when filename is executed. Otherwise the positional parameters are unchanged. The return status is the status of the last command exited within the script (0 if no commands are executed), and false if filename is not found or cannot be read. The fact that it is a bash builtin is the reason source doesn't come with its own man page, which is why apropos failed.
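A tiny illustration of the difference (the file name and variable are made up):

echo 'FOO=bar' > vars.sh
bash vars.sh; echo "$FOO"   # prints an empty line: the assignment happened in a child process
. ./vars.sh;  echo "$FOO"   # prints "bar": the file was read in the current shell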
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
182,286
PXE Server - CentOS 6.5 64bit Objective - Client should be presented with OS choices in network boot menu - Oracle Linux 6.5, RHEL 7, Ubuntu 14. Upon selection it should proceed with the selected OS installation.
You can boot grub over the network through TFTP. grub can then present a menu of choices for the next thing to boot in the manner that it usually does. Those choices can be various OS installers. grub can load the chosen OS installer also through TFTP. I know that the Debian (and Ubuntu) installer can be booted as a single self-contained Linux kernel + initramfs (initrd) combination. That's the easiest because that can be booted by grub in a straightforward fashion (a menuentry with linux and initrd directives) and you don't need to arrange for the installer to gain access to anything else. Probably those other distribution's installers are similar. There are some notes here on setting up grub to boot over TFTP with EFI. More documentation can be easily found by searching. Basically it comes down to configuring the DHCP server and putting the right files on the TFTP server. Locations for the DHCP server configuration file and TFTP server root directory will vary from one OS to another. The DHCP server needs to supply a boot file name to the client as a DHCP option. This is standard for any net boot. The boot filename points to a filename located on the TFTP server that contains grub. For the grub image, you can use either a bundled standalone image (instructions for making one at the previously referenced page), bootx64.efi , or just the grub core core.efi . In the latter case grub will need to load additional modules as well as its config file separately from the TFTP server once it is running. grub.cfg should be a normal grub configuration file in which you specify the pathnames to the kernel and initrd as (tftp)/path/to/the/object . Of course you will give the kernels and initrds of different OS installers different names on the TFTP server.
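As a rough, untested sketch of such a menu entry (the paths on the TFTP server are placeholders; depending on the grub build you may need linuxefi / initrdefi instead of linux / initrd ):

menuentry "Ubuntu 14.04 installer" {
    linux  (tftp)/installers/ubuntu14/linux
    initrd (tftp)/installers/ubuntu14/initrd.gz
}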
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56586/" ] }
182,308
How do I install pip in Debian Wheezy? I've found a lot of advice saying apt-get install python-pip but the result is "Unable to locate package python-pip" Is pip available in Debian Wheezy? I'm using 7.8
Although apt-get update might seem to help you, I recommend strongly against using pip installed from the Wheezy repository with apt-get install python-pip : that pip is at version 1.1 while the current version is > 9.0; version 1.1 of pip has known security problems when used to download packages; version 1.1 doesn't restrict downloads/installs to stable versions of packages; it lacks a lot of new functionality (like support for the wheel format) and misses bug fixes (see the changelog ); and python-pip installed via apt-get pulls in some perl modules for whatever reason. Unless you are running python2.4 or so that is still supported by pip 1.1 (and which you should not use anyway) you should follow the installation instructions on the pip documentation page to securely download pip (don't use the insecure pip install --upgrade pip with the 1.1 version, and certainly don't install any packages with sudo pip ... with that version). If you have already made the mistake of installing pip version 1.1, immediately do: sudo apt-get remove python-pip After that:

wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py

(for any of the python versions you have installed). Python2 versions starting with 2.7.9 and Python3 versions starting with 3.4 have pip included by default.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/182308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60701/" ] }
182,371
After a crash, I've got an Ext4 filesystem (on an LVM LV) that gives the following error when running fsck.ext4 -nf : e2fsck 1.42.12 (29-Aug-2014)Corruption found in superblock. (blocks_count = 0).The superblock could not be read or does not describe a valid ext2/ext3/ext4filesystem. If the device is valid and it really contains an ext2/ext3/ext4filesystem (and not swap or ufs or something else), then the superblockis corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> or e2fsck -b 32768 <device> I've run dumpe2fs to find the other copies of the superblock, but no matter which of them I add after fsck.ext4 s -b option, I get the exact same output. Moreover, dumpe2fs sees the correct block count ( Block count: 4294967296 , a 16TB filesystem). Here's the (truncated) output: Filesystem volume name: <none>Last mounted on: /storageFilesystem UUID: fef00ffc-5341-4158-9279-88cad6cc211fFilesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isizeFilesystem flags: signed_directory_hashDefault mount options: user_xattr aclFilesystem state: cleanErrors behavior: ContinueFilesystem OS type: LinuxInode count: 268435456Block count: 4294967296Reserved block count: 42949672Free blocks: 534754162Free inodes: 268391425First block: 0Block size: 4096Fragment size: 4096Group descriptor size: 64Blocks per group: 32768Fragments per group: 32768Inodes per group: 2048Inode blocks per group: 128Flex block group size: 16Filesystem created: Wed Jan 16 11:07:07 2013Last mount time: Sun Feb 1 21:21:31 2015Last write time: Sun Feb 1 21:21:45 2015Mount count: 18Maximum mount count: -1Last checked: Wed Jan 16 11:07:07 2013Check interval: 0 (<none>)Lifetime writes: 14 TBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Journal inode: 8Default directory hash: half_md4Directory Hash Seed: c7ec9ee0-002b-431d-a37c-33db922c6057Journal backup: inode blocksJournal features: journal_incompat_revoke journal_64bitJournal size: 128MJournal length: 32768Journal sequence: 0x0000e3feJournal start: 0Group 0: (Blocks 0-32767) [ITABLE_ZEROED] Checksum 0x4623, unused inodes 2034 Primary superblock at 0, Group descriptors at 1-2048 Block bitmap at 2049 (+2049), Inode bitmap at 2065 (+2065) Inode table at 2081-2208 (+2081) 28637 free blocks, 2036 free inodes, 1 directories, 2034 unused inodes Free blocks: 4130-4133, 4135-32767 Free inodes: 11, 14-2048Group 1: (Blocks 32768-65535) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0xfd95, unused inodes 2048 Backup superblock at 32768, Group descriptors at 32769-34816 Block bitmap at 2050 (bg #0 + 2050), Inode bitmap at 2066 (bg #0 + 2066) Inode table at 2209-2336 (bg #0 + 2209) 1522 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 34817, 35343-36863 Free inodes: 2049-4096Group 2: (Blocks 65536-98303) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0x95d0, unused inodes 2048 Block bitmap at 2051 (bg #0 + 2051), Inode bitmap at 2067 (bg #0 + 2067) Inode table at 2337-2464 (bg #0 + 2337) 115 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 85901-86015 Free inodes: 4097-6144Group 3: (Blocks 98304-131071) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0x6e40, unused inodes 2048 Backup superblock at 98304, Group descriptors at 98305-100352 Block bitmap at 2052 (bg #0 + 
2052), Inode bitmap at 2068 (bg #0 + 2068) Inode table at 2465-2592 (bg #0 + 2465) 1505 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 100895-102399 Free inodes: 6145-8192Group 4: (Blocks 131072-163839) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0x4788, unused inodes 2048 Block bitmap at 2053 (bg #0 + 2053), Inode bitmap at 2069 (bg #0 + 2069) Inode table at 2593-2720 (bg #0 + 2593) 1808 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 141552-143359 Free inodes: 8193-10240Group 5: (Blocks 163840-196607) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0x0d39, unused inodes 2048 Backup superblock at 163840, Group descriptors at 163841-165888 Block bitmap at 2054 (bg #0 + 2054), Inode bitmap at 2070 (bg #0 + 2070) Inode table at 2721-2848 (bg #0 + 2721) 2023 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 165913-167935 Free inodes: 10241-12288Group 6: (Blocks 196608-229375) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0xc119, unused inodes 2048 Block bitmap at 2055 (bg #0 + 2055), Inode bitmap at 2071 (bg #0 + 2071) Inode table at 2849-2976 (bg #0 + 2849) 1755 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 198541-198655, 223640-225279 Free inodes: 12289-14336Group 7: (Blocks 229376-262143) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0xf858, unused inodes 2048 Backup superblock at 229376, Group descriptors at 229377-231424 Block bitmap at 2056 (bg #0 + 2056), Inode bitmap at 2072 (bg #0 + 2072) Inode table at 2977-3104 (bg #0 + 2977) 1796 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 231676-233471 Free inodes: 14337-16384Group 8: (Blocks 262144-294911) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0x6a75, unused inodes 2048 Block bitmap at 2057 (bg #0 + 2057), Inode bitmap at 2073 (bg #0 + 2073) Inode table at 3105-3232 (bg #0 + 3105) 1700 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 278876-280575 Free inodes: 16385-18432Group 9: (Blocks 294912-327679) [INODE_UNINIT, ITABLE_ZEROED] Checksum 0x3840, unused inodes 2048 Backup superblock at 294912, Group descriptors at 294913-296960 Block bitmap at 2058 (bg #0 + 2058), Inode bitmap at 2074 (bg #0 + 2074) Inode table at 3233-3360 (bg #0 + 3233) 1986 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes Free blocks: 297022-299007 Free inodes: 18433-20480... truncated ... The strange thing is that I can mount the filesystem without any (apparent) problems (although I haven't yet dared to write to it). Any suggestions/pointers/ideas for a solution that allows me to finish the fsck?
Your device has exactly 4294967296 blocks, which is 2 32 , so this smells like a variable-size problem... If you’re running a 32-bit e2fsck, that could explain the error message; the error you’re seeing comes from e2fsck/super.c : check_super_value(ctx, "blocks_count", ext2fs_blocks_count(sb), MIN_CHECK, 1, 0); where check_super_value() is defined as static void check_super_value(e2fsck_t ctx, const char *descr, unsigned long value, int flags, unsigned long min_val, unsigned long max_val) So on a 32-bit system where unsigned long is four bytes, your blocks_count will end up being 0 and fail the minimum-value check, without it indicating an actual problem with the filesystem. The reason you’d only see this after a crash is that fsck is only run after a crash or if the filesystem hasn’t been checked for too long. The answer to your question would then be, if you are running a 32-bit e2fsck , to try a 64-bit version...
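A quick way to check whether that's what is happening (a diagnostic suggestion, not part of the original answer):

file "$(which e2fsck)"   # look for "ELF 32-bit" vs "ELF 64-bit" in the output
uname -m                 # i686/i386 would indicate a 32-bit userland

If e2fsck turns out to be 32-bit, running the check from a 64-bit environment (a 64-bit rescue system, for instance) should get past the bogus blocks_count error.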
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9809/" ] }
182,382
I want to generate a random password, and am doing it like so: </dev/urandom tr -dc [:print:] | head -c 64 On my laptop, which runs Ubuntu, this produces only printable characters, as intended. But when I ssh into my school's server, which runs Red Hat Enterprise Linux, and run it there, I get outputs like 3!ri�b�GrӴ��1�H�<�oM����&�nMC[�Pb�|L%MP�����9��fL2q���IFmsd|l�K , which won't do at all. What might be going wrong here?
It's a problem with your locale and tr . Currently, GNU tr fully supports only single-byte characters. So in locales using multibyte encodings, the output can be weird:

$ </dev/urandom LC_ALL=vi_VN.tcvn tr -dc '[:print:]' | head -c 64
`�pv���Z����c�ox"�O���%�YR��F�>��췔��ovȪ������^,<H ���>

The shell will print multi-byte characters correctly, but GNU tr will remove bytes it thinks are non-printable. If you want it to be stable, you must set the locale:

$ </dev/urandom LC_ALL=C tr -dc '[:print:]' | head -c 64
RSmuiFH+537z+iY4ySz`{Pv6mJg::RB;/-2^{QnKkImpGuMSq92D(6N8QF?Y9Co@
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19429/" ] }
182,387
I have a 4TB Disk formatted as NTFS and some data has been written into it. I then tried to encrypt it and mapped it to /dev/mapper/volume_sde1 like this: $ lsblk -io NAME,TYPE,SIZE,FSTYPE,UUID,MOUNTPOINTsde disk 3.7T`-sde1 part 3.7T crypto_LUKS 485c6049-9e6c-4f4c-9b4d-9efe54a9497a `-volume_sde1 (dm-0) crypt 3.7T Then I tried to mount it to some point: $ mount /dev/mapper/volume_sde /up2s3 The mount command returns: mount: you must specify the filesystem type Next I add crypto_LUKS as the -t crypto_LUKS , mount again returns: mount: unknown filesystem type 'crypto_LUKS' How do I mount the encrypted disk?
LUKS creates a new encrypted block device on top of an existing block device. It is not a filesystem - so you can't mount it directly after opening it. And note - all existing data are lost; you can't encrypt an existing NTFS partition in place. If you wish, you can encrypt the device (sda1 in this example), then open it with cryptsetup luksOpen /dev/sda1 crypted_sda1 and then mount /dev/mapper/crypted_sda1 /up2s3 But note that: NTFS is useless in this situation - you can't mount the encrypted device under Windows; and you can't encrypt an existing device without losing its data - so make a backup of the data from your NTFS partition, encrypt the device (cryptsetup luksFormat ...), open it, create a filesystem (mkfs.ext4 /dev/mapper/crypted_sda1), and finally mount it and restore from the backup. When you have finished writing data to the partition, umount it and close it (cryptsetup luksClose).
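Putting the sequence together as a rough sketch, using the device and mapping names from the question (this destroys whatever is currently on the partition, so back it up first):

cryptsetup luksFormat /dev/sde1               # create the LUKS container (wipes existing data)
cryptsetup luksOpen /dev/sde1 volume_sde1     # prompts for the passphrase, creates /dev/mapper/volume_sde1
mkfs.ext4 /dev/mapper/volume_sde1             # create a filesystem inside the container
mount /dev/mapper/volume_sde1 /up2s3
# ... use the filesystem ...
umount /up2s3
cryptsetup luksClose volume_sde1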
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182387", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13331/" ] }
182,400
I want to show the IP addresses like below:

lo : 127.0.0.1
eth0 : 192.168.5.123
eth1 : 192.172.0.212
wlan0 : 10.1.0.124

I'm able to print all IP addresses with ifconfig | awk '/inet addr/{print substr($2,6)}' . But it only prints the IPs. Every system has its own interface names and addresses. So my script has to show the interfaces together with their IP addresses.
The following will do what you want:

$ ip addr | awk '/^[0-9]+:/ { sub(/:/,"",$2); iface=$2 } /^[[:space:]]*inet / { split($2, a, "/"); print iface" : "a[1] }'
lo : 127.0.0.1
br0 : 10.1.10.12
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48634/" ] }
182,421
We have got an application running on our linux server. From that application, when we try and access localhost(127.0.0.1):localport, it should be forwarded to an external IP. Users will only try and access the localhost on a certain port which will be automatically forwarded. I read up on iptables nat table, but PREROUTING and POSTROUTING will not be applicable if am right since I am accessing a port on localhost from localhost itself which doesn't touch the network interface at all. Wondering OUTPUT table might be useful but when I tried some combinations, it didn't work. Am I using the right thing or is it not possible at all to do it? Can someone point me in the right direction?
I have figured out how to do this myself. Two rules and a flag should be set to achieve this. The example used here is for telnet localhost XXXX , which should forward packets to Ext.er.nal.IP:YYYY . sysctl -w net.ipv4.conf.all.route_localnet=1 This flag unfortunately exists only on fairly recent Linux kernels and is not available on older kernels (there isn't any alternate flag in the old kernels either). I'm not quite sure from which exact kernel the flag is available; I believe it is available on kernel versions 3.XX. This flag makes the kernel consider the loopback addresses as a proper source or destination address. Source for ip sysctl command. iptables -t nat -A OUTPUT -p tcp --dport XXXX -j DNAT --to-destination Ext.er.nal.IP:YYYY The above command will rewrite packets destined for localhost:XXXX so their destination IP becomes Ext.er.nal.IP:YYYY . iptables -t nat -A POSTROUTING -j MASQUERADE This command will rewrite the source IP to the public IP of your machine. You could make your rules a bit more strict by adding appropriate source and destination IPs and interfaces using -s , -d , -i and -o . See man iptables . Thanks to John WH Smith and Wurtel; the suggestions were very helpful.
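For example, a somewhat tighter version of those rules might look like this (purely illustrative; the port, address and interface names are placeholders to adapt):

iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport XXXX -j DNAT --to-destination Ext.er.nal.IP:YYYY
iptables -t nat -A POSTROUTING -o eth0 -d Ext.er.nal.IP -p tcp --dport YYYY -j MASQUERADE

The first rule now only redirects connections actually made to 127.0.0.1, and the second only masquerades traffic heading to that external host and port on eth0.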
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101547/" ] }
182,448
It came to our notice that glibc needs to be upgraded from the current version 2.2 to the latest as per release 5/6/7. My question is: do I need to reboot the system? We have more than 7000 servers in our environment, so it would be hard to reboot them all. After updating glibc it is not vulnerable, so why do the applications/servers need to be restarted?
I answered most questions in this thread here about the ghost vulnerability. In short no, rebooting the system isn't 'required' but because so many applications/system utilities use glibc, you will have to make sure you restart every one of them before the patch takes effect. This is why it is 'recommended' that you just restart the environment. My thread shows you how you can identify which applications use glibc that will need to be restarted and how to test for the vulnerability before and after to see if you are still affected. The glibc test might show not vulnerable after patching but you still need to make sure you restart all those applications because they load glibc into memory and those versions of glibc in memory are still vulnerable to the exploit. So you aren't safe yet.
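One common way to spot processes that are still running against the old, now-deleted glibc (a diagnostic sketch along those lines, not quoted from the linked thread; run it as root so every process is visible):

grep -l 'libc-.* (deleted)' /proc/[0-9]*/maps 2>/dev/null
# each match is the maps file of a process still mapped to the pre-update glibc;
# those processes (or the whole machine) need a restart before you are actually safe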
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98965/" ] }
182,462
I am hoping to run some of the commands listed in this article , e.g.: cal -3 which is supposed to show me the last month, this month, and next month. This command does not work in OS X. $ cal -3cal: illegal option -- 3usage: cal [-jy] [[month] year] cal [-j] [-m month] [year] ncal [-Jjpwy] [-s country_code] [[month] year] ncal [-Jeo] [year] Is there an alternative form I could use in OS X? For reference: $ which cal/usr/bin/cal and $ man cal reveals that this is probably the default OS X version: "...BSD General Commands Manual..."
OSX does not have the same version of utilities as Linux. The common denominator you can count on is POSIX , which defines cal but none of its options. There are two common versions of cal on Linux: the one from util-linux (e.g. on Fedora ) and the one from FreeBSD (e.g. on Ubuntu ). Both have the -3 option. OSX is based on FreeBSD, but its version is an older one that doesn't have the -3 option. You can emulate it with a bash/zsh script:

#!/bin/bash
case $# in
  0) month=$(date +%m) year=$(date +%Y);;
  2) month=$1 year=$2;;
  *) echo 1>&2 "Usage: $0 [MONTH YEAR]"; exit 3;;
esac
month=${month#"${month%%[!0]*}"}
if ((month == 1)); then
  previous_month=12 previous_year=$((year-1))
else
  previous_month=$((month-1)) previous_year=$year
fi
if ((month == 12)); then
  next_month=1 next_year=$((year+1))
else
  next_month=$((month+1)) next_year=$year
fi
paste <(cal "$previous_month" "$previous_year") \
      <(cal "$month" "$year") \
      <(cal "$next_month" "$next_year")
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }