source_id | question | response | metadata
---|---|---|---|
226,428 | I am X11 forwarding from an Ubuntu instance hosted in a Amazon EC2 to my localhost. I am able to do the forwarding only when the remote (Amazon EC2) DISPLAY variable is set to localhost:10.0 I understand that 10 is the sequence number but I am wondering how it is decided. I had tried forwarding xclock app and it seems to work only when the sequence number is 10 and nothing else (0,1,2 .. 9 nothing worked). FWIW, my localhost's DISPLAY variable is set to :0 Could somebody enlighten me? Thanks in advance. | From vi you can type :cq to exit without saving and with a non-zero return code. In this case the command will not be repeated. Alternatively, you can usually suspend the editor with ctrl-z which gets you back to the shell without redoing the command. You still have to fg to restart the editor, but the tmp file will no longer be around, so you can safely quit the editor. Or you can kill -9 % this suspended editor. I agree, it could be easier. Of course, you can always edit lines within bash using vi or emacs commands. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130909/"
]
} |
226,438 | The getpid system call returns the process ID of the invoking process. How does the kernel figure out which process is invoking the system call? | The kernel does job scheduling and provides system calls. When a process is running, the kernel schedules its runtime - in particular, it assigns a PID to it - and such information is stored inside the kernel address space, in data structures (e.g. inside a task struct). Thus, when a process calls the getpid() system call, the kernel just has to look in the task structure of the calling (i.e. currently running) process. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77724/"
]
} |
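A quick way to see the "kernel knows who is asking" idea from the shell (a minimal sketch, no special setup assumed): the /proc/self symlink is resolved by the kernel to the PID of whichever process reads it, just as getpid() is answered from the caller's own task structure.

```sh
echo "this shell's PID: $$"
readlink /proc/self      # prints the PID of the readlink process itself
readlink /proc/self      # a new process each time, so a different PID
```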
226,524 | In strace outputs, the paths to the libraries that executables call are in calls to open() . Is this the system call used by executables that are dynamically linked? What about dlopen() ? open() isn't a call I'd have guessed would play a role in the execution of programs. | dlopen isn't a system call, it's a library function in the libdl library . Only system calls show up in strace . On Linux and on many other platforms (especially those that use the ELF format for executables), dlopen is implemented by opening the target library with open() and mapping it into memory with mmap() . mmap() is really the critical part here, it's what incorporates the library into the process' address space, so the CPU can execute its code. But you have to open() the file before you can mmap() it! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89807/"
]
} |
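To watch the open()/mmap() pattern described above, a minimal sketch (the exact syscall names vary by platform and libc version, e.g. openat versus open):

```sh
# Trace how the dynamic loader pulls in shared libraries for a trivial program
strace -e trace=open,openat,mmap /bin/true 2>&1 | grep '\.so'
```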
226,535 | For any OS that uses systemd to manage processes and follows the Filesystem Hierarchy Standard by the Linux Foundation I recently asked where to but a systemd unit file: Where do I put my systemd unit file on Arch Linux? I would like to run a python script every 5 minutes (not to be confused with a systemd unit file script that calls the python script). I read the answers to this question: Run script every 30 min with systemd This is where my question comes in. Where should or could you store scripts that are run by systemd? Is there a reserved place for these, particularly on Arch Linux? For example, logs are placed in /var/log systemd unit files are placed under /etc/systemd/system /etc/systemd/system/writehello.service Here is an example service. [Unit]Description=Run python script that writes hello in file on /media/5TB/hello.txt[Service]Type=oneshotExecStart=# <-- This is what I am looking for[Install]WantedBy=multi-user.target /etc/systemd/system/writehello.timer Here is a corresponding timer. This is all stuff documented. [Unit]Description=test[Timer]Persistent=trueOnUnitActiveSec=10sOnBootSec=10s[Install]WantedBy=timers.target /path/to/writehello.py This is the path I am looking for. #!/usr/bin/env pythonimport osimport datetimenow = datetime.datetime.now()f1 = open('/media/mydrive/hello.txt','a')f1.write('hello %s\n' % (now))f1.close | I also was thinking about this same question and wanted to see other's opinion. My take on it is /usr/local/sbin as sbin is where you put things that should be run by admin. Your analysis is correct the /usr/local is the location dedicated for installing stuff not managed by package manager. But bin is for stuff that should be runnable by regular users. In either case, you should not allow write access to anybody but root to the files in /usr/local . That's the convention as far as I remember (for the whole /usr/). /opt is usually used for packages that are not used by default on the system and user should set some environment variables to access by bin/man/etc. directories of specific package. Read the links I've provided above. See RHEL FSH overview as well the latest FHS documentation . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33386/"
]
} |
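Following that convention, a minimal sketch of installing the script and wiring it up; the paths and unit names mirror the example in the question and are illustrative only:

```sh
# Put the admin-only script under /usr/local/sbin and make it executable
sudo install -o root -g root -m 755 writehello.py /usr/local/sbin/writehello.py

# Point the unit at it, e.g. in /etc/systemd/system/writehello.service:
#   ExecStart=/usr/local/sbin/writehello.py

sudo systemctl daemon-reload
sudo systemctl enable --now writehello.timer
```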
226,545 | Is it possible to check for how long a user has been logged in? Or when the user logged in on a Unix/Linux system? I logged in as another user on my system a while ago and I would like to know how long that user has been logged in. | Using last you can find this information. The following may be useful: last <username> | less It will return something like this: benlavery@Talantinc:bin $>last benlavery | less benlavery ttys005 Mon Aug 31 09:58 still logged inbenlavery ttys005 fe80::105e:6b27:29ff:d967%en0 Mon Aug 31 09:14 - 09:36 (00:22)benlavery ttys005 fe80::105e:6b27:29ff:d967%en0 Mon Aug 31 09:12 - 09:14 (00:01) You can see when the user logged in and when they logged out—or if they are still logged in. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
226,546 | After reinstalling the server I can not mount it: sshfs [email protected]:/var /remote_mountfuse: bad mount point `/remote_mount': Transport endpoint is not connected When I SSH, I get an error: # ssh [email protected] authenticity of host 'example.com (xxx.xxx.xxx.xxx)' can't be established.ECDSA key fingerprint is 57:b6:bd:76:17:80:73:85:4a:14:8a:6f:dc:fa:fe:7c.Are you sure you want to continue connecting (yes/no)? | This error popped up for me after I had been using sshfs on and off for years. A search found this page but all the "setup sshd" answers were not much help as sshfs had been working well until it suddenly didn't and ssh worked just fine to other locations. However, after a bit of frustrating poking and testing I found the solution. The problem started with a sshfs mount failing from a bad hostname in it. As ls -l $mountpoint failed with this error I tried clearing the trouble with fusermount -u $mountpoint , and the mount started to work again! Even a simple ls $mountpoint made the error after the failed sshfs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
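In other words, the stale FUSE mount left behind by the failed sshfs has to be cleared before mounting again. A minimal sketch, using the mount point from the question:

```sh
fusermount -u /remote_mount        # on some systems the tool is fusermount3
sshfs user@example.com:/var /remote_mount
```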
226,563 | From the post Why can rm remove read-only files? I understand that rm just needs write permission on directory to remove the file. But I find it hard to digest the behaviour where we can easily delete a file who owner and group different. I tried the following mtk : my username abc : created a new user $ ls -l file-rw-rw-r-- 1 mtk mtk 0 Aug 31 15:40 file$ sudo chown abc file$ sudo chgrp abc file$ ls -l file-rw-rw-r-- 1 abc abc 0 Aug 31 15:40 file$ rm file$ ls -l file<deleted> I was thinking this shouldn't have been allowed. A user should be able to delete only files under his ownership? Can someone shed light on why this is permitted? and what is the way to avoid this? I can think only restricting the write permission of the parent directory to dis-allow surprised deletes of file. | The reason why this is permitted is related to what removing a file actually does. Conceptually, rm 's job is to remove a name entry from a directory. The fact that the file may then become unreachable if that was the file's only name and that the inode and space occupied by the file can therefore be recovered at that point is almost incidental. The name of the system call that the rm command invokes, which is unlink , is even suggestive of this fact. And, removing a name entry from a directory is fundamentally an operation on that directory , so that directory is the thing that you need to have permission to write. The following scenario may make it feel more comfortable? Suppose there are directories: /home/me # owned and writable only by me/home/you # owned and writable only by you And there is a file which is owned by me and which has two hard links: /home/me/myfile/home/you/myfile Never mind how that hard link /home/you/myfile got there in the first place. Maybe root put it there. The idea of this example is that you should be allowed to remove the hard link /home/you/myfile . After all, it's cluttering up your directory. You should be able to control what does and doesn't exist inside /home/you . And when you do remove /home/you/myfile , notice that you haven't actually deleted the file. You've only removed one link to it. Note that if the sticky bit is set on the directory containing a file (shows up as t in ls ), then you do need to be the owner of the file in order to be allowed to delete it (unless you own the directory). The sticky bit is usually set on /tmp . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/226563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
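The sticky-bit behaviour mentioned at the end can be seen in a small experiment (a sketch; run as an ordinary user, with a second account available to try the deletion):

```sh
# A world-writable directory *with* the sticky bit, like /tmp
mkdir shared && chmod 1777 shared
ls -ld shared            # drwxrwxrwt ... <- the trailing "t" is the sticky bit
touch shared/mine
# Another user can create files in "shared", but can no longer rm "mine",
# even though the directory itself is world-writable.
```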
226,567 | I have a directory that I cannot delete with rmdir . I always get a permission denied error. But when I list the directory (with ls -l ) I get this: drwxrwxrwx 2 user user 4096 Aug 28 09:34 directory stat gives me that: File: `directory/' Size: 4096 Blocks: 16 IO Block: 32768 directoryDevice: 12h/18d Inode: 102368771 Links: 2Access: (0777/drwxrwxrwx) Uid: ( 1000/ user) Gid: ( 1000/ user)Access: 2015-08-31 03:00:20.630000002 +0200Modify: 2015-08-28 09:34:16.772930001 +0200Change: 2015-08-31 12:25:04.920000000 +0200 So how do I delete that directory? | If you are trying to delete a directory foo/bar/ , the permissions of bar aren't the relevant factor. Removing the name bar from directory foo is a modification of foo . So you need write permissions on foo . In your case, check the current directory's permissions with ls -ld . You might find this answer to "why is rm allowed to delete a file under ownership of a different user?" enlightening. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116283/"
]
} |
226,598 | I've tried some sed patterns like this from commandlinefu sed -r "s/('[a-z]+)_([a-z])([a-z]+)/\1\U\2\L\3/" But somehow it's not working. For one thing they forgot the digits, which I can fix, but this pattern works for one underscore only. So for example if I have a string in file 'foo_bar_foo' it will convert in to 'fooBar_foo' . Which is not what I want (I want 'fooBarFoo' ) I only want to change strings in file, not the variable names or anything else.So for example this delta_limits=Limits(general_settings['signal_lower_limit'] Should become this delta_limits=Limits(general_settings['signalLowerLimit'] | If I understand correctly, you want to change _x to X as long as it occurs inside '...' strings. Then, with GNU sed , you could do: sed -E ":1;s/^(([^']|'[^']*')*'[^']*)_([^'])/\1\u\3/;t1" That is replace a _X following '... itself following a sequence of either non-quotes or matched quotes. Which on an input like: foo_bar 'ab_cd_ef gh_ij' zz_zz 'aa_bb''delta_limits=Limits(general_settings['signal_lower_limit'] gives: foo_bar 'abCdEf ghIj' zz_zz 'aaBb'delta_limits=Limits(general_settings['signalLowerLimit'] That assumes you don't have strings embedding single quotes (as in 'foo\'bar' ). If so, you'd need to account for those \' escapes with: sed -E ":1;s/^(([^']|'([^\']|\\\\.)*')*'([^\']|\\\\.)*)_([^'])/\1\u\5/;t1" (also accounts for 'foo\\' ). That still doesn't cover "foo'bar" quotes or backslash-continued lines or python's '''it's a multi-line quote''' . You'd need a python parser to be able to cover all the cases. For your particular case, sed -E ":1;s/('\w*)_(\w)/\1\u\2/g;t1" May also be enough (only replaces the _X that follow '\w* ). That's the GNU sed equivalent (except for what \w exactly matches) of Glenn's perl approach . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53385/"
]
} |
226,705 | I have an RPM that I built, and I am looking to figure out how to extract the spec file out of it. I have tried: rpm --scripts -qp sampleBuild.rpm That didn't work. Does anyone know the proper command? | Usually, only source RPMs have a spec file. You can extract it with rpm2cpio myrpm.src.rpm | cpio -civ '*.spec' or you can install the src RPM, as a user, with rpm -i myrpm.src.rpm , after which the directory rpmbuild/SPECS/ will contain the spec file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72902/"
]
} |
226,716 | I'm looking for the latest source code of man command, the version in my Linux is pretty old(v1.6f), but I failed after googling a while. I mean the latest source code of man , not man-pages but the binary file in /usr/bin/man itself which can be compiled and installed. | You can usually query your distribution to see where sources come from. For example, I'm on Fedora, and I can see that the man command comes from the man-db package: $ rpm -qf /usr/bin/manman-db-2.6.7.1-16.fc21.x86_64 I can then query the man-db package for the upstream url: $ rpm -qi man-db | grep -i urlURL : http://www.nongnu.org/man-db/ And there you are, http://www.nongnu.org/man-db/ . You can perform a similar sequence of steps with the packaging systems used on other distributions. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22322/"
]
} |
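On a Debian-based system the equivalent queries would look like the sketch below (package names can differ, e.g. the older standalone man package versus man-db):

```sh
dpkg -S /usr/bin/man                           # which package owns the binary
apt-cache show man-db | grep -i '^homepage'    # upstream URL
apt-get source man-db                          # fetch the source (needs deb-src entries)
```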
226,728 | I use the tar command as, tar -cvf protTests.tar protTests/* to tar all files inside the folder, protTests . But this is including the symbolic links inside the folder, which is not a desired one. Is there a command line option, that will leave out all symlinks? | You could do this, to supply tar with a list of all files inside protTests except those which are symlinks: find protTests -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf protTests.tar By the way, your existing command: tar -cvf protTests.tar protTests/* will not archive all files in protTests , it will only archive those whose names do not begin with . (those that are not hidden). The * glob operator skips files whose names begin with . by design. The command also has the problem that if protTests has lots of files (more than many thousand), then protTests/* can expand to too many arguments to fit on the command line. A simpler command like this would have neither of those problems: tar -cvf protTests.tar protTests | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
226,775 | I want to upgrade version 13.10 to 14.04. This is output that I get: W: Failed to fetch http://old-releases.ubuntu.com/ubuntu/dists/saucy-security/universe/binary-i386/Packages 403 ForbiddenW: Failed to fetch http://old-releases.ubuntu.com/ubuntu/dists/saucy-security/multiverse/binary-i386/Packages 403 ForbiddenE: Some index files failed to download. They have been ignored, or old ones used instead. My Ubuntu version is 13.10 Saucy. | Opening the URL directly in a web browser gives this more informative error message The requested URL /ubuntu/dists/saucy-security/universe/binary-i386/Packages was not found on this server. From this it's a short step to checking whether Saucy still exists. Looking at http://releases.ubuntu.com/ it's possible to see that it's not mainline, but that http://old-releases.ubuntu.com/releases/ suggests it should be available. Looking more closely with a web browser shows that although http://old-releases.ubuntu.com/ubuntu/dists/saucy-security/universe/binary-i386/Packages doesn't exist, the same file with a .bz2 and .gz suffix does exist. Furthermore the same configuration exists for the current live distribution. Searching for this scenario finds Why can't apt find Packages file when compressed versions exist? on AskUbuntu, which suggests that the solution is as follows: Turns out something was corrupt in my local apt repository. Resetting it using sudo rm -fr /var/lib/apt/lists/* fixed things. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131158/"
]
} |
226,803 | I want to make a for loop in bash with 0.02 as the increment. I tried for ((i=4.00;i<5.42;i+=0.02)); do commands; done but it didn't work. | Reading the bash man page gives the following information: for (( expr1 ; expr2 ; expr3 )) ; do list ; done First, the arithmetic expression expr1 is evaluated according to the rules described below under ARITHMETIC EVALUATION. [...] and then we get this section ARITHMETIC EVALUATION The shell allows arithmetic expressions to be evaluated, under certain circumstances (see the let and declare builtin commands and Arithmetic Expansion). Evaluation is done in fixed-width integers with no check for overflow [...] So it can be clearly seen that you cannot use a for loop with non-integer values. One solution may be simply to multiply all your loop components by 100, allowing for this where you later use them, like this: for ((k=400;k<542;k+=2)); do i=$(bc <<<"scale=2; $k / 100" ) # when k=402 you get i=4.02, etc. ...done If your starting and ending values have the same number of significant figures (three in this example) you can avoid the call out to bc for each iteration of the loop and instead use string processing to generate the decimal value: i="${k%??}.${k#?}" # POSIX; when k=402 you get i=4.02, etc. i="${k:0:1}.${k:1}" # bash; when k=402 you get i=4.02, etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130050/"
]
} |
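If GNU seq is available (it is on most Linux systems; this is an alternative sketch, not part of the answer above), it can generate the decimal sequence directly and sidestep bash's integer-only arithmetic:

```sh
# seq FIRST INCREMENT LAST handles decimal steps itself
for i in $(seq 4.00 0.02 5.40); do
    printf 'value: %s\n' "$i"
done
```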
226,831 | My font rendering in Firefox looks terrible on pages such as facebook.com and twitter.com: I'm running Debian 8 and fiddling with hardware acceleration, and it doesn't seem to work. | I've had this issue for ages, maybe it's time to do something about it! It comes done to ClearType , Microsoft and patents from what I read. Most *nix distro's disable any patent protected font rendering by default. Read about Debian and fonts here , you want Subpixel-hinting and Font-smoothing section. There's a config file on that page but I will add here for future reference. Create a file called .fonts.conf in your home directory, and add the following: <?xml version='1.0'?><!DOCTYPE fontconfig SYSTEM 'fonts.dtd'><fontconfig> <match target="font"> <edit mode="assign" name="rgba"> <const>rgb</const> </edit> </match> <match target="font"> <edit mode="assign" name="hinting"> <bool>true</bool> </edit> </match> <match target="font"> <edit mode="assign" name="hintstyle"> <const>hintslight</const> </edit> </match> <match target="font"> <edit mode="assign" name="antialias"> <bool>true</bool> </edit> </match> <match target="font"> <edit mode="assign" name="lcdfilter"> <const>lcddefault</const> </edit> </match></fontconfig> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131210/"
]
} |
226,832 | I've issued a command gzip -r project.zip project/* in projects home directory and I've messed things up; every file in project directory and all other subdirectories have .gz extension. How do I undo this operation, i.e., how do I remove .gz extension from script, so I do not need to rename every file by hand? | I've had this issue for ages, maybe it's time to do something about it! It comes done to ClearType , Microsoft and patents from what I read. Most *nix distro's disable any patent protected font rendering by default. Read about Debian and fonts here , you want Subpixel-hinting and Font-smoothing section. There's a config file on that page but I will add here for future reference. Create a file called .fonts.conf in your home directory, and add the following: <?xml version='1.0'?><!DOCTYPE fontconfig SYSTEM 'fonts.dtd'><fontconfig> <match target="font"> <edit mode="assign" name="rgba"> <const>rgb</const> </edit> </match> <match target="font"> <edit mode="assign" name="hinting"> <bool>true</bool> </edit> </match> <match target="font"> <edit mode="assign" name="hintstyle"> <const>hintslight</const> </edit> </match> <match target="font"> <edit mode="assign" name="antialias"> <bool>true</bool> </edit> </match> <match target="font"> <edit mode="assign" name="lcdfilter"> <const>lcddefault</const> </edit> </match></fontconfig> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40993/"
]
} |
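For the gzip question above, a minimal sketch of undoing a recursive compression, assuming the files were simply gzipped in place and are still intact: gzip itself can walk the tree and restore the original names.

```sh
# Decompress everything under project/ recursively; the .gz suffix is removed
gunzip -r project/
# equivalent: gzip -dr project/
```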
226,857 | When I install shutter to take screenshots, imagemagick sets itself as default PDF-reader and I am unable to change it. I would like to have evince as default PDF-reader. I have tried right click on a PDF document in file explorer-> Properties -> Set default application-> Evince. This does not work, imagemagick stays as default. Doing this with the file explorer opened as root works but it doesn't change the normal-user default application. Using xdg-mime does not help either. In /etc/gnome/defaults.list the default application for PDF is evince . And, when I remove imagemagick-6.q16, evince becomes the default application for opening PDFs, but shutter is removed too. Am I missing something ? Where can I change this behavior ? I have an updated version of Debian Jessie in my computer and I am using Gnome3. EDIT 1: I can replicate this behavior with different file explorers (tested with nemo and nautilus ) The output of XDG_UTILS_DEBUG_LEVEL=2 xdg-mime query default application/x-pdf is Checking /home/USER/.local/share//applications/mimeapps.listChecking /usr/share/gnome/applications/defaults.listChecking /usr/local/share//applications/defaults.listChecking /usr/share//applications/defaults.list I've checked this files: In /home/USER/.local/share//applications/mimeapps.list I have a line with application/pdf=evince.desktop In /usr/share/gnome/applications/defaults.list the PDF reader is set to evince too. In /usr/local/share/applications/defaults.list there is no reference to PDFs. /usr/share/applications/defaults.list does not exist | Edit file : ~/.config/mimeapps.listand set pdf to evince.desktopWorks for me.Source: https://askubuntu.com/questions/591425/why-do-pdf-documents-open-with-imagemagick | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41867/"
]
} |
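The same per-user association can also be written with xdg-utils, which edits the ~/.config/mimeapps.list file mentioned above (a sketch; whether it takes effect still depends on no higher-priority desktop entry overriding it, which was the problem in the question):

```sh
xdg-mime default evince.desktop application/pdf
xdg-mime query default application/pdf    # should now report evince.desktop
```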
226,872 | I find myself needing to rearrange a system's partitions to move data previously under the root filesystem into dedicated mount points. The volumes are all in LVM, so this is relatively easy: create new volumes, move data into them, shrink the root filesystem, then mount the new volumes at the appropriate points. The issue is step 3, shrinking the root filesystem. The filesystems involved are ext4, so online resizing is supported; however, while mounted, the filesystems can only be grown. To shrink the partition requires unmounting it, which of course is not possible for the root partition in normal operation. Answers around the Web seem to revolve around booting a LiveCD or other rescue media, doing the shrink operation, then booting back into the installed system. However, the system in question is remote, and I have access only via SSH. I can reboot, but booting a rescue disc and doing operations from the console is not possible. How can I unmount the root filesystem while maintaining remote shell access? | In solving this issue, the information provided at http://www.ivarch.com/blogs/oss/2007/01/resize-a-live-root-fs-a-howto.shtml was pivotal. However, that guide is for a very old version of RHEL, and various information was obsolete. The instructions below are crafted to work with CentOS 7, but they should be easily enough transferable to any distro that runs systemd. All commands are run as root. Ensure the system is in a stable state Make sure no one else is using it and nothing else important is going on. It's probably a good idea to stop service-providing units like httpd or ftpd, just to ensure external connections don't disrupt things in the middle. systemctl stop httpd systemctl stop nfs-server # and so on.... Unmount all unused filesystems umount -a This will print a number of 'Target is busy' warnings, for the root volume itself and for various temporary/system FSs. These can be ignored for the moment. What's important is that no on-disk filesystems remain mounted, except the root filesystem itself. Verify this: # mount alone provides the info, but column makes it possible to read mount | column -t If you see any on-disk filesystems still mounted, then something is still running that shouldn't be. Check what it is using fuser : # if necessary: yum install psmisc # then: fuser -vm <mountpoint> systemctl stop <whatever> umount -a # repeat as required... Make the temporary rootNote: if /tmp is a directory on /, we will not be able to unmount / later in this procedure if we use /tmp/tmproot. Thus it may be necessary to use an alternative mountpoint such as /tmproot instead. mkdir /tmp/tmproot mount -t tmpfs none /tmp/tmproot mkdir /tmp/tmproot/{proc,sys,dev,run,usr,var,tmp,oldroot} cp -ax /{bin,etc,mnt,sbin,lib,lib64} /tmp/tmproot/ cp -ax /usr/{bin,sbin,lib,lib64} /tmp/tmproot/usr/ cp -ax /var/{account,empty,lib,local,lock,nis,opt,preserve,run,spool,tmp,yp} /tmp/tmproot/var/ This creates a very minimal root system, which breaks (among other things) manpage viewing (no /usr/share ), user-level customizations (no /root or /home ) and so forth. This is intentional, as it constitutes encouragement not to stay in such a jury-rigged root system any longer than necessary. At this point you should also ensure that all the necessary software is installed, as it will also assuredly break the package manager. Glance through all the steps, and make sure you have the necessary executables. 
Pivot into the root mount --make-rprivate / # necessary for pivot_root to work pivot_root /tmp/tmproot /tmp/tmproot/oldroot for i in dev proc sys run; do mount --move /oldroot/$i /$i; done systemd causes mounts to allow subtree sharing by default (as with mount --make-shared ), and this causes pivot_root to fail. Hence, we turn this off globally with mount --make-rprivate / . System and temporary filesystems are moved wholesale into the new root. This is necessary to make it work at all; the sockets for communication with systemd, among other things, live in /run , and so there's no way to make running processes close it. Ensure remote access survived the changeover systemctl restart sshd systemctl status sshd After restarting sshd, ensure that you can get in, by opening another terminal and connecting to the machine again via ssh. If you can't, fix the problem before moving on. Once you've verified you can connect in again, exit the shell you're currently using and reconnect. This allows the remaining forked sshd to exit and ensures the new one isn't holding /oldroot . Close everything still using the old root fuser -vm /oldroot This will print a list of processes still holding onto the old root directory. On my system, it looked like this: USER PID ACCESS COMMAND /oldroot: root kernel mount /oldroot root 1 ...e. systemd root 549 ...e. systemd-journal root 563 ...e. lvmetad root 581 f..e. systemd-udevd root 700 F..e. auditd root 723 ...e. NetworkManager root 727 ...e. irqbalance root 730 F..e. tuned root 736 ...e. smartd root 737 F..e. rsyslogd root 741 ...e. abrtd chrony 742 ...e. chronyd root 743 ...e. abrt-watch-log libstoragemgmt 745 ...e. lsmd root 746 ...e. systemd-logind dbus 747 ...e. dbus-daemon root 753 ..ce. atd root 754 ...e. crond root 770 ...e. agetty polkitd 782 ...e. polkitd root 1682 F.ce. master postfix 1714 ..ce. qmgr postfix 12658 ..ce. pickup You need to deal with each one of these processes before you can unmount /oldroot . The brute-force approach is simply kill $PID for each, but this can break things. To do it more softly: systemctl | grep running This creates a list of running services. You should be able to correlate this with the list of processes holding /oldroot , then issue systemctl restart for each of them. Some services will refuse to come up in the temporary root and enter a failed state; these don't really matter for the moment. If the root drive you want to resize is an LVM drive, you may also need to restart some other running services, even if they do not show up in the list created by fuser -vm /oldroot . You might be unable to to resize an LVM drive under Step 7 because of this Error: fsadm: Cannot proceed with mounted filesystem "/oldroot" You can try systemctl restart systemd-udevd and if that fails, you can find the leftover mounts with grep system /proc/*/mounts | column -t Look for processes that say mounts:none and try restarting these: PATH BIN FSTYPE /proc/16395/mounts:tmpfs /run/systemd/timesync tmpfs /proc/16395/mounts:none /var/lib/systemd/timesync tmpfs /proc/18485/mounts:tmpfs /run/systemd/inhibit tmpfs /proc/18485/mounts:tmpfs /run/systemd/seats tmpfs /proc/18485/mounts:tmpfs /run/systemd/sessions tmpfs /proc/18485/mounts:tmpfs /run/systemd/shutdown tmpfs /proc/18485/mounts:tmpfs /run/systemd/users tmpfs /proc/18485/mounts:none /var/lib/systemd/linger tmpfs Some processes can't be dealt with via simple systemctl restart . For me these included auditd (which doesn't like to be killed via systemctl , and so just wanted a kill -15 ). 
These can be dealt with individually. The last process you'll find, usually, is systemd itself. For this, run systemctl daemon-reexec . Once you're done, the table should look like this: USER PID ACCESS COMMAND /oldroot: root kernel mount /oldroot Unmount the old root umount /oldroot At this point, you can carry out whatever manipulations you require. The original question needed a simple resize2fs invocation, but you can do whatever you want here; one other use case is transferring the root filesystem from a simple partition to LVM/RAID/whatever. Pivot the root back mount <blockdev> /oldroot mount --make-rprivate / # again pivot_root /oldroot /oldroot/tmp/tmproot for i in dev proc sys run; do mount --move /tmp/tmproot/$i /$i; done This is a straightforward reversal of step 4. Dispose of the temporary root Repeat steps 5 and 6, except using /tmp/tmproot in place of /oldroot . Then: umount /tmp/tmproot rmdir /tmp/tmproot Since it's a tmpfs, at this point the temporary root dissolves into the ether, never to be seen again. Put things back in their places Mount filesystems again: mount -a At this point, you should also update /etc/fstab and grub.cfg in accordance with any adjustments you made during step 7. Restart any failed services: systemctl | grep failed systemctl restart <whatever> Allow shared subtrees again: mount --make-rshared / Start the stopped service units - you can use this single command: systemctl isolate default.target And you're done. Many thanks to Andrew Wood, who worked out this evolution on RHEL4, and steve, who provided me the link to the former. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/226872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103923/"
]
} |
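For the original use case in step 7 (shrinking an ext4 root that lives on LVM), the manipulation itself might look like the sketch below; the device name and target size are illustrative, so check them against your own layout before running anything:

```sh
# The filesystem must be checked before an offline shrink
e2fsck -f /dev/vg0/root
# Let LVM shrink the filesystem and the logical volume together, in the right order
lvreduce --resizefs -L 20G /dev/vg0/root
```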
226,895 | My OS is Ubuntu 14.04. I have done: sudo apt-get install haskell-platform-doc But I cannot find the corresponding documentation files. Where could I find them? Or is there a command to launch so as to find where the .deb package has put them in my file system? Is there a way to locate them? Even after a reboot, locate didn't help me much. | dpkg -L haskell-platform-doc will list the files installed by that package for you. However, this is a meta package: it does not install much content itself, but pulls in other documentation packages as dependencies. So issue dpkg-query -f'${Depends}' -W haskell-platform-doc to find the dependencies, and use dpkg -L with those. You should expect /usr/share/doc/libghc-*-doc/html/index.html and similar files, which you can view with a web browser (with file:/// URLs). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4175/"
]
} |
226,900 | I am trying to convert .NEF files to .JPG and I am using the command: mogrify -format JPG *.NEF this produces mogrify: delegate failed `"ufraw-batch" --silent --create-id=also --out- type=png --out-depth=16 "--output=%u.png" "%i"' @ error/delegate.c/InvokeDelegate/1329.mogrify: unable to open image `/tmp/magick-9036lhDZz9AOJ7G2.ppm': No such file or directory @ error/blob.c/OpenBlob/2695 When I try imagemagick: convert DSC0001.NEF DSC0001.JPG It produces --out-depth=16 "--output=%u.png" "%i"' @ error/delegate.c/InvokeDelegate/1329.convert: unable to open image `/tmp/magick-9072zxFEis0gPDVe.ppm': No such file or directory @ error/blob.c/OpenBlob/2695.convert: no images defined `DSC0001.JPG' @ error/convert.c/ConvertImageCommand/3212. What is being told in this error message? | I had a similar problem doing a basic convert and I had to install the following package: ufraw-batch | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131269/"
]
} |
226,909 | In bash, from inside PROMPT_COMMAND, is there a way to tell if the user just hit 'return' and didn't enter a command? | Check whether the history number was incremented. A cancelled prompt or a prompt where the user just pressed Enter won't increment the history number. The history number is available in the variable HISTCMD , but this is not available in PROMPT_COMMAND (because what you want there is in fact the history number of the previous command; the command that executes PROMPT_COMMAND itself has no history number). You can get the number from the output of fc . prompt_command () { HISTCMD_previous=$(fc -l -1); HISTCMD_previous=${HISTCMD_previous%%$'[\t ]'*} if [[ -z $HISTCMD_before_last ]]; then # initial prompt elif [[ $HISTCMD_before_last = "$HISTCMD_previous" ]]; then # cancelled prompt else # a command was run fi HISTCMD_before_last=$HISTCMD_previous}PROMPT_COMMAND='prompt_command' Note that if you've turned on squashing of duplicates in the history ( HISTCONTROL=ignoredups or HISTCONTROL=erasedups ), this will mistakenly report an empty command after running two identical commands successively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131274/"
]
} |
226,910 | I understand the notion of hardlinks very well, and have read the man pages for basic tools like cp --- and even the recent POSIX specs --- a number of times. Still I was surprised to observe the following behavior: $ echo john > john$ cp -l john paul$ echo george > george At this point john and paul will have the same inode (and content), and george will differ in both respects. Now we do: $ cp george paul At this point I expected george and paul to have different inode numbers but the same content --- this expectation was fulfilled --- but I also expected paul to now have a different inode number from john , and for john to still have the content john . This is where I was surprised. It turns out that copying a file to the destination path paul also has the result of installing that same file (same inode) at all other destination paths that share paul 's inode. I was thinking that cp creates a new file and moves it into the place formerly occupied by the old file paul . Instead what it seems to do is to open the existing file paul , truncating it, and write george 's content into that existing file. Hence any "other" files with the same inode get "their" content updated at the same time. Ok, this is a systematic behavior and now that I know to expect it I can figure out how to work around it, or take advantage of it, as appropriate. What puzzles me is where I was supposed to see this behavior documented? I'd be surprised if it's not documented somewhere in documents I've already looked at. But apparently I missed it, and can't now find a source that discusses this behavior. | cp documents that it overwrites the destination file if the destination file is already present. You're right that it doesn't specify in detail what "overwrite" means, but it definitely says "overwrite", not "replace". If you want to be pedantic, you can argue that "overwrite" is exactly what cp does, and the behaviour you were expecting would be properly called "replace". Also note that if cp were to "replace" pre-existing destination files, that might reasonable be considered surprising or incorrect, probably moreso than "overwriting". For example: If cp first deleted the old file and then created a new one then there would be an interval of time during which the file would be absent, which would be surprising. If cp first created a temporary file and then moved it in place then it should probably document this, owing to the fact that such temporary files with strange names would occasionally be noticed... but it doesn't. If cp could not create a new file in the same directory as the old file due to permissions then this would be unfortunate (especially if it had already deleted the old one). If the file was not owned by the user running cp and the user running cp was not root then it would be impossible to match the owner & permissions of the new file to those of the new file. If the file has fancy special attributes that cp does not know about, then these would be lost in the copy. Nowadays implementations of cp ought to reliably understand things like extended attributes, but it wasn't always so. And there are other things, like MacOS resource forks, or, for remote filesystems, basically anything. So in conclusion: now you know what cp really does. You'll never be surprised by it again! Honestly, I think the same thing might have happened to me too, many years ago. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4801/"
]
} |
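The behaviour is easy to confirm by rerunning the question's own scenario and watching the inode numbers (a minimal sketch, run in an empty scratch directory):

```sh
echo john > john
cp -l john paul            # hard link: john and paul share an inode
ls -i john paul            # same inode number printed twice
echo george > george
cp george paul             # cp overwrites paul "in place"
ls -i john paul            # still the same inode for both
cat john                   # now prints "george" - the shared inode was rewritten
```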
226,936 | How to setup the email client Mutt to send, receive and read email under CentOS and Ubuntu using a Gmail account as a relay | Gmail Setup For authentication, you'll have to do either of two things: Generate an application-specific password for your Google Account (your only option if you're using 2FA), Turn on less-secure app access (not an option with 2FA) In gmail, go click the gear icon, go to Settings , go to the tab Forwarding POP/IMAP , and click the Configuration instructions link in IMAP Access row. Then click I want to enable IMAP . At the bottom of the page, under the paragraph about configuring your mail client, select Other . Note the mail server information and use that information for further settings as shown in the next section. Install mutt CentOS yum install mutt Ubuntu sudo apt-get install mutt Configure Mutt Create mkdir -p ~/.mutt/cache/headersmkdir ~/.mutt/cache/bodiestouch ~/.mutt/certificates Create mutt configuration file muttrc touch ~/.mutt/muttrc Open muttrc vim ~/.mutt/muttrc Add following configurations set ssl_starttls=yesset ssl_force_tls=yesset imap_user = "[email protected]"set imap_pass = "PASSWORD"set from="[email protected]"set realname="Your Name"set folder = "imaps://imap.gmail.com/"set spoolfile = "imaps://imap.gmail.com/INBOX"set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"set header_cache = "~/.mutt/cache/headers"set message_cachedir = "~/.mutt/cache/bodies"set certificate_file = "~/.mutt/certificates"set smtp_url = "smtps://[email protected]:[email protected]:465/"set move = noset imap_keepalive = 900 Make appropriate changes, like change_this_user_name to your gmail user name and PASSWORD to your gmail password. And save the file. Now you are ready to send, receive and read email using email client Mutt by simply typing mutt . For the first time it will prompt to accept SSL certificates; press a to always accept these certificates. Now you will be presented with your Gmail inbox. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130286/"
]
} |
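One small addition worth making, since the muttrc above stores the account password in plain text: keep the file readable only by you.

```sh
chmod 600 ~/.mutt/muttrc
```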
226,944 | I'm using the pass for quite a long time; but after exporting my key storage and gpg keys to another machine I see following output: $ gpg --list-key/home/shved/.gnupg/pubring.gpg------------------------------pub 2048R/FA829B53 2015-04-28uid [ultimate] Yury Shvedov (shved) <[email protected]>sub 2048R/74270D4A 2015-04-28 My key imported and trusted, but not usable: pass insert testEnter password for test: Retype password for test: gpg: 2048R/FA829B53: skipped: No public keygpg: [stdin]: encryption failed: No public keyfatal: pathspec '/home/shved/.password-store/test.gpg' did not match any files What can I do to use my key again? | pass uses gnupg2, which does not share it's keyring with gnupg 1.x. Import your keys again using gnupg2 instead of gnupg.If you already have your keys in gnupg on the target machine run: $ gpg --export-secret-keys > keyfile$ gpg2 --import keyfile After importing, you may need to update the trust on your key.You should see a Secret key is available. message if the import was successful: $ gpg2 --edit-key FA829B53[...]Secret key is available.sec rsa4096/FA829B53 created: 2015-03-14 expires: 2017-03-13 usage: SC trust: unknown validity: ultimatessb rsa4096/74270D4A created: 2015-03-14 expires: 2017-03-13 usage: E [ultimate] (1). Yury Shvedov (shved) <[email protected]> Now update the trust on your key: gpg> trust[...]Your decision? 5Do you really want to set this key to ultimate trust? (y/N) y[...]gpg> save | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131301/"
]
} |
226,951 | I created tar archive using following command: tar -zcvf archive-name.tar.gz directory-name After this operation, where that tar.gz is located? | pass uses gnupg2, which does not share it's keyring with gnupg 1.x. Import your keys again using gnupg2 instead of gnupg.If you already have your keys in gnupg on the target machine run: $ gpg --export-secret-keys > keyfile$ gpg2 --import keyfile After importing, you may need to update the trust on your key.You should see a Secret key is available. message if the import was successful: $ gpg2 --edit-key FA829B53[...]Secret key is available.sec rsa4096/FA829B53 created: 2015-03-14 expires: 2017-03-13 usage: SC trust: unknown validity: ultimatessb rsa4096/74270D4A created: 2015-03-14 expires: 2017-03-13 usage: E [ultimate] (1). Yury Shvedov (shved) <[email protected]> Now update the trust on your key: gpg> trust[...]Your decision? 5Do you really want to set this key to ultimate trust? (y/N) y[...]gpg> save | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124635/"
]
} |
226,983 | I needed this in order to add the Jenkins pubkey to my host's authorized_keys when starting a Docker container with Jenkins. I looked for solutions, but could not find a ready-made one on the internet. It may seem obvious, but it wasn't for me at least :) |
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98646/"
]
} |
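The remote-host case mentioned at the end is a one-liner too (a sketch; the host name and key path are placeholders):

```sh
ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote-host
```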
226,995 | I'm wondering if there is a way to watch films or images without running X server. I'm not using login manager - I log in to tty and start X server manually. Hypothetical situation: I log in, but decide to only watch film, or maybe view few photos. I don't want to run X server and all the GUI stuff just for this purpose. How can I watch films/images without X? | For Images: You can watch images with fbi : NAME fbi - linux framebuffer imageviewerSYNOPSIS fbi [ options ] file ...DESCRIPTION fbi displays the specified file(s) on the linux console using the framebuffer device. PhotoCD, jpeg, ppm, gif, tiff, xwd, bmp and png are supported directly. For other formats fbi tries to use ImageMagick's convert. Example command: $ fbi path/to/file.jpg For videos: You can use vlc from tty/console: Example command: $ vlc /path/to/file.mp4 You can also use mplayer : $ mplayer /path/to/file.mp4 Note: Video output drivers can be set by -vo option e.g caca , fbdev . ( This external article may help) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
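To spell out the -vo option mentioned in the note above, two sketches for console playback; which drivers are available depends on how mplayer was built:

```sh
mplayer -vo fbdev /path/to/file.mp4    # render straight to the framebuffer
mplayer -vo caca  /path/to/file.mp4    # ASCII-art fallback on a plain terminal
```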
226,996 | I just copied a whole directory into another one using cp ~/local/* ./ when I actually wanted to type cp ~/local/srl* ./ , so I'm finding myself with a lot of unnecessary files. I can remove them by hand, but I was wondering: is there a way to undo ANY command in the terminal? | No, there is no way to undo a command (at least not universally). This is often a problem when users run rm with the wrong glob pattern, without realising that it covers more files than they would like to remove. Also, it would really be impossible to implement undoing ANY command from the terminal. Imagine a command that sends an e-mail, or plays some sound. There is no way to undo these. Just be happy that you ran cp , not rm . As for the future, if you are not moving/removing/copying too many files, the -i switch will turn it into "interactive" mode, asking for confirmation before each action. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131332/"
]
} |
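A short sketch of the -i suggestion from the answer (many distributions already alias these for the root account):

```sh
cp -i ~/local/srl* ./                     # prompts before overwriting anything
alias cp='cp -i' mv='mv -i' rm='rm -i'    # opt into prompting for future commands
```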
227,017 | In the company I am working now there is a legacy service and its init script is using old SysvInit, but is running over systemd (CentOS 7). Because there's a lot of computation, this service takes around 70 seconds to finish. I didn't configure any timeout for systemd, and didn't change the default configs at /etc/systemd/system.conf , but still when I execute service SERVICE stop my service is timing out after 60 seconds. Checking with journalctl -b -u SERVICE.service I find this log: Sep 02 11:27:46 service.hostname systemd[1]: Stopping LSB: Start/StopSep 02 11:28:46 service.hostname SERVICE[24151]: Stopping service: Error code: 255Sep 02 11:28:46 service.hostname SERVICE[24151]: [FAILED] I already tried changing the DefaultTimeoutStopSec property at /etc/systemd/system.conf to 90s , but the timeout still happens. Does anyone have any idea why is it timeouting at 60s? Is there somewhere else that this timeout value is configured? Is there a way I can check it? This service runs with java 7 and to daemonize it, it uses JSVC . I configured the -wait parameter with the value 120 . | My systemd service kept timing out because of how long it would take to boot up also, so this fixed it for me: Edit your systemd file: For modern versions of systemd : Run systemctl edit --full node.service ( replace "node" with your service name ). This will create a system file at /etc/systemd/system/node.service.d/ that will override the system file at /usr/lib/systemd/system/node.service . This is the proper way to configure your system files. More information about how to use systemctl edit is here . Directly editing system file : The system file for me is at /usr/lib/systemd/system/node.service . Replace "node" with your application name. However, it is not safe to directly edit files in /usr/lib/systemd/ (See comments) Use TimeoutStartSec , TimeoutStopSec or TimeoutSec (more info here ) to specify how long the timeout should be for starting & stopping the process. Afterwards, this is how my systemd file looked: [Unit]Description=MyProjectDocumentation=man:node(1)After=rc-local.service[Service]WorkingDirectory=/home/myproject/GUIServer/Server/Environment="NODE_PATH=/usr/lib/node_modules"ExecStart=-/usr/bin/node Index.jsType=simpleRestart=alwaysKillMode=processTimeoutSec=900[Install]WantedBy=multi-user.target You can also view the current Timeout status by running any of these (but you'll need to edit your service to make changes! See step 1). Confusingly, the associated properties have a "U" in their name for microseconds. See this Github issue for more information: systemctl show node.service -p TimeoutStartUSec systemctl show node.service -p TimeoutStopUSec systemctl show node.service -p TimeoutUSec Next you'll need to reload the systemd with systemctl reload node.service Now try to start your service with systemctl start node.service If that didn't work , try to reboot systemctl with systemctl reboot If that didn't work , try using the --no-block option for systemctl like so: systemctl --no-block start node.service . This option is described here : "Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemctl will wait until the unit's start-up is completed. By passing this argument, it is only verified and enqueued." There is also the option to use systemctl mask instead of systemctl start . For more info see here . 
Updates from Comments: TimeoutSec=infinity : Instead of using "infinity" here, put a large amount of time instead, like TimeoutSec=900 (15 min). If the application takes "forever" to exit, then it's possible that it will block a reboot indefinitely. Credit @Alexis Wilke and @JCCyC Instead of editing /usr/lib/systemd/system , try systemctl edit instead or edit /etc/systemd/system to override them instead. You should never edit service files in /usr/lib/ . Credit @ryeager and @0xC0000022L ** Update from systemd source docs **When specified "infinity" as a value to any of these timeout params, the timeout logic is disabled . JobTimeoutSec=, JobRunningTimeoutSec=,TimeoutStartSec=, TimeoutAbortSec= The default is "infinity" (job timeouts disabled), except for device units where JobRunningTimeoutSec= defaults to DefaultTimeoutStartSec=. Reference: enter link description here Similarly this logic applies to service level and laid out clearly in URL below.Reference: enter link description here | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/227017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20763/"
]
} |
227,070 | In all shells I am aware of, rm [A-Z]* removes all files that start with an uppercase letter, but with bash this removes all files that start with a letter. As this problem exists on Linux and Solaris with bash-3 and bash-4, it cannot be a bug caused by a buggy pattern matcher in libc or a miss-configured locale definition. Is this strange and risky behavior intended or is this just a bug that exists unfixed since many years? | Note that when using range expressions like [a-z], letters of the other case may be included, depending on the setting of LC_COLLATE. LC_COLLATE is a variable which determines the collation order used when sorting the results of pathname expansion, and determines the behavior of range expressions, equivalence classes, and collating sequences within pathname expansion and pattern matching. Consider the following: $ touch a A b B c C x X y Y z Z$ lsa A b B c C x X y Y z Z$ echo [a-z] # Note the missing uppercase "Z"a A b B c C x X y Y z$ echo [A-Z] # Note the missing lowercase "a"A b B c C x X y Y z Z Notice when the command echo [a-z] is called, the expected output would be all files with lower case characters. Also, with echo [A-Z] , files with uppercase characters would be expected. Standard collations with locales such as en_US have the following order: aAbBcC...xXyYzZ Between a and z (in [a-z] ) are ALL uppercase letters, except for Z . Between A and Z (in [A-Z] ) are ALL lowercase letters, except for a . See: aAbBcC[...]xXyYzZ | |from a to z aAbBcC[...]xXyYzZ | |from A to Z If you change the LC_COLLATE variable to C it looks as expected: $ export LC_COLLATE=C$ echo [a-z]a b c x y z$ echo [A-Z]A B C X Y Z So, it's not a bug , it's a collation issue . Instead of range expressions you can use POSIX defined character classes , such as upper or lower . They work also with different LC_COLLATE configurations and even with accented characters : $ echo [[:lower:]]a b c x y z à è é$ echo [[:upper:]]A B C X Y Z | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/227070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120884/"
]
} |
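Besides setting LC_COLLATE=C, newer bash (4.3 and later) has a shell option that makes range expressions use plain ASCII ordering regardless of the locale; a small sketch:

```sh
shopt -s globasciiranges
echo [A-Z]    # now matches only names starting with an uppercase letter
```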
227,130 | Ok, so I have tried this quite a few times and I'm sure this is very trivial but: I am trying to SSH via command line on Ubuntu into my VM (Centos6) with an RSA key-pair I created using key-gen. I have created the key-pair and appended the public key to authorized_keys file and changed the permissions to 600 . After I SCP'ed the private key to Ubuntu and tried to SSH using it and I always get: Permission denied (publickey,gssapi-keyex,gssapi-with-mic). I have tried this 3x already and no luck. I can ping it but I can't seem to figure out why it's not taking the key I made. Any suggestions? | Run ssh with verbose mode (add as many -v as you need) and try to find out the reason. For example ssh -vvv user@host You will get a debug output that helps you to find out the reason. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72902/"
]
} |
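Beyond the verbose client output, the server side usually says exactly why a key was rejected. A sketch of the most common CentOS-side checks, run as the account you are trying to log in as:

```sh
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys   # sshd refuses sloppy modes
restorecon -R -v ~/.ssh                                 # fix SELinux labels on CentOS
sudo tail -f /var/log/secure                            # watch sshd's reason while retrying
```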
227,164 | So I did a quick test and #include <sys/types.h> #include <unistd.h> #include <stdio.h> int main (int argc, char *argv[]) { printf("Hello World\n"); printf("%d\n",getpid());} compiled with gcc on my macbook pro running OSX 10.9.5 prints Hello World 640 As I would expect it to on most linux distributions. I know the darwin kernel is based on UNIX, but will all the linux system calls behave exactly the same on OSX as they do on let's say Ubuntu? (I am aware the PID will be different each time I run it, but that's not what I'm really talking about here). I also have Ubuntu installed on a small partition of my SSD, so if the answer is no, that's okay. | I would say that it is misleading to call getpid() a "linux system call". That gives the impression that it is a Linux-specific system call, which it isn't. Actually, getpid() and many other system calls are specified by POSIX, and you will find it implemented on both Linux and MacOS and on many other systems, with identical behaviour. The majority of system calls or even C library functions you will use in typical software are specified by standards like POSIX and ANSI C, and you will find them implemented with the same behaviour on many different operating systems. Portable software is software that keeps to this set of common system calls and functions that are widely available. Linux also has Linux-specific system calls. MacOS also has MacOS-specific system calls. Neither of those will work on the opposite operating system, obviously. The manpages for such system calls will usually call out the fact that they are not portable. Furthermore, they exist quite often as low-level implementation details and most software need not use them, which makes it easier to keep most software portable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131447/"
]
} |
227,209 | I need to determine if a file contains a certain regex at a certain line and to return true (exit 0) if found, and otherwise false. Maybe I'm overthinking this, but my attempts proved a tad unwieldy. I have a solution, but I'm looking for maybe others that I hadn't thought of. I could use perl, but I'm hoping to keep this "lightweight" as possible as it runs during a puppet execution cycle. The problem is common enough: in RHEL6, screen was packaged in a way that limited the terminal width to 80 characters, unless you un-comment the line at 132. This command checks to see if that line has already been fixed: awk 'NR==132 && /^#termcapinfo[[:space:]]*xterm Z0=/ {x=1;nextfile} END {exit 1-x}' /etc/screenrc Note: if the file has fewer that 132 lines, it must exit with false. I thought sed would be of help here, but apparently then you have to do weird tricks like null-substitutions and branches. Still, I'd like to see a sed solution just to learn. And maybe there is something else I overlooked. EDIT 1: Added nextfile to my awk solution EDIT 2: Benchmarks EDIT 3: Different host (idle). EDIT 4: mistakenly used Gile's awk time for optimized-per's run. EDIT 5: new bench Benchmarks First, note: wc -l /etc/screenrc is 216 . 50k iterations when line not present, measured in wall-time: Null-op: 0.545s My original awk solution: 58.417 My edited awk solution (with nextfile): 58.364s Giles' awk solution: 57.578s Optimized perl solution 90.352s Doh! Sed 132{p;q}|grep -q ... solution: 61.259s Cuonglm's tail | head | grep -q : 70.418s Ouch! Don_chrissti's head -nX |head -n1|grep -q : 116.9s Brrrrp! Terdon's double-grep solution: 65.127s John1024's sed solution: 45.764s Thank you John and thank you sed! I am honestly surprised perl was on-par here. Perl loads in a bunch of shared libraries on startup, but as long as the OS is caching them all, it comes down to the parser and byte-coder. In the distant past (perl 5.2?) I found it was slower by 20%. Perl was slower as I originally expected but appeared to be better due to a copy/paste error on my part. Benchmarks Part 2 The biggest configuration file which has practical value is /etc/services . So I've re-run these benches for this file and where the line to be changed is 2/3rds in the file. Total lines is 1100, so I picked 7220 and modified the regex accordingly (so that in one case it fails, in another it succeeds; for the bench it always fails). John's sed solution: 121.4s Chrissti's {head;head}|grep solution: 138.341s Counglm's tail|head|grep solution: 77.948s My awk solution: 175.5s | With GNU sed: sed -n '132 {/^#termcapinfo[[:space:]]*xterm Z0=/q}; $q1' How it works 132 {/^#termcapinfo[[:space:]]*xterm Z0=/q} On line 132, check for the regex ^#termcapinfo[[:space:]]*xterm Z0= . If found quit, q , with the default exit code of 0. The rest of the file is skipped. $q1 If we reach the last line, $ , then quit with exit code 1: q1 . Efficiency Since it is not necessary to read past the 132nd line of the file, this version quits as soon as we reach the 132nd line or the end of the file, whichever occurs first: sed -n '132 {/^#termcapinfo[[:space:]]*xterm Z0=/q; q1}; $q1' Handling empty files The version above will return true for empty files. This is because, if the file empty, no commands are executed and the sed exits with the default exit code of 0. To avoid this: ! 
sed -n '132 {/^#termcapinfo[[:space:]]*xterm Z0=/q1; q}' Here, the sed command exits with code 0 unless the desired string is found, in which case it exits with code 1. The preceding ! tells the shell to invert this code to get back to the code we want. The ! modifier is supported by all POSIX shells. This version will work even for empty files. (Hat tip: G-Man) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105631/"
]
} |
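As a usage sketch, the final form above drops straight into a shell test (or a Puppet exec's unless check); the path is the one from the question:

    if ! sed -n '132 {/^#termcapinfo[[:space:]]*xterm Z0=/q1; q}' /etc/screenrc
    then
        echo 'line 132 is already uncommented'
    else
        echo 'line 132 still needs fixing'
    fi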
227,235 | Let's say I have a set of machines (called here the customers' machines) that only a small list of people (called the support staff) is allowed to SSH into, using only one account by machine (the support access account). The support staff are only supposed to log into the customers' machines using keys. Moreover, the support staff can evolve, so someone who leaves the support staff is not permitted to log in to any customer machine. Therefore, staff people are prohibited from reading the private keys used to log into the customers' machines. Also, it is forbidden to modify the authorized_keys file on the customers' machines. To realize that configuration, I had the idea to use an SSH proxy that the support staff will log into (with LDAP authentication, but that's another problem) and that contains the private keys. The question is: How do I allow support staff to use the SSH private key without being able to read it? I believe that I have to make a daemon running as root on the proxy machine that will accept a user's request and open an SSH session for them, but I have no idea how to do it. Any ideas? | I would suggest a couple of options. Protect the ssh key and require the use of sudo on your support team's side. You could do this transparently with a wrapper. Call the wrapper, say, /usr/local/bin/ssh-support and have it contain something like this (untested): #!/bin/bashexport PATH=/usr/local/bin:/usr/bin:/binexport IFS=$' \t\n'export SUSER=ssupport# Restart if not running under sudotest "X$1" != 'X-S' && exec sudo -u "$SUSER" /usr/local/bin/ssh-support -S "$@"shiftexport SHOME="$(getent passwd "$SUSER" | cut -d: -f6)"# Extract single argument as hostname LABEL and validate that we have# an RSA private key for it. The target username, real hostname, port,# etc. can be defined in ~/.ssh/config for the user $SUSER (ssupport)label="$1"idfile="$SUSER/.ssh/id_rsa_for_$label"cgfile="$SUSER/.ssh/config"ok=true[[ "$label" =~ '/' ]] && { echo "Invalid label: $label" >&2; ok=; }[[ ! -s "$idfile" ]] && { echo "Missing identity file: $idfile" >&2; ok=; }[[ ! -s "$cgfile" ]] && { echo "Missing configuration file: $cgfile" >&2; ok=; }if test -n "$ok"then logger -t ssh-support "$SUDO_USER requested ssh to $label" exec ssh -i "$idfile" -F "$cgfile" "$label"fiexit 1 This would require an entry in the sudoers file that permitted users in the support group to use the tool. This command allows them to run the ssh-support tool as the ssupport user - which you must create. It does not confer any root privilege. %support ALL = (ssupport) /usr/local/bin/ssh-support If you are happy that the support users should not need to provider their own password to run the tool (as requested by the sudo invocation within the script itself) you can amend the sudoers definition thus: %support ALL = (ssupport) NOPASSWD: /usr/local/bin/ssh-support Assuming PATH contained /usr/local/bin/ you would then call it with ssh-support clientname . Also assuming you had created the ssupport user as /home/ssupport you would create /home/ssupport/.ssh/id_rsa_clientname and /home/ssupport/.ssh/id_rsa_clientname.pub as the certificate pair, and have a host entry in /home/ssupport/.ssh/config for clientname that defined the user, host, port, etc. for the target machine. You would probably disable X11 forwarding, port forwarding, etc. explicitly. As usual, the /home/ssupport/.ssh directory would need to be protected with permissions 0700 . 
Give each member of support their own local user account, and have each person use their own private ssh key to access the client's servers. When a person leaves the support group you remove their ssh key from the client's servers. This means that you no longer need to worry about preventing your staff from knowing the private ssh key. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131489/"
]
} |
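For completeness, a hypothetical ~ssupport/.ssh/config block of the kind the wrapper's -F option would read; every value here is a placeholder to adapt:

    Host clientname
        HostName server1.customer.example.com
        User supportaccess
        Port 22
        IdentitiesOnly yes
        ForwardAgent no
        ForwardX11 no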
227,241 | I am trying to copy multiple files with different names from different directories, to one same new directory. The names of the directories are the same as the beginning of each filename. For example from directory /mnt/data/files/xyz , I want to copy file xyz_5 to directory /mnt/data/myname/new . From directory /mnt/data/files/abc , I want to copy file abc_7 to directory /mnt/data/myname/new . I have 60 files I need to do this for. What I do now: cd /mnt/data/files/xyz;cp -v *_5 /mnt/data/myname/new I do this for each file. But can I do this with a single piece of code? | I would suggest a couple of options. Protect the ssh key and require the use of sudo on your support team's side. You could do this transparently with a wrapper. Call the wrapper, say, /usr/local/bin/ssh-support and have it contain something like this (untested): #!/bin/bashexport PATH=/usr/local/bin:/usr/bin:/binexport IFS=$' \t\n'export SUSER=ssupport# Restart if not running under sudotest "X$1" != 'X-S' && exec sudo -u "$SUSER" /usr/local/bin/ssh-support -S "$@"shiftexport SHOME="$(getent passwd "$SUSER" | cut -d: -f6)"# Extract single argument as hostname LABEL and validate that we have# an RSA private key for it. The target username, real hostname, port,# etc. can be defined in ~/.ssh/config for the user $SUSER (ssupport)label="$1"idfile="$SUSER/.ssh/id_rsa_for_$label"cgfile="$SUSER/.ssh/config"ok=true[[ "$label" =~ '/' ]] && { echo "Invalid label: $label" >&2; ok=; }[[ ! -s "$idfile" ]] && { echo "Missing identity file: $idfile" >&2; ok=; }[[ ! -s "$cgfile" ]] && { echo "Missing configuration file: $cgfile" >&2; ok=; }if test -n "$ok"then logger -t ssh-support "$SUDO_USER requested ssh to $label" exec ssh -i "$idfile" -F "$cgfile" "$label"fiexit 1 This would require an entry in the sudoers file that permitted users in the support group to use the tool. This command allows them to run the ssh-support tool as the ssupport user - which you must create. It does not confer any root privilege. %support ALL = (ssupport) /usr/local/bin/ssh-support If you are happy that the support users should not need to provider their own password to run the tool (as requested by the sudo invocation within the script itself) you can amend the sudoers definition thus: %support ALL = (ssupport) NOPASSWD: /usr/local/bin/ssh-support Assuming PATH contained /usr/local/bin/ you would then call it with ssh-support clientname . Also assuming you had created the ssupport user as /home/ssupport you would create /home/ssupport/.ssh/id_rsa_clientname and /home/ssupport/.ssh/id_rsa_clientname.pub as the certificate pair, and have a host entry in /home/ssupport/.ssh/config for clientname that defined the user, host, port, etc. for the target machine. You would probably disable X11 forwarding, port forwarding, etc. explicitly. As usual, the /home/ssupport/.ssh directory would need to be protected with permissions 0700 . Give each member of support their own local user account, and have each person use their own private ssh key to access the client's servers. When a person leaves the support group you remove their ssh key from the client's servers. This means that you no longer need to worry about preventing your staff from knowing the private ssh key. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130648/"
]
} |
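For the copy task in the question above, one way to do it as a single piece of code is a loop over the source directories (this assumes each wanted file begins with its directory's name followed by an underscore, as in the question's examples):

    dest=/mnt/data/myname/new
    for d in /mnt/data/files/*/; do
        name=$(basename "$d")
        cp -v "$d$name"_* "$dest"/      # copies every file in xyz/ that starts with xyz_
    done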
227,320 | While troubleshooting a problem with my ethernet card, I've found that the driver I'm currently using may have some issues with old kernel versions. What command can I use to check the kernel version I am currently running ? | You can execute: uname -r It will display something like 3.13.0-62-generic Found on https://askubuntu.com/questions/359574/how-do-i-find-out-the-kernel-version-i-am-running ( view that QA to learn other commands you could use ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49721/"
]
} |
227,343 | Can someone explain to me what this command does? EDIT: Do not run this command! It will break your installation. sudo find / -exec rm {} \; | Bad Things ® ™. It's (almost) the equivalent of sudo rm -rf / - it will, as root, find all files or directories starting from / and recursively descending from there, and then execute the rm command against each file/directory it finds. It won't actually delete directory entries as there's no -f or -r options passed to rm , but it will remove all the file entries. Hint: don't run this unless you feel like reinstalling your operating system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/227343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131562/"
]
} |
227,350 | I am new to Linux and I love it. However, while I have been playing around with several flavours of Linux, only one question comes into my mind as a new user to the Linux systems: how can I ensure a Linux distro is safe, secure and trustworthy without any backdoors or malware codes within the OS? I started using Deepin and I must say I love it. I gave Ubuntu, Zorin, Mint, Kubuntu, Linux Lite, Elementary OS and few others a try some time ago but they all needed too much tinkering to get it to the way I want; but Deepin fits the bills in many ways. Its simple, elegant and so far has been pretty responsive to what I want to use it for; but the big question in my mind is how can I trust this distro? I use my computer to do things like online banking and create databases which holds sensitive information as I work for a public organisation and cannot compromise such data. I have contacted the Linux Foundation however didn’t get any response. I watched RMS talking about Ubuntu and how Ubuntu shares the Desktop search keywords with Amazon and I agree with him that this is a default spyware which I know can be turned off and disabled but the fact is it is still a spyware. Please can someone tell me how I can ensure a Linux distro is safe, secure and have been scrutinised for any threats or malicious codes by a trustworthy Linux organisation? I have googled it however can’t find anything solid on how to check the integrity of a Linux based OS. | Short answer: you can't. Long answer: Linux distribution contains of several different programs that form whole Operating System - namely kernel, coreutils, some shell, and other utilities. You have two ways of verifying if distribution is safe to use: 1. Read every source code yourself. You will need to read through an awful lot of code: kernel is ~210K LOC(Lines of code) - now add drivers(also many LOC), coreutils, basic programs... This is job for many many years. And this still doesn't guarantee that it's not malicious - you might miss something, it might be obfuscated or rely on hardware bug. Also, there's more to security than just being malicious. You might (kind of) get decent level of confidence about lack of malicious code, but you can (almost) never be sure that your program is really safe. Things like HeartBleed and ShellShock just happen; no one can prevent them. 2. Trust other people This is saner attempt. You choose group of people to trust, and you use their programs believing in their good will. You can go to Free Software Foundation Page - these folks are serious about their privacy and freedom. They only approve limited distributions that are entirely Free and Open Software, so you can read its source code. There aren't a lot people who use them, so support might be limited - but it's nice bet. You can also trust other people - like Gentoo or Debian or Fedora (or any other distro) developers. They get their distributions together, they bundle some programs, and release them - maybe they don't have bad intentions? Personal note: I consider security, privacy and freedom important values. However; there's a line between being paranoid and caring for these values. RMS is being paranoid; this isn't necessarily a bad thing, because his voice is loud and what he's saying is clear. Many people start caring about freedom thanks to RMS. However, still none of these guarantee safety. Linus's Law isn't this simple, and it's not always working. 
Many people think that because the source is open, others are reading it - thus, there's no need for them to read the source themselves. This leaves us with a small group that has read the code, and an even smaller one that has understood it. It's still better than proprietary software, but not as safe as advertised. If you want to be perfectly safe, turn off your computer and disassemble it. That's the only way to be 100% sure. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131249/"
]
} |
227,384 | Is it possible to create a command called new_bash that allows me to do the following? new_bash alias test="ls"new bash alias new_command="ls"new_bash testfile1new_bash new_commandfile1 | ##Background:cd $(mktemp -d)> file1##Setup state (background bash + pipe)pipeDir=$(mktemp -d)mkfifo $pipeDir/p #pipe for communicating with the shell#start the shell in the background, make it read from the pipe, and disable I/O bufferingstdbuf -i0 -o0 bash < $pipeDir/p & #open the pipe from the other end on fd 3 (or another fd)exec 3>$pipeDir/p && rm -rf "$pipeDir" #don't need the directory or the physical link to the pipe anymore##Now you can communicate with the shellecho ls >&3#Ouptuts: file1#This is how you end it allexec 3>&- Your function would need to maintain global state.Your function would need to check if the state has been set up and set it up if it hasn't been (by checking for the existence of a variable, perhaps).After the setup or if the state exists, it only needs to echo its arguments ( "$@" ) to &3 or whatever file descriptor you open the pipe on. It might be a better idea to make three functions (it will be a tiny bit more efficient): init_new_bashnew_bashend_new_bash Example (needs better signal handling): #!/bin/sh #^will work in bash alsoinit_new_bash(){ set -e #all must succeed pipeDir=$(mktemp -d) mkfifo "$pipeDir/p" stdbuf -i0 -o0 bash < "$pipeDir"/p & bashPid=$! exec 3>"$pipeDir/p" rm -rf "$pipeDir" set +e}new_bash(){ echo "$@" >&3; }end_new_bash(){ exec 3>&-; wait "$bashPid"; }##Test run:init_new_bash && { new_bash echo hello world new_bash lsend_new_bash;} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4930/"
]
} |
227,459 | I'm working on a password manager application, and for security reasons I want to launch an unkillable process. And besides I don't want this program to be a daemon since I need to read from standard input and write to it. Is there a way to do this? | Make the password manager run under a separate user and handle/ignore/block terminal-generated signals ( SIGINT , SIGQUIT , SIGHUP , SIGTSTP , SIGTTIN , and SIGTTOU ). You can't send signals to (=kill) processes run under a different user (user whose both real uid and saved-set uid is different from your effective uid) unless your effective id is 0 (root). All processes will still be killable by root. For closer details, see kill(2) . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132655/"
]
} |
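As a rough shell sketch of that advice (the user name and program paths are made up; a real password manager would more likely set these dispositions itself with sigaction()):

    #!/bin/sh
    # started as:  sudo -u pwmgr /usr/local/bin/pwmgr-wrapper
    trap '' INT QUIT HUP TSTP TTIN TTOU    # ignore the terminal-generated signals
    exec /usr/local/bin/password-manager   # ignored dispositions survive the exec; stdin/stdout stay attached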
227,475 | I wrote a simple script to switch my locale.If I write each line in the console and execute it, it works without any problem or if I put it in .bashrc . However when I execute the script either with sudo or without it has absolutely no noticeable effect. ( locale remains the same ) The question is why is that? Is my script wrong or am I missing something different. Source: #!/bin/bashset -xLANG=en_US.utf8LANGUAGE=en_US.utf8LC_ALL=en_US.utf8export LANGexport LANGUAGEexport LC_ALLecho "Language set!" I'm receiving the execution steps and the Language set echo but that's about it.I also tried #!/bin/sh . OS Info: DISTRIB_ID=UbuntuDISTRIB_RELEASE=14.04DISTRIB_CODENAME=trustyDISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS"Kernel: 3.13.0-042stab103.6 | To apply the changes to your current shell you need to "source" it and not to "execute" your script. So, if your script is called "script.sh" then instead of executing it as ./script.sh , source it with . ./script.sh and your changes will be applied to the current session. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131219/"
]
} |
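A quick check that it worked, using the example script name from the answer:

    . ./script.sh        # or:  source ./script.sh   (bash)
    locale               # LANG/LC_ALL should now show en_US.utf8 in this same shell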
227,476 | How can I add wrappers to a file based on a pattern? For instance I have the following: ... find(css_policy_form_stage3).click with_ajax_wait expect(css_coverage_type_everquote).to be_visible end with_ajax_wait expect(css_coverage_type_everquote).to be_visible endendit "Stage 3" do select(coverage_type, from: css_coverage_type_everquote) find(css_has_auto_insurance).click ... And I want to 'wrap' those "with_ajax_wait" blocks with it "waits" do ... end around them. i.e. I want to get: ... find(css_policy_form_stage3).click it "waits" do with_ajax_wait expect(css_coverage_type_everquote).to be_visible end end it "waits" do with_ajax_wait expect(css_coverage_type_everquote).to be_visible end doendit "Stage 3" do select(coverage_type, from: css_coverage_type_everquote) find(css_has_auto_insurance).click ... Notes and Assumptions : block to indent code is always 3 lines long (with... expect... and end). It would be nice to allow for more than i inner code line but not required for the simplest case. the block itself should have an extra 2 space indent. there are other end's that are not related (example shown just before "Stage 3" it would be nice to be able to specify an inner pattern also, e.g. only these blocks that have expect starting the code line being indented.I think awk is probably the tool due to the ability to read consecutive lines but I am struggling to know how to write this. I'm imagining this is a generally useful q&a as adding wrapper within files is not uncommon. Somewhat similar to my previous question: How to use awk to indent a source file based on simple rules? However in this case I am adding the wrapper plus the indent. | To apply the changes to your current shell you need to "source" it and not to "execute" your script. So, if your script is called "script.sh" then instead of executing it as ./script.sh , source it with . ./script.sh and your changes will be applied to the current session. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
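For the wrapping task in the question above, a gawk sketch that leans on the stated assumption that each with_ajax_wait block is exactly three lines (the spec file name is a placeholder):

    awk '
    /^[[:space:]]*with_ajax_wait[[:space:]]*$/ {
        ind = $0; sub(/[^[:space:]].*$/, "", ind)   # keep only the leading indentation
        print ind "it \"waits\" do"
        print "  " $0                               # with_ajax_wait, two spaces deeper
        getline; print "  " $0                      # the expect(...) line
        getline; print "  " $0                      # the closing end
        print ind "end"
        next
    }
    { print }' spec_file.rb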
227,520 | #introif [ -n "$1" ] then echo 666 else echo 555fiexit; Actually I do want to echo 555 while I don't want to do anything in the first block, what should I do? I noticed that I can't just remove echo 666 . | Just using no-op : if [ -n "$1" ]; then :else echo 555fiexit or invert the logic: if [ -z "$1" ]; then echo 555fiexit | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
227,538 | I am writing a script (Bash) to transfer files from my local Linux machine to Windows servers.The Windows servers are accessible normally using the SAMBA shares, and I am able to mount a folder to my Linux machine using the mount.cifs command with the proper Windows credentials. Because I do not want to mount every server in advance nor mounting dynamically using sudo (the script is executed as a normal user, not root), I am just wondering if the server can be accessed by another means, like a TCP pipe or a similar way. For example, under Windows I can mount my server's folder to a drive letter using the net use command, but as well without having being mounted like this: c:> net use \\my-server.domain.com passwd123 /user:domain\myuserc:> cp d:\myfiles.zip \\my-server.domain.com\d$\temp\destination And if I make a net use , I can see the open connection (without letter assigned): Status Local Remote Network-------------------------------------------------------------------------------OK \\myserver.domain.net\IPC$ Microsoft Windows NetworkThe command completed successfully. I do not want to install sshd nor ftpd on the Windows Server. I am looking to do it only with the SMB protocol. As a fallback I will execute a mount like sudo mount.cifs [options] /mnt/temp-folder and sudo umount /tmp/temp-folder after the copy of the files. | You can use the smbclient program to give you an FTP-like interface to the Windows file share without having to install FTP on the Windows machine. Here follows some examples: Transfer file from local (unix/linux) to Windows: smbclient //server.domain.org/d$ <password> -W domain.org -U <my-user> -c "put file-local.xml folder1\folder2\file.xml" Transfer file from Windows to Linux: There are two options, the first is using the command 'get' with smbclient and a the second, a shortest one: smbget : 1. smbclient: `smbclient //server.domain.org/d$ <password> -W domain.org -U <my-user> -c "get folder1\folder2\file.xml file-local.xml"`2. smbget: `smbget -u <my-user> -p <password> -w domain.org -o destination-file.txt smb://server.domain.org/d$/folder1/folder2/source-file.txt` | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41302/"
]
} |
227,560 | I have a BTRFS RAID-1 filesystem with 2 legs. One disk needs to be replaced because of re-occuring read errors. Thus, the plan is: add a 3rd leg -> result should be: 3 way mirror remove the faulty disk -> result should be: 2 way mirror Thus, I did following steps: btrfs dev add /dev/new_device /mnt/foobtrfs balance /mnt/foo I assume that btrfs does the right thing, i.e. create a 3 way mirror. The alternative would be to use a balance filter, I guess. But since the filesystem already is a RAID-1 one, that shouldn't be necessary? I am a bit concerned because a btrfs fi show prints this: Before balance start: Total devices 3 FS bytes used 2.15TiB devid 1 size 2.73TiB used 2.16TiB path /dev/left devid 2 size 2.73TiB used 2.16TiB path /dev/right devid 3 size 2.73TiB used 0.00B path /dev/new_device During balancing: Total devices 3 FS bytes used 2.15TiB devid 1 size 2.73TiB used 1.93TiB path /dev/left devid 2 size 2.73TiB used 1.93TiB path /dev/right devid 3 size 2.73TiB used 458.03GiB path /dev/new_device I mean, this looks like btrfs balances one half of the existing RAID-1 group to a single disk ... right? Thus, my question, do I need to specify a balance filter to get a 3-way mirror? PS: Does btrfs even support n-way mirrors? A note in the btrfs wiki says that it does not - but perhaps it is outdated? Oh boy, cks has a pretty recent article on the 2-way limit . | Currently, btrfs does not support n-way mirrors . Btrfs does have a special replace subcommand: btrfs replace start /dev/left /dev/new_device /mnt/foo Reading between the lines of the btrfs-replace man page, this command should be able to use both existing legs - e.g. for situations where both legs have read errors - but both error sets are disjoint. The btrfs replace command is executed in the background - you can check its status via the status subcommand, e.g.: btrfs replace status /mnt/foo45.4% done, 0 write errs, 0 uncorr. read errs Alternatively, one can also add a device to raid-1 filesytem and then delete an existing leg: btrfs dev add /dev/mapper/new_device /mnt/foobtrfs dev delete /dev/mapper/right /mnt/foo The add should return fast, since it justs adds the device (issue a btrfs fi show to confirm). The following delete should trigger a balancing between the remaining devices such that each extend is available on each remaining device. Thus, the command is potentially very long running. This method also works to deal with the situation described in the question. In comparison with btrfs replace the add/delete cycle spams the syslog with low-level info messages. Also, it takes much longer to finish (e.g. 2-3 times longer, in my test system with 3 TB SATA drives, 80 % FS usage). Finally, after the actual replacement, if the newer devices are larger than the original devices, you will need to issue a btrfs fi resize on each device to utilize the entire disk space available. For the replace example at the top, this looks like something like: btrfs fi resize <devid>:max /mnt/foo where devid stands for the device id which btrfs fi show returns. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
227,561 | With awk, I would like to insert a numbering like Record n°i# in front of each line starting with Toto only when they are located in-between two specific patterns start=ABCD and stop=EFGH . Input file is: (the Blabla can be anything in reality) Blabla Toto BlablaBlabla TotoToto BlablaABCDToto BlablaToto BlablaBlablaToto BlablaEFGHToto BlablaBlabla Toto BlablaBlabla TotoABCDToto BlablaBlabla TotoToto BlablaToto BlablaBlablaEFGHToto BlablaBlabla Toto Desired output file is : Blabla Toto BlablaBlabla TotoToto BlablaABCDRecord n°1#Toto BlablaRecord n°2#Toto BlablaBlablaRecord n°3#Toto BlablaEFGHToto BlablaBlabla Toto BlablaBlabla TotoABCDRecord n°1#Toto BlablaBlabla TotoRecord n°2#Toto BlablaRecord n°3#Toto BlablaBlablaEFGHToto BlablaBlabla Toto I've tried the following script (which must run over bash and gawk v4.0.1), but it doesn't work : #!/bin/bashawk 'BEGIN{indice=1; FS="\n"; RS=""}/^ABCD$/,/^EFGH$/{if(/^Toto/){sub(/^Toto/,"Record n° "indice"\\#Toto"); indice++}print}' input.txt > output.txt | Currently, btrfs does not support n-way mirrors . Btrfs does have a special replace subcommand: btrfs replace start /dev/left /dev/new_device /mnt/foo Reading between the lines of the btrfs-replace man page, this command should be able to use both existing legs - e.g. for situations where both legs have read errors - but both error sets are disjoint. The btrfs replace command is executed in the background - you can check its status via the status subcommand, e.g.: btrfs replace status /mnt/foo45.4% done, 0 write errs, 0 uncorr. read errs Alternatively, one can also add a device to raid-1 filesytem and then delete an existing leg: btrfs dev add /dev/mapper/new_device /mnt/foobtrfs dev delete /dev/mapper/right /mnt/foo The add should return fast, since it justs adds the device (issue a btrfs fi show to confirm). The following delete should trigger a balancing between the remaining devices such that each extend is available on each remaining device. Thus, the command is potentially very long running. This method also works to deal with the situation described in the question. In comparison with btrfs replace the add/delete cycle spams the syslog with low-level info messages. Also, it takes much longer to finish (e.g. 2-3 times longer, in my test system with 3 TB SATA drives, 80 % FS usage). Finally, after the actual replacement, if the newer devices are larger than the original devices, you will need to issue a btrfs fi resize on each device to utilize the entire disk space available. For the replace example at the top, this looks like something like: btrfs fi resize <devid>:max /mnt/foo where devid stands for the device id which btrfs fi show returns. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120426/"
]
} |
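For the numbering task in the question above, a plain line-by-line gawk sketch (default FS/RS rather than the paragraph mode attempted in the question) could be:

    awk '
    /^ABCD$/ { in_block = 1; i = 0 }
    /^EFGH$/ { in_block = 0 }
    in_block && /^Toto/ { $0 = "Record n°" ++i "#" $0 }
    { print }' input.txt > output.txt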
227,577 | I'm a long time Linux user for over 15 years but one thing I hate with a passion is the mandated directory structure. I don't like that /usr/bin is the dumping ground for binaries or libs in /usr/lib , /usr/lib32 , /usr/libx32 , /lib , /lib32 etc... Random stuff in /usr/share etc. It's dumb and confusing. But some like it and tastes differ. I want a directory structure where each package is isolated. Imagine instead if the media player dragon had it's own structure: /software/dragon/software/dragon/bin/x86/dragon/software/dragon/doc/README/software/dragon/doc/copyright/software/dragon/lib/x86/libdragon.so Or: /software/zlib/include/zlib.h/software/zlib/lib/1.2.8/x86/libz.so/software/zlib/lib/1.2.8/x64/libz.so/software/zlib/doc/examples/.../software/zlib/man/... You get the point. What are my options? Is there any Linux distro that uses something like my scheme? Can some distro be modified to work like I want it (Gentoo??) or do I need LFS? Is there any prior art in this area? Like publications on if the scheme is feasible or unfeasible? Not looking for OS X. :) But OS X-inspired is totally ok. Edit : I have no idea how PATH , LD_LIBRARY_PATH and other environment variables that depend on a small set of paths should work out. I'm thinking that if I have the KDE editor Kate installed in /software/kate/bin/x86/bin/kate then I'm ok with having to type the full path to the binary to start it. How it should work for dynamic libraries and dlopen calls, I don't know but it can't be an unsolvable engineering problem. | First, an up-front conflict-of-interest disclaimer: I am a long-time GoboLinux developer. Second, an up-front claim of domain expertise: I am a long-time GoboLinux developer. There are a few different structures in current use. GoboLinux has one, and tools like GNU Stow , Homebrew , etc, use something quite similar (primarily for user programs). NixOS also uses a non-standard hierarchy for programs, and philosophy of life. It's also a reasonably common LFS experiment. I'm going to describe all of those, and then comment from experience on how that works out in practice ("feasibility"). The short answer is that yes, it's feasible, but you have to really want it . GoboLinux GoboLinux has a structure very similar to what you describe. Software is installed under /Programs : /Programs/ZSH/5.0.8 contains all the files belonging to ZSH 5.0.8, in the usual bin / lib /... directories. The system tools create symlinks to those files under a /System/Links hierarchy, which maps onto /usr ¹. The PATH variable contains only the single unified executable directory, and LD_LIBRARY_PATH is unused. Multiple versions of software can coexist at once, but only one file by a given name ( bin/zsh ) will be linked actively at once. You can access the others by their full paths. A set of compatibility symlinks also exists, so /bin and /usr/bin map to the unified executables directory, and so on. This makes life easier for software at run time. A kernel patch, GoboHide, allows those compatibility symlinks to be hidden from file listings (but still traversable). Contra another answer, you do not need to modify kernel code: GoboHide is purely cosmetic, and the kernel does not depend on user-space paths in general². GoboLinux does have a bespoke init system, but that is also not required to do this. The tagline has always been "the filesystem is the package manager", but there are reasonably ordinary package-management tools in the system. You can do everything using cp , rm , and ln , though. 
If you want to use GoboLinux, you are very welcome. I will note, though, that it's a small development team, and you're likely to find that some software you want isn't packaged up if nobody has wanted to use it before. The good news is that it's generally fairly easy to build a program for the system (a standard "recipe" is about three lines long); the bad news is that sometimes it's unpleasantly complicated, which I'll cover more below. Publications There are a few "publications". I gave a presentation at linux.conf.au 2010 on the system as a whole that covers everything generally, which is available in video: ogv mp4 (also on your local Linux Australia mirror); I also wrote up my notes into prose. There are also a few older documents, including the famous " I am not clueless ", on the GoboLinux website , which addresses some objections and issues. I think that we're all a bit less gung-ho these days, and I suspect that a future release will adopt /usr as the base location for the symlinks. NixOS NixOS puts each installed program into its own directory under /nix/store . Those directories are named something like /nix/store/5rnfzla9kcx4mj5zdc7nlnv8na1najvg-firefox-3.5.4/ — there is a cryptographic hash representing the whole set of dependencies and configuration leading to that program. Inside that directory are all the associated files, with more-or-less normal locations locally. It also allows you to have multiple versions around at once, and to use any of them. NixOS has a whole philosophy associated with it of reproducible configuration: it's essentially got a configuration management system baked into it from the start. It relies on some environmental manipulation to present the right world of installed programs to the user. LFS It's fairly straightforward to go through Linux From Scratch and set up exactly the hierarchy you want: just make the directories and configure everything to install in the right place. I've done it a few times in building GoboLinux experiments, and it's not substantially harder than plain LFS. You do need to make the compatibility symlinks in that case; otherwise it is substantially harder, but careful use of union mounts could probably avoid that if you really wanted. I feel like there was an LFS Hint about exactly that at one point, but I can't seem to find it now. On Feasibility The thing about the FHS is that it's a standard, it's very common, and it broadly reflects the existing usage at the time it was written. Most users will never be on a system that doesn't follow that layout in essence. The result of that is that lots of software has latent dependencies on it that nobody realises, often completely unintentionally. All those scripts with #!/bin/bash ? No good if you don't have Bash there. That is why GoboLinux has all those compatibility symlinks; it's just practical. A lot of software fails to function either at build time or at run time under a non-standard layout, and then it requires patching to correct, often quite intrusively. Your basic Autoconf program will usually happily install itself wherever you tell it, and it's fairly easy to automate the process of passing in the correct --prefix . Other build systems aren't always so nice, either by intentionally baking in the hierarchy, or by leading authors to write non-portable configuration. CMake is a major offender in the latter category. That means that if you want to live in this world you have to be prepared to do a lot of fiddly work up front in other people's build systems. 
It is a real hassle to have to dynamically patch generated files during compilation. Runtime is another matter again. Many programs have assumptions about where their own files, or someone else's files, are found either relative to them or absolutely. When you start using symlinks to present a consistent view, lots of programs have bugs handling them (or sometimes, arguably correct behaviour that is unhelpful to you). For example, a tool foobar may expect to find the baz executable next to it, or in ../sbin . Depending on whether it reads its symlink or not, those can be two different places, and neither of them may be correct anyway. A combined problem is the /usr/share directory. It's for shared files, of course, but when you put every program in its own prefix they're no longer actually shared. That leads to programs unable to find standard icons and the like. GoboLinux dealt with this in a pretty ugly way: at build time, $prefix/share was a symlink to $prefix/Shared , and after building the link was pointed to the global share directory instead. It now uses compile-time sandboxing and file movement to deal with share (and the other directories), but runtime errors from reading links can still be an issue. Suites of multiple programs are another problem. GoboLinux has never gotten GNOME working fully, and I don't believe NixOS has either, because the layout interdependencies are so baked in that it's just intractable to cure them all. So, yes, it's feasible , but: There is quite a lot of work involved in just making things function. Some software may just never work. People will look at you funny. All of those may or may not be a problem for you. ¹ Version 14.01 uses /System/Index , which maps directly onto /usr . I suspect a future version may drop the Links/Index hierarchy and use /usr across the board. ² It does require /bin/sh to exist by default. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/227577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57817/"
]
} |
227,581 | Related, but no satisfactory answers: How can I split a large text file into chunks of 500 words or so? I'm trying to take a text file ( http://mattmahoney.net/dc/text8.zip ) with > 10^7 words all in one line, and split it into lines with N words each. My current approach works, but is fairly slow and ugly (using shell script): i=0for word in $(sed -e 's/\s\+/\n/g' input.txt)do echo -n "${word} " > output.txt let "i=i+1" if [ "$i" -eq "1000" ] then echo > output.txt let "i=0" fidone Any tips on how I can make this faster or more compact? | Assuming your definition of word is a sequence of non-blank characters separated by blanks, here's an awk solution for your single-line file awk '{for (i=1; i<=NF; ++i)printf "%s%s", $i, i % 500? " ": "\n"}i % 500{print ""}' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132743/"
]
} |
227,583 | I am using Linux oess (CentOS). I am working on a VM: In the terminal, I'm trying to: ping 8.8.8.8 to see my connectivity. It says: Network is unreachable Then I typed: ifconfig: inet addr: 192.168.56.101 Then: sudo /sbin/route add -net 0.0.0.0 gw 192.168.56.101 eth0 Now I'm doing the same ping and it says: Destination host is unreachable for all the sequences. What is the source of the problem? route output: | first things first.can you ping 192.168.56.1 ? if so then you have an IP connection to the router, set this as your default route. otherwise try pinging 192.168.56.255 (broadcast) to see on what address you might getreplies. see arp -a to check what addresses you can find. can you ping 8.8.4.4 (google) after changing the default route? if so you have internet access. if not check the router. can you ping www.google.com? if not you might have a dns problem do you get results from nslookup www.google.com ? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132733/"
]
} |
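If the first check succeeds (the gateway answers on 192.168.56.1, which is an assumption about this host-only/NAT network), adding it as the default route and retesting looks like:

    sudo ip route add default via 192.168.56.1 dev eth0   # iproute2 syntax
    # or, with the older net-tools still common on CentOS 6:
    sudo route add default gw 192.168.56.1 eth0
    ping -c 3 8.8.8.8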
227,617 | I just made a small mistake and reformatted my swap partition. It's still formatted as a swap partition - I was fortunate not to touch anything more important. However, I notice that the uuid has changed. Therefore, it no longer matches the uuid in /etc/fstab. This doesn't cause me any immediate problems, presumably because swap is semi-redundant with modern RAM. Still, I would like to fix the problem. First, is there a command that lets me verify my hypothesis - that my swap hasn't been detected by fstab after the uuid change? I looked at findmnt on a separate computer to see whether swap normally gets displayed - it doesn't. So what command shows you which partition, if any, is being utilised as swap? Second, I presume I can just manually edit the fstab and change the uuid it 'expects' to the new uuid. Is that the 'right' way to fix it? Perhaps there are tools for 'safe' editing of fstab entries (like for grub.cfg) which I should look at (even if, in my case, not much can go wrong editing manually). | first things first.can you ping 192.168.56.1 ? if so then you have an IP connection to the router, set this as your default route. otherwise try pinging 192.168.56.255 (broadcast) to see on what address you might getreplies. see arp -a to check what addresses you can find. can you ping 8.8.4.4 (google) after changing the default route? if so you have internet access. if not check the router. can you ping www.google.com? if not you might have a dns problem do you get results from nslookup www.google.com ? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102852/"
]
} |
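To address the two practical halves of the question (device names below are placeholders): check what is actually active as swap, then either update /etc/fstab or give the partition its old UUID back:

    cat /proc/swaps                  # or: swapon -s; both list the active swap devices
    blkid /dev/sdXN                  # shows the UUID the re-made swap now carries

    # either edit the UUID=... line in /etc/fstab to the new value, or restore the old one:
    sudo swapoff /dev/sdXN
    sudo mkswap -U <old-uuid-from-fstab> /dev/sdXN
    sudo swapon -a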
227,647 | I'm on a Mac (OSX). I've accidentally deleted my ssh keys, but I haven't restarted my computer yet so I'm still able to access servers with my key. I guess the ssh-agent has some form of it in memory? Is there any way to retrieve the key from the ssh-agent?I still remember the password etc. | Depends on how much time you have. If you know C than the safest way is to connect with gdb to the ssh-agent process (must be root) and print the key data. Identity keys are stored in an array called idtable which contains a linked list of identities. So, you can print the BIGNUM data (as defined in (1)) like: (gdb) call BN_bn2hex(idtable[2]->idlist->tqh_first->key->rsa->n) where the number 2 is the version (you probably need 2) and the last element is one of the BIGNUM (the rest are engine, e, d, p, q, dmp1, dmq1, iqmp). Now to use this data you need to write a small utility program where you define a RSA struct (defined as in (1)) and populate them. Probably you could write another utility program to do this automatically but then you need more time, you can just print the data manually. Then you call the PEM_write_RSAPrivateKey (2) function with the above RSA data and you have a new unencrypted rsa file. Sorry for not having more details but if you have time it could be a starting point. (1) /usr/include/openssl/rsa.h (2) see man page for pem(3) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132786/"
]
} |
227,648 | So I'm setting up a WordPress backup guide/making a backup schedule for myself for real. I want to do MySQL dumps daily, but the command either requires -p then user input or --password="plain text password" Could I pass it to a file that is atleast MD5 or better hashed and protected to increase security but make the command require no user input? Any help is appreciated! For Reference here is the command I want to run mysqldump -u [username] --password=~/wp_backups/sqldumps/.sqlpwd [database name] > ~/wp_backups/sqldumps/"$(date '+%F').sql" | You have following password options: provide the password on the command line through the -p option provide the password via the MYSQL_PWD environment variable put your configuration in the ~/.my.cnf file under the [mysqldump] section In all cases your client needs a plain text password to be able to authenticate. You mentioned hashes, but the trait of a hash is that it's a one way conversion function (i.e. you won't be able to restore the original password from a hash), therefore it's unusable as the authentication token. Since you are backing up the Wordpress database from, allegedly, the same account that hosts your Wordpress there is no security improvements of trying to hide the password from the user that runs Wordpress (the database credentials can be easily extracted from the wp-config.php file anyway). So, I'd suggest to define the following ~/.my.cnf : [mysqldump]host = your_MySQL_server_name_or_IPport = 3306user = database_user_namepassword = database_password Then ensure that the file has the 0600 permissions. This way mysqldump does not need any database credential specified on its command line (they will be read from the ~/.my.cnf file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130767/"
]
} |
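With that [mysqldump] section in place and the file locked down, the backup line from the question needs neither -u nor --password (the database name is whatever WordPress uses):

    chmod 600 ~/.my.cnf
    mysqldump wordpressdb > ~/wp_backups/sqldumps/"$(date '+%F')".sql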
227,653 | With the command of df , I can get something like: Filesystem 1K-blocks Used Available Use% Mounted on/dev/root 197844228 15578648 180242500 8% /devtmpfs 4101368 0 4101368 0% /devtmpfs 820648 292 820356 1% /runtmpfs 5120 0 5120 0% /run/locktmpfs 1693720 4 1693716 1% /run/shm What if I just want to keep the 180242500 number recorded of / and store it in a file (like disk-space-available.txt ) If I use df >> disk-space-available.txt it will store all the content while I just want the raw number in that file. For example if there is something like this then it's working: df -OUTPUT=raw-available-number180242500 What can I do? | You can filter easily with awk , checking if the last field equal / , then print the corresponding 4th field: df | awk '$NF == "/" { print $4 }' >> output or: df / | awk 'NR == 2 { print $4 }' >> output | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
227,662 | I want to rename multiple files (file1 ... fileN to file1_renamed ... fileN_renamed) using find command: find . -type f -name 'file*' -exec mv filename='{}' $(basename $filename)_renamed ';' But getting this error: mv: cannot stat ‘filename=./file1’: No such file or directory This not working because filename is not interpreted as shell variable. | The following is a direct fix of your approach: find . -type f -name 'file*' -exec sh -c 'x="{}"; mv "$x" "${x}_renamed"' \; However, this is very expensive if you have lots of matching files, because you start a fresh shell (that executes a mv ) for each match. And if you have funny characters in any file name, this will explode. A more efficient and secure approach is this: find . -type f -name 'file*' -print0 | xargs --null -I{} mv {} {}_renamed It also has the benefit of working with strangely named files. If find supports it, this can be reduced to find . -type f -name 'file*' -exec mv {} {}_renamed \; The xargs version is useful when not using {} , as in find .... -print0 | xargs --null rm Here rm gets called once (or with lots of files several times), but not for every file. I removed the basename in you question, because it is probably wrong: you would move foo/bar/file8 to file8_renamed , not foo/bar/file8_renamed . Edits (as suggested in comments): Added shortened find without xargs Added security sticker | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/227662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90223/"
]
} |
227,701 | In applications like Firefox Ctrl - Insert and Shift - Insert work like Ctrl - c and Ctrl - v (modifying the secondary clipboard), but in XTerm they do not work like the common Ctrl - Shift - c and Ctrl - Shift - v : Ctrl - Insert does not change the clipboards, and instead prints literally ;5~ on the current prompt. Shift - Insert pastes the primary clipboard instead of the secondary. Can I fix this in .inputrc or otherwise? It would be nice to have two-stroke cross-platform cut and paste shortcuts everywhere. | xterm, whose conventions were established many years before Firefox, and even the web, was invented, is controlled by application resources . These are merged from several places, including files like /usr/share/X11/app-defaults/XTerm , and also information held by the X11 server seen with xrdb -q . You can override these resources by placing, for example, things like the following in the file ~/.Xdefaults : XTerm*VT100.Translations: #override\n\ Shift Ctrl <KeyPress> v: insert-selection(CLIPBOARD)\n\ Shift Ctrl <KeyPress> c: copy-selection(CLIPBOARD)\n This binds ctrl-shift-v to inserting the clipboard contents.I'm not clear exactly what you wanted, so check the man page for the functions and the PRIMARY, SECONDARY and CUT_BUFFER0 selections. You can presumably add (don't forget the backslash on preceding lines): Shift <Key>Insert: insert-selection(SECONDARY)\n\Ctrl <Key>Insert: copy-selection(SECONDARY)\n | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
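To load the new resources into the running X server without logging out, something like this is usually enough (only xterms started afterwards pick up the bindings):

    xrdb -merge ~/.Xdefaults
    xterm &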
227,817 | I am looking for the Linux command and option combination to display the contents of a given file byte by byte with the character and its numerical representation. I was under the impression that in order to do this, I would use the following: od -c [file] However, I have been told this is incorrect. | The key is the character and its numerical representation so -c only gives you half of that. One solution is od -c -b file but of course there are lots of different number representations to choose from. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129848/"
]
} |
227,833 | I've recently switched from using windows and I'll now be running linux on my computer. In windows there's the program files folder. Most of a programs files go into its own folder there which to me seems easier to manage and browse around. Program files in linux are stored in different places, what's the reasoning for doing this? Is it easier for developers to develop this way? | The key is the character and its numerical representation so -c only gives you half of that. One solution is od -c -b file but of course there are lots of different number representations to choose from. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132910/"
]
} |
227,876 | I'm a new Linux user trying to change the screen resolution as there is no option under display. I have successfully managed to add new resolutions by fluke by following online guide. I don't have a GPU, I don't know if this is the issue? Below is my xrandr -q output. root@kali:~# xrandr -qxrandr: Failed to get size of gamma for output defaultScreen 0: minimum 1280 x 1024, current 1280 x 1024, maximum 1280 x 1024default connected 1280x1024+0+0 0mm x 0mm 1280x1024 0.0* 1920x1200_60.00 (0x145) 193.2MHz h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.6KHz v: height 1200 start 1203 end 1209 total 1245 clock 59.9Hz 1440x900_59.90 (0x156) 106.3MHz h: width 1440 start 1520 end 1672 total 1904 skew 0 clock 55.8KHz v: height 900 start 901 end 904 total 932 clock 59.9Hz | Here are the steps you need to add a new custom resolution and apply it. Following steps are for adding a 1920x1080 resolution, but you can use it for any other resolution you want. But make sure your monitor and onboard graphics support that resolution. # First we need to get the modeline string for xrandr# Luckily, the tool "gtf" will help you calculate it.# All you have to do is to pass the resolution & the-# refresh-rate as the command parameters:gtf 1920 1080 60# In this case, the horizontal resolution is 1920px the# vertical resolution is 1080px & refresh-rate is 60Hz.# IMPORTANT: BE SURE THE MONITOR SUPPORTS THE RESOLUTION# Typically, it outputs a line starting with "Modeline"# e.g. "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync# Copy this entire string (except for the starting "Modeline")# Now, use "xrandr" to make the system recognize a new# display mode. Pass the copied string as the parameter# to the --newmode option:xrandr --newmode "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync# Well, the string within the quotes is the nick/alias# of the display mode - you can as well pass something# as "MyAwesomeHDResolution". But, careful! :-|# Then all you have to do is to add the new mode to the# display you want to apply, like this:xrandr --addmode VGA1 "1920x1080_60.00"# VGA1 is the display name, it might differ for you.# Run "xrandr" without any parameters to be sure.# The last parameter is the mode-alias/name which# you've set in the previous command (--newmode)# It should add the new mode to the display & apply it.# Usually unlikely, but if it doesn't apply automatically# then force it with this command:xrandr --output VGA1 --mode "1920x1080_60.00" Original source: https://gist.github.com/debloper/2793261 I also wrote a script that does all these steps automatically. You can try it out if the above steps seem too complicated for you: https://gist.github.com/chirag64/7853413 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/227876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132940/"
]
} |
227,889 | I am running Windows 10 on my Surface Pro 3. I installed Cygwin and also added some useful packages (gvim, nedit, emacs, vim, g++). However, when I run gvim, I get "Can't open display". The same thing happens with nedit. When I did echo $DISPLAY, I showed nothing so I set the DISPLAY to :0.0. I still get "can't open display." I tried removing cygwin and re-installing but I get the same problem. | Unix GUI programs display through an X server . Cygwin doesn't automatically start an X server. You need to install the packages xorg-server and xinit , and run startxwin . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132946/"
]
} |
227,891 | diff -u file1.txt file2.txt > patchfile creates a patch file which consists of instructions for patch to convert file1.txt to be exactly like file2.txt Can't this be done using the cp command instead? I can imagine this to be useful when the file is too large and has to be transferred over a network, where this approach might save bandwidth. Is there any other way to use diff/patch which would be advantageous in other scenarios? | Diffs can be more complicated than just comparing one file versus another. They can compare entire directory hierarchies. Consider the example that I want to fix a bug in GCC. My change adds a line or two in 4 or 5 files and deletes a handful of lines in those and other files. If I want to communicate these changes to someone, potentially for inclusion into GCC, my options are: copy the entire source tree; copy only the files that were changed; or supply just the changes I've made. Copying the entire source tree doesn't make sense, but what about the other two options? That gets at the core of your question. Now consider that someone else also worked on the same file as I did and we both give our changes to someone. How will this person know what we've done and whether the changes are compatible (different parts of the file) or conflict (same lines of the file)? He will diff them! The diff can tell him how the files differ from each other and from the unmodified source file. If the diff is what is needed, it makes more sense to just send the diff in the first place. A diff can also contain changes from more than one file, so while I edited 9 files in total, I can provide a single diff file to describe those changes. Diffs can also be used to provide history. What if a change three months ago caused a bug I only discovered today? If I can narrow down when the bug was introduced and can isolate it to a specific change, I can use the diff to "undo" or revert the change. This is not something I could as easily do if I were only copying files around. This all ties into source version control, where programs may record a file's history as a series of diffs from the time it was created until today. The diffs provide history (I can recreate the file as it was on any particular day), I can see who to blame for breaking something (the diff has an owner) and I can easily submit changes to upstream projects by giving them specific diffs (maybe they are only interested in one change when I've made many). In summary, yes, cp is easier than diff and patch , but the utility of diff and patch is greater than cp for situations where how files change is important to track. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/227891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132948/"
]
} |
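To make the diff/patch workflow described in the answer above concrete, here is a minimal sketch; the file names are placeholders invented for the example:

    cp original.c modified.c
    # ... edit modified.c ...
    diff -u original.c modified.c > fix.patch    # capture only the changes
    # on another machine that already has its own copy of original.c:
    patch original.c < fix.patch                 # apply the changes

Only fix.patch has to be transferred, which is normally far smaller than the files themselves.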
227,910 | I found a good replacement IDE for Delphi called Lazarus. But I don't have a question for programmers. Will the statically linked Linux binary work on all Linux distributions? I.e. it does not matter on what Linux distro I built it and it will work on Debian / ArchLinux / Ubuntu / OpenSUSE / ... whatever? As a result of my findings, does really only matter 32bit vs 64bit? I want to be sure before I publish. | This answer was first written for the more general question "will my binary run on all distros", but it addresses statically linked binaries in the second half. For anything that is more complex than a statically linked hello world, the answer is probably no . Without testing it on distribution X, assume the answer is no for X. If you want to ship your software in binary form, restrict yourself to a few popular distributions for the field of use of your software (desktop, server, embedded, ...) the latest one or two versions of each Otherwise you end up with houndreds of distribution of all sizes, versions and ages (ten year old distribution are still in use and supported). Test for those. Just a few pointer on what can (and will) go wrong otherwise: The package of a tool/library you need is named differently across distributions and even versions of the same distribution The libraries you need are too new or too old (wrong version). Don't assume just because your program can link, it links with the right library. The same library (file on disk) is differently named on different distributions, making linking impossible 32bit on 64bit: the 32bit environment might not be installed or some non-essential 32bit library is moved into an extra package apart from the 32on64 environment, so you have an extra dependency just for this case. Shell: don't assume your version of Bash. Don't assume even Bash. Tools: don't assume some non POSIX command line tool exists anywhere. Tools: don't assume the tool recognizes an option just because the GNU version of your distro does. Kernel interfaces: Don't assume the existence or structure of files in /proc just because they exist/have the structure on your machine Java: are you really sure your program runs on IBM's JRE as shipped with SLES without testing it? Bonus: Instruction sets: binary compiled on your machine does not run on older hardware. Is linking statically (or: bundling all the libraries you need with your software) a solution? Even if it works technically, the associated costs might be to high. So unfortunately, the answer is probably no either. Security: you shift the responsibility to update the libraries from the user of your software to yourself. Size and complexity: just for fun try to build a statically linked GUI program. Interoperability: if your software is a "plugin" of any kind, you depend on the software which calls you. Library design: if you link your program statically to GNU libc and use name services ( getpwnam() etc.), you end up linked dynamically against libc's NSS (name service switch). Library design: the library you link your program statically with uses data files or other resources (like timezones or locales). For all the reasons mentioned above, testing is essential. Get familiar with KVM or other virtualization techniques and have a VM of every Distribution you plan to support. Test your software on every VM. Use minimal installations of those distributions. Create a VM with a restricted instruction set (e.g. no SSE 4). 
Statically linked or bundled only: check your binaries with ldd to see whether they are really statically linked / use only your bundled libraries. Statically linked or bundled only: create an empty directory and copy your software into it. chroot into that directory and run your software. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/227910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
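A rough sketch of the two checks suggested at the end of the answer above, assuming a binary called myprogram (the name and paths are illustrative only):

    ldd ./myprogram                          # a fully static binary reports "not a dynamic executable"
    mkdir /tmp/emptyroot
    cp ./myprogram /tmp/emptyroot/
    sudo chroot /tmp/emptyroot /myprogram    # succeeds only if nothing outside that directory is needed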
227,941 | How do I join two files vertically without any separator? I tried to use paste -d"" a b , but this just gives me a . Sample file: 000 0 0 00001000200030004 10 20 30 40 2000 4000 .123 12.11234234534564567 | paste uses \0 for the null delimiter, as defined by POSIX : paste -d'\0' file1 file2 Using -d"" a b is the same as -d a b : the paste program sees three arguments -d , a and b , which makes a the delimiter and b the name of the sole file to paste. If you're on a GNU system (non-embedded Linux, Cygwin, …), you can use: paste -d "" file1 file2 The form -d "" is unspecified by POSIX and can produce errors on other platforms. At least BSD and heirloom paste will report a no delimiters error. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63644/"
]
} |
227,951 | This is a situation I am frequently in: I have a source server with a 320GB hard-drive inside of it, and 16GB of ram ( exact specs available here , but as this is an issue I run into frequently on other machines as well, I would prefer the answer to work on any "reasonable" Linux machine) I have a backup server with several terabytes of hard-drive space ( exact specs here , see disclaimer above) I want to transfer 320GB of data from the source server to the target server (specifically, the data from /dev/sda ). The two computers are physically next to each other, so I can run cables between them. I'm on a LAN, and I'm using a new-ish router , which means my network speeds should "ideally" be 1000Mbit, right? Security is not an issue. I am on a local network, and I trust all machines on the network, including the router. (optional) I don't necessarily need a signed checksum of the data, but basic error checking (such as dropped packets, or the drive becoming unreadable) should be detected rather than just disappear into the output. I searched for this question online, and have tested several commands. The one that appears the most often is this: ssh [email protected] 'dd bs=16M if=/dev/sda | gzip' > backup_sda.gz This command has proven too slow (it ran for an hour, only got about 80GB through the data). It took about 1 minute and 22 seconds for the 1GB test packet, and ended up being twice as fast when not compressed. The results may also have been skewed by the fact that the transferred file is less than the amount of RAM on the source system. Moreover (and this was tested on 1GB test pieces), I'm getting issues if I use the gzip command and dd ; the resulting file has a different checksum when extracted on the target, than it does if piped directly. I'm still trying to figure out why this is happening. | Since the servers are physically next to each other, and you mentioned in the comments you have physical access to them, the fastest way would be to take the hard-drive out of the first computer, place it into the second, and transfer the files over the SATA connection. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/227951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
227,986 | I have multiple files that contain something like: this is a test1 {test 123test 456test 789}this is a test2 {test 123test 456test 789}this is a test3 {test 123test 456test 789} Need to delete a section: this is a test2 {test 123test 456test 789} The lines between the braces may differ (fewer or more lines). I've tried something like: sed -i 's|This is a test2 *.* !}||g' * and sed -i 's|This is a test2, !}||g' * but no success | What about sed -e '/this is a test2/,/}/d' ? It works as follows: -e tells sed to use the next pattern; /this is a test2/,/}/ selects the lines between this is a test2 and } ; d deletes them. Usage: sed -e '/this is a test2/,/}/d' A > B applies sed to file A and writes the result to file B ; sed -i -e '/this is a test2/,/}/d' A edits file A in place. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/227986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132997/"
]
} |
227,989 | I've read in a couple of places that the PATH is set in /etc/profile or the .profile file that's in the home dir. Are these the only places that the path is set in? I want a better understanding of it. In the /etc/profile file, as the following comment says "system-wide .profile file for the Bourne shell" . Does that mean that profile files are the main configuration files for bash? In that file I don't see the PATH var being set at all. In the .profile file in the home directory there's this line: PATH="$HOME/bin:$PATH" That's resetting PATH by the looks because it's concatenating the already set $PATH string with $HOME/bin: right? But if etc/profile and ~/.profile are the only files setting PATH where is $PATH coming from in that line of code if it's not defined in /etc/profile ? Can someone experienced please give a broad and detailed explanation of the PATH variable? Thanks! | There are many places where PATH can be set. The login program sets it to a default value. How this default value is configured is system-dependent. On most non-embedded Linux systems, it's taken from /etc/login.defs , with different values for root and for other users. Consult the login(1) manual on your system to find out what it does. On systems using PAM , specifically the pam_env module, environment variables can be set in the system-wide file /etc/environment and the per-user file ~/.pam_environment . Then most ways to log in (but not cron jobs) execute a login shell which reads system-wide and per-user configuration files. These files can modify the value of PATH , typically to add entries but sometimes in other ways. Which files are read depend on what the login shell is. Bourne/POSIX-style shells read /etc/profile and ~/.profile . Bash reads /etc/profile , but for the per-user file it only reads the first existing file among ~/.bash_profile , ~/.bash_login and ~/.profile . Zsh reads /etc/zshenv , ~/.zshenv , /etc/zprofile , ~/.zprofile , /etc/zlogin and ~/.zlogin . Many GUI sessions arrange to load /etc/profile and ~/.profile , but this depends on the display manager, on the desktop environment or other session startup script, and how each distribution has set these up. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/227989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132910/"
]
} |
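As a small illustration of the advice above, a per-user PATH entry is usually added in ~/.profile rather than ~/.bashrc; the directory used here is only an example:

    # in ~/.profile
    PATH="$HOME/bin:$PATH"
    export PATH

It takes effect at the next login, or after sourcing the file with . ~/.profile in an already running shell.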
228,015 | Can someone give me a command that would: move a file towards a new directory and leave a symlink in its old location towards its new one | mv moves a file, and ln -s creates a symbolic link, so the basic task is accomplished by a script that executes these two commands: #!/bin/shmv -- "$1" "$2"ln -s -- "$2" "$1" There are a few caveats. If the second argument is a directory, then mv would move the file into that directory, but ln -s would create a link to the directory rather than to the moved file. #!/bin/shset -eoriginal="$1" target="$2"if [ -d "$target" ]; then target="$target/${original##*/}"fimv -- "$original" "$target"ln -s -- "$target" "$original" Another caveat is that the first argument to ln -s is the exact text of the symbolic link. It's relative to the location of the target, not to the directory where the command is executed. If the original location is not in the current directory and the target is not expressed by an absolute path, the link will be incorrect. In this case, the path needs to be rewritten. In this case, I'll create an absolute link (a relative link would be preferable, but it's harder to get right). This script assumes that you don't have file names that end in a newline character. #!/bin/shset -eoriginal="$1" target="$2"if [ -d "$target" ]; then target="$target/${original##*/}"fimv -- "$original" "$target"case "$original" in */*) case "$target" in /*) :;; *) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}" esacesacln -s -- "$target" "$original" If you have multiple files, process them in a loop. #!/bin/shwhile [ $# -gt 1 ]; do eval "target=\${$#}" original="$1" if [ -d "$target" ]; then target="$target/${original##*/}" fi mv -- "$original" "$target" case "$original" in */*) case "$target" in /*) :;; *) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}" esac esac ln -s -- "$target" "$original" shiftdone | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/228015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133016/"
]
} |
228,018 | My scripts executes the sub shell command along the lines of: ( while ..... ) $3>$testdir/$testfile.log I get the error: line 75: syntax error near unexpected token `$3'line 75: ` ) $3>$testdir/$testfile.log' I've tried several options, and seems > is only happy when it's hard coded number rather than a variable. Am I missing a parenthesis? | mv moves a file, and ln -s creates a symbolic link, so the basic task is accomplished by a script that executes these two commands: #!/bin/shmv -- "$1" "$2"ln -s -- "$2" "$1" There are a few caveats. If the second argument is a directory, then mv would move the file into that directory, but ln -s would create a link to the directory rather than to the moved file. #!/bin/shset -eoriginal="$1" target="$2"if [ -d "$target" ]; then target="$target/${original##*/}"fimv -- "$original" "$target"ln -s -- "$target" "$original" Another caveat is that the first argument to ln -s is the exact text of the symbolic link. It's relative to the location of the target, not to the directory where the command is executed. If the original location is not in the current directory and the target is not expressed by an absolute path, the link will be incorrect. In this case, the path needs to be rewritten. In this case, I'll create an absolute link (a relative link would be preferable, but it's harder to get right). This script assumes that you don't have file names that end in a newline character. #!/bin/shset -eoriginal="$1" target="$2"if [ -d "$target" ]; then target="$target/${original##*/}"fimv -- "$original" "$target"case "$original" in */*) case "$target" in /*) :;; *) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}" esacesacln -s -- "$target" "$original" If you have multiple files, process them in a loop. #!/bin/shwhile [ $# -gt 1 ]; do eval "target=\${$#}" original="$1" if [ -d "$target" ]; then target="$target/${original##*/}" fi mv -- "$original" "$target" case "$original" in */*) case "$target" in /*) :;; *) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}" esac esac ln -s -- "$target" "$original" shiftdone | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/228018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61859/"
]
} |
228,061 | I have a .7z file which I need to extract the contents of. The problem is, that it's password protected and currently, I have to SSH into the server and enter the password. I would like to this without the need of this. Is this possible? I have tried: 7za x file.7z -ppassword pass But does not work, just returns "No files to process" | This is a year+ late, but in case anyone else googles this, use: 7za x file.7z -p'your_password' Wrapping the password in single quotes does the trick | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52676/"
]
} |
228,065 | I'm running Vuze torrent client on Linux Mint and am looking for a way to block upload completely . Obsolete: Please take it as a fact, that I am unwilling to use some other bittorent client. I am willing to change Vuze for something else if Vuze is not capable of achieving this goal. Vuze is only able to limit the upload to 5 Kbps, which is not 0! Another thing I would like to have, as I upload large amounts of legal Linux ISO torrents, is a way to easily switch between Enable upload and Disable upload modes. I can't seem to find any suitable plugin for Vuze. Something of this sort exists. It is named Auto Stopper. But it stops the upload after a certain ratio, it cannot be adjusted under 1.0. Rationale: I contribute to the network by uploading Linux ISOs. But movies uploads are prohibited in my country. And I won't risk prosecution. So: I cannot upload movies (but can download them legally) I contribute in other ways (uploading many Linux ISOs) Is there a good way to achieve my goal? 2019 Update I am aware the network needs seeders for the torrent network to function. This question is dated 2015 . Many things changed, including the way I function in networking and systems. I don't use Vuze anymore. I use Transmission (daemon on a headless server, GUI client on my personal machine). I use a VPN on my machine. I no longer download movies, I buy Blu-Rays instead. | Transmission As an alternate solution to your issue which is effectively stopping uploads, I'd suggest switching to one of the Linux' native torrent applications, like Transmission . It has an inbuilt feature which allows you to stop uploading completely. This is done by limiting globally the permitted upload bandwidth to 0 Kbps. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
228,072 | In bash $0 contains the name of the script, but in awk if I make a script named myscript.awk with the following content: #!/usr/bin/awk -fBEGIN{ print ARGV[0] } and run it, it will only print "awk". Besides, ARGV[i] with i>0 is used only for script arguments in command line.So, how to make it print the name of the script, in this case "myscript.awk"? | With GNU awk 4.1.3 in bash on cygwin: $ cat tst.sh#!/bin/awk -fBEGIN { print "Executing:", ENVIRON["_"] }$ ./tst.shExecuting: ./tst.sh I don't know how portable that is. As always, though, I wouldn't execute an awk script using a shebang in a shell script as it just robs you of possible functionality. Keep it simple and just do this instead: $ cat tst2.shawk -v cmd="$0" 'BEGIN { print "Executing:", cmd }' "$@"$ ./tst2.shExecuting: ./tst2.sh That last will work with any modern awk in any shell on any platform. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95151/"
]
} |
228,073 | According to this post can stat be used to give the atime on Linux, but FreeBSD 10.1 doesn't have the GNU stat . How do I list the atime for files? | ls -lu where -l will provide a long listing format and -u will sort by access time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228073",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133060/"
]
} |
228,084 | so I'm in the main folder for my web hosts, trying to find a file using find . I couldn't find it - it was listed as no such file or directory - and I thought maybe it isn't anywhere. However the following command doesn't work either: find index.php which is wrong cause there are a gazillion of them. Why is find not working? Is there a better command to use? | The syntax of find is not like what you have written, please read the manual page man find to get detailed idea. For example if you want to find files named index.php on the current directory and all the sub directories under it, you can use: find . -name index.php -type f If you want to search for files having names say findex.php , index.phpfoo , index.php you need to use: find . -name '*index.php*' -type f * is a glob pattern meaning zero or more characters. On the other hand if you want to look in the current directory only : find . -maxdepth 1 -name '*index.php*' -type f | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104388/"
]
} |
228,148 | I can do this in bash: while read -n1 -r -p "choose [y]es|[n]o"do if [[ $REPLY == q ]]; then break; else #whatever fidone which works but seems a bit redundant, can i do something like this instead? while [[ `read -n1 -r -p "choose [y]es|[n]o"` != q ]]do #whateverdone | You can't use the return code of read (it's zero if it gets valid, nonempty input), and you can't use its output ( read doesn't print anything). But you can put multiple commands in the condition part of a while loop. The condition of a while loop can be as complex a command as you like. while IFS= read -n1 -r -p "choose [y]es|[n]o" && [[ $REPLY != q ]]; do case $REPLY in y) echo "Yes";; n) echo "No";; *) echo "What?";; esacdone (This exits the loop if the input is q or if an end-of-file condition is detected.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92807/"
]
} |
228,158 | I want to run pure-ftpd in chroot as user,not as root,i've set server to non privileged port 5050,but i don't know how to run as user instead of root,someone know the option? | You can't use the return code of read (it's zero if it gets valid, nonempty input), and you can't use its output ( read doesn't print anything). But you can put multiple commands in the condition part of a while loop. The condition of a while loop can be as complex a command as you like. while IFS= read -n1 -r -p "choose [y]es|[n]o" && [[ $REPLY != q ]]; do case $REPLY in y) echo "Yes";; n) echo "No";; *) echo "What?";; esacdone (This exits the loop if the input is q or if an end-of-file condition is detected.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
228,222 | Consider following files: file1 : boo,8,1024foo,7,2048 file2 : foo,0,24,154noo,0,10,561 file3 : 24,154,7,1024,0 What I need is to go to File1 and check if $2==7 ; if true, take $1 , $2 and $3 from File1 ; now I have to compare if $1 from File1 equal to $1 from File2 ; if true, I have to take $3 and $4 from File2 which not exist in File1 , then I have to go to File3 and check if $1 from File3 is equal to $3 from File2 , and $2 from File3 is equal to $4 from File2 ; if yes, then I have to check if $2 from File1 is equal to $3 from File3 , then if this condition is true, I have to compare $3 from File1 with $4 from File3 , if $3 from File1 is more than $4 from File3 . I tried the following script: cat [file1] [file2] [file3] | awk -F, '{if(NF==3) {if($2==7){a[$1]=$1; b[$1]=$2; c[$1]=$3} }else {if(NF==4){if(a[$1]==$1){d[$3]=$3; e[$4]=$4} }else {if(NF==5){if(d[$1]==$1 && e[$2]==$2){print a[$1], b[$1], c[$1], d[$1]}} } } }' The desired output is: foo,7,2048,24,154,1024 | That worked for me: awk -F, 'FNR==1{++f} \ f==1 && $2==7 {a1[$1]++; a2[$2]=$3; o=$0} \ f==2 && a1[$1] {o=o","$3","$4; a3[$3]=$4} \ f==3 && a3[$1] && $2==a3[$1] && a2[$3] && $4<a2[$3] {print o}' \file1 file2 file3 Explanation : The first line ( FNR==1{++f} ) increments the file index to later determine in which file we are 1-3. file1: if $2 equals 7 fill an array a1 with $1 as index and a2 with $2 as index and $3 as value write down the o variable (output) with the first 3 fields file2: if $1 of file2 equals $1 of file1 (prevously written in a1 ) append $3 and $4 to the output variable o . fill an array a3 with $3 as index and $4 as value. file3: if: $1 equals file2s $3 (index of a3 ) $2 equals file2s $4 (value of a3 ) $3 equals file1s $2 (index of a2 ) $4 is lower than file1s $3 (value of a2 ) then: print the value of o . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
228,234 | I occasionally get tarballs where the filenames are encoded in ISO-8859-1 or some other pre-Unicode scheme. My system uses UTF-8, so when I untar these archives with the usual options ( tar xvf foo.tar ) I end up with a directory full of mojibake filenames. Until now I've been using convmv to convert the filenames to UTF-8 after they've been extracted. This is a bit inconvenient, because I either need to invoke convmv on each affected file, or else untar the file into a new directory, run convmv on the entire directory, and then move the files to where I wanted them originally. Short of coding this functionality into a shell script, is there some way of converting the archived filenames to UTF-8 on the fly, as they are being untarred? | Here is a little tar file extractor that modifies the names in memory before extracting:
#!/usr/bin/python27
import tarfile

def transform(data):
    u = data.decode('latin1')
    return u.encode('utf8')

tar = tarfile.open('archive.tar')
for m in tar.getmembers():
    m.name = transform(m.name)
tar.extractall()
Warning: unlike GNU tar , this extractor is not stripping leading / . Either add checking logic to this extractor or check each tar file before extracting with tar -t . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37849/"
]
} |
228,235 | It's very easy to use split() in JavaScript to break a string into an array. What about shell script? Say I want to do this: $ script.sh var1_var2_var3 When the user give such string var1_var2_var3 to the script.sh, inside the script it will convert the string into an array like array=( var1 var2 var3 )for name in ${array[@]}; do # some codedone | Bourne/POSIX-like shells have a split+glob operator and it's invoked every time you leave a parameter expansion ( $var , $- ...), command substitution ( $(...) ), or arithmetic expansion ( $((...)) ) unquoted in list context. Actually, you invoked it by mistake when you did for name in ${array[@]} instead of for name in "${array[@]}" . (Actually, you should beware that invoking that operator like that by mistake is source of many bugs and security vulnerabilities ). That operator is configured with the $IFS special parameter (to tell what characters to split on (though beware that space, tab and newline receive a special treatment there)) and the -f option to disable ( set -f ) or enable ( set +f ) the glob part. Also note that while the S in $IFS was originally (in the Bourne shell where $IFS comes from) for S eparator, in POSIX shells, the characters in $IFS should rather be seen as delimiters or terminators (see below for an example). So to split on _ : string='var1_var2_var3'IFS=_ # delimit on _set -f # disable the glob partarray=($string) # invoke the split+glob operatorfor i in "${array[@]}"; do # loop over the array elements. To see the distinction between separator and delimiter , try on: string='var1_var2_' That will split it into var1 and var2 only (no extra empty element). So, to make it similar to JavaScript's split() , you'd need an extra step: string='var1_var2_var3'IFS=_ # delimit on _set -f # disable the glob parttemp=${string}_ # add an extra delimiterarray=($temp) # invoke the split+glob operator (note that it would split an empty $string into 1 (not 0 ) element, like JavaScript's split() ). To see the special treatments tab, space and newline receive, compare: IFS=' '; string=' var1 var2 ' (where you get var1 and var2 ) with IFS='_'; string='_var1__var2__' where you get: '' , var1 , '' , var2 , '' . Note that the zsh shell doesn't invoke that split+glob operator implicitly like that unless in sh or ksh emulation. There, you have to invoke it explicitely. $=string for the split part, $~string for the glob part ( $=~string for both), and it also has a split operator where you can specify the separator: array=(${(s:_:)string}) or to preserve the empty elements: array=("${(@s:_:)string}") Note that there s is for splitting , not delimiting (also with $IFS , a known POSIX non-conformance of zsh ). It's different from JavaScript's split() in that an empty string is split into 0 (not 1) element. A notable difference with $IFS -splitting is that ${(s:abc:)string} splits on the abc string, while with IFS=abc , that would split on a , b or c . With zsh and ksh93 , the special treatment that space, tab or newline receive can be removed by doubling them in $IFS . As a historic note, the Bourne shell (the ancestor or modern POSIX shells) always stripped the empty elements. It also had a number of bugs related to splitting and expansion of $@ with non-default values of $IFS . For instance IFS=_; set -f; set -- $@ would not be equivalent to IFS=_; set -f; set -- $1 $2 $3... . Splitting on regexps Now for something closer to JavaScript's split() that can split on regular expressions, you'd need to rely on external utilities. 
In the POSIX tool-chest, awk has a split operator that can split on extended regular expressions (those are more or less a subset of the Perl-like regular expressions supported by JavaScript). split() { awk -v q="'" ' function quote(s) { gsub(q, q "\\" q q, s) return q s q } BEGIN { n = split(ARGV[1], a, ARGV[2]) for (i = 1; i <= n; i++) printf " %s", quote(a[i]) exit }' "$@"}string=a__b_+ceval "array=($(split "$string" '[_+]+'))" The zsh shell has builtin support for Perl-compatible regular expressions (in its zsh/pcre module), but using it to split a string, though possible is relatively cumbersome. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/228235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
228,277 | I've written about half a dozen .service files for the different parts that make up the system I'm working on. It's useful to be able to start the whole system piecemeal but I'd also like to have a single unit that starts the whole system in one call to systemctl . What's the best way to do this? | You want a target-type unit , with all the service units listed as Wants= dependencies. Then you start it using systemctl start unitname.target . (Make sure not to use systemctl isolate here; that will shut down everything except what's in your services' dependency tree, which you presumably don't want.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24092/"
]
} |
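A minimal sketch of such a target unit; the unit and service names below are made up for the example and would be replaced by the real service units:

    # /etc/systemd/system/mysystem.target
    [Unit]
    Description=All services that make up my system
    Wants=part-one.service part-two.service part-three.service

After creating the file, reload the unit files and start everything with systemctl daemon-reload followed by systemctl start mysystem.target.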
228,279 | I regularly need to update some Ubuntu 12.04 (Precise Pangolin) servers ( Rackspace ). What I do now is: copy a file to a server using SCP; log on to the server using SSH; stop Tomcat; do some copying and moving of the uploaded file; start Tomcat; repeat the exact same process with the same file on the second server (12 servers now and the number is growing). Is it possible to write a script that loops through a list of servers and does all this for me? How would I go about it? Preferably the solution would not necessitate installing anything. The majority within the company works on MacBooks, but Windows VMs are abundant. Ideally servers to be updated can simply be added/removed to change the list of servers. However, any solution that saves me the time of doing the same thing 12+ times is very much appreciated :) | There are several solutions for this - do you want to keep manual control of the steps and simply run through them simultaneously? Then look at CSSH (if you're coming from a Linux system) or SuperPutty (if you're coming from a Windows system). If you simply want to automate everything, look at Expect . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133143/"
]
} |
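If a plain shell loop over ssh/scp is acceptable instead of the tools above, a rough sketch could look like the following; the host names, file name and Tomcat paths are placeholders that would need adjusting:

    for host in web01 web02 web03; do
        scp myapp.war "$host":/tmp/ &&
        ssh "$host" 'sudo service tomcat7 stop; sudo mv /tmp/myapp.war /var/lib/tomcat7/webapps/; sudo service tomcat7 start'
    done

Reading the host list from a file instead of hard-coding it makes adding or removing servers a one-line change.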
228,300 | I thought this will be simple, but can't find out how to do it. Scenario I have a single .csv file with id_user , text , id_group columns where each column is delimited by tabs such like: "123456789" "Here's the field of the text, also contains comma" "10""987456321" "Here's the field of the text, also contains comma" "10""123654789" "Here's the field of the text, also contains comma" "11""987456123" "Here's the field of the text, also contains comma" "11" How to find the the text? Attempt awk I was looking for a way to specify the print $n delimiter, if I could do it an option will be $ awk -d '\t' '{print $2}' file.csv | sed -e 's/"//gp' where -d is the delimiter for the print option and the sed to take out the " | TAB delimiter cut You do not need sed or awk , a simple cut will do: cut -f2 infile awk If you want to use awk, the way to supply the delimiter is either through the -F argument or as a FS= postfix: awk -F '\t' '{ print $2 }' infile Or: awk '{ print $2 }' FS='\t' infile Output in all cases: "Here's the field of the text, also contains comma""Here's the field of the text, also contains comma""Here's the field of the text, also contains comma""Here's the field of the text, also contains comma" Quote delimiter If the double-quotes in the file are consistent, i.e. no embedded double-quotes in fields, you could use them as the delimiter and avoid having them in the output, e.g.: cut cut -d\" -f4 infile awk awk -F\" '{ print $4 }' infile Output in both cases: Here's the field of the text, also contains commaHere's the field of the text, also contains commaHere's the field of the text, also contains commaHere's the field of the text, also contains comma | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68382/"
]
} |
228,312 | If there was a process continually writing to a file, and I wanted to take control of the file with root, I could do something like this: sudo rm somefile; sudo touch somefile Is it possible for the appending process to append to the file between these two commands? If so, is there a way to ensure that no other command gets run in between? | A chained command line is basically a small shell script; it'll run the first command using the usual fork+exec procedure, wait for it to exit, then run the second in the same way. Between the two commands, there's some arbitrary amount of time that the shell takes in its bookkeeping and processing, during which ordinary multiprocessing takes place and arbitrary other processes may do arbitrary other things. So the answer is 'no'. (If you actually do this, you'll find that the directory entry for somefile vanishes, but the file itself remains (since it's opened by a process) until it's closed. Disk space used by the file will not be reclaimed until that happens. Meanwhile, the touch command will create a new, unrelated file with the same name and path.) If you want to change ownership of the file to root, just do sudo chown root:root somefile (though I'm not sure how that will affect processes with an open filehandle to it). If you want to destroy the current file contents, try truncate -s 0 somefile (the running process will continue to append to the now-empty file). If it's something else, perhaps clarify what you want to do. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13310/"
]
} |
228,314 | I have added a custom path to PATH variable in my /root/.bashrc file When i do sudo su; echo $PATH , it shows the entry, '/path/to/custom/bins'. But i do sudo sh -c 'echo $PATH' , it shows, /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin The folder paths added in .bashrc file are not visible. Doesn't sudo command have the same environment as a root user? | .bashrc is a configuration file of bash, only when it's executed interactively. It's only loaded when you start bash, not when you run some other program such as sh (not even if bash is invoked via the name sh ). And it's only loaded when bash is interactive, not when it's executing a script or a command with -c . sudo sh -c 'echo $PATH' or sudo bash -c 'echo $PATH' doesn't invoke an interactive shell, so .bashrc is not involved. sudo su; echo $PATH runs an interactive instance of root's shell. If that's bash, then ~root/.bashrc is loaded. This snippet executes echo $PATH once this interactive shell terminates, so whatever happens in the interactive shell has no influence on what the snippet prints at the end. But if you type echo $PATH at the prompt of the interactive shell started by sudo su , you will see the value set by ~root/.bashrc . Since .bashrc is invoked in each interactive shell, not by login shells (not even by interactive login shells, which is a design defect in bash), it's the wrong place to define environment variables. Use .bashrc for interactive bash settings such as key bindings, aliases and completion settings. Set environment variables in files that are loaded when you log in: ~/.pam_environment or ~/.profile . So set PATH in .profile instead of .bashrc , and either run a login shell with sudo -i 'echo $PATH' , or explicitly source .profile with sudo sh -c '. ~/.profile; echo $PATH' . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/228314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
228,331 | I have a script that looks like: c=0for f in */*; docp -v "$f" "/myhome/CE$(printf '%0*d' 2 $BATCHNUM)-new-stuctures_extracted/test-$(printf '%0*d' 5 $c)"c=$((c=c+1))done However, the user must provide a variable called BATCHNUM, and otherwise I need to force this script to stop running. It would be better if I could force the script that calls this script to stop too (or even the #1 script that calls the #2 script which calls this script). | The quickest way is probably to add these two lines to the start of the script: set -u # or set -o nounset: "$BATCHNUM" The first line sets the nounset option in the shell running the script, which aborts if you try to expand an unset variable; the second expands $BATCHNUM in the context of a no-op, to trigger the abort before doing anything else. If you want a more helpful error message, you could instead write: if [[ -z "$BATCHNUM" ]]; then echo "Must provide BATCHNUM in environment" 1>&2 exit 1fi Or similar. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/228331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
228,412 | For example, there is a file here that I want to download via CLI: https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.1.y/arch/arm/configs/bcmrpi_defconfig How to I download the actual file, and not the html? I tried the following, but only get an html file: [pi@raspberrypi]/usr/src/linux$ wget https://raw.githubusercontent.com/raspberrypi/linux/rpi-3.2.27/arch/arm/configs/bcmrpi_defconfig | The general problem is that github typically serves up an html page that includes the file specified along with context and operations you can perform on it, not the raw file specified. Tools like wget and curl will just save what they're given by the web server, so you need to find a way to ask the web server, github, to send you a raw file rather than an html wrapper. This is true whether you use -o -O or >>. The "...//raw.git..." address in this particular test case is probably serving raw files, and pre-solving the OP's problem as posted, which is why all of these answers work, but don't solve the more generic problem. I can download a text file, or an html-wrapped version of it from the following urls. Note the differences between them and feel free to paste them in a new tab or new window in your browser as well. html-wrapped, default: https://github.com/raspberrypi/linux/blob/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig raw link, if you right-click the [raw] button on the html page: https://github.com/raspberrypi/linux/raw/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig final url, after being redirected: https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig You can then download with either: wget https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfigcurl https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig -o bcmrpi_defconfig The simplest way would be to go to the github page of the content you want and right-click to get the [raw] link for each file. If your needs are more complex, requiring many files, etc. you may want to abandon wget and curl and just use git. It is probably a more appropriate tool for pulling data from git repositories. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59802/"
]
} |
228,532 | I have a file that looks like: 1 rs6687776 1020428 T C T C T C C C T C C C T C The 4th and 5th column are the two different possible alleles at that site. I need to change column 6 onwards so as to show 0 if there is a T allele and 1 if there is a C allele. My file is 20805 x 459. So should look like: 1 rs6687776 1020428 T C 0 1 0 1 1 1 0 1 1 1 0 1 I've tried: cat file | while read linedo if [ [,6-] = [,4] ]then echo "0" echo "1"fidone But I just end up with a file of alternating 0 's and 1 's that is 41610 rows long. Maybe AWK is more useful? | Here's another awk approach: $ awk '{a[$4]=0;a[$5]=1; for(i=6;i<=NF;i++){$i=a[$i]}}1;' file1 rs6687776 1020428 T C 0 1 0 1 1 1 0 1 1 1 0 1 Explanation a[$4]=0;a[$5]=1; : creates the array a with two keys, $4 and $5 . The value for $4 is set to 0 and that of $5 to 1. for(i=6;i<=NF;i++){$i=a[$i]} : for each field number from 6 to the last one, set that field to whatever is stored in the array for the nucleotide found. 1; : awk shorthand for "print this line". You could also do it with Perl: $ perl -lane 's/$F[3]/0/ for @F[5..$#F]; s/$F[4]/1/ for @F[5..$#F]; print "@F"' file1 rs6687776 1020428 T C 0 1 0 1 1 1 0 1 1 1 0 1 This is the same idea. The -a makes perl act like awk , splitting each line on whitespace into the array @F . We then substitute all cases of the nucleotide found in the 4th field ( $F[3] , arrays start at 0) with 0 and all cases of the 5th ( $F[4] ) with 1 . The for @F[5..$#F] means that the substitution is only applied for fields 6 to last. Finally, the modified array is printed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133375/"
]
} |
228,547 | I have to write a Bash script which checks if another Bash script contains a certain command line. Since Bash allows you to split a command line over multiple lines, my script must be able to merge the corresponding lines before the actual pattern matching can take place. How do I parse all multi-line into single-line commands in a Bash script? Example I want to check if a certain script contains the ls command - and if it contains the ls command, I want to know which parameters are passed to the ls command. To answer this question, I could use sed . But therefore I have to merge all multi-line commands first. Input: # My comment \ls \-a \-l Output: # My comment \ls -a -l Example for an invalid output: # My comment ls -a -l | Just before shellshock, I answered a question on StackOverflow about eliminating comments in bash scripts. My answer used the simple trick of creating a function by enclosing the contents of the script file inside tmp_() { ... } , and then using declare -f tmp_ to pretty-print the function. In the pretty-printed output, there are no comments and lines continued with a backslash-newline have been resolved to single lines. (Except inside backticked command substitution.) Some other reformatting is also done. For example, compound commands are split into several lines. And some forms of line continuation are not reformatted; for example, a line ending with a pipe symbol is not altered. But it should satisfy the use-case in this question. (See example output below.) Of course, the function definition needs to be evaluated, which means that the script being pretty-printed might include an injection attack. In the code I suggested, the function definition is evaluated by way of the bash feature which allows functions to be exported and shared with a child process. At the time I wrote this little hack, I believed that mechanism to be safer than calling eval , but as it turns out I was wrong. Since shellshock, there have been a number of improvements to the code bash uses to import function definitions, closing the door on at least some injection attacks, but there is clearly no assurance that the procedure is completely safe. If you are going to run the script being analyzed, then using this procedure to pretty-print it probably does not increase your vulnerability; an attacker could simply insert the dangerous code directly in the script and there would be no need to jump through hoops to hide the attack in a way which might bypass the safety checks in the function import code. All the same, you should think carefully about security issues, both with this little program and with whatever plans you might have to execute arbitrary scripts. Here is the version of the pretty-printer which works with a post-shellshock-patched bash (and will not work with previous bash versions): env "BASH_FUNC_tmp_%%=() {$(<script_name)}" bash -c 'declare -f tmp_' | tail -n+2 Substitute the name of the file containing the script for script_name , in the second line. You might want to adjust the tail command; it removes the wrapper function name, but does not remove the braces which surround the script body. The original version, which will work on pre-shellshock versions of bash, can be found in the referenced SO answer. Sample. 
Tested against the input provided by Stéphane Chazelas : { echo \\; echo a#b; echo 'foo\bar'; cat <<EOFthisis joinedthis 'aswell'$(ls -l)EOF cat <<'EOF'this is\not joinedEOF echo "$(ls -l)"; echo `ls \\-l`} This differs from Stéphane's suggested output: Lines have been indented, and many have been terminated with semicolons. Whitespace has been added and/or deleted in many lines. cat << E\OF has been changed to cat <<'EOF' , which is semantically identical. The nested continuation line in the backticked command substitution at the end has not been modified. (The continuation line in the $(...) command substituion is eliminated.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124579/"
]
} |
228,548 | What is the simplest way to extract from a file a line given by its number. E.g., I want the 666th line of somefile . How would you do this in your terminal, or in a shell script? I can see solutions like head -n 666 somefile | tail -n 1 , or even the half-incorrect cat -n somefile | grep -F 666 , but there must be something nicer, faster, and more robust. Maybe using a more obscure unix command/utility? | sed (stream editor) is the right tool for this kind of job: sed -n '666p' somefile Edit: @tachomi's solution sed '666q;d' somefile is better when operating on a huge text file, because it makes sed exit after printing the pattern without reading the rest of the file. On all other files, the difference is irrelevant. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42406/"
]
} |
228,558 | I'm trying to remove some characters from file(UTF-8). I'm using tr for this purpose: tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat File contains some foreign characters (like "Латвийская" or "àé"). tr doesn't seem to understand them: it treats them as non-alpha and removes too. I've tried changing some of my locale settings: LC_CTYPE=C LC_COLLATE=C tr -cs '[[:alpha:][:space:]]' ' ' <testdata.datLC_CTYPE=ru_RU.UTF-8 LC_COLLATE=C tr -cs '[[:alpha:][:space:]]' ' ' <testdata.datLC_CTYPE=ru_RU.UTF-8 LC_COLLATE=ru_RU.UTF-8 tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat Unfortunately, none of these worked. How can I make tr understand Unicode? | That's a known ( 1 , 2 , 3 , 4 , 5 , 6 ) limitation of the GNU implementation of tr . It's not as much that it doesn't support foreign , non-English or non-ASCII characters, but that it doesn't support multi-byte characters. Those Cyrillic characters would be treated OK, if written in the iso8859-5 (single-byte per character) character set (and your locale was using that charset), but your problem is that you're using UTF-8 where non-ASCII characters are encoded in 2 or more bytes. GNU's got a plan (see also ) to fix that and work is under way but not there yet. FreeBSD or Solaris tr don't have the problem. In the mean time, for most use cases of tr , you can use GNU sed or GNU awk which do support multi-byte characters. For instance, your: tr -cs '[[:alpha:][:space:]]' ' ' could be written: gsed -E 's/( |[^[:space:][:alpha:]])+/ /' or: gawk -v RS='( |[^[:space:][:alpha:]])+' '{printf "%s", sep $0; sep=" "}' To convert between lower and upper case ( tr '[:upper:]' '[:lower:]' ): gsed 's/[[:upper:]]/\l&/g' (that l is a lowercase L , not the 1 digit). or: gawk '{print tolower($0)}' For portability, perl is another alternative: perl -Mopen=locale -pe 's/([^[:space:][:alpha:]]| )+/ /g'perl -Mopen=locale -pe '$_=lc$_' If you know the data can be represented in a single-byte character set, then you can process it in that charset: (export LC_ALL=ru_RU.iso88595 iconv -f utf-8 | tr -cs '[:alpha:][:space:]' ' ' | iconv -t utf-8) < Russian-file.utf8 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/228558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
228,581 | I need a command which will just send an HTTP request to the required web directory? Does something like this exist? | There are lots of ways: nc www.example.com 80 : upside is you have full control over what you send, downside is you are on your own. Restrict yourself to HTTP 1.0 to minimize the things you need to type: GET / HTTP/1.0 followed by an empty line is all you need. curl http://www.example.com/ : good for normal use and debugging. Has lots of options. Especially useful: --verbose to see the HTTP request/response and --head to send a HEAD request (no body). openssl s_client -connect www.example.com:443 : Useful for debugging HTTPS servers. wget : good for downloads and maybe more. w3m , lynx , links : text-only browsers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124635/"
]
} |
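For example, the nc approach with a hand-written HTTP/1.0 request can be done non-interactively like this (the host and path are placeholders):

    printf 'GET /some/dir/ HTTP/1.0\r\nHost: www.example.com\r\n\r\n' | nc www.example.com 80

The blank line produced by the final \r\n\r\n is what tells the server the request is complete.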
228,597 | When I use cp -R inputFolder outputFolder the result is context-dependent : if outputFolder does not exist, it will be created, and the cloned folder path will be outputFolder . if outputFolder exists, then the clone created will be outputFolder/inputFolder This is horrible , because I want to create some installation script, and if the user runs it twice by mistake, he will have outputFolder created the first time, then on the second run all the stuff will be created once again in outputFolder/inputFolder . I want always the first behavior: create a clone next to the original (as a sibling). I want to use cp to be portable (e.g. MINGW does not have rsync shipped) I checked cp -R --parents but this recreates the path all the way up the directory tree (so the clone will not be outputFolder but some/path/outputFolder ) --remove-destination or --update in case 2 do not change anything, still things are copied into outputFolder/inputFolder Is there a way to do this without first checking for existence of the outputFolder (if folder does not exist then...) or using rm -rf outputFolder ? What is the agreed, portable UNIX way of doing that? | Use this instead: cp -R inputFolder/. outputFolder This works in exactly the same way that, say, cp -R aaa/bbb ccc works: if ccc doesn't exist then it's created as a copy of bbb and its contents; but if ccc already exists then ccc/bbb is created as the copy of bbb and its contents. For almost any instance of bbb this gives the undesirable behaviour that you noted in your Question. However, in this specific situation the bbb is just . , so aaa/bbb is really aaa/. , which in turn is really just aaa but by another name. So we have these two scenarios: ccc does not exist: The command cp -R aaa/. ccc means "create ccc and copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . ccc does exist: The command cp -R aaa/. ccc means "copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/228597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10745/"
]
} |
228,603 | On my Amazon Linux (RHEL-derived) system, within /etc/login.defs, I've set the minimum number of days between password changes to 1 day: PASS_MIN_DAYS 1 . I thought I should be doing that with PAM configuration files in /etc/pam.d/. However, I cannot find any documentation for doing so. Is there a way to control when passwords can be changed using PAM, and what is it, please? | Use this instead: cp -R inputFolder/. outputFolder This works in exactly the same way that, say, cp -R aaa/bbb ccc works: if ccc doesn't exist then it's created as a copy of bbb and its contents; but if ccc already exists then ccc/bbb is created as the copy of bbb and its contents. For almost any instance of bbb this gives the undesirable behaviour that you noted in your Question. However, in this specific situation the bbb is just . , so aaa/bbb is really aaa/. , which in turn is really just aaa but by another name. So we have these two scenarios: ccc does not exist: The command cp -R aaa/. ccc means "create ccc and copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . ccc does exist: The command cp -R aaa/. ccc means "copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/228603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54643/"
]
} |
228,624 | I understand that it means force to do something. When doing $ vim -R file I enter in read-only mode, but this works as preventive mode only -R Readonly mode. The 'readonly' option will be set for all the files being edited. You can still edit the buffer, but will be prevented from accidentally overwriting a file. If you forgot that you are in View mode and did make some changes, you can overwrite a file by adding an exclamation mark to the Ex command, as in ":w!". The 'readonly' option can be reset with ":set noro" (see the options chapter, |options|). Subsequent edits will not be done in readonly mode. Calling the executable "view" has the same effect as the -R argument. The 'updatecount' option will be set to 10000, meaning that the swap file will not be updated automatically very often. But what I can't get yet is why it makes the instruction to skip permissions even if the file's owner is root user, besides it also change owner & group. | ! generally means what you'd expect from "force", but what it means for specific command depends on the command. In the case of w! , if Vim cannot write to the file for some reason, it will try to delete and create a new one with the current buffer's contents. Consider the following example (observe the inode numbers): $ touch foo$ chmod -w foo$ stat foo File: ‘foo’ Size: 0 Blocks: 0 IO Block: 4096 regular empty fileDevice: 22h/34d Inode: 10396141 Links: 1Access: (0444/-r--r--r--) Uid: ( 1000/ muru) Gid: ( 1000/ muru)Access: 2015-09-10 00:24:28.259290486 +0530Modify: 2015-09-10 00:24:28.259290486 +0530Change: 2015-09-10 00:24:30.771263735 +0530 Birth: -$ vim -c 'r!date' -c 'wq!' foo$ stat foo File: ‘foo’ Size: 30 Blocks: 8 IO Block: 4096 regular fileDevice: 22h/34d Inode: 10396151 Links: 1Access: (0444/-r--r--r--) Uid: ( 1000/ muru) Gid: ( 1000/ muru)Access: 2015-09-10 00:24:37.727189657 +0530Modify: 2015-09-10 00:24:37.731189614 +0530Change: 2015-09-10 00:24:37.763189273 +0530 Birth: -$ cat fooThu Sep 10 00:24:37 IST 2015 That's why the owner and group changes. Permissions are preserved - :h write-permissions : write-permissionsWhen writing a new file the permissions are read-write. For unix the mask is0666 with additionally umask applied. When writing a file that was read Vimwill preserve the permissions, but clear the s-bit. If you want to make Vim refuse writes, see :h write-readonly : write-readonlyWhen the 'cpoptions' option contains 'W', Vim will refuse to overwrite areadonly file. When 'W' is not present, ":w!" will overwrite a readonly file,if the system allows it (the directory must be writable). Note that it says "the directory must be writable" - because without a writable directory, Vim can neither delete nor create a new file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68382/"
]
} |
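As a small follow-up to the :h write-readonly quote in the answer above, the 'W' flag it mentions can be enabled like this (a sketch; add the line to your vimrc to make it permanent):
:set cpoptions+=W
With 'W' in 'cpoptions', Vim refuses to overwrite a read-only file even with :w!, instead of deleting and recreating it.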
228,645 | I have been an Ubuntu user for some time now and I am thinking of trying some other Linux OS. After an extensive search I decided to try either Debian or Cent OS. But I have some concerns: Both Debian and Cent OS are said to be very stable, so some of the packages featured in them are old versions. So if a program has a bug or a security vulnerability and one must upgrade immediately to a newly released version, will I be able to do so, or will I have to keep using the problematic version? As far as upgrades go (newer software versions or system bugs, security upgrades etc.) do those operating systems get automatic upgrades as Ubuntu does once in a while, or is the only way to upgrade either software packages or the system to install the newer version of the OS when it becomes available? I have heard that once Cent OS is installed I don't get codecs and I have to install some basic software through third-party repositories. What does this mean for me? Will I have a problem? I want to clarify that I want to use those operating systems for a desktop computer. | I moved from Ubuntu to Debian a couple of years ago and never regretted this decision. Concerning your questions: You can use different branches of Debian . As a new user I would recommend the stable branch (which is indeed very stable but sometimes lacks new software) or the testing branch (which is only a little less stable but provides newer software). Both branches provide security updates. They are installed every time you do a system update. Debian Stable doesn't get any new software - only bug fixes and security updates. Debian Testing is a rolling release *), meaning that new software is provided continuously. This is a difference from Ubuntu, where you have to upgrade to the new version every once in a while to get new software. I cannot answer this question as I have never used Cent OS. I heard that it is a good, stable distribution. As it is used for servers, it should also be quite secure. Coming from Ubuntu, you might want to consider that Debian is more similar to Ubuntu (to be precise, Ubuntu is built "upon" Debian). Both Debian and Ubuntu use Apt . Cent OS is a derivative of Red Hat Linux and uses RPM instead. There is nothing wrong with either of them; however, you might already be more used to the Debian approach. *) To be precise, just before the current testing release becomes the new stable release, there is a so-called "freeze". In this time window testing doesn't get any new software - just bug fixes. After that, when the new stable release is out, you have to perform a dist-upgrade ( apt-get dist-upgrade ) to update your system to the new testing release (if you want to do so, make sure your /etc/apt/sources.list contains the word testing instead of the name of the current testing release, e.g. stretch ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114868/"
]
} |
228,647 | I'm doing some testing of an application I am working on, and I need to be able to silently drop outgoing UDP packets for a short period of time to replicate a possible failure mode. Is there any way to do this? Note: iptables DROP is not silent for outgoing messages! When a send() or similar call is dropped by iptables , it returns EPERM for some bizarre reason (see here . Unfortunately, I can't use that answer as my destination is a single hop away). xtables-addons used to have a STEAL verb , but it got removed a few years ago for no reason I can find. I've now tried using bogus routes in the routing table, and unfortunately, that appears to break both directions of comm traffic. For the test I need to do, I have to allow inbound UDP traffic, and as soon as I install the bogus route, the streaming incoming packets immediately halt, though the source is still sending them. | I moved from Ubuntu to Debian a couple of years ago and never regret this decision. Concerning your questions: You can use different branches of Debian . As a new user I would recommend the stable branch (which is indeed very stable but sometimes lacks new software) or the testing branch (which is only a little less stable but provides newer software). Both branches provide security updates. They are installed every time you do a system update. Debian Stable doesn't get any new software - only bug fixes and security updates. Debian Testing is rolling release *), meaning that new software is provided continuously. This is a difference to Ubuntu, where you have to upgrade to the new version every once in a while to get new software. I cannot answer this question as I have never used Cent OS. I heard that it is a good, stable distribution. As it is used for servers, it also should be quite secure. Coming from Ubuntu, you might want to consider that Debian is more similar to Ubuntu (to be precise, Ubuntu is built "upon" Debian). Both, Debian and Ubuntu use Apt . Cent OS is a derivate of Red Hat Linux and uses RPM instead. There is nothing wrong with either of them, however you might already be more used to the Debian approach. *) To be precise, just before the current testing release becomes the new stable release, there is a so called "freeze". In this time window testing doesn't get any new software - just bug fixes. After that, when the new stable release is out you have to perform a dist-upgrade ( apt-get dist-upgrade ) to update your system to the new testing-release (if you want to do so, make sure your /etc/apt/sources.list contains the word testing instead of the name of the current testing release, e.g. stretch ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11093/"
]
} |
228,699 | I have a multi line log entry format that I need to process. The log looks something like this: --START--Device=BData=asdfasdfLorem=Ipsum--END----START--Device=AData=asdfasdfLorem=Ipsum--END----START--Device=BData=asdfasdf--END----START--Device=AData=asdfasdf--END----START--Device=BData=asdfasdf--END----START--Device=CData=asdfasdfLorem=Ipsum--END-- I want to print everything between --START-- and --END-- if a particular pattern is matched. e.g: Print all entries where Device=A --START--Device=AData=asdfasdfLorem=Ipsum--END----START--Device=AData=asdfasdf--END-- All I've been able to do so far is write: sed -e -n '/--START--/,/--END--/p' < input Which effectively prints the input but I think I need to add {} to filter with N and then print if that condition matches. I also think I'm completely lost. Any idea on how to print multiple lines if a single line matches a condition? | $ sed -n '/--START--/{:a;N;/--END--/!ba; /Device=A/p}' file--START--Device=AData=asdfasdfLorem=Ipsum--END----START--Device=AData=asdfasdf--END-- (The above was tested on GNU sed. It would have to be massaged to run on BSD/OSX.) How it works: /--START--/{...} Every time we reach a line that contains --START-- , run the commands inside the braces {...} . :a Define a label a . N Read the next line and add it to the pattern space. /--END--/!ba Unless the pattern space now contains --END-- , jump back to label a . /Device=A/p If we get here, that means that the patterns space starts with --START-- and ends with --END-- . If, in addition, the pattern space contains Device=A , then print ( p ) it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/228699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133477/"
]
} |
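The answer above notes that the one-liner would need massaging for BSD/OSX sed; one commonly portable form (a sketch, not verified against every sed implementation) is to terminate the label and branch with newlines instead of semicolons:
sed -n '/--START--/{
:a
N
/--END--/!b a
/Device=A/p
}' file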
228,701 | I have two g++ programs, located at /usr/local/bin/ and /usr/bin/ . I would like the default g++ to be the one in /usr/local/bin/ . However, I do not want to change my PATH environment variable, because for some other programs I would prefer the version in /usr/bin/ over the one in /usr/local/bin/ . Is this possible? To make my point clear: I want my defaults for my two programs to be: g++ in /usr/local/bin/ python in /usr/bin/ But both programs exist in both /usr/local/bin/ and /usr/bin/ , so what should I do? | Option 1: Make an override folder on your path If you need these programs to be called in indirect ways (for instance, when some application started by the window manager calls g++ or python ), you should edit your path. You could simply add a new folder to the beginning of your path in your ~/.bashrc : export PATH=/home/username/.bin:$PATH and place two symbolic links to point to the appropriate programs: ln -s /usr/bin/python /home/username/.bin/pythonln -s /usr/local/bin/g++ /home/username/.bin/g++ That way, once your ~/.bashrc is properly sourced (log out, then log back in), everything should find the right python and the right g++ . Option 2: Use an alias for bash to follow If you are looking for a lighter-weight solution, and if you only call python directly from bash , you could set up an alias in your ~/.bashrc : alias python=/usr/bin/python Option 3: Just change the name of python in /usr/local/bin/ Or you could always just rename /usr/local/bin/python to be /usr/local/bin/python-alternate or something. I wouldn't suggest renaming things in /usr/bin , since at least in Debian that is controlled by a package manager. Usually /usr/local/bin isn't. Option 4: Specify the correct compiler in the Makefile If your workflow uses make , or some broader application that calls make (such as autotools or cmake ), there is almost always an option to specify your compiler. For instance, your makefile could look like: CXX=/usr/local/bin/g++all: $(CXX) inputfile.cpp -o outputfile or with cmake you might configure with cmake -D CMAKE_CXX_COMPILER=/usr/local/bin/g++ .. Different programs will have different syntaxes for specifying the compiler, but you can almost always specify it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
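After setting up the override folder from Option 1, one way to confirm which copy wins is bash's type -a, which lists every match in PATH order (the output below is illustrative and assumes the username from the answer):
$ type -a g++
g++ is /home/username/.bin/g++
g++ is /usr/local/bin/g++
g++ is /usr/bin/g++
The first hit is the symlink, which points at the copy you actually want.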
228,713 | The partitions' file systems and the VFS are clear to me, but the root file system is not. Let's say I have a disk with 3 partitions: swap, installation and home. Each partition obviously has its own file system. Then there is the VFS, which is an interface for the kernel to different file systems (thanks for this, because other documents do not mention it). Now, how does the root file system fit in? | Option 1: Make an override folder on your path If you need these programs to be called in indirect ways (for instance, when some application started by the window manager calls g++ or python ), you should edit your path. You could simply add a new folder to the beginning of your path in your ~/.bashrc : export PATH=/home/username/.bin:$PATH and place two symbolic links to point to the appropriate programs: ln -s /usr/bin/python /home/username/.bin/pythonln -s /usr/local/bin/g++ /home/username/.bin/g++ That way, once your ~/.bashrc is properly sourced (log out, then log back in), everything should find the right python and the right g++ . Option 2: Use an alias for bash to follow If you are looking for a lighter-weight solution, and if you only call python directly from bash , you could set up an alias in your ~/.bashrc : alias python=/usr/bin/python Option 3: Just change the name of python in /usr/local/bin/ Or you could always just rename /usr/local/bin/python to be /usr/local/bin/python-alternate or something. I wouldn't suggest renaming things in /usr/bin , since at least in Debian that is controlled by a package manager. Usually /usr/local/bin isn't. Option 4: Specify the correct compiler in the Makefile If your workflow uses make , or some broader application that calls make (such as autotools or cmake ), there is almost always an option to specify your compiler. For instance, your makefile could look like: CXX=/usr/local/bin/g++all: $(CXX) inputfile.cpp -o outputfile or with cmake you might configure with cmake -D CMAKE_CXX_COMPILER=/usr/local/bin/g++ .. Different programs will have different syntaxes for specifying the compiler, but you can almost always specify it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63649/"
]
} |
228,762 | I have a simple question. Is there any way to configure ssh so that, when I log in to my server via ssh, I end up in the last used directory? | I suppose you are using bash as your shell. Edit .bash_logout in your home dir and add a line like pwd > $HOME/.last-pwp Edit .bash_profile and add a line like cd $(< $HOME/.last-pwp ) Note that if you run many sessions in parallel, only one directory will be remembered. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133532/"
]
} |
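A slightly more defensive variant of the .bash_profile line (a sketch using the same file name as the answer), in case the saved directory has been removed since the last logout:
[ -r "$HOME/.last-pwp" ] && cd "$(cat "$HOME/.last-pwp")" 2>/dev/null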
228,812 | I know that some shells at least support file test operators that detect when a filename names a symlink. Is there a POSIX utility 1 that provides the same functionality? 1 I may not be using the right terminology here. What I mean by "utility" is a free-standing executable living somewhere under /bin , /usr/bin , etc., as opposed to a shell built-in. | You're looking for test : -h pathname True if pathname resolves to a file that exists and is a symbolic link. False if pathname cannot be resolved, or if pathname resolves to a file that exists but is not a symbolic link. If the final component of pathname is a symlink, that symlink is not followed. Most shells have it as a builtin, but test also exists as a standalone program, which can be called from other programs without invoking an intermediate shell. This is the case for most builtins that shells may have, except for those that act on the shell itself (special builtins like break , export , set , …). [ -h pathname ] is equivalent to test -h pathname ; [ works in exactly the same way as test , except that [ requires an extra ] argument at the end. [ , like test , exists as a standalone program. For example: $ ln -s foo bar$ /usr/bin/test -h bar && echo yy | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
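For completeness, a typical use of the standalone utility inside a portable shell script (the pathname variable is just a placeholder):
if [ -h "$pathname" ]; then
    printf '%s is a symbolic link\n' "$pathname"
fi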
228,814 | I am having a problem logging in to a Fedora 22 virtual machine, based on a cloud F22 image. Without touching the downloaded image I could boot, just not log in. I downloaded https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.raw.xz and, following https://lists.fedoraproject.org/pipermail/users/2013-November/442288.html I issued $ virt-sysprep -a Fedora-Cloud-Base-22-20150521.i386.raw --root-password password:XXXX on a CentOS box running $ virt-sysprep --versionvirt-sysprep 1.20.11 I created a vmdk disk image with D:\iso>vboxmanage convertdd Fedora-Cloud-Base-22-20150521.i386.raw d:\VirtualBoxVirtualMachines\Fedora22\Fedora22-password.vmdk --format vmdkConverting from raw image file="Fedora-Cloud-Base-22-20150521.i386.raw" to file="d:\VirtualBoxVirtualMachines\Fedora22\Fedora22-password.vmdk"...Creating dynamic image with size 3221225472 bytes (3072MB)... With the new Fedora22-password.vmdk I could not boot in VirtualBox; it said Boot error. I am not sure what went wrong: either the copying to the CentOS box and back suffered some error, or the virt-sysprep command had a bug. How can I check on the command line whether a raw image is bootable? And how could I set the password for this Fedora image, either this way or with some other method? | You're looking for test : -h pathname True if pathname resolves to a file that exists and is a symbolic link. False if pathname cannot be resolved, or if pathname resolves to a file that exists but is not a symbolic link. If the final component of pathname is a symlink, that symlink is not followed. Most shells have it as a builtin, but test also exists as a standalone program, which can be called from other programs without invoking an intermediate shell. This is the case for most builtins that shells may have, except for those that act on the shell itself (special builtins like break , export , set , …). [ -h pathname ] is equivalent to test -h pathname ; [ works in exactly the same way as test , except that [ requires an extra ] argument at the end. [ , like test , exists as a standalone program. For example: $ ln -s foo bar$ /usr/bin/test -h bar && echo yy | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46710/"
]
} |