source_id | question | response | metadata
---|---|---|---|
194,278 | I'm actually using a MacBook Pro, and am quite new to non-Windows OSes (I know, for shame!), and I've made the mistake of overwriting my bash profile instead of appending to it... What I do have is a couple of open tabs in the Terminal, and am hoping that there might be a way of recovering my .bash_profile from this (or any way, really). | If your terminal is still open, type env : it will display all your environment variables. If it's a fresh install or if you never made any change, the most important variables are PATH (this one always exists) and LD_LIBRARY_PATH (it may not exist; I'm not used to OS X). For instance: $ env ... PATH=/usr/bin:/bin:/home/user/bin:/sbin ... It's also a common thing to source .bashrc in that file. Then you can edit your .bash_profile to source your .bashrc: if [ -f ~/.bashrc ]; then source ~/.bashrc; fi and to export the interesting env variables you displayed with the env command previously: export PATH=/usr/bin:/bin:/home/user/bin:/sbin It's not a real solution since it's not a real "backup", but keep in mind that as long as your terminal is open, you can see your environment variables, and the main purpose of .bash_profile is to set those variables. Also, you may have some aliases: in your opened terminal, type alias to display all the defined aliases. You can copy and paste the output as-is into your .bash_profile . (Obviously, if those aliases are available in your .bashrc you don't need to define them in your .bash_profile , since you already source .bashrc .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109130/"
]
} |
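
If you want to capture as much as possible from the still-open terminal in one go, you can dump the live environment and aliases to a scratch file and prune it by hand. A minimal sketch (the output file name is arbitrary, and export -p will also capture variables that never belonged in .bash_profile, so review before merging):

```bash
# Run these in the terminal that still has the old environment:
export -p >  ~/bash_profile.recovered   # one declaration per line (bash prints these as: declare -x VAR="value")
alias     >> ~/bash_profile.recovered   # alias definitions, one per line
# Review ~/bash_profile.recovered, then merge the useful lines
# into the new ~/.bash_profile.
```
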
194,319 | Say I have an executable bash file script.sh . I want to use it as ./script.sh p1 p2 where p1 and p2 are two necessary parameters for this command. If either one of them is missing, the command should fail. So I need to check them before going on executing further in script.sh . How can I check that both of them are set? Updated: I tried this: # this is a comment line  if [ -n "$1" ] || [ -n "$2" ]; then  then echo 'haha'  fi but got: ./_scp.sh: line 3: syntax error near unexpected token `then' ./_scp.sh: line 3: ` then echo 'haha'' | In POSIX shell, if you only want to check that two parameters were set, try: if [ "$#" -lt 2 ]; then echo >&2 'Need 2 parameters'; exit 1; fi If you want two non-empty parameters, you should use -z instead of -n : -n tests if the length is non-zero, -z tests if the length is zero. if [ -z "$1" ] || [ -z "$2" ]; then echo >&2 'Need 2 parameters which are not empty'; exit 1; fi With bash (and ksh and zsh ), you can use: [ -v var ] && echo var was set to check whether the variable var was set or not: $ [ -v var ] && echo var was set $ var= $ [ -v var ] && echo var was set var was set Note however that $1 , $2 are not variables; they are a different type of parameter called positional parameters . [ -v 1 ] to check whether the first positional parameter is set does work in zsh or mksh , but not in bash or ksh93 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
194,327 | There are lots of broken links in my system. I wrote a script to relink all broken links, but after the script runs, find would again show me that the links still exist. This is my code: find /home/saman -lname '/home/saman/*' -exec sh -c 'ln -snf "/home$(readlink "$0")" "$0"' {} \; After running the command above, I search for broken links and still find them: find . -type l | xargs file | grep broken What am I doing wrong? | The first problem is that your find command will only find links that used full paths, not relative ones. To illustrate: $ ln -s /home/terdon/foo/NonExistantFile foo $ ln -s NonExistantFile bar $ tree . |-- bar -> NonExistantFile `-- foo -> /home/terdon/foo/NonExistantFile In the example above, I created two broken links. The first used an absolute path and the second, a relative one. If I now try your find command (having it echo the relinking command instead of running it so we can see what it's doing), only one of the two will be found: $ find . -lname '/home/terdon/*' -exec sh -c 'echo ln -snf "/home$(readlink "$0")" "$0"' {} \; ln -snf /home/home/terdon/foo/NonExistantFile ./foo The second issue is that your path is wrong. You are recreating links as "/home$(readlink "$0")" "$0" . The readlink command will already show the full path, so adding /home to it results in /home/home/... which is not what you want. More importantly, what you are attempting is not possible. If a link is broken, that means that its target does not exist. Since the target doesn't exist, you can't simply relink the file; there's nowhere to link it to . The only thing you could do is recreate the link's target. This, however, is unlikely to be very useful since it would simply make your broken links point to new, empty files. If that is indeed what you want to do, you could try find . -type l -exec sh -c 'touch "$(readlink "{}")" ' \; Finally, you might want to create a more complex script that i) finds all broken links, ii) searches your machine for files with the same name as the target of the link, iii) presents you with a list of them, and iv) asks you which one it should now link to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109154/"
]
} |
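
The interactive approach sketched at the end of that answer could look roughly like this. It is an untested outline rather than a polished tool: the search root, the GNU-find -xtype l test for broken links, and mapfile/select all assume GNU find plus bash 4:

```bash
#!/bin/bash
# For each broken link, look for files elsewhere that share the missing
# target's name, and ask which one to relink to.
find ~ -xtype l | while IFS= read -r link; do
    target=$(readlink -- "$link")
    echo "Broken: $link -> $target"
    # candidate files with the same basename as the missing target
    mapfile -t candidates < <(find / -name "$(basename -- "$target")" -type f 2>/dev/null)
    [ "${#candidates[@]}" -eq 0 ] && continue
    select choice in "${candidates[@]}" skip; do
        [ -n "$choice" ] && [ "$choice" != skip ] && ln -snf -- "$choice" "$link"
        break
    done < /dev/tty   # select must read from the terminal, not the find pipe
done
```
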
194,357 | While solving some CTF challenges online, I came across a situation where I needed to brute-force a server. This is the code I wrote: #!/bin/bash for i in {0..9}{0..9}{0..9}{0..9}; do echo "Now trying code.."; echo $i; echo "a fixed string" $i | nc localhost *port here* >> /tmp/me/dump.txt; done This was incredibly, painfully slow . I needed to try combinations from 1000 to 9999 and this took around 5 seconds for each 10 tries. Then, following some advice, I put a '&' at the end of this line: echo "a fixed string" $i | nc localhost *port here* >> /tmp/me/dump.txt & And it tried 100s of combinations within seconds. I was very surprised. Could someone explain the logic to me? What did the '&' do? | Adding & spawns a background process. If you write a; b , it will run command a , wait for it to finish, then run command b , in sequence. If you write a & b , it will spawn a as a background process. It will not wait for it to finish, and it will start running b immediately. It will run both at once. You can see what it does by experimenting in the shell. If you have X installed, xterm is a good way to see what happens: typing $ xterm will cause another terminal window to open, and the first one will wait until you close it. Only when you close it will you get your shell back. If you type $ xterm & then it will run it in the background, and you will get your shell back immediately, while the xterm window will also remain open. So if you write echo "a fixed string" $i | nc localhost *port here* >> /tmp/me/dump.txt it makes the connection, sends the string, stores what comes out in the file, and only then moves on to the next one. Adding the & makes it not wait. It will end up running all ten thousand of them more or less simultaneously. Your script seems to "end" more quickly because it probably did not actually finish in that time. It just made ten thousand background jobs, and then ended the foreground one. This also means that, in your case, it will try to open ten thousand connections more or less at once. Depending on what the other end can handle, some of them might well fail. Not only that, but there is no guarantee that they will run in order; in fact they almost certainly won't, so what will actually end up in /tmp/me/dump.txt is anyone's guess. Did you check if the output was correct? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194357",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79027/"
]
} |
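
A middle ground is to keep the parallelism but cap it, so the target isn't hit with ten thousand simultaneous connections. A hedged sketch, assuming bash 4.3+ for wait -n; the cap of 50 and the $port placeholder are arbitrary choices:

```bash
#!/bin/bash
port=12345                       # placeholder; use the real port
for i in {0..9}{0..9}{0..9}{0..9}; do
    while [ "$(jobs -rp | wc -l)" -ge 50 ]; do
        wait -n                  # block until one background job exits
    done
    echo "a fixed string" "$i" | nc localhost "$port" >> /tmp/me/dump.txt &
done
wait                             # let the remaining jobs finish
```
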
194,365 | Last night I SSH'ed to different systems... one system/SSH reported that the " authenticity of HOSTNAME couldn't be established....... " and it asks if I want to continue or something. I didn't, and found this peculiar, so I tried to SSH to the system from one of the systems I already had SSH access/open, which didn't report that message (which means no change to the system since last login). Then I looked at my ~/.ssh/known_hosts and the system was in there, so it should know the host I was connecting from. Then I tried again, using the up/down arrows to browse bash history so I didn't make any mistakes in the commands, and I didn't... And this time it worked without any notice about failed authenticity and asked for the password as usual. Should I be worried? Was this, as Debian says, "someone doing something nasty"? The point is... why the message, then not the message (without me doing or changing anything)... weird. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
194,406 | I am starting to learn some regex, therefore I use this command repeatedly: grep pattern /usr/share/dict/american-english Only the part with pattern changes, so I have to write the long expression " /usr/share/dict/american-english " again and again. Someone made the remark that it is possible to expand an argument of a command from the command history by typing cryptic character combinations instead of the full expression. Could you tell me those cryptic character combinations? | You can use <M-.> (or <Esc>. if your Meta key is being used for something else), that is, Meta-dot (or <Esc> dot), where Meta is usually the Alt key, to recall the last argument of the previous command. So, first you would type $ grep foo /usr/share/dict/american-english And then if you wanted to grep for something else, you would type $ grep bar After typing a space and then Esc . (that is, first pressing the escape key, and then the period key) you get: $ grep bar /usr/share/dict/american-english You can also use either of the following: $ grep bar !:2 $ grep bar !$ where !:2 and !$ mean "second argument" and "last argument" respectively. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
194,430 | I want to learn Emacs or Vi. I don't like modal editing, but Vi is ubiquitous and looks more useful for emergencies. What are the reasons why Emacs is not preinstalled in most distributions? | The direct answer is probably that vi is part of the POSIX standard (as @jasonwryan also mentioned in a comment), as well as the Single UNIX Specification. As such, anything that calls itself POSIX-compliant probably includes something vi -like, and anything that wants to call itself UNIX has to, or you don't get certified. Not just vi , but the related line editor ex and the scripting language involved are also part of these standards. emacs is not part of the standard, so it is not included. As for why this is so, there are several reasons. For one, emacs is far, far bigger and more complicated than vi . There's a whole LISP in there, among other things. The part of POSIX that decides this was written in 1992, when you would need a very beefy computer for emacs . I've seen vi run on Minix on a 286 semi-decently. And while it doesn't really matter all that much for a modern desktop, on embedded systems it still very much does. Its size and versatility also make it harder to check for security holes, which might be an issue in certain applications where security is imperative. Basically, everything that might make it a better desktop application makes it a worse system component. If you're up for a history lesson, you could also say that vi is closer to the Unix philosophy : do one thing and do it well . Indeed, vi sprang from ex , and has always been a Unix program. emacs sprang from a vastly different world: it was originally built on top of TECO, written for the ITS operating system. It was only ported to Unix in the 1980s, as ITS was dying out. This makes emacs in effect an immigrant from a very different culture, while vi is a native. Interestingly, both emacs and vi first came out in 1976, so it's not just that vi is older. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
194,458 | I have rather large files in which I need to replace N1 with NX, but only if the line already contains NX as a pattern. So far I do it manually by first searching for occurrences of NX with: /NX and then using '<,'>s/N1/NX But the files are really large and long and the chances of me making a mistake during this manual procedure are very high. Is there a way to do this more efficiently in Vim? Here is an example of what a line in one of the files might look like: S22Tg K1B12N1Tg D1AE22K5 K2B12N1Tg D1AE22K6 W01B12N1Tg D1AE22TDNXW01 W02B12N1Tg D1AE22TDNXW02 W03B12N1Tg Note that the lines are all very similar and only differ in terms of the numbers occurring after the letters. | In Vim, you could limit your substitution to the lines that contain NX : :g/NX/s/N1/NX/ Preceding the substitution with /NX/ makes Vim perform it only on the next line that contains NX (using ranges ), and using :g makes it run on all lines that match NX . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
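
If you would rather make the same change outside Vim, a rough sed equivalent is below (shown without -i so you can inspect the result first; append g to the s command to replace every N1 on a matching line rather than just the first):

```bash
sed '/NX/ s/N1/NX/' file
```
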
194,472 | I wanted to list the content of the current working directory and display only files starting with a dot. I tried ls -a | grep ^\. but I cannot figure out why the output also contains files which do not start with a dot. For example: Pictures .pip .pki .profile projects Public I know that I can achieve what I want with ls -ld .* I am just curious about this behaviour of grep which I can't explain. | You need to put the grep regex inside quotes: ls -a | grep '^\.' Without the quotes, the shell strips the backslash itself, so grep receives the pattern ^. , in which the unescaped dot matches any character at the start of a line. Note: Don't parse the output of the ls command . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77323/"
]
} |
194,480 | While I was reading a C source code file, I found these declarations. (This source code was written for a Linux system program; this is very important information.) #include <time.h> #include <stdio.h> static timer_t* _interval_timer; ... At first, I wanted to know more about 'timer_t', so I googled 'time.h' to get the header information. But there weren't any words about 'timer_t', only a mention of 'time_t'. In curiosity, I searched and opened the 'time.h' C standard library file on my Mac (as you know, the /usr/include folder stores C standard library files). But this file was the same as the previously googled one. Finally, I turned on my Linux OS (Ubuntu) using a virtual machine and opened the 'time.h' in the Linux C standard library folder (the folder path is the same as on OS X). As I expected, 'time.h' in Linux has a declaration of timer_t. I added the code lines which declare the 'timer_t' type below: #if !defined __timer_t_defined && \ ((defined _TIME_H && defined __USE_POSIX199309) || defined __need_timer_t) # define __timer_t_defined 1 # include <bits/types.h> /* Timer ID returned by `timer_create'. */ typedef __timer_t timer_t; My question is this: why is 'timer_t' only defined in the Linux C standard library? Does this situation commonly happen? I mean, are there any differently defined functions or attributes between different OSes? | Unix and C have an intertwined history, as they were both developed around the same time at Bell Labs in New Jersey, and one of the major purposes of C was to implement Unix using a high level, architecture independent, portable language. However, there wasn't any official standardization until 1983. POSIX , the "portable operating system interface", is an IEEE operating system standard dating back to the time of the "Unix Wars" . It has been evolving ever since and is now the most widely implemented such standard. OS X is officially POSIX compliant, and Linux unofficially is -- there are logistics and costs associated with official compliance that Linux distros do not partake in. Much of what POSIX has focussed on is the elaboration of things not part of ISO C. time.h is, but the ISO version does not include the timer_t type or any functions which use it. Those are from the POSIX extension , hence this reference in the Linux header: #if !defined __timer_t_defined && \ ((defined _TIME_H && defined __USE_POSIX199309) The __USE_POSIX199309 is an internal glibc symbol that is set in features.h when _POSIX_C_SOURCE >= 199309L , meaning that POSIX.1b is to be supported (see the feature_test_macros manpage). This is also supported with _XOPEN_SOURCE >= 600 . Are there any differently defined functions or attributes between different OSes? I think with regard to C, amongst POSIX systems, there is an effort to avoid that, but it does happen. There are some GNU extensions (e.g. strerror_r() ) that have incompatible signatures from their POSIX counterparts. Possibly this happens when POSIX takes up the extension but modifies it, or else they are just alternatives dreamed up by GNU -- you can opt for one or the other by using an appropriate #define . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108094/"
]
} |
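
To watch the feature-test macro at work, you can feed a two-line C snippet to the compiler and see whether <time.h> exposes timer_t. A quick probe sketch, assuming gcc or clang on a glibc system (the second command is expected to fail, which is the point):

```bash
# Visible when POSIX.1b is requested:
printf '#include <time.h>\ntimer_t t;\n' |
    cc -std=c99 -D_POSIX_C_SOURCE=199309L -x c -c - -o /dev/null && echo visible
# Hidden in strict ISO C mode without the macro:
printf '#include <time.h>\ntimer_t t;\n' |
    cc -std=c99 -x c -c - -o /dev/null 2>/dev/null || echo hidden
```
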
194,521 | Is there a native way (no installing or downloading extra stuff) to read a Unix executable file? I just need to read the file to see what's in it and learn what I can use it for. What I'm really trying to do is learn what the Wireless Diagnostics app does, or rather how it does it. I'm looking to build my own network diagnostics app for my Mac. So, I wanted to read the Wireless Diagnostics app (location: /System/Library/CoreServices/Applications/Wireless Diagnostics.app) and found the executable file in the app to see if I could glean anything. That's what I'm looking to get out of this. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50683/"
]
} |
194,542 | The FreeBSD download page offers links to Virtual Machine Images, including one link for AMD64 . That AMD64 folder offers three files: FreeBSD-10.1-RELEASE-amd64.raw.xz FreeBSD-10.1-RELEASE-amd64.vhd.xz FreeBSD-10.1-RELEASE-amd64.vmdk.xz What are each of these for? What's the difference between them? I cannot find documentation. In particular I'm wondering if any will work for use in Parallels 10 . | UPDATE The documentation has step-by-step instructions for running FreeBSD as a Guest on Parallels Desktop for macOS® 10.4.6 or higher. The previous, now-outmoded answer remains below for history. .xz First, the last extension, .xz , is a lossless data compression program and file format which incorporates the LZMA/LZMA2 compression algorithms, says Wikipedia . Developed by The Tukaani Project . For use on Mac OS X, Apple's Archive Utility.app cannot handle this format (as of Yosemite). The XZ project page suggests using the XZ Utils library and command-line tool. You can obtain unarchiving apps on the Apple App Store. The Unarchiver app worked for me for the FreeBSD .xz installer files. .raw , .vhd , .vmdk The second-to-last extension is one of three types of virtual hard drive file formats. .raw A raw disk image: a plain sector-by-sector copy with no container format around it. .vhd Virtual Hard Disk format. Developed by Connectix (now Microsoft). As of 2005, this format's spec is available to other parties. Can be used with Parallels Desktop 10 . .vmdk Virtual Machine Disk format. Developed by VMware . Can be used with Parallels Desktop , and Oracle VirtualBox . No Go on Parallels 10 While Parallels 10.2 on Mac OS X Yosemite 10.10.2 can open and convert both .vhd and .vmdk formats to its own format, when I tried each FreeBSD installer I got an error saying: "Failed to convert this virtual hard disk. The guest OS installed on the disk cannot be identified. You can still convert this disk using the prl_convert utility with the --allow-no-os option." | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194542",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56752/"
]
} |
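
If a hypervisor rejects all three formats, the images can usually be converted with qemu-img from the qemu-utils package. A hedged sketch (qemu-img's format names are raw, vmdk, and vpc for VHD):

```bash
unxz FreeBSD-10.1-RELEASE-amd64.raw.xz
qemu-img convert -f raw -O vmdk \
    FreeBSD-10.1-RELEASE-amd64.raw FreeBSD-10.1-RELEASE-amd64.vmdk
```
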
194,544 | In a pipeline such as command1 | command2 , are the commands run simultaneously (i.e. do they have the same lifetime), or does command2 start to run when command1 exits, or something else? My question arises because I heard that the commands' processes form a process group. If they don't have the same lifetime, isn't the process group of the commands' processes meaningless? | The processes are started at the same time, and will run concurrently, but they don't have to stop at the same time. The shell will consider the entire pipeline to have terminated (and display a new prompt) when both processes have terminated. If command2 ends before command1 does (or closes its standard input stream), and command1 then attempts to write output, there's nowhere for that output to go. Then command1 will receive a SIGPIPE signal, whose default action is to abort command1 . But command1 itself can override this default such that it gets to continue running instead. In that case its attempts to write output will produce an ordinary I/O error ( EPIPE ) which it can react to however it wants. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
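
You can watch the SIGPIPE mechanism from the shell. In the sketch below (bash assumed, for the PIPESTATUS array), head exits after printing one line, and yes is then killed by SIGPIPE, so its exit status is 128 + 13 = 141:

```bash
yes | head -n 1
echo "${PIPESTATUS[@]}"    # prints: 141 0
```
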
194,572 | We have a software project, which we release as a Debian package. The project has a debian/ folder which contains files like changelog compat control copyright rules . We're creating Debian packages with the command: dpkg-buildpackage -us -uc -b --changes-option=-u./dist which prints a stream of output. What we need to know, for the next step of the process, is the name of the .deb file created. I know that Debian packages have a predictable filename structure like [package name]_[version]_[architecture].deb but I don't have a way to get these parameters either. There must be another dpkg command that can generate the would-be package name just from looking in the debian folder? | As you point out, the generated .deb files all share a common format: ${package}_${version}_${arch}.deb . The package name comes from the Package: entries in debian/control ; for a full build, one .deb file will be generated for every Package: entry. You can retrieve the values with awk '/^Package:/ { print $2 }' debian/control The version is based by default on the value given in debian/changelog; you can extract that with dpkg-parsechangelog -S version (It is possible for a build to specify a different version, but that is unusual.) Finally, the architecture will be either all (for an Architecture: all package) or by default that of your build system (for any other Architecture: , typically Architecture: any ). You can determine the architecture of your build system using dpkg-architecture -qDEB_BUILD_ARCH (Strictly speaking that should be -qDEB_HOST_ARCH , but in the general case BUILD and HOST are the same. I'm also ignoring cross-compilation here; if that's an issue use DEB_TARGET_ARCH instead of DEB_BUILD_ARCH .) Thus for a fully generic solution you'd need to parse the control file to determine which package goes with which architecture; if your control file only builds one package that's not necessary. dpkg-parsechangelog and dpkg-architecture are provided by the dpkg-dev package. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7886/"
]
} |
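
Gluing those three commands together yields the expected file name(s). A minimal sketch that assumes a native (non-cross) build, dpkg-dev installed, and sidesteps the Architecture: all case discussed above:

```bash
#!/bin/bash
version=$(dpkg-parsechangelog -S version)
arch=$(dpkg-architecture -qDEB_BUILD_ARCH)
for pkg in $(awk '/^Package:/ { print $2 }' debian/control); do
    echo "../${pkg}_${version}_${arch}.deb"   # dpkg-buildpackage writes to ..
done
```
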
194,578 | So I have successfully compiled it to ~/.local by editing the prefix option in the makefile to prefix=~/.local ; the program compiles fine, and I did the same with librtmp . When running ldd on the binary I get the following output: ldd rtmpdump-ksv/rtmpdump linux-vdso.so.1 => (0x00007ffedb4d2000) librtmp.so.1 => not found libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fc7489a5000) libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fc7485ac000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fc748395000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc748113000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc747d87000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fc747b83000) /lib64/ld-linux-x86-64.so.2 (0x00007fc748c15000) And I have tried to copy librtmp.so.1 and librtmp.so to every directory in ~/.local | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109325/"
]
} |
194,582 | I want to turn the screen off when inactive for x minutes, using command-line settings, on RHEL and Debian distros of Linux. Any help? I have done this with the following commands on Ubuntu and CentOS: gsettings set org.gnome.desktop.session idle-delay 60 gsettings set org.gnome.desktop.screensaver lock-enabled true How do I do the same on RHEL & Debian? Any help appreciated. | Turning off the screen after a specified period of inactivity can be achieved by at least 2 methods: either using xset DPMS features or a screensaver such as xscreensaver or gnome-screensaver . Xset: First, check whether your hardware supports DPMS: $ xset dpms force standby Your display should go blank. Apart from standby you can also try suspend and off . If you know that your HW supports DPMS you can tell xset to activate DPMS after a number of seconds (from man xset ): When numerical values are given, they set the inactivity period (in units of seconds) before the three modes are activated. The first value given is for the 'standby' mode, the second is for the 'suspend' mode, and the third is for the 'off' mode. So this will make your display go blank after 3 seconds of inactivity: $ xset dpms 3 3 3 Run this command and wait for 3 seconds. This setting is not retained across reboots, so if it works you can add this line to your X startup script such as ~/.xinitrc or your window manager startup script. Notice that turning off a display with DPMS will not lock the screen; you need to use an external screensaver for that. Screensaver: There are many screensavers to choose from and most of them have their own config file that is independent of xset DPMS settings and can lock screens so that you need to know the password to unlock them. Some screensavers, however, may influence DPMS settings. For example, xscreensaver can override xset settings. Unfortunately, I don't use gnome-screensaver and I have no idea what screensavers are installed by default on RHEL or Debian, so I can't help you there, but if you're looking for a nice screensaver give xscreensaver a try. If you also want to run some nice pictures, it has a number of screensaver themes to choose from, it can display video files, and it is highly customizable. Modify the lock setting in ~/.xscreensaver by hand, or run xscreensaver-demo , to set a period of inactivity after which the screensaver will lock the screen. After making this modification, run the xscreensaver daemon and wait to see if xscreensaver works correctly. X screen saver: There is also an in-built X screensaver that can be activated with xset s activate . Type xset q and see how long you will have to wait for it to start under the Screen Saver section: Screen Saver: prefer blanking: yes allow exposures: yes timeout: 600 cycle: 600 In this case, you would have to wait for 600 seconds. Run it now: $ xset s activate If you set it with the noblank flag, it will display a pattern set with xsetroot when activated: $ xset s noblank $ /usr/bin/xsetroot -solid Green $ xset s activate Disable it altogether: $ xset s off | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109326/"
]
} |
194,643 | I've been trying to get awk to do some trivial arithmetic, which involves carrying some values from one line to the next. Here is a minimal example pair, for comparison. The first example is expected behaviour, since 99.16 - 20.85 = 78.31: $ echo -e "0,99.16\n20.85,78.31" | awk -F, '{ if (NR != 1 && (prior_tot - $1) != $2) { print "Arithmetic fail..." $0 } else { print "OK" }; prior_tot = $2}' Returns: OK OK The second example is not expected behaviour, since 99.15 - 20.85 = 78.30: $ echo -e "0,99.15\n20.85,78.30" | awk -F, '{ if (NR != 1 && (prior_tot - $1) != $2) { print "Arithmetic fail..." $0 } else { print "OK" }; prior_tot = $2}' Returns: OK Arithmetic fail...20.85,78.30 Can anybody explain what is going on here? | The floating point numbers 99.15, 20.85 and 78.30 don't have exact IEEE 754 binary representations. You can see this with a C program that does the same calculation: #include <stdio.h> int main(int ac, char **av) { float a = 99.15; float b = 20.85; float c; printf("a = %.7f\n", a); printf("b = %.7f\n", b); c = a - b; printf("c = %.7f\n", c); return 0; } I get these answers on an x86 and an x86_64 machine, probably because they both do IEEE 754 floating point math: a = 99.1500015 b = 20.8500004 c = 78.3000031 Here's what happens: floating point numbers get represented with a sign bit (positive or negative), a mantissa (a number of bits), and an exponent. Not every rational number (which is what a "floating point" number is in this context) can be represented exactly in IEEE 754 format. So the hardware gets as close as it can. Unfortunately, in your test case, the hardware doesn't get an exact representation of any of the 3 values. It won't even if you use double instead of float , which awk probably does. Here's a further explanation of the spacing of floating point numbers that have exact binary representations. You can probably find some values that pass your test and others that don't. There are a lot more that don't. Usually people solve a floating point problem by doing something like this: if (abs(c) <= epsilon) { // We'll call it equal } else { // Not equal } That's a lot harder to do in awk . If you're doing money with monetary units and two significant digits of sub-unit (dollars and cents, say), you should just carry out all calculations in the sub-units (cents in the USA). Do not use floating point to do monetary calculations. You will only find yourself regretting that decision. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102852/"
]
} |
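
The epsilon comparison is clumsier in awk than in C but entirely doable. A sketch using the failing input from the question; the 0.005 tolerance is an assumption suited to two-decimal data:

```bash
echo -e "0,99.15\n20.85,78.30" | awk -F, '
    function abs(x) { return x < 0 ? -x : x }
    {
        if (NR != 1 && abs(prior_tot - $1 - $2) > 0.005)
            print "Arithmetic fail..." $0
        else
            print "OK"
        prior_tot = $2
    }'
```
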
194,655 | To execute a script on the next full minute I want to tell the sleep command to sleep until the next full minute. How can I do this? | Ask for the date in seconds: date +%s and calculate the remainder of the division by 60 (modulo: % ). If you calculate 60 minus the modulo you get the remaining seconds to the next full minute. You could change this to wait until the next full hour (change 60 to 3600). sleep $((60 - $(date +%s) % 60)) && <yourscript> To just sleep until the next full minute you can make it even shorter (without the modulo): sleep $((60 - $(date +%S) )) && <yourscript> Also be aware of this question and answer: sleep until next occurrence of specific time . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43165/"
]
} |
194,661 | Is there a way to get a second clock widget in Awesome WM displaying UTC datetime, without having to change my timezone? As a developer, UTC is the reference timezone, so it's really useful to be able to glance at it to determine facts about a server. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
194,691 | Right now, any time I use vagrant, it tries to use libvirt as the provider. I want to use VirtualBox by default. vagrant-libvirt is not installed. It's bothersome because some commands don't work, like vagrant status : [florian@localhost local]$ vagrant status The provider 'libvirt' could not be found, but was requested to back the machine 'foobar'. Please use a provider that exists. [florian@localhost local]$ vagrant status --provider=virtualbox An invalid option was specified. The help for this command is available below. Usage: vagrant status [name] -h, --help Print this help | According to vagrant's documentation , the default provider should be virtualbox , and the VAGRANT_DEFAULT_PROVIDER variable lets you override it. However, VAGRANT_DEFAULT_PROVIDER is empty, so it should be virtualbox , right? Well, if I set the variable to virtualbox , it works again. So I guess Fedora sets the default variable somewhere else. Solution: $ echo "export VAGRANT_DEFAULT_PROVIDER=virtualbox" >> ~/.bashrc $ source ~/.bashrc | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26143/"
]
} |
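
For a one-off run, vagrant up also accepts the provider directly, which avoids touching the environment at all:

```bash
vagrant up --provider=virtualbox
```
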
194,700 | patterns.txt: "BananaOpinion" "ExitWarning" "SomeMessage" "Help" "Introduction" "MessageToUser" Strings.xml: <string name="Introduction">One day there was an apple that went to the market.</string> <string name="BananaOpinion">Bananas are great!</string> <string name="MessageToUser">We would like to give you apples, bananas and tomatoes.</string> Expected output: "ExitWarning" "SomeMessage" "Help" How do I print the terms in patterns.txt that are not found in Strings.xml ? I can print the matched/unmatched lines in Strings.xml , but how do I print the unmatched patterns ? I'm using ggrep (GNU grep) version 2.21, but am open to other tools. Apologies if this is a duplicate of another question that I couldn't find. | You could use grep -o to print only the matching part and use the result as patterns for a second grep -v on the original patterns.txt file: grep -oFf patterns.txt Strings.xml | grep -vFf - patterns.txt Though in this particular case you could also use join + sort : join -t\" -v1 -j2 -o 1.1 1.2 1.3 <(sort -t\" -k2 patterns.txt) <(sort -t\" -k2 strings.xml) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109407/"
]
} |
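
An awk alternative handles it in a single pass over both files. This sketch relies on fixed-string matching, since each quoted name in patterns.txt appears verbatim in Strings.xml; note that the surviving patterns print in arbitrary order:

```bash
awk 'NR==FNR { pat[$0]; next }
     { for (p in pat) if (index($0, p)) delete pat[p] }
     END { for (p in pat) print p }' patterns.txt Strings.xml
```
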
194,780 | [root@server]# awk '!seen[$0]++' out.txt > cleaned awk: (FILENAME=out.txt FNR=8547098) fatal error: internal error Aborted [root@server]# The "server" has: 8 GByte RAM + 16 GByte swap, >300 GByte free space, amd64, desktop CPU, Scientific Linux 6.6. Nothing else runs on it to create load. awk aborts after a few seconds... out.txt is ~1.6 GByte. GNU Awk 3.1.7. Question: How can I remove the duplicate lines while keeping the order of the lines? Case is important too, e.g. "A" and "a" are two different lines and both have to be kept; but a second "a" is a duplicate, and only the first one is needed. The answer can be in anything... if awk is not good for this, then perl/sed... What could the problem be? [root@server]# ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 61945 max locked memory (kbytes, -l) 99999999 max memory size (kbytes, -m) unlimited open files (-n) 999999 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 99999999 cpu time (seconds, -t) unlimited max user processes (-u) 61945 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [root@server]# Update: I tried this on a RHEL machine; it doesn't abort, but I didn't have time to wait for it to finish... why does SL Linux differ from RHEL? Update: I'm trying on an Ubuntu 14 virtual guest... so far it works! It's not an ulimit problem: mawk 1.3.3 root@asdf-VirtualBox:~# ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 51331 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 51331 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited root@asdf-VirtualBox:~# | I doubt it will make a difference but, just in case, here's how to do the same thing in Perl: perl -ne 'print if ++$k{$_}==1' out.txt If the problem is keeping the unique lines in memory, that will have the same issue as the awk you tried. So, another approach could be: cat -n out.txt | sort -k2 -k1n | uniq -f1 | sort -nk1,1 | cut -f2- How it works: On a GNU system, cat -n will prepend the line number to each line following some amount of spaces and followed by a <tab> character. cat pipes this input representation to sort . sort 's -k2 option instructs it only to consider the characters from the second field until the end of the line when sorting, and sort splits fields by default on whitespace (or cat 's inserted spaces and <tab> ). When followed by -k1n , sort considers the 2nd field first, and then secondly (in the case of identical -k2 fields) it considers the 1st field, but sorted numerically. So repeated lines will be sorted together, but in the order they appeared. The results are piped to uniq (which is told to ignore the first field with -f1 , also separated by whitespace), resulting in a list of unique lines from the original file, which is piped back to sort . This time sort sorts on the first field ( cat 's inserted line number) numerically, getting the sort order back to what it was in the original file, and pipes these results to cut . Lastly, cut removes the line numbers that were inserted by cat . cut does this by printing only from the 2nd field through the end of the line ( cut 's default delimiter is the <tab> character). To illustrate: $ cat file bb aa bb dd cc dd aa bb cc $ cat -n file | sort -k2 | uniq -f1 | sort -k1 | cut -f2- bb aa dd cc | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81044/"
]
} |
194,799 | On old computers (using BIOS) we had to create 2 partitions: one to mount / and a second for swap. But on new systems with UEFI we need to create a third partition, EFI System, in addition to those two partitions. What is the purpose of this partition? Update: is this partition shared between a Linux distribution and Windows? | Besides its role in booting, the ESP (EFI System Partition) is really just any partition formatted with one of the UEFI spec-defined variants of FAT and given a specific GPT partition type to help the firmware find it. This way, all EFI executables are stored in one place, and the firmware can "chainload" the operating-system-specific loader or other EFI executables. The steps of booting with this setup are: System on - POST (Power On Self Test). UEFI loads its firmware and initializes all hardware required for booting. The firmware determines which partition holds the UEFI applications to be read. The firmware reads the Boot Manager data to decide, based on a list, which EFI application has the highest boot priority. Some UEFI systems are less flexible and expect only one UEFI application, which needs to be stored at <ESP>/EFI/BOOT/BOOTX64.EFI . The UEFI application is launched. It may launch/chain another UEFI application (like a UEFI shell/menu) or load the initramfs and the kernel. Basically, it's a FAT partition where you store EFI applications. The advantage here is that you don't need a "boot sector" anymore. It is a partition where you store binaries (efi files) and do whatever you want (depending on how your motherboard implements the specification). Update answer: This partition will be shared in the sense that a Linux-related EFI loader (Gummiboot, rEFInd or GRUB) and the Windows 8 standard EFI loader ( \EFI\Microsoft\Boot\bootmgfw.efi ) will be stored on the same partition. It is up to you whether you want to create menus directly in the EFI firmware or use GRUB to create entries for Windows and Linux. Example . Unfortunately, Windows 7 32-bit, and Windows Vista and older (no matter whether 32 or 64 bits), do not support EFI+GPT. You will have to use BIOS + MBR solutions to dual boot. Further Reading: How UEFI works . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
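
For reference, creating an ESP by hand usually amounts to making a FAT partition with the right GPT type code. A hedged sketch using sgdisk (from gdisk) and dosfstools; replace /dev/sdX with the real device and double-check first, since these commands rewrite the partition table:

```bash
sgdisk --new=0:0:+512M --typecode=0:ef00 /dev/sdX   # ef00 = EFI System
mkfs.fat -F 32 /dev/sdX1                            # UEFI-spec FAT32
```
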
194,808 | How do I filter out any pings that are under a certain ms, i.e., only echo ping responses above 500ms to the text file? IP=$1 SECONDS_Between_Pings=$2 ping -i $2 $1 | while read pong; do echo $(date) $pong; done >> ~/Downloads/pings_file_$(date +%Y-%m-%d).txt | grep with extended regex: ping ... | grep --line-buffered -E 'time=([0-9]{4}|[5-9][0-9]{2})' | ... This will match any line with time= followed immediately by either: Any 4 digits (1000+ ms). One digit 5-9 followed by any 2 digits (500-999 ms). --line-buffered works around issues caused by grep 's default output buffering when used in a pipe chain. Not needed when grep is the last link in the chain. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88194/"
]
} |
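
If you would rather compare the latency numerically than encode the threshold in a regex, awk can do the filtering. A sketch assuming ping's usual time=NN.N ms field; 500 is the threshold in milliseconds, and fflush() keeps the output unbuffered if you pipe it further:

```bash
ping -i "$2" "$1" | awk -F'time=' \
    'NF > 1 { split($2, a, " "); if (a[1] + 0 >= 500) { print; fflush() } }'
```
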
194,848 | I'm trying to run a multiple-AND combination in the grep command. I was able to run it using two patterns: grep -E 'pattern1.*pattern2' filename Is it possible to use three or four patterns using the above method? | If the order of the patterns is fixed then you can easily use grep as in: grep -E 'pattern1.*pattern2.*pattern3' But if all patterns must be present and they may appear in any order, then you get combinatorial complexity; e.g. for two patterns: grep -E '(pattern1.*pattern2|pattern2.*pattern1)' (and for three patterns you'd already have six orderings). In such cases (i.e. when using grep ) it's better to cascade the calls in a pipeline of one grep instance per pattern: grep pattern1 <infile | grep pattern2 | grep pattern3 Each instance will filter only the lines that match its pattern, and the overall result will contain only lines that have all the patterns. A better approach that leads to the clearest solution is to use awk : awk '/pattern1/ && /pattern2/ && /pattern3/' where the ordering does not matter in such an expression. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109500/"
]
} |
194,857 | I am wondering what are the best practices regarding if/then/else ; I tend to use the following form (because I find it easier to read): #!/usr/bin/ksh [[ -f /path/to/target/file ]] && { echo "File exists"; } || { echo "No such file"; } instead of this form: #!/usr/bin/ksh if [ -f /path/to/target/file ] ; then echo "File exists"; else echo "No such file"; fi I also include a true statement just before } || { for complex blocks, to ensure that the "else" part will not be executed. What are your recommendations? | The problem with that style is that the two forms aren't equivalent. When you use: if command; then foo; else bar; fi then either foo or bar will be called, never both. When using both && and || , both paths can be taken: $ [[ -d / ]] && { echo "Path 1 taken"; false; } || { echo "Path 2 taken"; } Path 1 taken Path 2 taken $ When using the if cmd; then foo; else bar; fi form, the condition for bar being called is cmd returning false. When using the cmd && foo || bar form, the condition for bar being called is cmd && foo returning false. EDIT: I just noticed that in your question you acknowledge that you need to put true at the end of blocks to make your version work at all. If you're willing to do that, I'm not aware of any other major issues - but I'd argue that a style that requires you to unconditionally add "true" as the last command in a block, if there's any possibility that the previous command could fail, just guarantees that you'll eventually forget it, and things will look like they're working correctly until they don't. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109481/"
]
} |
194,862 | I would like to set the power button on my laptop to initiate suspension. Note that I can trigger suspension by closing the lid or by using pm-suspend , so the problem is the button, not the suspend process itself. I use GNOME, and through the Tweak Tool I have configured that I want the system to reboot, so this is also not the problem. The problem seems to be that the system does not realise that I'm pressing the power key. The clearest indication that this is the case is that, unlike when I press the buttons to control the screen brightness or the volume, acpi_listen returns nothing when I press the power button. To provide further details, I have a Lenovo ThinkPad X1 Carbon (3rd Gen) with SUSE 13.2. | I have the same laptop. Did you push the power button for more than one second? It needs some time to trigger the event. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27990/"
]
} |
194,863 | I have found the command to delete files older than 5 days in a folder: find /path/to/files* -mtime +5 -exec rm {} \; But how do I also do this for subdirectories in that folder? | Be careful with special file names (spaces, quotes) when piping to rm. There is a safe alternative - the -delete option: find /path/to/directory/ -mindepth 1 -mtime +5 -delete That's it: no separate rm call, and you don't need to worry about file names. Replace -delete with -depth -print to test this command before you run it ( -delete implies -depth ). Explanation: -mindepth 1 : without this, . (the directory itself) might also match and therefore get deleted. -mtime +5 : process files whose data was last modified 5*24 hours ago. | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/194863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102085/"
]
} |
194,878 | I have a situation where I need to find the files having world-write (WW) permission 666, and I need to remediate such files to 664. For this I have used this command: find /dir/stuct/path -perm -0002 -type f -print > /tmp/deepu/permissions.txt When I execute the command I get the files which have WW permissions. Now my requirement is something like: find /dir/stuct/path -perm -0002 -type f chmod 664 Is my syntax correct? | Think about your requirement for a moment. Do you (might you possibly) have any executable files (scripts or binaries) in your directory tree? If so, do you want to remove execute permission (even from yourself), or do you want to leave execute permission untouched? If you want to leave execute permission untouched, you should use chmod o-w to remove (subtract) w rite permission from the o thers field only. Also, as Anthon points out, the find command given in the other answer executes the chmod program once for each world-writable file it finds. It is slightly more efficient to say find top-level_directory -perm -2 -type f -exec chmod o-w {} + This executes chmod with many files at once, minimizing the number of execs. P.S. You don't need the leading zeros on the 2 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87381/"
]
} |
194,893 | I am using modules to control the packages on my system and I have python/2.7.2 installed as a module. I have a simple Python executable python_exe.py which I am going to call from a simple 'driving' script runit.sh . The runit.sh script looks something like: #!/bin/bash module load python/2.7.2 arg1=myarg1 arg2=15 arg3=$5 /path/to/python_exe.py -a $arg1 -b $arg2 -c $arg3 However, when I just run ./runit.sh , it tells me "module: command not found". When I source runit.sh , however, it correctly loads the module. Why is this? | Because the module command is an alias or shell function (see " Package Initialization " in module(1) ). When you say source runit.sh , it's like typing the module command directly into your interactive shell. But when you say ./runit.sh , you are running a new, non-interactive shell. Non-interactive shells generally do not have the standard aliases and shell functions set up. module(1) says, "The Modules package and the module command are initialized when a shell-specific initialization script is sourced into the shell. The script creates the module command, either as an alias or shell function, …" If you need to run the module command in a script, find the initialization script that defines the module command and source it from the script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32234/"
]
} |
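
In practice that means sourcing the Modules init file near the top of the script, before the first module call. A sketch; the init file's location varies by installation, so the two paths below are only common candidates to adjust for your site:

```bash
#!/bin/bash
for f in /etc/profile.d/modules.sh /usr/share/Modules/init/bash; do
    [ -r "$f" ] && . "$f" && break
done
module load python/2.7.2
```
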
194,906 | I've searched my best through Google, but for the life of me I can't figure out what to use instead of * (asterisk) after a recent update (even Wikipedia seems to think du -sh * and du -sh * should work). I've used du -sh * | sort -h ever since just before sort got the -h option (on Fedora I think; it took a while before I could use sort -h on CentOS), but suddenly * seems to output a long list of du: invalid option -- ' ' where the ' ' goes through all the invalid options not mentioned in the man page. I would be very thankful if someone could tell me what would be the equivalent of du -sh * | sort -h on the updated versions. | You have a file with a funny name, probably starting with a - . Remember that globs (like * ) are expanded by your shell, not the command being run. As an example, say you have: $ ls -1 foo -q Simple enough directory, with two files in it. (The -1 option to coreutils ls makes its output single-column.) When you run du -sh * , the shell notices that the second argument contains a special character ( * ) and isn't quoted, so it does glob expansion. It expands it to everything that matches, in this case foo and -q . The effect is exactly as if you'd run: $ du -sh foo -q du: invalid option -- 'q' Try 'du --help' for more information. The error above is clear: GNU utilities allow options mixed with file names for convenience; du is taking the file name -q as an option. And there isn't a -q option. (This is actually the best you can expect; worse would be if there were a -q option, and it did something unwanted.) Stéphane's suggestion is to change your glob to ./* , which would result in du -sh ./foo ./-q , which of course is taken as a file name, because it no longer begins with - . The other option he suggests is to use -- , which tells GNU utilities that there are no more options, only file/directory names. Really you should always use either … ./* or … -- * instead of * , but we're all lazy…. Just be careful, especially if you don't trust all the file names. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109542/"
]
} |
194,914 | I have a folder which contains some folders. These folders are moved very often, so I made a script to see if they exist, and if not, to create them. This is what I did, which I thought would achieve it: if [ ! -f "$DIR/0folder" ]; then mkdir "$DIR/0folder"; fi But, even if 0folder already exists, it still tries to make it, as mkdir tells me. Like here: mkdir: /Allfoldersgoeshere/subfolder/0folder: File exists Why? Shouldn't it just skip it because it already exists? | The -f in your test is checking if FILE exists and is a regular file . What you need is -d to test if FILE exists and is a directory : if [ ! -d "$DIR/0folder" ]; then mkdir "$DIR/0folder"; fi It is not mandatory to check if a directory exists, though. According to the man page of mkdir we see the following: man mkdir | grep -A1 -- -p -p, --parents no error if existing, make parent directories as needed However, if FILE exists and is a regular file, mkdir -p will fail with mkdir: /Allfoldersgoeshere/subfolder/0folder: Not a directory . In this scenario, handling the file that is expected to be a directory will be necessary before directory creation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
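One hedged way to do that last step from 194,914 above, moving aside a regular file that is squatting on the directory name before creating it (the .saved suffix is a choice):

dir="$DIR/0folder"
if [ -e "$dir" ] && [ ! -d "$dir" ]; then
    mv "$dir" "$dir.saved"    # keep the stray file instead of deleting it
fi
mkdir -p "$dir"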
194,936 | I have a string that I would like to manipulate. The string is H08W2345678 ; how would I be able to manipulate it so the output is just W2345678 ? Similarly, if I wanted to drop the last 4 characters from H08W2345678 so that I get H08W234 , how would I do this? | Just using bash (or ksh93 where that syntax comes from, or zsh ):

string="H08W2345678"
echo "${string:3}"
W2345678
echo "${string:0:-4}"
H08W234

See the Wooledge wiki for more on string manipulation . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
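For shells without bash's substring expansion (the negative length in ${string:0:-4} needs bash 4.2 or later), the same trims can be done with POSIX pattern stripping; a small sketch:

string=H08W2345678
printf '%s\n' "${string#???}"    # drop the first 3 characters -> W2345678
printf '%s\n' "${string%????}"   # drop the last 4 characters  -> H08W234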
194,951 | It seems like using a username and password for SSH would be more convenient because it can be typed whereas a private key needs to be stored on every computer you use for SSH. Therefore if you want the ability to SSH into a server from any computer you should stick with username and password. Is this premise correct? | No, because if SSH does not find a valid key, it will fall back to password anyway. Therefore you lose nothing by having keys set up for your main machines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73671/"
]
} |
195,010 | I want to force a disk partition to read only mode and keep it read-only for more than 30 minutes. What I have tried: mount -o remount,ro (partition-identifier) (mount-point) -t (filesystem) Issue : This gave device busy error as some processes were using the partition. I don't want to kill the processes using the disk. I want to simulate the disk suddenly going read-only when the processes are still using it. Used magic sysrq key, like below echo u > /proc/sysrq-trigger Issue : This will make all the disk partitions read-only (although device is busy). But after 20-30 minutes the machine is rebooting itself. Some machines are rebooting immediately once this command is executed. Not sure what is causing this reboot yet. I don't want the machine to reboot itself and need to keep the disk in read-only mode for 30+ minutes. Question : Is there any better way I can force a single disk partition to read-only and sustain it in that state for half an hour and bring it back to read-write mode without causing any reboot in the process? | You normally can't remount a filesystem as read-only if processes have a file on it that's open for writing, or if it contains a file that's deleted but still open. Similarly, you can't unmount a filesystem that has any file open (or similar uses of files such as a process having its current directory there, a running executable, etc.). You can use umount -l to release the mount point and prevent the opening of further files, but keep the filesystem mounted and keep processes that already have files open running normally. I can't think of a generic way to force a filesystem to be remounted read-only when it shouldn't be. However, if the filesystem is backed by a block device, you can make the block device read-only , e.g.

echo 1 >/sys/block/dm-4/ro
echo 1 >/sys/block/sda/sda2/ro

echo u > /proc/sysrq-trigger is a rather extreme way to force remounting as read-only, because it affects all filesystems. It's meant as a last-ditch method to leave the filesystem in a clean state just before rebooting. Remounting a filesystem as read-only does not cause a reboot. Whatever is causing the reboot is not directly related to remounting the partition as read-only. Maybe it's completely unrelated, or maybe this triggers a bug in the application which causes it to spin and make the processor overheat and your processor is defective or overclocked and eventually reboots. You need to track down the cause of the reboot. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109592/"
]
} |
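An equivalent, slightly more convenient form of the block-device trick from 195,010 above, using util-linux's blockdev (the device name is illustrative):

blockdev --setro /dev/sda2     # flag the device read-only at the block layer
blockdev --getro /dev/sda2     # prints 1 while the flag is set
# ... writes now fail with EROFS; run the 30-minute test ...
blockdev --setrw /dev/sda2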
195,055 | Mainly for test purposes, I wish to modify /etc/inittab and add a new runlevel to my system ( /etc/rc7.d ). I have not saved my modification yet, because I'm confused by Vim's behavior. Indeed, the editor seems not to recognize the new runlevel as... a new runlevel (like rc 2,3,4 and so on). Here is a screen capture : As you can see, Vim hi-lights in red the number seven and it "lowlights" the address of the config file from yellow to standard green (like things which are not particularly recognized). I'm wondering why Vim doesn't treat the new runlevel as if it were a standard one? | It looks like Vim is smart enough to give you a clue as to what the problem is! That's interesting. The problem is that there is no such runlevel as 7 . The valid run levels are s (or S ), 0 , 1 , 2 , 3 , 4 , 5 , and 6 . According to the manpage of my copy of init there also exist pseudo-runlevels a , b , and c , though I have never heard of those before. EDIT : It seems that runlevels 7 through 9 do actually exist, but they are undocumented. I read the init source code under Debian wheezy to confirm it's true! Thanks for pointing that out. So it turns out that what you are trying to do should actually work. But it's no surprise that Vim doesn't know about it since it's... well... undocumented. I would add also that it might not be very portable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109626/"
]
} |
195,057 | When you fork a process, the child inherits its parent's file descriptors. I understand that when this happens, the child receives a copy of the parent's file descriptor table with the pointers in each pointing to the same open file description. Is this the same thing as a file table, as in http://en.wikipedia.org/wiki/File_descriptor , or something else? |

file descriptor → open file description → directory entry
      dup                open                  cp

(a new file descriptor to the same open file description comes from dup , a new open file description on the same directory entry comes from open , and a new directory entry comes from cp )

There are several levels of indirection when going from an open file in a process all the way to the file content. Implementation-wise, these levels generally translate into data structures in the kernel pointing to the next level. I'm going to describe a straightforward implementation; real implementations are likely to have a lot more complications. An open file in a process is designated by a file descriptor, which is a small nonnegative integer. The numbers 0, 1 and 2 have conventional meanings: processes are supposed to read normal input from 0 (standard input), write normal output to 1 (standard output), and write error messages to 2 (standard error). This is only a convention: the kernel doesn't care. The kernel keeps a table of open file descriptors for each process, mapping these small integers to a file descriptor structure . In the Linux kernel, this structure is struct fd . The file descriptor structure contains a pointer to an open file description . There can be multiple file descriptors pointing to the same open file description, from multiple processes, for example when a process has called dup and friends, or after a process has forked. If file descriptors (even in different processes) are due to the same original open (or similar) system call, they share the same open file description. The open file description contains information about the way the file is open, including the mode (read-only vs read-write, append, etc.), the position in the file, etc. Under Linux, the open file description structure is struct file . The open file description lives at the level of the file API. The next level is in the filesystem API. The distinction is that the file API covers files such as anonymous pipes and sockets that do not live in the filesystem tree. If the file is a file in the directory tree, then the open file description contains a pointer to a directory entry . There can be multiple open file descriptions pointing to the same directory entry, if the same file was open ed more than once. The directory entry contains information about what the file is, including a pointer to its parent directory, and information as to where the file is located. In the Linux kernel, the directory entry is split in two levels: struct inode which contains file metadata and struct dentry which keeps track of where the file is in the directory tree. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
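A small shell demonstration of the sharing described in 195,057 above: a dup'ed descriptor shares the open file description, including the file offset.

exec 3>demo.txt      # fd 3: a new open file description
exec 4>&3            # fd 4: a dup of fd 3, same description, same offset
echo one >&3
echo two >&4         # lands after "one" because the offset is shared
cat demo.txt         # prints: one, then two
exec 3>&- 4>&-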
195,116 | I am running Oracle Linux 7 (CentOS / RedHat based distro) in a VirtualBox VM on a Mac with OS X 10.10. I have a Synology Diskstation serving as an iscsi target. I have successfully connected to the Synology, partitioned the disk and created a filesystem. It is referenced as /dev/sdb and the partition is /dev/sdb1 . Now, what I would like to do is create a mount point so I can easily access it: mount /dev/sdb1 /mnt/www That command works. But obviously, it isn't persistent across a reboot. No worries...into /etc/fstab we go. First, I got the UUID of the partition to ensure I am always using the correct device:

blkid /dev/sdb1
Result: /dev/sdb1: UUID="723eb295-8fe0-409f-a75f-a26eede8904f" TYPE="ext3"

Now, I inserted the following line into my /etc/fstab UUID=723eb295-8fe0-409f-a75f-a26eede8904f /mnt/www ext3 defaults 0 0 Upon reboot, the system crashes and goes into maintenance mode. If I remove the line I inserted, all works again. However, I am following the instructions verbatim from Oracle-Base. I know I am missing something... can anyone point me in the right direction? | Just change the parameter "defaults" to "_netdev", like this: UUID=723eb295-8fe0-409f-a75f-a26eede8904f /mnt/www ext3 _netdev 0 0 This way the mount point will be mounted only after the network starts correctly. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/195116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107777/"
]
} |
195,130 | If my Linux installation has three registered user accounts, is it possible to check the history of the commands introduced by each one, and how? | You can login as the user, or simply su from root to the user, and run the command history . You can also search history quite easily: history | grep "whatever" . Finally you can use Ctrl+R {whatever} for reverse-incremental search. A loop for checking several users at once follows this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79363/"
]
} |
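A root-only sketch for checking several accounts at once, following 195,130 above; the user names are placeholders, and it assumes bash users whose history has been flushed to ~/.bash_history :

for u in user1 user2 user3; do
    home=$(getent passwd "$u" | cut -d: -f6)
    [ -r "$home/.bash_history" ] && { echo "== $u =="; tail -n 20 "$home/.bash_history"; }
done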
195,134 | I'm trying to generate a list of users who have a home directory set which does not exist. It seems I should be able to do this with awk, but something is wrong with my syntax. It keeps telling me "Invalid Syntax" at the ]. What am I doing wrong? awk -F: '{ if(![ -d "$6"]){ print $1 " " $3 " " $7}}' /etc/passwd The final code I'm probably going to end up using is: awk -F: '{if(system( "[ -d " $6 " ]") == 1 && $7 != "/sbin/nologin" ) {print "The directory " $6 " does not exist for user " $1 }}' /etc/passwd And I have a related question here . | You could use system(command) :

system(command)
Execute the operating-system command command and then return to the awk program. Return command's exit status.

e.g.:

awk -F: '{if(system("[ ! -d " $6 " ]") == 0) {print $1 " " $3 " " $7}}' /etc/passwd | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109690/"
]
} |
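A shell-side alternative to the awk-with-system approach in 195,134 above; it avoids spawning one test process per passwd line and copes with unusual characters in paths:

while IFS=: read -r user _ uid _ _ home shell; do
    [ -d "$home" ] || echo "$user $uid $shell"
done < /etc/passwd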
195,179 | Is there a Linux graphics program that displays man commands in a browser? I need a program that allows me to display all man commands in a browser, or in some graphics program, so that they can be up all the time, rather than having to view them through terminal windows. | Yelp is the help viewer in GNOME. yelp man:cgraph | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
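If Yelp is not available, man-db itself can render pages for a browser; two hedged one-liners (the --html option and -Thtml rendering depend on man-db and groff's HTML device being installed):

man -Thtml ls > ls.html      # render a page via groff's HTML device
man --html=firefox ls        # open the rendered page directly in a browser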
195,191 | What is the maximum size of an array for ksh and bash scripting? Example: Suppose I have an array of 10 elements. What will be the maximum number of string count that a particular index of an array can hold? What will be the maximum size of an array for the same? I am new to Unix. I imagine this is a common question but I didn't manage to find an answer so I decided to ask here. |

i=0
while true; do
  a[$i]=foo
  i=$((i+1))
  printf "\r%d " $i
done

This simple script shows on my systems (Gnu/Linux and Solaris): ksh88 limits the size to 2^12-1 (4095) ( subscript out of range ). Some older releases like the one on HP-UX limit the size to 1023 . ksh93 limits the size of an array to 2^22-1 (4194303); your mileage may vary. bash doesn't seem to impose any hard-coded limit outside the one dictated by the underlying memory resources available. For example bash uses 1.3 GB of virtual memory for an array size of 18074340 . Note: I gave up with mksh which was too slow executing the loop (more than one hundred times slower than zsh , ksh93 and bash .) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108771/"
]
} |
195,234 | Every time I try to login as root using su (not su - ), it doesn't source .bash_profile in user1's home directory. Basically, my /var/root directory doesn't have .bash_profile , so I put a copy of .bash_profile in /var/root to test su - . It doesn't automatically source .bash_profile (in var/root ), either. Anyway, I want to make .bash_profile , of user1, sourced in root account automatically when I use su . What should I do? (It worked before! One day, it just doesn't source! Maybe something changed settings in bash? It works when I enter source .bash_profile after login.) I am using Mac and OS X Yosemite. | The default shell for root on OS X is /bin/sh . Its sh is also a version of bash , but when invoked with the name sh Bash : tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well. When invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile , in that order. ... a shell invoked as sh does not attempt to read and execute commands from any other startup files That is, it doesn't read .bash_profile at all, regardless of whether it was invoked as a login shell or not . You can use .profile instead, or even symlink one to the other. If you launch a login shell with su -l , .profile is loaded on startup, but .bash_profile will never be. You can also use dscl to change root's shell (noting that /etc/passwd is not used to determine the shell on OS X). You can check root's current shell with dscl . -read /Users/root UserShell ; consult the documentation and think carefully before changing it to something else. Another approach is simply to change your su invocation to force executing bash immediately. Given what you've said, I'd probably recommend the symlink, but you may wish to look into the changes that Bash's POSIX mode makes and decide whether you want to have them or not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109762/"
]
} |
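A minimal sketch of the symlink route suggested in 195,234 above (run as root; the relative link target resolves inside root's home directory):

ln -s .bash_profile /var/root/.profile
dscl . -read /Users/root UserShell    # confirm which shell su will start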
195,304 | I have 3 zip files in a directory. I want to copy the most recent version to a backup directory. My code works fine for .sh and other files, but not for .zip . Is there something else I have to do? cp -p `ls -tr1 /Users/Me/Documents/Coffi\ Work/FTP\ Backup\ Shell\ Script/Original/| tail -1` /Users/Me/Documents/Coffi\ Work/FTP\ Backup\ Shell\ Script/Backup1/ It says: cp: website3.zip: No such file or directory | Run this: cp -p "`ls -dtr1 "$SRC_DIR"/* | tail -1`" "$DEST_DIR" Here the glob "$SRC_DIR"/* already expands to full paths, and -d makes ls list the matched entries themselves rather than their contents, so the command substitution yields a usable path. In your command, as ls is not returning absolute paths, you must run it from the source directory to get the file copied. As you have run it from some other directory that does not have the file, the error No such file or directory is shown. Also, as you have spaces in the path, we need to quote ls -dtr1 /Users/Me/Documents/Coffi\ Work/FTP\ Backup\ Shell\ Script/Original/* | tail -1 so that the shell does not do word splitting on the output of it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195304",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109814/"
]
} |
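A glob-only alternative to parsing ls output for 195,304 above; it is safe with any file name, and in bash the -nt test is true when the right-hand file does not exist, so the first match seeds the loop:

newest=
for f in "$SRC_DIR"/*.zip; do
    [ "$f" -nt "$newest" ] && newest=$f
done
[ -n "$newest" ] && cp -p "$newest" "$DEST_DIR"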
195,318 | Say I have a bash script that does:

while :
do
    foo
done

I would like to be able to run this script from the console and be able to exit it at an arbitrary time as long as it happens in between two runs of foo. So if, say, I press Ctrl + C (it could be another action that causes the script to exit, Ctrl + C is just an example) that will exit at the next available point after foo is executed:

while :
do
    foo
    if [pressed_ctrl_c]: break
done | You could try this sort of construct:

#!/bin/bash
#
INTR=
trap 'INTR=yes; echo "** INTR **" >&2' INT
while :
do
    (   # Protect the subshell block
        trap '' INT
        # Protected code here
        echo -n "The date/time is: "
        sleep 2
        date
        read -t2 -p 'Continue (y/n)? ' YN || echo
        test n = "$YN" && echo "Asked for BREAK" >&2 && exit 90
    )
    SS=$?
    test 90 -eq $SS && echo "Matched BREAK" >&2 && break
    # Ctrl/C, perhaps?
    test yes = "$INTR" && echo "Matched INTR" >&2 && break
done
exit 0

Some notes: The read and test pair demonstrates interactive control to the protected code segment inside the ( ... ) block. The exit 90 is the equivalent of break but from inside a subshell. The test 90 -eq $SS line immediately after the subshell block ends is there to capture the exit 90 status and implement the break that the code actually wanted. The subshell can use different exit status values to indicate different types of required control flow ( break , exit , etc...) This does not prevent a program installing its own signal handler. For example, gdb installs its own handler for SIGINT ( Ctrl C ). If the aim is to prevent a user breaking out of the session, changing the interrupt key might help obfuscate the situation (see the code below). Inelegant but potentially effective. Changing the SIGINT key on the terminal:

G=$(stty -g)                    # Save settings
test -n "$G" && stty intr ^A    # That is caret and A, not Ctrl/A
# ... SIGINT generated with Ctrl/A rather than Ctrl/C ...
test -n "$G" && stty "$G"       # Restore original settings | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109826/"
]
} |
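A much smaller variant of the idea in 195,318 above, for when the protected section is a single command: the parent shell records the Ctrl+C and lets the current iteration finish first.

stop=
trap 'stop=yes' INT
while [ -z "$stop" ]; do
    ( trap '' INT; foo )    # foo runs with SIGINT ignored
done
echo "stopped between two runs of foo"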
195,337 | In a file system where filenames are in UTF-8, I have a file with a faulty name; it is displayed as: D�sinstaller , actual name according to zsh: D$'\351'sinstaller , Latin1 for Désinstaller , itself a French barbarism for "uninstall." Zsh would not match it with [[ $file =~ '^.*$' ]] but would match it with a globbing * —this is the behavior I expect. Now I still expect to find it when running find . -name '*' —as a matter of fact, I would never expect a filename to fail this test. However, with LANG=en_US.utf8 , the file does not show up, and I have to set LANG=C (or en_US , or '' ) for it to work. Question: What is the implementation behind this, and how could I have predicted that outcome? Infos: Arch Linux 3.14.37-1-lts, find (GNU findutils) 4.4.2 | That's a really nice catch. From a quick look at the source code for GNU find, I would say this boils down to how fnmatch behaves on invalid byte sequences ( pred_name_common in pred.c ):

b = fnmatch (str, base, flags) == 0;
(...)
return b;

This code tests the return value of fnmatch for equality with 0, but does not check for errors; this results in any errors being reported as "doesn't match". It has been suggested, many years ago, to change the behavior of this libc function to always return true on the * pattern, even on broken file names, but from what I can tell the idea must have been rejected (see the thread starting at https://sourceware.org/ml/libc-hacker/2002-11/msg00071.html ): When fnmatch detects an invalid multibyte character it should fall back to single byte matching, so that "*" has a chance to match such a string. And why is this better or more correct? Is there existing practice? As mentioned by Stéphane Chazelas in a comment, and also in the same 2002 thread, this is inconsistent with the glob expansion performed by shells, which do not choke on invalid characters. Perhaps even more puzzling is the fact that reversing the test will match only those files that have broken names (create files in bash with touch $'D\351marrer' $'Touch\303\251' $'\346\227\245\346\234\254\350\252\236' ):

$ find -name '*'
.
./Touché
./日本語
$ find -not -name '*'
./D?marrer

So, to answer your question, you could have predicted this by knowing the behavior of your fnmatch in this case, and knowing how find handles this function's return value; you probably could not have found out solely by reading the documentation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/195337",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14298/"
]
} |
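A quick reproduction of 195,337 above, plus the usual workaround of switching to the C locale for the one command:

touch "$(printf 'D\351marrer')"    # a Latin-1 name, invalid as UTF-8
find . -name '*'                   # may omit the file under a UTF-8 locale
LC_ALL=C find . -name '*'          # byte-wise matching sees every name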
195,340 | I have a text file called "list" which contains some random words:

he
she
we
his
her
with
his this this -- this
this

I ran this sed command: sed '1,4p' list What I thought this sed command was going to do was list the first 4 words in this file, so I thought the output would be:

he
she
we
his

But instead the output was this:

he
he
she
she
we
we
his
his
her
with
his this this -- this
this

Can anyone please tell me what I am doing wrong or why the output is different? | You are forgetting that the default action of sed is to print each pattern space (line), so to suppress the default behaviour you need to add the -n switch: sed -n '1,4p' list | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195340",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108416/"
]
} |
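Two equivalent spellings of the fix in 195,340 above, for comparison:

sed -n '1,4p' list    # suppress auto-print; print only lines 1-4
sed '4q' list         # or: print as usual and quit after line 4
head -n 4 list        # the traditional tool for this job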
195,360 | Good morning, fellow *nix enthusiasts! I have been using Debian 7 for a while now and after a recent upgrade I noticed I constantly kept running out of space on my root partition. I mean to the point where I had '0' bytes left on disk! So, after a lot of searching, I was able to zero-in on the /var/log folder. I used ls -s -S to arrange the files by size in this folder and noticed that three files were GBs in size (such as 13-15 GB): syslog, messages, kern.log. And yes, logrotate is working fine. It is rotating the logs. For example, I see kern.log.1 etc in /var/log. The problem is the logs are filling up so extremely fast that there's nothing logrotate can do. Apparently, some logging process in the OS is writing a lot of data which could be because of constant errors or something(??). I don't know. All I know is my laptop is over-heating simply because there's so much processing going on all the time due to this constant write process. So, I'm losing CPU power, AND disk space. My question is: how can I determine what process/daemon is creating this issue? How do I get to the root-cause of the problem so I could correct it? Reading these HUGE log files is not an option. Please. If I try to pull up a 15 GB log file in a text editor like leafpad or notepad on an already busy laptop, it just takes ages and ages to open. That is not practical. I realize that this question is broad because there could be any process/daemon causing this, but I want to know if anyone has experienced this before, and if there are any usual suspects I could look at. UPDATE: Following Eric's advice, I arranged the files in /var/log by modification time, and 'syslog' was the last one. So, I tailed it. The result:

Apr 10 00:53:37 MyMachine kernel: [11608.690733] [<ffffffffa08e4005>] ? ath9k_reg_rmw+0x35/0x70 [ath9k_htc]
Apr 10 00:53:37 MyMachine kernel: [11608.690742] [<ffffffff81084f57>] ? process_one_work+0x147/0x3b0
Apr 10 00:53:37 MyMachine kernel: [11608.690750] [<ffffffff81085764>] ? worker_thread+0x114/0x480
Apr 10 00:53:37 MyMachine kernel: [11608.690756] [<ffffffff81556065>] ? __schedule+0x2e5/0x790
Apr 10 00:53:37 MyMachine kernel: [11608.690765] [<ffffffff81085650>] ? create_worker+0x1c0/0x1c0
Apr 10 00:53:37 MyMachine kernel: [11608.690772] [<ffffffff8108ae91>] ? kthread+0xc1/0xe0
Apr 10 00:53:37 MyMachine kernel: [11608.690780] [<ffffffff8108add0>] ? kthread_create_on_node+0x1c0/0x1c0
Apr 10 00:53:37 MyMachine kernel: [11608.690788] [<ffffffff8155a23c>] ? ret_from_fork+0x7c/0xb0
Apr 10 00:53:37 MyMachine kernel: [11608.690795] [<ffffffff8108add0>] ? kthread_create_on_node+0x1c0/0x1c0
Apr 10 00:53:37 MyMachine kernel: [11608.690800] ---[ end trace 12dc8d8439345c1d ]

Unfortunately, it doesn't give me much of a hint. | There is actually a strong hint in the syslog snippet you posted. The end of the line Apr 10 00:53:37 MyMachine kernel: [11608.690733] [<ffffffffa08e4005>] ? ath9k_reg_rmw+0x35/0x70 [ath9k_htc] shows the stack trace is due to an unexpected error in a device driver named ath9k_htc . Hopefully, the kernel didn't panic, but the continuous repetition of errors is filling your file system. I would then blacklist the ath9k_htc wifi driver using this command, then reboot: echo "blacklist ath9k_htc" | sudo tee -a /etc/modprobe.d/blacklist.conf Beware though that doing so might prevent your wifi from working if the ath9k_htc driver was nevertheless used and functional despite the errors.
You can check whether a wifi device handled by the ath9k_htc driver is present in your machine by running lsusb and seeing if a device matches one of those listed here: https://wiki.debian.org/ath9k_htc | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79027/"
]
} |
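To test the blacklist from 195,360 above without a full reboot, the module can be unloaded on the spot:

lsmod | grep ath9k         # is the driver currently loaded?
sudo rmmod ath9k_htc       # unload it now (fails if the device is in use)
tail -f /var/log/syslog    # confirm the flood of traces has stopped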
195,381 | After updating postfix to 3.0, emails with UTF-8 chars in their subjects are stuck in the queue, with the following error: SMTPUTF8 is required, but was not offered by host mail.example.com [1.2.3.4] The receiving server (here called mail.example.com) is a postfix 2.10.1, and doesn't support SMTPUTF8. How do I get postfix to send those emails? Can I change some options in postfix, so it sends them like it would have done before the update? From my current options the interesting ones seem to be:

compatibility_level = 2
smtputf8_autodetect_classes = sendmail, verify
smtputf8_enable = ${{$compatibility_level} < {1} ? {no} : {yes}}
strict_smtputf8 = no

I think all of those settings are the defaults in 3.0. | According to Postfix README : By default, Postfix sets the "SMTPUTF8 requested" flag only on address verification probes and on Postfix sendmail submissions that contain UTF-8 in the sender address, UTF-8 in a recipient address, or UTF-8 in a message header value. If you submit mail through sendmail command or use address verification you may have to tweak smtputf8_autodetect_classes option. To successfully flush the queue, after correcting smtputf8_autodetect_classes option, all mails have to be requeued with postsuper -r ALL command. Temporary disabling of smtputf8 feature may also be needed (see discussion in comments). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20516/"
]
} |
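The queue-flushing steps mentioned in 195,381 above, spelled out; the temporary disable is one option the answer raises, not the only fix:

sudo postconf -e 'smtputf8_enable = no'    # temporarily disable SMTPUTF8
sudo postfix reload
sudo postsuper -r ALL                      # requeue the stuck mail
sudo postqueue -f                          # flush the queue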
195,387 | Can someone explain to me why the script

echo one && echo two &
echo three && echo four

gives me

[1] 3692
three
four
one
two
[1]+  Done    echo one && echo two

Apparently everything behind the background process is outputted first and the lines of code chronologically before the background process are printed last. I stumbled upon it in a script I tried to write and don't get why it behaves like this (although I guess there is some ingenuity behind it). I already found out that I can prevent this with brackets:

echo one && (echo two &)
echo three && echo four

gives

one
two
three
four

and

echo one && (sleep 2 && echo two &)
echo three && echo four

gives

one
three
four
two

Because line two sleeps in the background and hence outputs when the first lines have already been executed. But why do brackets have this effect? Bonus question: Why do brackets prevent the output of the background PID? | The & does not apply to echo two alone: & binds less tightly than && , so the first line is parsed as { echo one && echo two; } & . The whole two-command list is put into the background as job [1] (that is the [1] 3692 line), and the shell immediately runs echo three && echo four in the foreground. From then on it is a scheduling race between the background job and the foreground shell, and the foreground usually wins, which is why three and four appear first. With the brackets, (echo two &) is a subshell that runs in the foreground; echo one has already printed by then, and only the tiny echo two is backgrounded, so it normally finishes before the next line starts. Your sleep 2 variant makes the race visible by deliberately delaying the background side. Bonus answer: job-control messages like [1] 3692 and [1]+ Done are printed by the interactive shell for its own jobs. A background process started inside ( ... ) belongs to the non-interactive subshell, which does not perform job control, so no job number or PID is reported. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43165/"
]
} |
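Equivalent explicit groupings for the precedence point in 195,387 above; all three background the whole two-command list:

echo one && echo two &           # & applies to the whole && list
{ echo one && echo two; } &      # the same, written out
(echo one && echo two) &         # subshell variant, same ordering behaviour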
195,466 | On my server I have the directory /srv/svn . Is it possible to set this directory to have multiple group ownerships, for instance devFirmA , devFirmB and devFirmC ? The point is, I want to manage multiple users across multiple repositories under Subversion version control, and I do not know how to arrange the permissions of /srv/svn , the root directory of the repositories. I have, for instance, three firms, FirmA , FirmB and FirmC . Now, inside /srv/svn I've created three directories, FirmA , FirmB , FirmC , and inside them I've created a repository for each project, and now I do not know how to establish a permission scheme, since all elements inside /srv/svn are owned by root:root , which is not ok, or am I wrong? | This is an extremely common problem, if I understand it accurately, and I encounter it constantly. If I used ACLs for every trivial grouping problem, I would have tons of unmanageable systems. ACLs are the best practice only when you cannot do it any other way, not for this situation. This is the method I very strongly recommend. First you need to set your umask to 002, this is so a group can share with itself. I usually create a file like /etc/profile.d/firm.sh , and then add a test command with the umask.

[ $UID -gt 10000 ] && umask 002

Next you need to set the directories to their respective groups:

chgrp -R FirmA /srv/svn/FirmA
chgrp -R FirmB /srv/svn/FirmB
chgrp -R FirmC /srv/svn/FirmC

Finally you need to set the SGID bit properly, so the group will always stay to the one you set. This will prevent a written file from being set to the writer's GID.

find /srv/svn/FirmA -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmB -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmC -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmA -type f -print0 | xargs -0 chmod 664
find /srv/svn/FirmB -type f -print0 | xargs -0 chmod 664
find /srv/svn/FirmC -type f -print0 | xargs -0 chmod 664

Now finally, if you want to prevent the directories from being accessed by other users:

chmod 2770 /srv/svn/FirmA
chmod 2770 /srv/svn/FirmB
chmod 2770 /srv/svn/FirmC | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/195466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40993/"
]
} |
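A quick verification of the setup from 195,466 above; someuser is a placeholder and must be a member of FirmA:

ls -ld /srv/svn/FirmA                    # drwxrws--- ... root FirmA  ('s' = SGID)
sudo -u someuser touch /srv/svn/FirmA/t
ls -l /srv/svn/FirmA/t                   # group is FirmA, not someuser's own group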
195,490 | I want to try gitlab community edition on my fedora laptop. From the downloads page, it has binary packages for Ubuntu and CentOS 6 and CentOS 7. Which one should I be installing in fedora (release 21), or should I compile from source? | Of the packages on offer, the CentOS 7 omnibus package is the closest match for Fedora 21: both are RPM-based, both use systemd, and their library versions are similar, so it is the one to try first (the CentOS 6 package targets an init system and libraries that Fedora 21 no longer ships). Install it with yum localinstall and watch for dependency errors; if the package refuses to install or run cleanly, falling back to GitLab's source installation guide is the supported alternative, at the cost of managing Ruby and the services yourself. Fedora moves faster than the distributions GitLab tests against, so expect some manual fixing on upgrades. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/195490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2832/"
]
} |
195,502 | Have a network interface that does not come up on boot-up:

[belminf@tito ~]$ grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-enp0s3
ONBOOT=no

I know I could do the following for DHCP:

[belminf@tito ~]$ ip link set enp0s3 up
[belminf@tito ~]$ dhclient enp0s3

Or, for static IP:

[belminf@tito ~]$ ip link set enp0s3 up
[belminf@tito ~]$ ip addr add 192.0.2.11/24 dev enp0s3

However, is there a way to load the configuration from /etc/sysconfig/network-scripts/ifcfg-enp0s3 like ifup enp0s3 would have done? | In RHEL 7+, you'll have to use the nmcli command for a permanent change. nmcli uses the /etc/sysconfig/network-scripts/ifcfg-con_name file first; also, when you modify connection properties with nmcli it will write to the ifcfg-con_name file. So, to automatically start the connection, you need to use the following: nmcli con mod enp0s3 connection.autoconnect yes It will change the ONBOOT property to yes . In order to load the newly changed configuration file, you need to use (otherwise it will load during the next boot):

nmcli con down enp0s3
nmcli con up enp0s3

Good luck! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2372/"
]
} |
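A quick check that the change from 195,502 above stuck:

nmcli -t -f NAME con show      # list connection profiles
grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-enp0s3    # should now say yes
nmcli con up enp0s3            # the one-off ifup equivalent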
195,524 | I have two Ubuntu virtual machines: one is a server where I installed Oracle 11g Express, and the other is a simple client. My goal is to run an SQL query remotely from the client machine. So I prepared the query and sent it to the server. But later in the script, after the connection to the server is established, sqlplus doesn't work, and it shows me the following error: Sqlplus: Command not found For this I use this script:

#!/bin/bash
read -p "saisir votre requete: " req
printf "%s\n" "$req" > t1.txt
sed -e 's/[;,()'\'']/ /g;s/ */ /g' t1.txt > t.txt
`tr -s '[[:blank:]]' '\n' < t.txt `| while IFS= read -r word; do
if ! [[ "$word" =~ $(echo ^\($(paste -sd'|' ./req.txt)\)$) ]]; then
var=$(base64 <<< $word)
sed -i -e "s/$word/$var/g" t1.txt
fi
done
scp requete.sql [email protected]:/home/cloud1
# Connection to the server
ssh [email protected] '/home/cloud1/Cloud-Serv'

and the file Cloud-Serv contains the following code:

#!/bin/bash
echo "Connection is done !"
sudo service oracle-xe start
sqlplus / as sysdba
exit | In most cases, an Oracle instance is running under a special account, say oracle for instance. If you login as that user you need to set the correct environment, where $ORACLE_BASE , $ORACLE_HOME and $ORACLE_SID are the most important. Also, the PATH variable can be extended to have $ORACLE_HOME/bin , where sqlplus normally resides. Have a look at oraenv to set the correct environment and try to execute sqlplus using $ORACLE_HOME/bin/sqlplus | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107080/"
]
} |
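A hedged sketch of the environment setup suggested in 195,524 above; the path is the typical Oracle XE 11g location and may differ on your install:

. /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh   # sets ORACLE_HOME, ORACLE_SID, PATH
"$ORACLE_HOME/bin/sqlplus" / as sysdba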
195,571 | Sometimes I need to look up certain words through all the manual pages. I am aware of apropos , but if I understand its manual right, it restricts the search to the descriptions only. Each manual page has a short description available within it. apropos searches the descriptions for instances of keyword. For example, if I look up a word like 'viminfo', I get no results at all...

$ apropos viminfo
viminfo: nothing appropriate.

... although this word exists in a later section of the manual of Vim (which is installed on my system).

-i {viminfo}  When using the viminfo file is enabled, this option sets the filename to use, instead of the default "~/.viminfo". This can also be used to skip the use of the .viminfo file, by giving the name "NONE".

So how can I look up a word through every section of every manual? | From man man :

-K, --global-apropos
Search for text in all manual pages. This is a brute-force search, and is likely to take some time; if you can, you should specify a section to reduce the number of pages that need to be searched. Search terms may be simple strings (the default), or regular expressions if the --regex option is used.

This directly opens the manpage ( vim , then ex , then gview , ...) for me, so you could add another option, like -w to get an idea of which manpage will be displayed.

$ man -wK viminfo
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/run-one.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/run-one.1.gz
/usr/share/man/man1/run-one.1.gz
... | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/195571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110008/"
]
} |
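Two ways to narrow the brute-force search from 195,571 above to a single section (the first assumes man-db):

man -wK --regex -s 1 viminfo               # restrict -K to section 1
zgrep -l viminfo /usr/share/man/man1/*.gz  # crude equivalent, section 1 only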
195,575 | The shell variable number contains 1 1 1 1 1 separated by tabs. I want it to contain only the first 1. I'm trying number= $(echo "$number"| cut -f 3 ) and I get the error "1: command not found" and the contents of number don't change. What am I doing wrong? | Assuming that number is tab-separated, then consider: number= $(echo "$number"| cut -f 3 ) The result of echo "$number"| cut -f 3 is the third element of numbers which is 1 . Thus, the shell tries to execute: number= 1 In this command, the variable number is temporarily set to empty and the shell tries to execute the command 1 . Because there is no command named 1 , the shell emits the error message: bash: 1: command not found This is the shell's attempt to tell you that it could find no command named 1 . The solution is to remove the space: number=$(echo "$number"| cut -f 3 ) After command substitution, this becomes: number=1 This will succeed at assigning number to have the value 1 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110016/"
]
} |
195,644 | I have a file named myfile.csv containing the following:

abc:123:myname:1231
def:423324:arbitrary:value:string
StackExchange:Unix:Linux

From the terminal I run ./myscript.sh def The contents of myscript.sh is:

#!/bin/bash
key_word_I_am_looking_for=$1
my_variable=`cat myfile.csv | awk -F: -v keyword="$key_word_I_am_looking_for" '( $1 == keyword )' END{print "$@" }'`
echo "$my_variable"

I want the code to search for the word def or any other word in the first parameter in the myfile.csv , ie abc or StackExchange . Once found I would like it to take the whole line out without the separators and place it in the my_variable variable, and echo it out to the terminal (so the output would look like: def 423324 arbitrary value string when ./myscript.sh def is entered at the terminal. When ./myscript.sh StackExchange the output would be StackExchange Unix Linux ). Where am I going wrong? Is there an alternative? | Your awk syntax is a little wrong.

#!/bin/bash
awk -F: -v keyword="$1" '$1 == keyword {$1=$1; print}' myfile.csv

The trick here is that reassigning the value of one of the fields forces awk to recalculate $0 using the output field separator. Here, the default OFS is a space, so assigning the value of $1 to itself changes the colons to spaces. A non-awk way to write this is: grep "^$1:" myfile.csv | tr ":" " " but that uses regular expression matching, not string equality. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
195,677 | I was using this little script to find malformed images: find -name '*.jpg' -exec identify -format "%f\n" {} \; 2>errors.txt It worked well enough, but now I need to modify it slightly. Instead of dumping the stderr to errors.txt, I want to dump the filename (%f) of the image that triggered the error. That is, I want a list of the malformed image files in errors.txt instead of a list error messages. I tried adding || echo "%f" >> errors.txt to the -exec portion, but that didn't work. What would be the best way to do this? | This finds malformed images and stores their names in names.txt : find -name '*.jpg' -exec bash -c 'identify "$1" &>/dev/null || echo "$1">>names.txt' none {} \; How it works find -name '*.jpg' This starts up find as usual. -exec bash -c 'identify "$1" &>/dev/null || echo "$1" >>names.txt' none {} \; This runs identify on each file and, if identify returns a non-zero error code, it echoes the file name to names.txt . bash -c '...' none {} causes a bash shell to run the command in the quotes with the file name in {} assigned to positional argument $1 . For the curious, the string none is assigned to $0 . $0 is not used unless bash generates an error in which case it will appear in the error message as the program name . Discussion I tried adding || echo "%f" >> errors.txt to the -exec portion, but that didn't work. What would be the best way to do this? The subtlety is that the || has to operate on the identify command. To do that, we need to put identify in a shell such as by using bash -c as shown above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110072/"
]
} |
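A batched variant of the answer in 195,677 above: one shell per large group of files instead of one shell per file:

find . -name '*.jpg' -exec bash -c '
    for f do
        identify "$f" &>/dev/null || printf "%s\n" "$f"
    done
' bash {} + > names.txt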
195,697 | I used mv to move all files matching a specific pattern into a folder. How can I determine (after moving files) where a specific file came from? Is there any chance of determining the location after I used the command? | No, there is not. You could potentially set up auditd or something like that to trace what happened but that would have been set up before the command. One possible solution is to look into the shell history to see where/how the file was moved and determine the original location from there. This is however largely unreliable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110081/"
]
} |
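If this matters again in the future, kernel auditing can record moves before they happen, as the answer to 195,697 above hints; the directory and key name are placeholders:

auditctl -w /data/incoming -p wa -k file-moves    # watch writes and renames
ausearch -k file-moves -i                         # review recorded events later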
195,735 | I am running Arch on this machine:

3.40GHz i7 hexacore (4930K)
16GB DDR3 1600MHz RAM
2x Samsung 840 EVO SSDs in Raid0 (using BTRFS raid)

When I run VMware on my Arch with a few VMs (2 or 3), giving them about 2-4 cores each, and 2GB RAM each, my system starts having random freezes. Every couple of minutes, the system will freeze up for anywhere from 10 to 30 seconds, and then start moving again, only to freeze up 30 seconds later until I shut down the VMs. When the system freezes, the mouse still moves fine, but applications stop responding on the host - vmware doesn't respond, firefox (which is also open on the host) doesn't respond, etc. When the freeze happens, if I have process monitor running, it does show several cores maxed out by vmware, but at the same time, there are other unused cores. I also have more than enough RAM - the VMs use a total of 6GB, and the host has 10GB left over. I have 0 swap space, so there's no way swapping is slowing anything down. There are reports that because btrfs causes fragmentation of files on a filesystem level, virtual machines may run slow. As far as I can tell however, fragmentation is only a problem on traditional hard disks - SSDs don't have read heads that seek, so they don't care if a file is highly fragmented. This never used to happen when I was running Debian 7, so I'm pretty sure it's not a hardware problem. What tools can I run to figure out why my system keeps freezing up? I've tried top/htop, and iotop (nothing is writing or reading excessively when the system freezes up). There doesn't appear to be any kind of activity monitor for btrfs to tell if it's having problems keeping up with writing/reading anything. Is there anything else I can try? | From the btrfs gotchas page : Files with a lot of random writes can become heavily fragmented (10000+ extents) causing thrashing on HDDs and excessive multi-second spikes of CPU load on systems with an SSD or large amount of RAM. On servers and workstations this affects databases and virtual machine images. The nodatacow mount option may be of use here, with associated gotchas. ... Symptoms include btrfs-transacti and btrfs-endio-wri taking up a lot of CPU time (in spikes, possibly triggered by syncs). You can use filefrag to locate heavily fragmented files (may not work correctly with compression). I had similar problems to the ones you describe with Virtualbox. The nodatacow option for btrfs did not help in a noticeable way on my system. I tried the auto-defragment option (mentioned as a possible solution for application databases in desktop environments) as well, also without results that would make the behaviour acceptable. In the end I shrunk my btrfs partition and the Logical Volume it lives in, created a new LV, formatted it as ext4, and then put the VM disc images that I have (VirtualBox) on that "partition". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91667/"
]
} |
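The per-file form of the nodatacow workaround discussed in 195,735 above, which avoids remounting the whole filesystem (the directory path is a placeholder):

mkdir -p /var/lib/vmimages
chattr +C /var/lib/vmimages    # new files here are created without copy-on-write
lsattr -d /var/lib/vmimages    # the 'C' attribute should now show
# note: +C only affects files created after the flag is set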
195,765 | I want to create a USB-to-USB data transfer system in Linux (preferably Ubuntu). For this I want to use no external hardware or switch ( except this cable ). It's going to be like mounting a USB drive to a system, but in this scenario one of the Linux systems is going to be mounted on the other. How can I create this? Are there any kernel modules available, given my experience with kernel programming is very basic? | Yes this is possible, but it is not possible by cutting two USB cables with USB-A connectors (what is normally going into the USB on your motherboard) and cross connecting the data cables. If you connect the USB power lines on such a self made cable, you are likely to end up frying your on-board USB handling chip . Don't try this at home! On most computer boards the chips handling USB are host only. Not only that, but they also handle a lot of the low level communication to speed things up and reduce the load on the CPU. It is not as if you could program your computer to handle the pins on the USB port to act as a non-host. The devices capable, on the chip level, of switching between acting as a host and connecting to a host are few, as this requires a much more expensive chip¹. This is e.g. why intelligent devices like my smart-phone, GPS and ebook, although they all run Linux or something similar, do not allow me to use ssh to communicate when connected via a normal USB cable. Those devices go into some dumb mode when connected, where the host (my desktop system) can use their storage as a USB disc. After disconnecting, the device uses the same interface as a host to get to the data (although no cable connection is required, this happens internally). With that kind of device, even if Linux runs on both, there is no communication between the systems, i.e. the linuxes . This is independent of a normal micro or mini USB cable connecting them to my desktop. Between two desktop PCs the above is normally impossible to do as you would require a USB-A to USB-A cable, which is not common (as it would not work with the normal chips that are driving the connections anyway). Any solution doing USB to USB with two USB-A connectors that I have seen, is based on a cable that has some electronics in between (much like a USB → Serial plugged into a Serial → USB cable, but then all in one piece). These normally require drivers to do the transfer, although you might be able to use UUCP or something else over such a cable, like you would over a "normal" serial port. This probably requires inetd and proper configuration to login on the other computer as well. ¹ The only device I have that is software changeable in this way is an Arduino board with exactly such a special chip. Just this chip made the board twice as expensive as a normal Arduino board. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/195765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110115/"
]
} |
195,775 | I want to tidy up some script with long commands, so that for example: { somecheck || somecomand "reallyreallyreallyreallyreallyreallylongstring" } &> /dev/null & becomes something like:

{ somecheck || \
    somecomand "reallyreally" \
               "reallyreally" \
               "reallyreally" \
               "longstring" \
} &> /dev/null &

But I'm worried about word splitting. To avoid it, I'm considering this:

{ somecheck || \
    somecomand "$(echo \
        "reallyreally" \
        "reallyreally" \
        "reallyreally" \
        "longstring" \
    )" \
} &> /dev/null &

Does anyone know some other way to do multiline strings in bash/zsh? I'm having trouble googling for this info, and I think this'll mean at least three processes (the script, the background block, and the command substitution subshell); maybe there's a better way? Thanks in advance! | Using line continuations like that adds spaces into your string: the sequence backslash-newline-whitespace will be replaced by a single space. Just using a variable will go a long way towards improved readability:

url="reallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallylongstring"
{ somecheck || somecomand "$url"
} &> /dev/null &

You can still break that up into substrings, using an array:

parts=(
    "reallyreallyreallyreally"
    "reallyreallyreallyreally"
    "reallyreallyreallyreally"
    "reallyreallyreallyreally"
    "longstring"
)
whole=$(IFS=; echo "${parts[*]}")

but is it that important to split up literal strings given the added complexity? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48795/"
]
} |
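A subshell-free way to join the array from 195,775 above, in bash:

parts=( reallyreally reallyreally reallyreally longstring )
printf -v whole '%s' "${parts[@]}"    # the format is reused per element; no fork
somecomand "$whole"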
195,794 | I installed Unified Remote using dpkg : dpkg -i urserver.deb How do I uninstall it so I can reinstall from scratch? | First of all you should check if this package is correctly installed in your system and being listed by dpkg tool: dpkg -l '*urserver*' It should have an option ii in the first column of the output - that means 'installed ok installed'. If you'd like to remove the package itself (without the configuration files), you'll have to run: dpkg -r urserver If you'd like to delete (purge) the package completely (with configuration files), you'll have to run: dpkg -P urserver You may check if the package has been removed successfully - simply run again: dpkg -l urserver If the package has been removed without configuration files, you'll see the rc status near the package name, otherwise, if you have purged the package completely, the output will be empty. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/195794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67364/"
]
} |
195,822 |

$ seq 1 3
1
2
3
$ seq 1 3 | xargs echo
1 2 3
$

Mentally replace seq 1 3 with any command that lists entries one-per-line on standard out. How can I get more-or-less what you'd expect, i.e. the entries on separate lines ( echo 1; echo 2; echo 3; )? | Normally xargs will put several arguments on one command line. To limit it to one argument at a time, use the -n option:

$ seq 3 | xargs -n 1 echo
1
2
3

Documentation From man xargs :

-n max-args
Use at most max-args arguments per command line. Fewer than max-args arguments will be used if the size (see the -s option) is exceeded, unless the -x option is given, in which case xargs will exit.

Difference between -n and -L: -L is similar but has an extra feature: unlike -n , lines with trailing blanks are continued onto the next line. Observe:

$ echo $'1 \n2\n3\n4'
1 
2
3
4
$ echo $'1 \n2\n3\n4' | xargs -L 1 echo
1 2
3
4
$ echo $'1 \n2\n3\n4' | xargs -n 1 echo
1
2
3
4 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28980/"
]
} |
195,828 | I have checked many similar questions but the solutions didn't work for me. On my previous Debian wheezy installation I could mount devices from the GUI with no permission problem, and also after upgrading to jessie. But on my new Debian jessie installation devices mount in a read-only state, whether ntfs partitions on the same HDD as my Debian installation or external USB devices; for both the root user and a normal user, I can't write and modify data on mounted devices. I have found these lines in syslog that seem to be related.

udisksd[1281]: Mounted /dev/sda4 at /media/<user>/<uuid> on behalf of uid 1000
udisksd[1281]: Cleaning up mount point /media/<user>/<uuid> (device 8:4 is not mounted)
udisksd[1281]: Unmounted /dev/sda4 on behalf of uid 1000
kernel: [ 125.190099] ntfs: volume version 3.1.
udisksd[1281]: Mounted /dev/sda4 at /media/<user>/<uuid> on behalf of uid 1000
org.gtk.Private.UDisks2VolumeMonitor[1224]: index_parse.c:191: indx_parse(): error opening /media/<user>/<uuid>/BDMV/index.bdmv
org.gtk.Private.UDisks2VolumeMonitor[1224]: index_parse.c:191: indx_parse(): error opening /media/<user>/<uuid>/BDMV/BACKUP/index.bdmv
org.gnome.Nautilus[1224]: Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
kernel: [ 137.739543] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739579] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739655] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739678] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739702] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739767] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739791] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739814] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739894] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739921] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.

I'm trying to figure out what makes the difference between the two installations. In my new installation, unlike the previous one, I didn't install the gnome task completely but only the minimal gnome packages. And the other difference is that the first time I created a fresh partition table and formatted all the partitions, ext4 and ntfs, then installed windows and then Debian, but the second time I used the same partition table and only formatted the ext4 partitions. Both times dual-boot with windows. The output of cat /etc/mtab for two internal and external mounted devices reads as follows:

/dev/sdb1 /media/<user>/<uuid> ntfs rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1 0 0
/dev/sda4 /media/<user>/<uuid> ntfs rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1 0 0 | After hours of searching, there seem to be different causes for this issue and different solutions for each one.
I'm not expert enough to provide a comprehensive answer, so I'll point to some frequent situations on the topic:

Ownership/permission issues for mounted devices on mount points: File permissions won't change ; USB drive auto-mounted by user but gets write permissions for root only

Damaged file-system that for security reasons mounts the device as read-only: Permission Denied on External Hard Drive

Hibernated windows that doesn't permit write access to windows partitions on dual-boot systems: Unable to mount Windows (NTFS) filesystem due to hibernation

And the one that led me to the answer is the type of mounting based on the file-system: Why can't I write on External Hard disk?

My problem was the missing NTFS driver package ntfs-3g , which caused the system to use the Linux kernel NTFS driver ntfs . As mentioned in the Debian Wiki NTFS page, ntfs , the Linux kernel NTFS driver, provides read-only access, and ntfs-3g , the userspace NTFS driver via FUSE, provides read and write access.

# apt-get install ntfs-3g

and a system reboot solved the problem for me. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109360/"
]
} |
195,830 | I have an x.tar file in directory /a . After untarring x.tar I got the following result:

cust/
cust/01/
cust/01/INFENG/
cust/01/INFENG/scr/
cust/01/INFENG/scr/test.scr
cust/01/INFENG/SQL/
cust/01/INFENG/SQL/test.sql
cust/01/INFENG/MRT/

The directory cust was created in /a with the given directory structure. Now what I need to do is take a backup of the existing files present in the path and then replace them with the files untarred in /a . So how can I achieve this through a shell script? For example, I have to take a backup of cust/01/INFENG/scr/test.scr (the existing one) before replacing it with the file I got by untarring the tar file in /a . I am using ksh . | After hours of searching, there seem to be different causes for this issue, each with its own solution. I'm not enough of an expert to give a comprehensive answer, so I'll point to some frequent situations on the topic:

Ownership/permission issues for mounted devices on mount points:
File permissions won't change
USB drive auto-mounted by user but gets write permissions for root only

A damaged filesystem that, for safety reasons, gets mounted read-only:
Permission Denied on External Hard Drive

A hibernated Windows that doesn't permit write access to Windows partitions on dual-boot systems:
Unable to mount Windows (NTFS) filesystem due to hibernation

And the one that led me to the answer, the type of mounting chosen based on the filesystem driver:
Why can't I write on External Hard disk?

My problem was the missing NTFS driver package ntfs-3g, which caused the system to use the Linux kernel NTFS driver ntfs. As mentioned on the Debian Wiki NTFS page, ntfs, the Linux kernel NTFS driver, provides read-only access, and ntfs-3g, the userspace NTFS driver via FUSE, provides read and write access.

# apt-get install ntfs-3g

and a system reboot solved the problem for me. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/195830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96888/"
]
} |
195,849 | I have already read: Is there a way to modify a file in-place? I am curious if there is a way to modify a file in place using a command that will not create temporary files.Let's say I create a directory mkdir read_only and then I create some files inside read_only. When I'm happily done creating files, I run chmod 555 read_only . Is it possible to modify any of the files now without the use of a temporary file? Specifically I would like a solution that could be done from a bash script. I do not want to know just if it is possible; I am also seeking a solution. All the ideas I have managed so far with bash/unix commands create temporary files. Edit: I believe my question is not a duplicate of Can a file be edited with the 'write' permission on it but not on its parent directory? for the following reasons: I am seeking a solution in addition to asking "can". Whereas, they only asked "can". The answers on the possible duplicate do not answer my question fully. I ask for commands that can go in a bash script. | You need the write permission on a directory to create or remove files in it, but not to write to a file in it. Most shell commands, when given an output file, simply open that file for writing, and replace the data that was previously in the file. The > redirection operator truncates the existing file (i.e. deletes the existing file content, resulting in a file with length zero) and starts writing to it. The >> redirection operator causes data to be appended to the end of the file. Writing to files is limited by the possibilities offered by the low-level interface. You can overwrite bytes in a file in place, append to the end of the file, and truncate a file to a chosen length. You cannot insert bytes while shifting subsequent bytes forward (as in foobar → fooNEWSTUFFbar ) nor delete bytes while shifting subsequent bytes backwards (as in foobar → for ), except by simulating these operations by reading the data to move and writing it at its new location. The problem with editing files in place is that it's difficult to ensure that the file content remains consistent if something goes wrong (program bug, disk full, power loss, …). This is why robust file processing usually involves writing a temporary file with the new data, then moving that temporary file into place. Another limitation with editing files in place is that complex operations involving reads and writes are not atomic. This is a problem only if some other task may want to read the file at the same time as your modification. For example, if you change foobar to fooNEWSTUFFbar by reading the last 3 bytes ( bar ) then writing the new data ( foo → fooNEWSTUFF ) and finally appending the old tail ( fooNEWSTUFF → fooNEWSTUFFbar ), a concurrent reader might see fooNEWSTUFF , or even other partial states of the file with only part of the data written. Again, writing to a temporary file first solves this problem, because moving the temporary file in place is an atomic operation. If you don't care about these limitations, then you can modify a file in place. Appending data is easy ( >> ), most other transformations are more involved. A common pitfall is that somefilter <somefile >somefile does NOT apply somefilter to the file content: the > operator truncates the output file before somefilter starts reading from it. Joey Hess's moreutils contains a utility called sponge which fixes this problem. 
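To see the pitfall concretely, here is a throwaway demonstration (any filter would do; tr is just an example):

$ printf 'hello\n' > somefile
$ tr a-z A-Z < somefile > somefile
$ wc -c somefile
0 somefile

The shell truncates somefile before tr ever reads it, so the original data is simply gone.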
Instead of redirecting the output to somefile , you pipe it into sponge , which reads all of its input and then overwrites the existing file with the input that it's read. Note that it's still possible to end up with partial data if the filter fails.

somefilter <somefile | sponge somefile

If you don't have sponge , the portable easy way to fix this is to first read the data into memory:

content=$(cat somefile; echo a)
content=${content%a}

The echo a bit is to preserve newlines at the end of the file — command substitution always strips off trailing newlines. You can then pass the content to a command:

printf %s "$content" | somefilter >somefile

This replaces the content of the file by the output of the filter. If the command fails for any reason, the original content of the file is lost and the file contains whatever data the command wrote before failing. Beware that this method doesn't work for binary files, because most shells don't support null bytes. Another way to modify a file in place is to use the ed editor, which bears a strong resemblance to sed , but loads the file into memory and saves it in place, as opposed to sed's line-by-line operation. Acting on a file without loading it into memory and without creating a temporary file is trickier with only standard shell tools, but it can be done. Shell redirection and most text processing utilities only let you append to a file or overwrite it from the beginning. You can use dd conv=notrunc seek=… to overwrite data at some offset in a file without affecting the parts that aren't being overwritten; see Is there a way to modify a file in-place? for an example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110167/"
]
} |
195,867 | I now know that I should never try to edit the /etc/sudoers file with a regular text editor. However, minutes before I learned this, I saved a new user name in the file with Sublime Text. Now when I run sudo cat sudoers for example, I get the following error:

>>> /etc/sudoers: syntax error near line 1 <<<
sudo: parse error in /etc/sudoers near line 1
sudo: no valid sudoers sources found, quitting

How can I get out of this quandary? | I see you've tagged your question osx, so if you've done this on a mac, make use of the GUI:

Open any Finder window and press cmd shift G
Type /etc/sudoers and press return to go to the file
Press cmd i with the file highlighted
Scroll to the bottom of that info window to 'Sharing & Permissions' and click the lock icon in the bottom right
Type an admin username and password
Now add yourself in that window with the + button and select 'Read and Write' privileges
Open the file in any editor and fix what you screwed up the first time! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110181/"
]
} |
195,898 | Reading "What is the difference between Halt and Shutdown commands?" , I generally have an idea what the command shutdown does, with or without the -h/-r options. The "halt" command performs a power off of the system to run-level 0. The "shutdown" command performs a power off of the system to run-level 1 without the -h or -r options. What about the command "poweroff": does it go into run-level 0 or 1? Is this the only main difference between these three commands? | And now, the systemd answer. You're using, per the tag on your question, Red Hat Enterprise Linux. Since version 7, that has used systemd. None of the other answers are correct for the world of systemd; nor even are some of the assumptions in your question.

Forget about runlevels ; they exist, but only as compatibility shims. The systemd documentation states that the concept is "obsolete". If you're starting to learn this stuff on a systemd operating system, don't start there.

Forget about the manual page that marcelm quoted; it's not from the right toolset at all, and is a description of another toolset's command, incorrect for systemd's. It's the one for the halt command from the van Smoorenburg "System 5" init utilities.

Ignore the statements that /sbin/halt is a symbolic link to /sbin/reboot ; that's not true with systemd. There is no separate reboot program at all.

Ignore the statements that halt or reboot invoke a shutdown program with command-line arguments; they are also not true with systemd. There is no separate shutdown program at all.

Every system management toolset has its version of these utilities. systemd, upstart, nosh , van Smoorenburg init , and BSD init all have their own halt , poweroff , and so forth. On each their mechanics are slightly different. So are their manual pages.

In the systemd toolset halt , poweroff , reboot , telinit , and shutdown are all symbolic links to /bin/systemctl . They are all backwards compatibility shims, that are simply shorthands for invoking systemd's primary command-line interface: systemctl . They all map to (and in fact are) that same single program. (By convention, the shell tells it which name it has been invoked by.)

targets, not runlevels

Most of those commands are shorthands for telling systemd, using systemctl , to isolate a particular target . Isolation is explained in the systemctl manual page (q.v.), but can be, for the purposes of this answer, thought of as starting a target and stopping any others. The standard targets used in systemd are listed on the systemd.special (8) manual page. The diagrams on the bootup (7) manual page in the systemd toolset, in particular the last one, show that there are three "final" targets that are relevant here:

halt.target — Once the system has reached the state of fully isolating this target, it will have called the reboot(RB_HALT_SYSTEM) system call. The kernel will have attempted to enter a ROM monitor program, or simply halted the CPU (using whatever mechanism is appropriate for doing so).

reboot.target — Once the system has reached the state of fully isolating this target, it will have called the reboot(RB_AUTOBOOT) system call (or the equivalent with the magic command line). The kernel will have attempted to trigger a reboot.

poweroff.target — Once the system has reached the state of fully isolating this target, it will have called the reboot(RB_POWER_OFF) system call. The kernel will have attempted to remove power from the system, if possible.
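You can inspect how these targets are wired together with systemctl itself (a sketch; the exact unit lists vary from system to system, and systemctl cat needs a reasonably recent systemd):

systemctl list-dependencies poweroff.target   # what the target pulls in
systemctl cat poweroff.target                 # the unit file itself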
These are the things that you should be thinking about as the final system states, not run levels. Notice from the diagram that the systemd target system itself encodes things that are, in other systems, implicit rather than explicit: such as the notion that each of these final targets encompasses the shutdown.target target, so that one describes services that must be stopped before shutdown by having them conflict with the shutdown.target target. systemctl tries to send requests to systemd-logind when the calling user is not the superuser. It also passes delayed shutdowns over to systemd-shutdownd . And some shorthands trigger wall notifications. Those complexities aside, which would make this answer several times longer, assuming that you are currently the superuser and not requesting a scheduled action:

systemctl isolate halt.target has the shorthands:
shutdown -H now
systemctl halt
plain unadorned halt

systemctl isolate reboot.target has the shorthands:
shutdown -r now
telinit 6
systemctl reboot
plain unadorned reboot

systemctl isolate poweroff.target has the shorthands:
shutdown -P now
telinit 0
shutdown now
systemctl poweroff
plain unadorned poweroff

systemctl isolate rescue.target has the shorthands:
telinit 1
systemctl rescue

systemctl isolate multi-user.target has the shorthands:
telinit 2
telinit 3
telinit 4

systemctl isolate graphical.target has the shorthand:
telinit 5

After parsing the various differing command-line syntaxes, these all eventually end up in the same code paths inside the systemctl program.

Notes:

The traditional behaviour of option-less shutdown now has been to switch to single-user mode . This is not the case with systemd. rescue.target — single-user mode being renamed rescue mode in systemd — is not reachable with the shutdown command.

telinit really does wholly ignore all of those runlevel N .target and default.target symbolic links in the filesystem that the manual pages describe. The aforegiven mappings are hardwired into the systemctl program, in a table.

systemd has no notion of a current run level . The operation of these commands is not conditional upon any "if you are in run-level N ".

The --force option to the halt , reboot , and poweroff commands is the same as saying --force --force to the systemctl halt , systemctl reboot , and systemctl poweroff commands. This makes systemctl try to call reboot() directly. Normally it just tries to isolate targets.

telinit is not the same as init . They are different programs in the systemd world, the latter being another name for the systemd program, not for the systemctl program. The systemd program is not necessarily compiled with any van Smoorenburg compatibility at all, and on some systemd operating systems complains about being invoked incorrectly if one attempts init N .

Further reading

Are there any good reasons for halting system without cutting power?
Why does `init 0` result in "Excess Arguments" on Arch install?
Stephen Wadeley (2014). "8. Managing Services with systemd" Red Hat Enterprise Linux 7 System Administrators' Guide . Red Hat.
Lennart Poettering (2013-10-07). systemctl . systemd manual pages. freedesktop.org.
Lennart Poettering (2013-10-07). systemd.special . systemd manual pages. freedesktop.org.
Lennart Poettering (2013-10-07). bootup . systemd manual pages. freedesktop.org.
Jonathan de Boyne Pollard (2018). init . nosh Guide . Softwares. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/195898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91488/"
]
} |
195,914 | Here is the script:

echo '1 2 3 4 5 6' | while read a b c ; do echo result: $c $b $a; done

The result is 3 4 5 6 2 1. Can someone explain why? | The final data on the line will be put in the last variable, i.e. c contains "3 4 5 6". So you probably want:

echo '1 2 3 4 5 6' | while read a b c rest ; do
echo result: $c $b $a;
done

and ignore the variable rest . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91231/"
]
} |
195,922 | When tailing multiple files at once as shown below, is there any way to show the file name at the start of each line?

tail -f one.log two.log

current output:

==> one.log <==
contents of one.log here...
contents of one.log here...

==> two.log <==
contents of one.log here...
contents of two.log here..

Looking for something like:

one.log: contents of one.log here...
one.log: contents of one.log here...
two.log: contents of two.log here...
two.log: contents of two.log here... | Short answer: GNU Parallel has a set of nice options which make it really easy to do such things:

parallel --tagstring "{}:" --line-buffer tail -f {} ::: one.log two.log

The output would be:

one.log:contents of one.log here...
one.log:contents of one.log here...
two.log:contents of two.log here...
two.log:contents of two.log here...

More explanation: The option --tagstring=str tags each output line with the string str. From the parallel man page:

--tagstring str
Tag lines with a string. Each output line will be prepended with str and TAB (\t). str can contain replacement strings such as {}. --tagstring is ignored when using -u, --onall, and --nonall.

All occurrences of {} will be replaced by parallel's arguments which, in this case, are the log file names; i.e. one.log and two.log (all arguments after :::). The option --line-buffer is required because the output of a command (e.g. tail -f one.log or tail -f two.log) would otherwise only be printed when that command finishes. Since tail -f will wait for file growth, it is required to print the output on a line basis, which --line-buffer does. Again from the parallel man page:

--line-buffer (alpha testing)
Buffer output on line basis. --group will keep the output together for a whole job. --ungroup allows output to mixup with half a line coming from one job and half a line coming from another job. --line-buffer fits between these two: GNU parallel will print a full line, but will allow for mixing lines of different jobs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/195922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
195,939 | I want to know the meaning of {} + in the exec command, and what the difference is between {} + and {} \; . To be exact, what is the difference between these two:

find . -type f -exec chmod 775 {} +
find . -type f -exec chmod 775 {} \; | Using ; (semicolon) or + (plus sign) is mandatory in order to terminate the shell commands invoked by -exec / -execdir . The difference between ; (semicolon) and + (plus sign) is how the arguments are passed into find's -exec / -execdir parameter. For example:

Using ; will execute multiple commands (separately for each argument). Example:

$ find /etc/rc* -exec echo Arg: {} ';'
Arg: /etc/rc.common
Arg: /etc/rc.common~previous
Arg: /etc/rc.local
Arg: /etc/rc.netboot

All following arguments to find are taken to be arguments to the command. The string {} is replaced by the current file name being processed.

Using + will execute the least possible number of commands (as the arguments are combined together). It's very similar to how the xargs command works, so it will use as many arguments per command as possible to avoid exceeding the maximum limit of arguments per line. Example:

$ find /etc/rc* -exec echo Arg: {} '+'
Arg: /etc/rc.common /etc/rc.common~previous /etc/rc.local /etc/rc.netboot

The command line is built by appending each selected file name at the end. Only one instance of {} is allowed within the command. See also:

man find
Using semicolon (;) vs plus (+) with exec in find at SO
Simple unix command, what is the {} and \; for at SO | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/195939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110379/"
]
} |
196,009 | Like many (most?) others, I edit my crontab via crontab -e , where I keep all routine operations such as incremental backups, ntpdate, various rsync operations, as well as making my desktop background Christmas-themed once a year. From what I've understood, on a fresh install or for a new user, this also automatically creates the file if it doesn't exist. However, I want to copy this file to another user, so where is the actual file that I'm editing? If this varies between distros, I'm using CentOS 5 and Mint 17. | The location of cron files for individual users is /var/spool/cron/crontabs/ . From man crontab :

Each user can have their own crontab, and though these are files in /var/spool/cron/crontabs , they are not intended to be edited directly. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/196009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107480/"
]
} |
196,025 | If I can do this in my bash shell:

$ STRING="A String"
$ echo ${STRING^^}
A STRING

How can I change my command line argument to upper case? I tried:

GUARD=${1^^}

This line produces a "Bad substitution" error. | Let's start with this test script:

$ cat script.sh
GUARD=${1^^}
echo $GUARD

This works:

$ bash script.sh abc
ABC

This does not work:

$ sh script.sh abc
script.sh: 1: script.sh: Bad substitution

This is because, on my system, like most debian-like systems, the default shell, /bin/sh , is not bash. To get bash features, one needs to explicitly invoke bash. The default shell on debian-like systems is dash . It was chosen not because of features but because of speed. It does not support ^^ . To see what it supports, read man dash . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110304/"
]
} |
196,026 | Trying to understand the whole journalling setup, and from what I've read and tried, it's eluding me somehow. The root filesystem is ext3. Examining it via tune2fs -l /dev/root shows that 'has_journal' is present. "Good", methinks, "this should be easy!". Not so fast there, hotshot. I added 'data=journal' to the line in fstab (originally, I had 'defaults,data=journal' but later dropped the 'defaults' entry). Also added 'rootflags=data=journal' to my 'kernel xxx' line in grub.conf. Rebooted, the filesystem is mounted read-only and I need to horse around with it to get it writable. The 'mount' command did nothing so I had to examine /proc/mounts for any info. I had also tried adding 'data=journal' to all of my ext3 filesystems, and some appeared, via /proc/mounts, to be mounted with data=ordered and others with data=journal... why the difference there? How do I get the root filesystem mounted rw with journalling? OS is CentOS 5.4. | Let's start with this test script:

$ cat script.sh
GUARD=${1^^}
echo $GUARD

This works:

$ bash script.sh abc
ABC

This does not work:

$ sh script.sh abc
script.sh: 1: script.sh: Bad substitution

This is because, on my system, like most debian-like systems, the default shell, /bin/sh , is not bash. To get bash features, one needs to explicitly invoke bash. The default shell on debian-like systems is dash . It was chosen not because of features but because of speed. It does not support ^^ . To see what it supports, read man dash . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14552/"
]
} |
196,038 | There are several points where I/O is passed through, some of which (to my knowledge) are the shell, pty, tty, termios, terminal emulator application. In most terminal emulators, long command lines (ones that exceed current $COLUMNS) are wrapped to a new line before the user submits the command by pressing Enter. Also, the line is wrapped backward to the line above when the appropriate number of characters are removed from the command line as one would expect. My question is: where is this magic usually handled? Is it a termios setting, or part of the shell, or is the terminal emulator application responsible for this? For more context, I'm using the Terminator terminal emulator application on Ubuntu - where the linewrapping works perfectly fine (so there should be no issues with my $PS1 prompt). But I'm working on my own terminal emulator application that works with a go pty spawner (github.com/kr/pty) and I'm having issues where long lines aren't wrapped to a new line, but the beginning of the same line. | In most terminal emulators, long command lines […] are wrapped to a new line before the user submits the command by pressing Enter. This is not a function of the terminal emulator. It is a function of your shell. Your shell is not a full-screen application, but it is doing cursor addressing . When you are editing a command line in a shell, the line editing library in the shell is in full charge of how the line being edited is displayed. In the Bourne Again shell this is the GNU readline library. The Almquist shell uses libedit. Other shells, such as the Z Shell, have their own libraries. These libraries are layered on top of the termcap/terminfo libraries, which record the terminal capabilities of your terminal emulator. Those capabilities include amongst other things the control sequences for positioning the cursor in relative or absolute terms, the sequences for clearing to the end of the line and the end of the screen, and whether or not your terminal has automatic margins . With this and information about the width of the terminal determined from the TIOCGWINSZ ioctl (falling back to the COLUMNS variable for some libraries, or the termcap/terminfo database for others) in hand, the line editing library in the shell tracks the command line length and how many terminal lines it is displayed across. It intelligently moves the cursor around to repaint the input line as it is edited. Sometimes it might rely upon automatic margins, if your terminal has them. Sometimes it may explicitly reposition the cursor using control sequences. The fact that it is doing this is what causes effects such as those discussed at https://superuser.com/questions/695338/ . One can mess up its idea of where the cursor is, and what cursor motions it needs to emit to write to a particular place on the screen, with incorrectly delimited control sequences in a prompt string. Your terminal emulator does not deal in the notions of command lines, or line editing. It is not part of those layers. It sees a simple stream of characters and control sequences, which it must render. It is your terminal emulator's responsibility to implement the control sequences that are advertised for it in its termcap/terminfo entry. GNU readline, libedit, ZLE, vim , screen , and others will use what they find advertised. If you state in termcap/terminfo that your terminal has automatic margins, for example, then the emulator must do a line wrap when a character is printed at the right margin. 
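You can see what a terminal's entry advertises with infocmp; here is a hypothetical excerpt (the capability names are standard, but the values are illustrative and differ between terminals):

$ infocmp -1 $TERM | grep -wE 'am|cols|cuu1'
    am,
    cols#80,
    cuu1=\E[A,

Here am is the automatic-margins flag, cols the width, and cuu1 the cursor-up control sequence.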
If you state that your terminal can move the cursor up and down, then it must indeed do that when it receives the appropriate control sequences. By the way: If GNU readline finds that it cannot move the cursor up, because the termcap/terminfo entry does not state a way to do so, one doesn't actually see line wrap at all. readline falls back to a mode where it sideways scrolls the input line, all on one line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48565/"
]
} |
196,078 | I often encounter the "Another app is currently holding the yum lock; waiting for it to exit..." message when trying to install an app, and I have to kill yum manually. How can I avoid that? Is there any simple method to unlock yum? It seems that only one instance of yum can be running. Is it the same with other package managers (apt-get, pacman)? | I think it is caused by PackageKit. You have to check for PackageKit and disable it (I assume it is CentOS 7 with systemctl , otherwise you can use service and chkconfig ) (as mentioned in comments, the service name is packagekit not packagekitd ):

systemctl stop packagekit
systemctl disable packagekit

Another approach (on CentOS/RHEL 6, Fedora 19 or earlier) is to open /etc/yum/pluginconf.d/refresh-packagekit.conf with a text editor, and change enabled=1 to enabled=0 . Or you can completely remove it:

yum remove PackageKit | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18997/"
]
} |
196,098 | I use xubuntu 14.04, 64 bit. Every now and then, when I try to paste some text in xfce4-terminal, instead of the expected text being pasted, it is surrounded by 0~ and 1~ , such as:

0~mvn clean install1~

The text is supposed to be mvn clean install -- I verified this by pasting the content in various other applications (gnome-terminal, gedit and others). Every application pastes the content correctly, except xfce4-terminal. I couldn't find any references for this on the internet (unfortunately, it is hard to search for text with special characters on google.com...). Why does this happen? | The issue is that your terminal is in bracketed paste mode, but doesn't seem to support it properly. The issue was fixed in VTE, but xfce4-terminal is still using an old and unmaintained version of it. You can try temporarily turning bracketed paste mode off by using:

printf "\e[?2004l" | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/196098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13945/"
]
} |
196,118 | In a Linux terminal (CentOS) I am using the command tail --follow=name my-rolling-file.log in order to see the logs of my application. Sometimes some binary data is dumped in the log (I dump the body of a Camel Message that usually contains strings, but sometimes binary and/or special characters like Chinese in UTF-8), and when that happens my terminal gets corrupted, e.g. pipe characters | are rendered as ö instead. I guessed that it is the logging of binary data that causes the problem, and I am wondering if it is possible to ask the tail command to ignore special characters. I checked the man page but did not find anything there. Currently, to fix the problem I have to Ctrl-C the tailing, run reset in the terminal and relaunch the tail command. I want to avoid these steps if possible. If you know another command than tail , but with the same features (following rolling files), it would be acceptable as well, as long as it is installable and runnable under CentOS 6.5. | What about this:

tail --follow=name my-rolling-file.log | strings

The default for strings is that it will only output printable characters in runs of 4 (or more), but you can change this with -n {number} . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41302/"
]
} |
196,124 | I was wondering if there is any method to start a new process or program from one terminal in another. What I mean is: let's say I have to run gedit abc.txt , but I don't want it to block my current terminal window. Is there a way I can run gedit from one terminal in another terminal window? Or can I use gedit without blocking the current terminal? | Run gedit as:

gedit file.txt &

The & at the end will cause the process to run in the background and you will be able to use the current terminal interactively again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109903/"
]
} |
196,135 | I have the following script:

#!/bin/bash
if test -f "/path/pycharm.sh"; then
    sh ./pycharm.sh;
fi

I am trying to run pycharm.sh from a bash file and I've looked carefully to give it all the permissions needed. Unfortunately, every time I run it I get this:

Can't open ./pycharm.sh | You don't use ./ to run a script in general, you use it to run a program (script or compiled binary) in the current directory . If the second script is in /path/pycharm.sh , then you should run it as /path/pycharm.sh , and not ./pycharm.sh . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110391/"
]
} |
196,165 | I want to merge the contents of two gzipped files f1 and f2 in one command, like paste (zcat f1.gz) (zcat f2.gz). What is the right syntax? | Almost there...

paste <(zcat f1.gz) <(zcat f2.gz) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110407/"
]
} |
196,166 | Is there a simple way to find out which init system is being used, e.g. by a recent Debian wheezy or Fedora system? I'm aware that Fedora 21 uses the systemd init system, but only because I read that and because all relevant scripts/symlinks are stored in /etc/systemd/ . However, I'm not sure about e.g. Debian squeeze or CentOS 6 or 7 and so on. Which techniques exist to verify such an init system? | You can poke around the system to find indicators. One way is to check for the existence of three directories:

/usr/lib/systemd tells you you're on a systemd based system.
/usr/share/upstart is a pretty good indicator that you're on an Upstart-based system.
/etc/init.d tells you the box has SysV init in its history.

The thing is, these are heuristics that must be considered together, possibly with other data, not certain indicators by themselves. The Ubuntu 14.10 box I'm looking at right now has all three directories. Why? Because Ubuntu just switched to systemd from Upstart in that version, but keeps Upstart and SysV init for backwards compatibility. In the end, I think the best answer is "experience." You will see that you have logged into a CentOS 7 box and know that it's systemd. How do you learn this? Playing around, RTFMing, etc. The same way you gain all experience. I realize this is not a very satisfactory answer, but that's what happens when there is fragmentation in the market, creating nonstandard designs. It's like asking how you know whether ls accepts -C , or --color , or doesn't do color output at all. Again, the answer is "experience." | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/196166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43380/"
]
} |
196,168 | The command less can be used to replace tail in tail -f file to provide features like handling binary output and navigating the scrollback: less +F file The + prefix means "pretend I type that after startup", and the key F starts following. But can less also replace tail --follow=name file which follows file even if the actual file gets deleted or moved away, like a log file that is moved to file.log.1 , and then a new file is created with the same name as the followed file? | Yes, less can follow by file name The feature has a fairly obscure syntax: less --follow-name +F file.log With less, --follow-name is different from the tail option --follow=name . It does not make less follow the file, instead it modifies the behaviour of the command key F inside of less to follow based on the file name, not the file descriptor. Also, there is no normal option to start less in follow mode. But you can use the command line to give keystrokes to execute after startup, by prefixing them with + . Combining the modifier option with +F , less will actually start in the (modified) follow mode. Use +F alone for the equivalent of plain tail -f : less +F file.log | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/196168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63775/"
]
} |
196,214 | Is there a way I can run a different (than the default) TCP congestion control algorithm in FreeBSD? I am trying to modify an existing TCP congestion control algorithm with some ideas published in research papers, to try to get better performance over wireless networks. | You can see which TCP congestion control algorithms are available by looking at the net.inet.tcp.cc.available sysctl. By default, only newreno is available, so it is the one that is used. There are several different algorithms available; look for modules named cc_something in /boot/kernel . You can load them via kldload, such as kldload cc_vegas . After you do that, the new algorithm will show up in net.inet.tcp.cc.available . You can select it via the net.inet.tcp.cc.algorithm sysctl. Here's a complete example:

% sysctl -a | grep net.inet.tcp.cc
net.inet.tcp.cc.available: newreno
net.inet.tcp.cc.algorithm: newreno
% sudo kldload cc_vegas
% sysctl -a | grep net.inet.tcp.cc
net.inet.tcp.cc.vegas.beta: 3
net.inet.tcp.cc.vegas.alpha: 1
net.inet.tcp.cc.available: newreno, vegas
net.inet.tcp.cc.algorithm: newreno
% sudo sysctl net.inet.tcp.cc.algorithm=vegas
net.inet.tcp.cc.algorithm: newreno -> vegas
% sudo sysctl net.inet.tcp.cc.algorithm=newreno
net.inet.tcp.cc.algorithm: vegas -> newreno
% | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110435/"
]
} |
196,239 | If I have a string that looks like this:

"this_is_the_string"

inside a bash script, I would like to convert it to PascalCase, i.e. UpperCamelCase, to look like this:

"ThisIsTheString"

I found that converting to lowerCamelCase can be done like this:

"this_is_the_string" | sed -r 's/([a-z]+)_([a-z])([a-z]+)/\1\U\2\L\3/'

Unfortunately I am not familiar enough with regexes to modify this. | $ echo "this_is_the_string" | sed -r 's/(^|_)([a-z])/\U\2/g'
ThisIsTheString

Substitute the pattern:

(^|_) at the start of the string or after an underscore - first group
([a-z]) single lower case letter - second group

by \U\2 , uppercasing the second group, with g applying it globally. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/196239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110304/"
]
} |
196,397 | I am using Arch with Xfce. Recently, I have created a symbolic link to a directory on a filesystem. But I don't want to mount the filesystem during boot or manually mount it before I open the symbolic link. Is there any way to auto-mount that filesystem when I open the symbolic link to the directory on that filesystem? | autofs can do this for you. You can configure any number of mountpoints with various options, and the corresponding filesystems are mounted whenever the mountpoint is accessed. After a given amount of inactivity the filesystems are unmounted again. There are no doubt various ways of using autofs , but here's one way of doing what you're trying to do, based on the way I used to use it. You start with a directory which will hold a number of autofs mount-points (well, at least one); say /misc . You don't need to create it, but you do need to create a configuration file which will describe all the filesystems you want to mount there; for example, I could mount CDs, DVDs and Blu-Rays with the following file, saved as /etc/auto.misc :

cd -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom
br -fstype=udf,ro,nosuid,nodev :/dev/cdrom

The general syntax is the mountpoint, followed by any options introduced by - , then the filesystem to mount, introduced by : for a local device. (I'm simplifying here, see the autofs(5) manpage for details.) Then this file is enabled by adding an entry in /etc/auto.master :

/misc /etc/auto.misc

Restart autofs with sudo service autofs restart and you should be able to run ls /misc/cd and see the contents of any CD in your drive. (Obviously replace the name and mount target by whatever is appropriate in your case.) Once you have that, you can link to anything in the auto-mounted filesystems from anywhere else, in the same way as if they were standard, non-auto-mounted filesystems. So in my example, ln -s /misc/br blu-ray creates a blu-ray link wherever the command is run. You can link further into the filesystem as well: ln -s /misc/br/BDMV autolinktest creates an autolinktest link to the movie contents. Accessing the links will mount the target filesystem. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196397",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110554/"
]
} |
196,424 | I have spent a whole half day, but still couldn't figure out why a dash launched with an execl call just becomes a zombie. Below is a minimal test case: I just fork a child, duplicate the std[in,out,err] descriptors, and launch sh .

#include <cstdio>
#include <fcntl.h>
#include <cstring>
#include <stdlib.h>
#include <cerrno>
#include <unistd.h>

int main() {
    int pipefd[2];
    enum { STDOUT_TERM = 0, STDIN_TERM = 1 };
    if (pipe(pipefd) == -1) { // make a pipe
        perror("pipe");
        return 0;
    }
    pid_t pid = fork();
    if (pid == 0) { // Child
        dup2(pipefd[STDIN_TERM], STDIN_FILENO);
        dup2(pipefd[STDOUT_TERM], STDOUT_FILENO);
        dup2(pipefd[STDOUT_TERM], STDERR_FILENO);
        execl("/bin/sh", "sh", (char*)NULL);
        // Nothing below this line should be executed by the child process. If so, print err
        perror("For creating a shell process");
        exit(1);
    }
    __asm("int3");
    puts("Child launched");
}

When I launch it in a debugger, stop at the line with the breakpoint (above the puts() call), look at the pid variable, and then look at the corresponding process with ps, I get something like

2794 pts/10 00:00:00 sh <defunct>

every time. I.e. it's a zombie. | You're leaving a zombie, trivially, because you didn't wait on your child process. Your shell is immediately exiting because you've set up its STDIN in a nonsensical way. pipe returns a one-way communications channel. You write to pipefd[1] and read it back from pipefd[0] . You did a bunch of dup2 calls which lead the shell to attempt to read (STDIN) from the write end of the pipe. Once you swap the numbers in your enum, you get the shell sitting forever on read . That's probably not what you want either, but it's about all you can expect when you've got a shell piped to itself. Presuming you're attempting to use the shell from your parent process, you need to call pipe twice (and both in the parent): one of the pipes you write to (and the shell reads from, on stdin) and the other the shell writes to (stdout/stderr) and you read from. Or if you want, use socketpair instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59928/"
]
} |
196,427 | I know about nohup . It prevents processes from dying after a hang-up. What I want is for my user crontabs to run even if my session has timed out when they are supposed to run. I believed the user had to still be logged on for that to happen. How do I make sure that the user's crontabs are run whether he's logged on or not? Do I need to make sure the user is actually logged on? Should I use a system crontab instead? Any other solutions? | cron runs whether you are logged in or not. It's a daemon that checks items in the crontab (cron table) and runs them at the appointed time(s). If you had to be logged in to do it, it would be pretty unhelpful - more like running a process in the background after a sleep , or in a loop. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38106/"
]
} |
196,432 | I am wondering whether tee slows down pipelines. Writing data to disk is slower than piping it along, after all. Does tee wait to send data on to the next pipe until it has been written to disk? (If not, I guess tee has to queue data that has been sent along but not written to disk, which sounds unlikely to me.)

$ program1 input.txt | tee intermediate-file.txt | program2 ... | Yes, it slows things down. And it basically does have a queue of unwritten data, though that's actually maintained by the kernel—all programs have that, unless they explicitly request otherwise. For example, here is a trivial pipe using pv , which is nice because it displays transfer rate:

$ pv -s 50g -S -pteba /dev/zero | cat > /dev/null
 50GiB 0:00:09 [ 5.4GiB/s] [===============================================>] 100%

Now, let's add a tee in there, not even writing an extra copy—just forwarding it along:

$ pv -s 50g -S -pteba /dev/zero | tee | cat > /dev/null
 50GiB 0:00:20 [2.44GiB/s] [===============================================>] 100%

So, that's quite a bit slower, and it wasn't even doing anything! That's the overhead of tee internally copying STDIN to STDOUT. (Interestingly, adding a second pv in there stays at 5.19GiB/s, so pv is substantially faster than tee . pv uses splice(2) , tee likely does not.) Anyway, let's see what happens if I tell tee to write to a file on disk. It starts out fairly fast (~800MiB/s) but as it goes on, it keeps slowing down—ultimately down to ~100MiB/s, which is basically 100% of the disk write bandwidth. (The fast start is due to the kernel caching the disk write, and the slowdown to disk write speed is the kernel refusing to let the cache grow infinitely.) Does it matter? The above is a worst-case. The above uses a pipe to spew data as fast as possible. The only real-world use I can think of like this is piping raw YUV data to/from ffmpeg . When you're sending data at slower rates (because you're processing them, etc.) it's going to be a much less significant effect. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26674/"
]
} |
196,455 | I'm reading an old book on linkers and loaders and it has images of object code. But I can't figure out what tools are used to display the contents of these files. I'd appreciate it if someone could point out the tool. Here is the C code and the corresponding display of the object files. Source file m.c :

extern void a(char *);

int main(int argc, char **argv)
{
    static char string[] = "Hello, world!\n";
    a(string);
}

Source file a.c :

#include <unistd.h>
#include <string.h>

void a(char *s)
{
    write(1, s, strlen(s));
}

Object code for m.o :

Sections:
Idx Name  Size     VMA      LMA      File off Algn
0 .text   00000010 00000000 00000000 00000020 2**3
1 .data   00000010 00000010 00000010 00000030 2**3

Disassembly of section .text:

00000000 <_main>:
   0: 55             pushl %ebp
   1: 89 e5          movl %esp,%ebp
   3: 68 10 00 00 00 pushl $0x10
         4: 32 .data
   8: e8 f3 ff ff ff call 0
         9: DISP32 _a
   d: c9             leave
   e: c3             ret
...

Object code for a.o :

Sections:
Idx Name  Size     VMA      LMA      File off Algn
0 .text   0000001c 00000000 00000000 00000020 2**2
          CONTENTS, ALLOC, LOAD, RELOC, CODE
1 .data   00000000 0000001c 0000001c 0000003c 2**2
          CONTENTS, ALLOC, LOAD, DATA

Disassembly of section .text:

00000000 <_a>:
   0: 55             pushl %ebp
   1: 89 e5          movl %esp,%ebp
   3: 53             pushl %ebx
   4: 8b 5d 08       movl 0x8(%ebp),%ebx
   7: 53             pushl %ebx
   8: e8 f3 ff ff ff call 0
         9: DISP32 _strlen
   d: 50             pushl %eax
   e: 53             pushl %ebx
   f: 6a 01          pushl $0x1
  11: e8 ea ff ff ff call 0
        12: DISP32 _write
  16: 8d 65 fc       leal -4(%ebp),%esp
  19: 5b             popl %ebx
  1a: c9             leave
  1b: c3             ret | You can use objdump . See man objdump . For example, the -d option disassembles (there are a lot of options):

objdump -d a.o

Other useful programs are included in binutils . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196455",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110603/"
]
} |
196,467 | I'm running a 14.04.2 LTS ubuntu box in a DMZ with IP 10.10.30.35 and I have the following network mystery. I'm trying to connect to another unix box outside the DMZ but in our internal network at 10.2.0.200 (via ssh or https) and failing with errors like "No route to host", while ping returns "Destination Host Unreachable". However, I can connect (ping, ssh, & https) to a similar unix box at 10.2.0.170 that is also outside the DMZ but inside our network. I have nothing in my iptables:

> iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination

The gateway looks to be set up correctly:

> route -n
Kernel IP routing table
Destination Gateway     Genmask       Flags Metric Ref Use Iface
0.0.0.0     10.10.30.25 0.0.0.0       UG    0      0   0   eth0
10.10.30.0  0.0.0.0     255.255.255.0 U     0      0   0   eth0

When I do a tcpdump from the machine in the DMZ and try to ping the 10.2.0.200 box, all I ever see is this:

ARP, Request who-has 10.2.0.200 tell myserver.mydomain.com, length 28

However, I never see any response or any other traffic to 10.2.0.200. So it looks like the arp protocol is not working? Checking arp, I get this:

> arp -na
? (outsideipaddress) at <incomplete> on eth0
? (10.10.30.25) at 00:1a:8c:f0:50:82 [ether] on eth0 <-- gateway
? (10.10.30.80) at 00:50:56:a7:06:89 [ether] on eth0
? (outsideipaddress) at <incomplete> on eth0
? (10.2.0.200) at <incomplete> on eth0
? (outsideipaddress) at <incomplete> on eth0

So, that 10.2.0.200 entry is odd; I try to clear it out and get this:

> arp -d 10.2.0.200
SIOCDARP(dontpub): Network is unreachable

Attempts to use ip -s -s neigh flush all do not remove the entry either. So, I am at my wits' end here. Is it the arp entry that is preventing my connection to 10.2.0.200? Or something else I am missing altogether? Can I just manually edit the arp table somehow and add the MAC address? Thanks in advance for your help. | You can use objdump . See man objdump . For example, the -d option disassembles (there are a lot of options):

objdump -d a.o

Other useful programs are included in binutils . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110609/"
]
} |
196,476 | How can I add an existing user to a group in FreeBSD? The command usermod does not work. | pw is the command you are looking for. To add user klaatu to the group foo , do:

pw groupmod foo -m klaatu

Here is the FreeBSD handbook page on the subject. It's an easy and informative read: Users and Basic Account Management | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83381/"
]
} |
196,483 | I am experimenting with capabilities on Debian GNU/Linux. I have copied /bin/ping to my current working directory. As expected it does not work; it was originally setuid root. I then give my ping the minimal capabilities (not root) by doing sudo /sbin/setcap cap_net_raw=ep ./ping , and my ping works, as expected. Then sudo /sbin/setcap -r ./ping to revoke that capability. It is now not working, as expected. I now try to get ping working using capsh . capsh has no privileges, so I need to run it as root, but then drop root and thus all other privileges. I think I also need secure-keep-caps ; this is not documented in capsh , but is in the capability manual. I got the bit numbers from /usr/include/linux/securebits.h . They seem correct, as the output of --print shows these bits to be correct. I have been fiddling for hours; so far I have this:

sudo /sbin/capsh --keep=1 --secbits=0x10 --caps="cap_net_raw+epi" == --secbits=0x10 --user=${USER} --print -- -c "./ping localhost"

However ping errors with ping: icmp open socket: Operation not permitted , which is what happens when it does not have the capability. Also the --print shows Current: =p cap_net_raw+i ; this is not enough, we need e .

sudo /sbin/capsh --caps="cap_net_raw+epi" --print -- -c "./ping localhost"

will set the capability to Current: = cap_net_raw+eip ; this is correct, but leaves us as root .

Edit-1: I have now tried

sudo /sbin/capsh --keep=1 --secbits=0x11 --caps=cap_net_raw+epi --print -- -c "touch zz; ./ping -c1 localhost;"

This produces:

touch: cannot touch `zz': Permission denied
ping: icmp open socket: Operation not permitted

The first error is expected, as secure-noroot: yes . But the second is not: Current: = cap_net_raw+eip

Edit-2: If I put == before the --print , it now shows Current: = cap_net_raw+i , so that explains the previous error, but not why we are losing the capability when switching out of root; I thought that secure-keep-caps should fix that.

Edit-3: From what I can see, I am losing Effective (e) and Permitted (p) when exec is called. This is expected, but I thought that secure-keep-caps should stop them being lost. Am I missing something?

Edit-4: I have been doing more research, and read the manual again. It seems that normally e and p capabilities are lost when: you switch from user root (or apply secure-noroot , thus making root a normal user), which can be overridden with secure-keep-caps ; or when you call exec , which as far as I can tell is an invariant. As far as I can tell, it is working according to the manual. As far as I can tell there is no way to do anything useful with capsh . As far as I can tell, to use capabilities you need to: use file capabilities, or have a capabilities-aware program that does not use exec . Therefore no privileged wrapper. So now my question is: what am I missing, and what is capsh for?

Edit-5: I have added an answer re ambient capabilities. Maybe capsh can also be used with inherited capabilities, but to be useful these would need to be set on the executable file. I cannot see how capsh can do anything useful without ambient capabilities, or without allowing inherited capabilities.

Versions: capsh from package libcap2-bin version 1:2.22-1.2; before edit-3 I grabbed the latest capsh from git://git.debian.org/collab-maint/libcap2.git and started using it.

uname -a
Linux richard-laptop 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2 x86_64 GNU/Linux

User-land is 32bit. | Capabilities are properties of processes.
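You can watch these sets for any process via /proc; here is a sketch for an ordinary unprivileged shell (the exact values, and whether a CapAmb line appears, depend on your kernel):

$ grep ^Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff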
Traditionally there are three sets:

Permitted capabilities ( p ): capabilities that may be "activated" in the current process.
Effective capabilities ( e ): capabilities that are currently usable in the current process.
Inheritable capabilities ( i ): file capabilities that may be inherited.

Programs run as root always have full permitted and effective capabilities, so "adding" more capabilities has no noticeable effect. (The inheritable capabilities set is normally empty.) With setcap cap_net_raw+ep ping you enable these capabilities by default for any user running this program. Unfortunately these capabilities are bound to the executed file and are not retained after executing a new child process. Linux 4.3 introduced Ambient capabilities which allow capabilities to be inherited by child processes. (See also Transformation of capabilities during execve() in capabilities(7) .) While playing with capabilities, note these pitfalls:

When changing the user from root to non-root, the effective and permitted capabilities are cleared (see Effect of user ID changes on capabilities in capabilities(7) ). You can use the --keep=1 option of capsh to avoid clearing the sets.

The ambient capabilities set is cleared when changing the user or group IDs. Solution: add the ambient capabilities after changing the user ID, but before executing a child process.

A capability can only be added to the ambient capabilities set if it is already in both the permitted and inheritable capabilities set.

Since libcap 2.26, the capsh program gained the ability to modify ambient capabilities via options such as --addamb ( commit ). Note that the options order is significant. Example usage:

sudo capsh --caps="cap_net_raw+eip cap_setpcap,cap_setuid,cap_setgid+ep" \
    --keep=1 --user=nobody --addamb=cap_net_raw -- \
    -c "./ping -c1 127.0.0.1"

Tip: you can add the --print option anywhere in the capsh command line and see its current capabilities state. Note: cap_setpcap is needed for --addamb while cap_setuid,cap_setgid are needed for the --user option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4778/"
]
} |
196,488 | I have spent so much time trying to get urxvt to work with 256 colors. I am using Ubuntu. I have followed a part of this post:

cd ~
infocmp -L rxvt-unicode > rxvt-unicode.terminfo
vi rxvt-unicode.terminfo
# Change the following from:
#
# lines_of_memory#0, max_colors#88, max_pairs#256,
#
# to:
#
# lines_of_memory#0, max_colors#256, max_pairs#32767

# Make .terminfo dir if you don't already have it
install -d .terminfo

# Rebuild terminfo for rxvt-unicode
tic -o .terminfo/ rxvt-unicode.terminfo

# Cleanup
rm rxvt-unicode.terminfo

tput colors gives 256 now instead of 88 earlier. But when I run the 256colors2.pl script, the output is not as expected. echo $TERM gives rxvt-unicode as output in urxvt. echo $COLORTERM gives rxvt-xpm as output in vim. echo &t_Co gives 256 as output in vim. Please help me figure out how to set up 256 colors for urxvt. My main aim is to use vim (in the terminal) with the gruvbox theme. Response to an answer: I have already set the t_Co=256 option in vim. I don't use tmux; using it doesn't change the result of the 256colors2.pl script. The TERM in tmux is already set to screen-256color . I tried copying /usr/share/terminfo/r/rxvt-256color to ~/.terminfo/r/rxvt-256color . No change in TERM or the results of the tests. Finally I used the colortest CJD14 has linked; many colors are not working. Only a handful of colors are being coloured. So something is definitely broken or configured wrong. | Yes, finally found my mistake. It seems you need to install the package rxvt-unicode-256color to get 256 color support.

sudo apt-get install rxvt-unicode-256color

is the answer to my problems. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73864/"
]
} |
196,512 | I'm running Ubuntu Desktop 14.04 as a VM on a Mac with VMware Fusion. I'm getting space warnings and now want to expand from 20 GB to 200 GB. I powered off the VM and increased the allocated disk space on the VMware side:

Power off the VM
VMware Fusion -> Virtual Machine -> Settings -> Hard Disk (SCSI)

It then warned me that I should increase the partition size within the guest VM, which is unfortunate because I was hoping this would be automatic. The disk usage analyzer inside Ubuntu currently sees only the original 20 GB. How do I increase this to the 200 GB I allocated? I'm looking for better direction than what is posted here. The Disks app shows the same thing. | From Ubuntu (in the VM):

1. Install gparted by executing sudo apt-get install gparted in a terminal.
2. Open gparted, either from the terminal or from the dash.
3. Extend your disk; you may first have to move your extended partition to the end of the disk.

After resizing, it's worth confirming that the guest really sees the extra space (device names below are typical examples; yours may differ):
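lsblk        # the virtual disk, e.g. /dev/sda, should now show 200G
df -h /      # the root filesystem after the resize
 | {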
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
196,521 | I need to pass certain variables to bc to get the output in floating point:

var1=$((<some operation>))
var2=$((<some operation>))   # needs var1
var3=$((<some operation>))   # needs var2
bc -l <<< $var3              # need output in floating point

Output: (standard_in) 1: illegal character: $

Any way to overcome this?

Update:

diff=$(($epoc2-$epoc1))
var1=$(($diff / 60))
var2=$(($var1 / 57))
var3=`bc <<< 'scale=2; $var2'` | Single quotes don't expand $ variables; you have to use double quotes:

var3=`bc <<< "scale=2; $var2"`

On the other hand, $var1 and $var2 won't store floats (bash doesn't handle them), so use bc for those steps as well:

diff=$(($epoc2-$epoc1))
var1=$(bc <<< "scale=3 ; $diff / 60")
var2=$(bc <<< "scale=3 ; $var1 / 57")
var3=$(bc <<< "scale=2; $var2")

If you only need the final value, the whole chain can also be collapsed into a single bc call; a sketch, assuming epoc1 and epoc2 hold epoch seconds:
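var3=$(bc <<< "scale=2; (($epoc2 - $epoc1) / 60) / 57")
 | {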
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74910/"
]
} |
196,522 | KWallet requests a password on startup. I want to get rid of this, but I have found no way to disable it, and I also cannot remove KWallet itself:

# sudo pacman -R kwallet
checking dependencies...
error: failed to prepare transaction (could not satisfy dependencies)
:: kio: requires kwallet

How can I stop this password prompt from appearing each time I log in? I am using Arch Linux and Plasma 5. | You can either remove the password from the wallet or disable the wallet:

- Remove the password: start kwalletmanager and click on »Change password«.
- Disable the wallet: start kwalletmanager, go to Settings / Configure Wallet, and uncheck the box »Enable the KDE wallet subsystem«.

If you prefer to skip the GUI, the same switch lives in KWallet's config file; a sketch (path and key as used by Plasma 5; verify on your install):
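# ~/.config/kwalletrc
[Wallet]
Enabled=false
 | {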
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110279/"
]
} |
196,531 | I have a line from some file like this:

attempting to create a 512^3 level (with Dirichlet BC) using a 16^3 grid of 32^3 boxes and 800 tasks...

I want to extract the four numbers 512^3, 16^3, 32^3 and 800 from it and assign them respectively to the variables level, grid, boxes and tasks for other use. How can I do this? | Bash can match regular expressions with the =~ operator in [[ ... ]]:

#! /bin/bash
line='attempting to create a 512^3 level (with Dirichlet BC) using a 16^3 grid of 32^3 boxes and 800 tasks...'
num='([0-9^]+)'
nonum='[^0-9^]+'
if [[ $line =~ $num$nonum$num$nonum$num$nonum$num ]] ; then
    level=${BASH_REMATCH[1]}
    grid=${BASH_REMATCH[2]}
    boxes=${BASH_REMATCH[3]}
    tasks=${BASH_REMATCH[4]}
    echo "Level $level, grid $grid, boxes $boxes, tasks $tasks."
fi

If the line has to be pulled out of a file first, something along these lines works (the filename is a placeholder):
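line=$(grep -m1 'attempting to create' /path/to/logfile)
 | {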
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87568/"
]
} |
196,537 | I have a folder SOURCE that contains several sub-level folders, each with its own files. I want to copy this folder into a new folder COPY, where I need to replicate the directory structure but keep the files as symbolic links to the original files in SOURCE and its subfolders. | Here's the solution on non-embedded Linux and Cygwin:

cp -as SOURCE/ COPY

Note that SOURCE must be an absolute path and have a trailing slash. If you want to give a relative path, you can use:

cp -as "$(pwd)/SOURCE/" COPY

To spot-check the result afterwards (every file entry under COPY should be a symlink pointing back into SOURCE):
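find COPY -type l -exec ls -l {} + | head
 | {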
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77329/"
]
} |